Grigori Fursin

Grigori Fursin is a British computer scientist, president of the non-profit cTuning foundation, a founding member of MLCommons, co-chair of the MLCommons Task Force on Automation and Reproducibility, and founder of cKnowledge. His research group created MILEPOST GCC, an open-source, machine-learning-based self-optimizing compiler considered the first of its kind. At the end of the MILEPOST project he established the cTuning foundation to crowdsource program optimisation and machine learning across diverse devices provided by volunteers. The foundation also developed the Collective Knowledge framework to support open research. Since 2015, Fursin has led Artifact Evaluation at several ACM and IEEE computer systems conferences. He is also a founding member of the ACM Task Force on Data, Software, and Reproducibility in Publication.

Education
Fursin completed his PhD in computer science at the University of Edinburgh in 2005. While in Edinburgh, he worked on the foundations of practical program autotuning and performance prediction.

Notable projects

 * Collective Mind – collection of portable, extensible and ready-to-use automation recipes with a human-friendly interface that helps the community compose, benchmark and optimize complex AI, ML and other applications and systems across diverse and continuously changing models, data sets, software and hardware.
 * Collective Knowledge – open-source framework to help researchers and practitioners organize their software projects as a database of reusable components and portable workflows with common APIs based on FAIR principles, and quickly prototype, crowdsource and reproduce research experiments.
 * MILEPOST GCC – open-source technology to build machine learning based compilers.
 * Interactive Compilation Interface – plugin framework to expose internal features and optimisation decisions of compilers for external autotuning and machine learning.
 * cTuning foundation – non-profit research organisation developing open-source tools and common methodology for collaborative and reproducible experimentation.
 * Artifact Evaluation – validation of experimental results from papers published at computer systems and machine learning conferences.