A 10-Minute Questionnaire on Tool Support for Machine Learning Experiments
Dear participant,

Since you are an experienced machine learning practitioner, we (a group of researchers from Sweden, the Netherlands, and Germany) would like to hear your opinion on machine learning experiment management tools. We kindly invite you to participate and to forward this invitation to other colleagues who might be interested in this survey. Completing the survey takes approximately 10 minutes.

Experiment management tools support practitioners performing machine learning (ML) or deep learning (DL) experiments in managing all involved artefacts and metadata (datasets, features, scripts, hyperparameters, evaluation metrics, models, …). Such tools are used to reproduce or trace experiments, analyse experiment results, and collaborate with other practitioners. Popular tools are, for instance, Neptune.ai, DVC, or MLflow. These tools allow users to track or log artefacts when performing experiment runs. For instance, some tools provide APIs for logging hyperparameters, script versions, metrics, and other artefacts/metadata used during experiments. Some also provide visual user interfaces to later query and visualise experiments, as illustrated below.
Illustration of artefact tracking with experiment management tools
This survey aims to elicit information from practitioners on performing ML/DL experiments with and without experiment management tools. If you are not using such tools, you are welcome to report your experiences and challenges with experimentation and to suggest improvements for ML/DL experiment tooling. If you are using such tools, we aim to investigate which tools you use, the benefits you perceive, and the challenges you face.

+ What's in it for you?
As a participant, you will learn about experiment management tools, their features and benefits, and how they can be valuable for your own projects. You can also receive a state-of-the-art report with the study results, allowing you to learn more about these tools and their trends, reflect on your own practices, and learn about the practices of others.

+ What happens to your data?
All collected data will be stored securely and analysed, with information from participants aggregated and reported in an anonymised form for a scientific publication (the state-of-the-art report).

+ What's in it for us?
We will use the collected data to understand the landscape of ML/DL experiment management tools and identify their actual value to practitioners. We will also derive and suggest improvements based on identified challenges.

+ What's in it for everyone else?
Your contribution will benefit both the research and industrial communities by directing future research and eventually leading to better practices and tools. Researchers will obtain information to form systematic knowledge on ML experiment management in ML/DL engineering. Practitioners will receive guidance for designing and selecting better ML experiment management tools for their projects.



Thanks in advance,
Carl Vågfelt Nihlmar (Chalmers|University of Gothenburg) - gusnihca@student.gu.se
Samuel Idowu (Chalmers|University of Gothenburg) - samuelid@chalmers.se
Daniel Strüber (Chalmers|University of Gothenburg, Radboud University Nijmegen, Netherlands) - danstru@chalmers.se
Thorsten Berger (Chalmers|University of Gothenburg, Ruhr University Bochum, Germany) - thorsten.berger@rub.de

www.easelab.org
Informed consent *