Reproducible Neuroimaging Complaints Box: Translating Principles into Practice

Vent to us!

We are early-career researchers (ECRs) involved in developing and improving tools and training focused on open and reproducible neuroscience practices.

This survey will help all of us estimate the awareness, understanding, and adoption of the FAIR data principles in neuroimaging research.

Thank you for your time and feedback!
Yu-Fang, Sebastian, Nikhil 

Expected response time: 5-10 minutes.

We begin with questions about your academic life
Where do you locate yourself on the science-time continuum? *
Which of these research domains do you primarily work in? *
Which of these tasks takes the biggest chunk out of your work, day to day?

Please rank these tasks from 1 (most time-consuming) to 7 (least time-consuming)
*
Scale: 1 2 3 4 5 6 7
Data collection / capture
Data curation / organization
Data processing
Data maintenance and access control
Quality control
Data annotation
Data publication
Now some questions about your data wrangling experience
Are you familiar with / aware of the FAIR principles?

The FAIR principles for scientific data management and stewardship were developed to improve the Findability, Accessibility, Interoperability, and Reusability of digital assets (e.g., raw data, processing pipelines, derived phenotypes, tabulated results, figures), all of which support research reproducibility.
*
Have you ever used data collected and made available by others (e.g., another lab member, another lab, large open datasets) in your research?
If so, what type of data did you reuse?
*
If you have used other people's data, please tell us about your experience.
If you have worked with several datasets, pick a recent one (within ~1 year) that turned into your most impactful work.
Findability: How did you find (i.e. learn about the existence of) the dataset?
Accessibility: How difficult was it to get access to the data once you knew of its existence and initiated the process?
Interoperability: How easy/difficult was it to understand the data well enough to use it in your own analysis?
Reusability: Describe the level of reusability of the dataset (select all that apply).
How would you rate the "FAIRness" of your data? (0: lowest, 5: highest)

Data: any digital object (e.g., raw scans, clinical scores, processed phenotypes).
Metadata: information about that digital object (e.g., demographic labels, acquisition protocols, pipeline versions).
(A minimal metadata sketch in code follows this question.)
Scale: 0 1 2 3 4 5
I don't know what this means in practice!
F1. (Meta)data have unique identifiers
F2. Data are described with rich metadata (ideally machine-readable)
A1. Data are retrievable (upon appropriate authorization) through a well-defined protocol
A2. Metadata are accessible even when the data may not be available
I1. (Meta)data are machine-actionable
I2. (Meta)data formats use shared vocabularies, standards, and/or ontologies
R1. (Meta)data are released with a clear and accessible data usage license
R2. (Meta)data are associated with detailed provenance
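To make the criteria above concrete, here is a minimal sketch of machine-readable dataset-level metadata in Python, loosely following the BIDS dataset_description.json convention. The dataset name, versions, and DOI are placeholders, not real records; the comments map each field to the FAIR criterion it illustrates.

```python
import json

# Hypothetical dataset-level metadata, loosely following the BIDS
# dataset_description.json convention. All values are placeholders.
dataset_description = {
    "Name": "Example resting-state fMRI dataset",
    "BIDSVersion": "1.8.0",
    "DatasetDOI": "doi:10.xxxx/example",   # F1: unique, persistent identifier
    "Authors": ["A. Researcher", "B. Researcher"],
    "License": "CC0",                      # R1: clear, accessible usage license
    "GeneratedBy": [                       # R2: provenance of the processing
        {"Name": "fmriprep", "Version": "23.1.4"}
    ],
}

# F2/I1: serializing to JSON keeps the metadata rich and machine-readable.
with open("dataset_description.json", "w") as f:
    json.dump(dataset_description, f, indent=2)
```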
Now, going beyond data wrangling, these last few questions are about your typical research workflow.
If it's easier, you can respond based on your work life within the last 6 months.
Based on your own experience, how long would it take for:

Note: "work" includes {data processing + analysis} but NOT data collection duration.

*
< 1 week
1 week - 1 month
1 month - 6 months
> 6 months
you to reproduce your work from last year?
you to reproduce someone else's work that you have cited in your papers / presentations?
someone else to reproduce your most impactful work that has been published (including preprints)?
In your view, how serious is the "reproducibility crisis" in neuroimaging?
(Here, reproducibility means working with the same data + the same analysis pipeline.)
1: Not serious, 10: Very serious
*
What makes it hard to implement reproducible workflows?

Please rank these factors from 1 (most difficult / time-consuming) to 5 (least difficult / time-consuming)
*
Scale: 1 2 3 4 5
Difficulty in "finding" details on data, processing steps, quality checks
Getting "access" to data (raw or processed) and code
Poor "interoperability" of metadata / data dictionaries (e.g. confusing column names) and portability of code
Challenging "reusability" of data due to privacy constraints and/or insufficient provenance information
Lack of incentives for publishing data-centric or replication papers (Reviewer 2 already hates me...)
How important are the following criteria in determining the validity/quality of published research?

very little
little
medium
high
very high
Authors
Institute affiliations
Journal
Sample size
Data availability
Code availability
Documentation for reproducing the analysis
Replication effort on multiple datasets
Other
If any "Other" factors were highly important in the previous question, please specify them here.
Which of these (meta)data curation and processing standards and tools do you have experience with? (A brief usage sketch for one of these tools follows this question.)
never heard
aware, but not tried
tried once and abandoned
occasional
regular user
not applicable to my work
DataLad
git-annex
REDCap
BIDS
BIDS App Bootstrap (BABS)
Containers (Docker, Singularity/Apptainer)
Boutiques
Clinica
FAIRly big
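For readers who have not tried these tools, here is a minimal sketch of DataLad's Python API for fetching a shared dataset. It assumes DataLad (and git-annex) are installed; the OpenNeuro dataset URL is purely illustrative.

```python
import datalad.api as dl

# Clone a public dataset: this fetches the lightweight (meta)data
# skeleton, not the heavy file contents (URL is illustrative).
dl.clone(source="https://github.com/OpenNeuroDatasets/ds000001",
         path="ds000001")

# Retrieve the actual content of a subset of files on demand.
dl.get("ds000001/sub-01")
```

This get-on-demand design is a large part of what makes version-controlled sharing of big neuroimaging datasets practical.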
Which of these informatics platforms (for data processing and sharing) do you have experience with? (A brief API sketch follows this question.)
never heard
aware, but not tried
tried once and abandoned
occasional
regular user
not applicable to my work
OpenNeuro
Canadian Open Neuroscience Platform (CONP)
Brainlife
Brain-CODE
ChRIS
Zenodo
Open Science Framework (OSF)
LORIS
XNAT
CBRAIN
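Several of these platforms expose public APIs that make datasets programmatically findable. As one example, the sketch below queries Zenodo's public records API; the search term is illustrative, and OSF and OpenNeuro offer comparable interfaces.

```python
import requests

# Query Zenodo's public REST API for records matching a search term.
# The query string is illustrative; no authentication is needed for
# public records.
resp = requests.get(
    "https://zenodo.org/api/records",
    params={"q": "neuroimaging fMRI", "size": 5},
)
resp.raise_for_status()

# Print the title and API link of each matching record.
for hit in resp.json()["hits"]["hits"]:
    print(hit["metadata"]["title"], "->", hit["links"]["self"])
```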
Data dictionaries help with the reuse of research objects. In your own work, which documentation practices do you use? (A minimal data-dictionary sketch follows this question.)
never heard
aware, but not tried
tried once and abandoned
occasional
regular user
not applicable to my work
README file
data dictionary (e.g., JSONs or spreadsheets)
data dictionary with machine-readable identifiers
lab notebooks
Web Markdown tools (e.g. MkDocs, Read the Docs)
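As a concrete, hypothetical illustration of the second and third practices above, the sketch below writes a BIDS-style column-level data dictionary (participants.json) for a tabular phenotype file; the TermURL field is what makes an identifier machine-readable. The column names and vocabulary URL are placeholders.

```python
import json

# Hypothetical data dictionary for a tabular phenotype file
# (BIDS-style participants.json). All values are illustrative.
participants_dict = {
    "age": {
        "Description": "Age of the participant at the time of scanning",
        "Units": "years",
    },
    "handedness": {
        "Description": "Self-reported handedness",
        "Levels": {"L": "left-handed", "R": "right-handed"},
        # Machine-readable identifier linking the column to a shared
        # vocabulary (placeholder URL).
        "TermURL": "https://example.org/vocab/handedness",
    },
}

with open("participants.json", "w") as f:
    json.dump(participants_dict, f, indent=2)
```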
If you use any "Other" standards, tools, platforms, or documentation practices, please specify them here.
Which of these data-annotation and harmonization resources are you aware of?
In your view, what would encourage researchers to invest time and resources in improving the reproducibility of their work?
Email (optional)
Your academic affiliation: institute, center, university, etc. (optional)
Gender
Location of your academic institute
(Just specify the country)
Any other thoughts, suggestions, or memes you would like to share!
(For example: the chaos prediction and a PhD comic)