WANT@NeurIPS 2023 Poll
Please fill in the poll below to help us understand how the deep learning community trains neural networks, what its computational needs are, and which efficiency-improving techniques are used or missing. The poll statistics will be shared on December 16 at WANT@NeurIPS 2023, a workshop on advancing neural network training: https://want-ai-hpc.github.io. Thanks a lot for your time and contribution!
How long have you been doing deep learning? 
What is your main application domain? (you can choose multiple answers)
What is the size (number of parameters) of the biggest model you’ve trained/fine-tuned recently? *
What are typical architectures you use? (you can choose multiple answers) *
Do you usually train your models on several GPU/TPU devices? *
If you chose "Other" in previous question, please, specify what type of devices you use for training 
Which deep learning framework do you usually use for training? *
What libraries do you use to optimize training? (you can choose multiple answers) *
What types of parallelism do you use during training? (you can choose multiple answers) *
Do you use mixed-precision training?
Do you use activation checkpointing (re-materialization) during training? *
Do you use offloading during training? *
Do you use other techniques to improve training efficiency? If yes, please specify
What is the typical size of an input data sample you use for training?
Do you use any libraries to optimize data loading during training? If yes, please specify
In your opinion, what technology or theoretical concept could boost neural network training to a new level of efficiency?