OpenFF Optimization Benchmark - Season 1: Retrospective Survey
To build on our experience and lessons learned from the Season 1 campaign, please answer as many of the following questions as you can.

We are looking for your candid feedback; it will inform our decision-making for the next season.

Coordination from organizers
On a scale from 1 to 10, with 1 being poor and 10 being excellent, how would you rate our coordination of the Season 1 campaign?
What is one thing we did poorly, or that you believe we could improve?
What is one thing we did well that you appreciated?
Protocol execution
On a scale from 1 to 10, with 1 being poor and 10 being excellent, how would you rate your satisfaction with the Season 1 protocol?
What was the most difficult or annoying part of executing the Season 1 protocol?
What aspect of the protocol did you appreciate? Is there something we should be sure to do in future protocols?
Software components
If you are reusing components from the benchmarking workflow in your own work, how could we design the next iteration of the components to be more reusable?
If some of your input molecules were discarded during validation, how many of those structures do you feel were reasonably discarded?
(Optional) Explain your answer above.
If some of your input molecules were discarded during validation, was the discarded fraction of the dataset acceptable for a tool you would use in production work?
(Optional) Explain your answer above.
Overall comments
In hindsight, if you could change one thing about the Season 1 benchmark, what would it be?
What about the Season 1 benchmark would you absolutely not change, now or in future seasons?