What is your P(doom | AGI, no alignment)? Probability that all human potential is lost given a non-aligned AGI.
By what year do you expect alignment to be solved? Median estimate.
What is your probability estimate that we will solve alignment before humans become unable to control AI?
By what year do you expect humans to become unable to control AI? Median estimate.
Which of these metrics for AI safety progress do you like the most? (Feel free to add your own ideas.) *
* Required
Which of these ways to guide AI safety research do you like the most? (Feel free to add your own ideas.)
What could be a parametric definition of an alignment solution? E.g., "follows Asimov's Three Laws of Robotics." You are very welcome to write something uncertain and creative.
Any other comments?
Your name
Your email address
This form was created inside of Apart Research.