AGI Safety Fundamentals Interest Form

This form is the permanent interest form for the AGI Safety Fundamentals reading group run by AI Alignment McGill (AIAM). We expect it to take about 5 minutes to complete.

This programme focuses on the alignment and control problems surrounding future artificial general intelligence. Topics include AI interpretability and explainability, reward learning, misalignment in large language models, careers in AI safety, and more.

If you'd like to get a sense of the program before applying, we recommend starting with the introductory materials in our resources: https://agisf.com/resources.

The program is based on a curriculum by Richard Ngo, a researcher at OpenAI.

The reading group will run for seven weeks, dates TBD. The program involves a weekly in-person one-hour discussion in a small group of 3–6 participants with one discussion facilitator, plus 1–2 hours of readings and exercises before each meeting. The list of topics at https://course.aisafetyfundamentals.com/alignment will serve as our syllabus.

Discussion group meeting times will be scheduled at a time when you are available.

If you have any questions, feel free to email us at alignmentmcgill@gmail.com.

Join the AI Alignment McGill discord server: https://discord.gg/8F5NKntrDt

Full Name *
Preferred email address *
We will use this to send announcements related to the reading group. If you use a different email address for Google Calendar invites, please list it after your primary address.
Which features of the course do you expect to find most valuable?
Please select 1-4 of your top priorities.
What AGI safety topic(s) are you most excited to learn more about, and why? [2-4 sentences]
It's okay if you don't know much about AGI safety yet and can't go into much detail. However, we'd like to know what you're most excited to learn about so we can make sure our course and programming are helpful to you. Please expand on:
1. What topic or concept you have found most interesting in AI safety so far.
2. Why you think that topic is important.
3. What your current confusions about it are.
Would you want to receive more information by email?
Anything else you'd like us to know?