EA NYUAD - AI Safety Fundamentals, AGI 201, and AIS Labs Interest Form
DEADLINE: June 17th, 2023

AI SAFETY FUNDAMENTALS:

None of the questions in this form is evaluative; they simply help us place you in well-suited cohorts. The curriculum is based on EA Cambridge's AGI Safety Fundamentals Program.

Program Goals
• Break the field down into its most important concepts, and help participants learn about them in a clear and high-fidelity way.
• Support participants to better understand the field, gain a foothold in the space, and be better placed to do high-quality research.
• Provide collaboration and networking opportunities, as well as knowledge of relevant career opportunities, for participants.

Program Structure and Opportunities
• 7-week program running June-August 2023
• 2-3 hours of reading per week
• 1-hour facilitated discussion per week in cohorts of 4-5
• Timings for each cohort will be selected according to the availability of its members
• Facilitators are EA NYUAD members -- not necessarily researchers themselves, but they have a fair amount of AI alignment context
• Networking opportunities during the program to meet and socialise with other participants
• The program may include speaker events with experienced AI alignment and governance researchers and professionals

Target Audience
• Students, academics, and professionals who are motivated by AI safety arguments, or other arguments about long-term risks from AI, and want to learn more about current technical and governance research questions
• People who may be interested in pursuing a career in ensuring future AI systems are beneficial for humanity
• We expect the technical track to be most useful to people with technical backgrounds (e.g. maths or CS), although the curriculum is intended to be accessible to those who aren't familiar with machine learning, and participants will be grouped with others from similar backgrounds.

We also recommend participating in our Introductory Effective Altruism Programs if you have not yet engaged with the EA community: https://www.effectivealtruism.org/virtual-programs/

Curricula and Contact
The curriculum for the AI technical alignment track can be found here: https://aisafetyfundamentals.com/alignment-insession-readings


AGI 201:

This curriculum aims to give participants more advanced knowledge of alignment questions so they can engage with the frontier of current research discussions. Basic ML knowledge is preferred, but not required.

Prerequisites:

• AI Safety Fundamentals


Curricula and Contact
The general curriculum we'll be covering is adapted from:

https://aisafetyfundamentals.com/alignment-201-curriculum


AI Safety Labs:

We are open to providing mentorship, community, and other forms of support for projects related to AI alignment that you may want to work on over the summer. Some ideas: upskilling (e.g. independently working through the Machine Learning Alignment Bootcamp curriculum), trying your hand at mechanistic interpretability projects, or studying a different theoretical branch of the field. If you'd like to pursue ideas like these or your own, describe them below and we'll get in touch about the forms of support we can provide!


 
Email *
Name *
Which initiative are you applying for? *
netID (if applicable)
Major(s)/Field(s) *
Class
University *
Experience with Machine Learning (this is not a prerequisite for this seminar program, but helps us create well-matched cohorts)

What (if any) experience do you have working with machine learning (ML)? Tick all that apply. *
AI Safety background knowledge (this is not a prerequisite for this seminar program, but helps us create well-matched cohorts) *
What are some topics you're excited to learn about within the field of AI alignment and/or governance, and why? (feel free to be brief) *
Are you happy to commit 3 hours per week (2 hours of reading, 1 hour of discussion) throughout this seminar program? *
Is there anything else you think we should know or be thinking about?
