Instructions: Rank the Plausibility of Each Scenario Parameter
This project aims to develop a futures modeling framework for advanced AI scenario development. The goal is to cover the full spectrum of AI development paths and to identify interesting combinations or, ideally, entirely new AI scenarios. The project highlights risks and paths that receive less consideration (e.g., structural risks, decision/value erosion, global failure cascades) and structures them into a framework of potential futures.

**Note: The survey questions are not, strictly speaking, "questions" and should instead be viewed as a ranking or scoring exercise. Each question is a dimension (e.g., AI paradigm) with three conditions (e.g., current, new, hybrid). The goal is to rank each condition on its degree of plausibility and impact to create classes for the model, similar to the four quadrants of a risk matrix: https://tinyurl.com/riskmat. The extensive definitions below are for reference only (provided based on feedback); if you understand each concept, such as takeoff, distribution, paradigm, and alignment, you're probably good to go.**
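For reference, here is a minimal Python sketch of that plausibility-impact quadrant logic; the 1-5 scoring, the threshold, and the class names are illustrative assumptions rather than survey specifics:

```python
# A minimal sketch of the risk-matrix-style classing described above.
# Scores are assumed to be on a 1-5 scale; threshold and class names are illustrative.

def classify_condition(plausibility: int, impact: int, threshold: int = 3) -> str:
    """Map a (plausibility, impact) pair to one of four quadrant classes."""
    high_p = plausibility >= threshold
    high_i = impact >= threshold
    if high_p and high_i:
        return "core scenario"      # plausible and high impact
    if high_p:
        return "background trend"   # plausible but lower impact
    if high_i:
        return "wild card"          # unlikely but high impact
    return "deprioritized"          # unlikely and low impact

print(classify_condition(plausibility=4, impact=5))  # -> core scenario
```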

- This project is not about prediction, forecasting, or statistics, but about exploratory scenario development only. Your best assessment is enough for this purpose and is highly valued. Further iterations will refine specifics.

This form lists several potential paths for high-level machine intelligence (HLMI). Each question is a dimension (e.g., takeoff speed) with three to four conditions (e.g., fast) listed on the left, and asks the participant to:
 
1) Please rank the likelihood (plausibility) of each condition occurring, from very likely to very unlikely. Scale: Very unlikely = 0-10%, Unlikely = 11-40%, Even chance = 41-60%, Likely = 61-90%, Very likely = 91-100%. For details on the scale, see: http://tiny.cc/Risk_Scale
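For illustration, the same scale as a Python lookup table; converting a label to its band midpoint is an assumption for downstream analysis, not part of the survey itself:

```python
# Verbal likelihood labels mapped to the probability bands above.
# Taking the band midpoint is one simple way to make a response numeric.

SCALE = {
    "Very unlikely": (0.00, 0.10),
    "Unlikely":      (0.11, 0.40),
    "Even chance":   (0.41, 0.60),
    "Likely":        (0.61, 0.90),
    "Very likely":   (0.91, 1.00),
}

def midpoint(label: str) -> float:
    lo, hi = SCALE[label]
    return (lo + hi) / 2

print(midpoint("Likely"))  # 0.755
```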

The survey responses will be used to create broad categories (e.g., these 500 scenarios fall into bucket x), which will then be combined and clustered for scenario development. I'm looking forward to sharing the results.
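A rough sketch of that bucketing step, assuming each response is first converted into a numeric vector; the sample values, the feature layout, and the choice of k-means are all illustrative:

```python
# Sketch: cluster scenario vectors built from per-dimension plausibility midpoints.
# Requires numpy and scikit-learn.
import numpy as np
from sklearn.cluster import KMeans

# Each row is one candidate scenario: one plausibility midpoint per dimension
# (e.g., capability, distribution, takeoff speed, paradigm).
scenarios = np.array([
    [0.755, 0.505, 0.255, 0.955],
    [0.055, 0.955, 0.755, 0.255],
    [0.755, 0.455, 0.255, 0.905],  # close to the first scenario
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(scenarios)
print(kmeans.labels_)  # e.g. [0 1 0]: rows 1 and 3 land in the same bucket
```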

For further details on purpose and methodology, see http://tiny.cc/q2mquz; the full folder with documents/definitions is here: https://tinyurl.com/AIfutures
Frame of reference: Uncertain year between 2030 and 2070
I picked 2030-2070 just to ground the questions in a solid date range, which many participants prefer. However, this project is ultimately timeframe agnostic. The original version had no date; the goal is to collect how you'd rank the likelihood of each of the conditions below irrespective of time. So please don't treat 2070 as a hard cutoff date for any of the rankings.

The assumption is that AI systems are capable of reaching radically transformative abilities this century, including the ability to recursively self-improve. It is also assumed that no additional catastrophic events that could disrupt development will occur in the specified timeframe.

Definitions: Advanced AI and High-Level Machine Intelligence (HLMI)
For this project, advanced AI and High-Level Machine Intelligence (HLMI) refer to a spectrum of AI systems that could bring radical change to the world, ranging from unevenly powerful agents capable of independent decision making (human-level in several domains, weaker in others, superhuman in some) to human-level AGI and artificial superintelligence (ASI).

I use both terms interchangeably at times, broadly similar to the transformative AI framing, but referring to the systems themselves rather than their overall impact. The goal is simply to cover the spectrum of possible variations above the threshold of independent decision making across more than two domains.
Please rank the conditions below on their likelihood of occurrence: Very unlikely = 0-10%, Unlikely = 11-40%, Even chance = 41-60%, Likely = 61-90%, Very likely = 91-100%.
Think in terms of plausibility and consistency given your understanding of the issue.

This survey is not meant for forecasting, but rather for the development of scenario classes. Each condition can be considered a broad category to rank, as many of them overlap and are not mutually exclusive. The scale is derived from MITRE risk guidance: http://tiny.cc/Risk_Scale
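To make the dimension-and-condition structure concrete before the questions begin, here is a minimal sketch of the underlying morphological box; the dimension names, scores, and cutoff are hypothetical:

```python
# Sketch: enumerate dimension-condition combinations (a morphological box) and keep
# only the combinations whose every condition clears a plausibility cutoff.
from itertools import product

dimensions = {
    "capability":   {"low": 0.2, "moderate": 0.5, "high narrow": 0.5, "high general": 0.7},
    "distribution": {"wide": 0.3, "moderate": 0.7, "concentrated": 0.5},
    "takeoff":      {"slow": 0.5, "moderate": 0.7, "fast": 0.2},
}

MIN_PLAUSIBILITY = 0.4  # illustrative cutoff
survivors = [
    combo
    for combo in product(*(conds.items() for conds in dimensions.values()))
    if all(score >= MIN_PLAUSIBILITY for _, score in combo)
]
print(f"{len(survivors)} of {4 * 3 * 3} combinations survive the cutoff")  # 12 of 36
```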
Capability & generality - How capable and general do you expect AI systems to become (upper bounds, this century)?
Definitions: capability & generality refers to the relationship between a system's power and its ability to generalize that power across different domains of learning. 1) Low: as capable as current systems; minimal generality but extremely capable in certain tasks. 2) Moderate: an increase in power, able to generalize across several domains (e.g., the system can dominate Go and transfer skills to legal research or accounting, but not sales), with limited problem-solving autonomy. 3) High narrow: human to superhuman capability across most tasks, but domain-specific only (AI services). 4) High general: systems with human-level capabilities that generalize across most tasks, up to full AGI and superintelligence.
Very likely
Likely
Even chance
Unlikely
Very unlikely
Low (as capable as current systems, low generality)
Moderate (power can generalize across several domains)
High narrow (human level to ASI, but task specific)
High general (AGI to ASI)
Distribution - How widely distributed will the development of advanced AI be (and how accessible)?
Distribution measures how widely AI systems are developed and how accessible they are. Distribution and access are strongly related to the balance of power. 1) Widely distributed: developed and shared openly, with low resource requirements. 2) Moderate: a multipolar outcome; multiple large companies keep pace with development. 3) Concentrated: one powerful system (or set of systems) developed by a single organization with a significant lead.
Very likely
Likely
Even chance
Unlikely
Very unlikely
Widely distributed (open source, developed and shared)
Moderate (multipolar, leading companies)
Concentrated (one lab or system, controlled)
Takeoff Speed & Rate of Change
1) Slow: status quo, incremental development (multiple decades); low or minimal self-improvement. 2) Moderate (uncontrolled): significant change that is unexpected and difficult for society to normalize (years/decades); moderate self-improvement. 3) Moderate (controlled): significant change, but controlled and actively pursued for competition (multipolar) (years/decades); moderate self-improvement. 4) Fast: the standard hard-takeoff "foom" scenario (hours/days), with a powerful recursive self-improvement feedback loop.
Very likely
Likely
Even chance
Unlikely
Very unlikely
Slow (incremental, decades or longer)
Moderate (uncontrolled/unexpected - years/decades)
Moderate (controlled/competitive - years/decades)
Fast takeoff (hours/days)
Paradigm - What AI paradigm could lead to high-level capabilities?
As high-level capabilities near, what paradigm will get us there? 1) The current paradigm (e.g., deep learning) scales to HLMI. 2) A new learning paradigm or architecture (e.g., quantum computing) is needed for HLMI. 3) Deep learning plus an innovation or a hybrid learning model is needed for HLMI.
Very likely
Likely
Even chance
Unlikely
Very unlikely
Current AI paradigm
New paradigm (e.g., System 2 reasoning, Quantum)
Current plus innovation (e.g., hybrid, evolutionary, common sense)
Accelerants - What technological developments could cause unexpected capability gains?
What innovation or change could cause rapid capability gains? 1) Compute overhang: a large portion of compute suddenly becomes available (usable). 2) New insight: a new paradigm, insight, or architecture causes rapid change. 3) New data type (qualitatively different), embodiment, or simulated embodiment drives rapid capability gains.
Very likely
Likely
Even chance
Unlikely
Very unlikely
Compute overhang or bottleneck
New insight (neuroscience, quantum ML)
Embodiment (simulated, or novel datatype)
Timeframe - From your perspective, what is the plausible timeframe for the development of high-level machine intelligence?
Description: 1) Less than 20 years: High-level machine intelligence or a close approximation is developed before 2040. 2) 20 to 40 years: High-level machine intelligence or a close approximation is developed sometime between 2040 and 2070. 3) Greater than 40 years: High-level machine intelligence or a close approximation is developed after 2070.
Very likely
Likely
Even chance
Unlikely
Very unlikely
Under 20 years
20-40 years
Over 40 years
Race Dynamics - How would you expect competition over advanced AI to play out? (Race to the top, no race at all, race to the bottom, or worse)
1) Cooperation: AI technologies are recognized as a global public good, and cooperation increases between companies and national governments; a race-to-the-top scenario. 2) Isolation: Global governments take a protectionist turn and cooperation decreases; AI is developed in isolation. 3) Monopolization: Technology companies increase acquisitions of smaller companies and talent to control AI resources; corporations increasingly control the direction of research. 4) AI Arms Race: AI is named a strategic national asset and countries race for global dominance; AI is militarized and conflict becomes more likely.
Very likely
Likely
Even chance
Unlikely
Very unlikely
Cooperation (race to the top, broad cooperation)
Isolation (sharp decline in trade and cooperation)
Monopolization (tech giants consolidate control)
AI "Arms-Race" (government-backed or led, militarized)
Most Dangerous Risk Class - Which risk category could be most prevalent (unsolved) and dangerous from advanced AI systems?
1) Misuse: Alignment is under control, but cyber-attacks and disinformation campaigns increase in frequency and disruptive potential. 2) Accidents or failures: AI systems are given more control over decision processes, making failure modes more consequential; goal alignment remains the key danger. 3) Structural: Increased decision autonomy of AI systems brings subtle changes to the functioning of society, and shifts in nations' offense/defense balance make military escalation more likely.
Very likely
Likely
Even chance
Unlikely
Very unlikely
Misuse (e.g., cyber, disinformation)
Accidents or failures (goal alignment, or systemic)
Structural (value or decision erosion)
AI Safety - What safety techniques would most likely be successful for advanced AI systems?
To control high-level systems, which of the options below is most plausible? Will our current techniques scale to HLMI? Will new safety techniques need to be developed from first principles? Or will custom methods be required for each new instantiation?
Very likely
Likely
Even chance
Unlikely
Very unlikely
Current techniques scale to HLMI
New techniques required for advanced systems
Custom techniques for each new instantiation
Safety risk - Which alignment risks could be the most prominent (unsolved) and dangerous with advanced AI?
As advanced capabilities near, what will remain the most difficult unsolved problem? Will 1) goal alignment, 2) deception/power-seeking, or 3) inner alignment (mesa-optimization) remain the primary concern? Goal alignment has seen significant success, but inner-misaligned agent models (mesa-optimizers) remain a problem and are extremely difficult to identify.
Very likely
Likely
Even chance
Unlikely
Very unlikely
Goal alignment failure
Power-seeking and deception
Inner alignment (mesa optimization)
Developer - What entity would most likely be the first to develop advanced AI?
From your perspective, what entity will plausibly develop the first high-level machine intelligence: 1) a coalition of countries, like the Western and Eastern blocs in the Cold War; 2) an individual country; 3) a powerful corporation or academic lab (e.g., Google, Tencent); or 4) an individual developer?
Very likely
Likely
Even chance
Unlikely
Very unlikely
Coalition of states (e.g., EU, NATO)
Country
Corporation/Academia
Individual developer
Governance - Looking forward, when the first transformative system(s) are developed, how strong do you expect our governance structures to be for managing change from AI?
Definitions/description: From your perspective, when radically advanced capabilities are deployed, do you believe that international governance capacity will be weak, capable, or strong? 1) Weak: preparation stays the same as today (reactive), or we see a decrease in cooperation and a weakening of norms and institutions, possibly due to race dynamics. 2) Capable: a strengthening of international norms and a consolidation of institutions focused on AI. 3) Strong: international safety regimes established (e.g., an IAEA-like body), with multilateral agreements and verification measures (e.g., IAEA-style nuclear inspections) enacted for states unwilling to sign on to AI safety agreements.
Very likely
Likely
Even chance
Unlikely
Very unlikely
Weak (norms and institutions stay the same or worsen)
Capable (norms strengthen, with modest institutional growth)
Strong (Int. agreements, safety regime, control measures)
Corporate governance - When advanced AI is developed, is it more plausible that safety standards will have decreased, remained the same or improved modestly, or strengthened?
From your perspective, when radically transformative capabilities are developed, what will the safety/governance posture look like for major AI corporations? 1) Decrease: a decline in cooperation due to competition; weakened safety standards. 2) Moderate improvement: same as today, with modest improvements in cooperation on safety and standards. 3) Strengthen: full commitments to standards for safe development and use.
Very likely
Likely
Even chance
Unlikely
Very unlikely
Decrease (standards and cooperation)
Moderate improvement (increase in coordination & best practices)
Strengthen (Safety standards established)
Developer location - What world region do you believe would first develop advanced AI?
Which region do you believe will most likely develop the first high-level systems?
Very likely
Likely
Even chance
Unlikely
Very unlikely
USA/Western Europe
Asia-Pacific
Africa or Latin America/Caribbean
How would you rank the below non-standard machine intelligence scenarios?
Description: This section ranks speculative ideas of emergent intelligence. 1) The idea that the internet is an emergent, complex intelligent system that we're unable to detect; 2) the possibility of an agentized cognitive IoT (IoT plus AI) that could act independently on sensors and systems; or 3) the idea that narrow AI systems will converge into one general intelligence (the "strands of DNA" analogy).
Very likely
Likely
Even chance
Unlikely
Very unlikely
The internet as emergent intelligence
Cognitive Internet-of-Things (globally distributed agents)
Narrow AI systems convergence
How familiar are you with AI safety?
How familiar are you with AI governance?
Thank you very much for your participation! Please leave any comments below.