Instructions: Rank each condition (least to most) by the extent to which you would expect it to increase or decrease stability, safety, or security.
This project aims to develop a futures modeling framework for advanced AI scenario development. The goal is to cover the full spectrum of AI development paths and identify interesting combinations or, ideally, entirely new AI scenarios.

**Note: The survey questions are not, strictly speaking, "questions" and should instead be viewed as a ranking or scoring exercise. Each question is a dimension (e.g., AI paradigm) with three conditions (e.g., current, new, hybrid). The goal is to rank each condition on its degree of plausibility and impact to create classes for the model (similar to the four quadrants of a risk matrix - https://tinyurl.com/riskmat). The extensive definitions below are for reference only (provided based on feedback); if you understand each concept, such as takeoff, distribution, paradigm, and alignment, you're probably good to go.**
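To make the risk-matrix analogy concrete, here is a minimal sketch of how ranked conditions might later be binned into quadrant classes. The condition names, scores, and the 1-5 cutoff are hypothetical illustrations, not part of the survey itself:

```python
# Minimal sketch: binning conditions into risk-matrix quadrants.
# Scores and condition names are hypothetical placeholders, not survey data.

def quadrant(plausibility: float, impact: float, cutoff: float = 3.0) -> str:
    """Map a (plausibility, impact) pair on an assumed 1-5 scale to one of
    four classes, mirroring the quadrants of a standard risk matrix."""
    hi_p = plausibility >= cutoff
    hi_i = impact >= cutoff
    if hi_p and hi_i:
        return "high plausibility / high impact"
    if hi_p:
        return "high plausibility / low impact"
    if hi_i:
        return "low plausibility / high impact"
    return "low plausibility / low impact"

# Example: mean ratings for a few conditions (hypothetical values).
conditions = {
    "fast takeoff": (2.1, 4.8),
    "slow takeoff": (4.2, 2.5),
    "AI arms race": (4.0, 4.5),
}
for name, (p, i) in conditions.items():
    print(f"{name}: {quadrant(p, i)}")
```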

1) Please rank each condition from the highest potential benefit to stability, safety, or security (greatly increase) to the highest downside risk (greatly decrease). For conditions (e.g., technologies) that you don't believe could cause an increase or a decrease, just choose the best/worst option in your view or leave it as "no effect." Rank from best to worst, assuming the condition has occurred. For conditions that are all bad or all good, rank them from least bad to worst, or from least good to best, respectively.

- This project is about exploratory scenario development only, not prediction, forecasting, or statistics. Your best assessment is enough for this purpose and is highly valued. Further iterations will refine specifics.

This form lists several potential paths for high-level machine intelligence (HLMI). Each question is a dimension (e.g., takeoff speed) with three or four conditions (e.g., fast) listed on the left, which the participant is asked to rank.

Frame of reference: an uncertain year between 2030 and 2100.
Definitions: Advanced AI and High-Level Machine Intelligence (HLMI)
For this project, advanced AI and High-Level Machine Intelligence (HLMI) refer to a spectrum of AI systems that could bring radical change to the world, ranging from unevenly powerful agents capable of independent decision-making (human-level in several domains, less in others, superhuman in some) to human-level AGI and artificial superintelligence (ASI).

I use the two terms interchangeably at times, broadly similar to the transformative AI framing, but referring to the systems themselves rather than their overall impact. The goal is simply to cover the spectrum of possible variations above the threshold of independent decision-making across more than two domains.
Instructions: Rank each condition (choices on the left) on whether it would tend to increase, decrease, or have no effect on international stability and security.
Please rank the degree to which each condition could potentially impact social stability, safety (technical or otherwise), or international security (greatly increase to greatly decrease). For conditions (e.g., technologies) where you can't determine an increase or a decrease, or none exists, just choose the best option in your view or leave it as "no effect."

**Further explanation, if this series of questions remains unclear:**

*A key point is that the goal is to rank these hierarchically; we don't aim to judge badness with perfect precision, but along a spectrum of which conditions have the most potential to increase or decrease risk. For example, among misuse, accidents (alignment and failures), and structural risk, which do you believe will remain unsolved and carry the greatest potential downside? Consider the less obvious concerns (e.g., structural risks leading to nuclear war), or is alignment still worse? Rank each from most to least.*
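As an illustration of this hierarchical ranking, here is a minimal sketch of how responses on the five-point scale might later be aggregated into an ordered list. The numeric weights and sample responses are assumptions for illustration only, not part of the survey design:

```python
# Minimal sketch: turning the five-point scale into an ordered ranking.
# The numeric weights and sample responses below are hypothetical.

SCALE = {
    "Greatly increase": 2,
    "Somewhat increase": 1,
    "Neither increase nor decrease": 0,
    "Somewhat decrease": -1,
    "Greatly decrease": -2,
}

# Hypothetical responses for the Risk Class dimension (one list per condition).
responses = {
    "Misuse": ["Somewhat decrease", "Greatly decrease", "Somewhat decrease"],
    "Accidents/failures": ["Greatly decrease", "Greatly decrease", "Somewhat decrease"],
    "Structural": ["Somewhat decrease", "Neither increase nor decrease", "Greatly decrease"],
}

def mean_score(labels):
    """Average the mapped scores for one condition's responses."""
    return sum(SCALE[label] for label in labels) / len(labels)

# Rank conditions from greatest potential downside (most negative) to least.
ranking = sorted(responses, key=lambda cond: mean_score(responses[cond]))
for cond in ranking:
    print(f"{cond}: {mean_score(responses[cond]):+.2f}")
```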
Capability & generality - Rank each condition from the highest potential benefit (greatly increase) to the lowest (greatly decrease).
Definitions: Capability & generality refers to the relationship between system power and the ability to generalize that power across different domains of learning. Low: as capable as current systems, with minimal generality but extreme capability in certain tasks. Moderate: an increase in power, able to generalize across several domains (e.g., the system can dominate Go and transfer skills to legal research or accounting, but not sales), with limited problem-solving autonomy. High narrow: human- to superhuman-capable across most tasks, but domain-specific only (AI services). High general: systems ranging from human-level capabilities that generalize across most tasks, to full AGI and superintelligence.
Scale (the same scale applies to every ranking question): Greatly increase / Somewhat increase / Neither increase nor decrease / Somewhat decrease / Greatly decrease

Conditions:
- Low (as capable as current systems, low generality)
- Moderate capability (highly capable/general across several domains)
- High narrow (human-level to ASI in narrow domains)
- High general (AGI to superintelligence)
Distribution - Rank each condition from the highest potential benefit (greatly increase) to the lowest (greatly decrease).
Distribution measures how widely AI systems are developed and how accessible they are. Distribution and access are strongly related to the balance of power. Widely distributed: developed and shared openly, with low resource requirements. Moderate: a multipolar outcome in which multiple large companies keep pace with development. Concentrated: one or more powerful systems developed by a single organization with a significant lead.
Conditions:
- Widely distributed (open source)
- Moderate (multipolar, leading companies)
- Concentrated (one lab or system, controlled)
Takeoff Speed - Rank each condition from the highest potential benefit (greatly increase) to the highest downside risk (greatly decrease).
Slow: status quo, incremental development (decades or longer); low or minimal self-improvement. Moderate (uncontrolled): significant change that is unexpected and difficult for society to normalize (months/years); moderate self-improvement. Moderate (controlled): significant change, but controlled and pursued competitively (multipolar; months/years); moderate self-improvement. Fast: the standard hard-takeoff "foom" scenario (hours/weeks), with a powerful recursive self-improvement feedback loop.
Conditions:
- Fast takeoff (hours/weeks)
- Moderate (uncontrolled - months/years)
- Moderate (controlled-competitive - months/years)
- Slow (incremental, decades or longer)
Timeframe - Rank each condition from the highest potential benefit (greatly increase) to the lowest (greatly decrease).
Description: the timeframe for the first instance of a system (or systems) to which a portion of human decision-making and complex tasks can be delegated (and which is at least moderately capable of self-improvement).
Conditions:
- Under 20 years
- 20 to 50 years
- Over 50 years
Accelerants - Rank each condition from the highest potential benefit (greatly increase) to the lowest (greatly decrease).
What innovation or change could cause rapid capability gains? Compute overhang: a large portion of compute suddenly becomes available (usable). New insight: a new paradigm, insight, or architecture causes rapid change. Embodiment: a qualitatively different new data type, embodiment, or simulated embodiment drives rapid capability gains.
Conditions:
- Compute overhang or bottleneck
- New insight (neuroscience, quantum ML)
- Embodiment (simulated, or novel datatype)
Paradigm - Rank each condition from the highest potential benefit (greatly increase) to the lowest (greatly decrease).
As high-level capabilities near, what paradigm will get us there? The current paradigm (e.g., deep learning); a new learning paradigm or architecture, like quantum computing; or deep learning plus an innovation or hybrid learning model?
Conditions:
- Current AI paradigm
- New paradigm (e.g., System 2 reasoning, Quantum)
- Current plus innovation (e.g., hybrid, evolutionary, common sense)
Race Dynamics - Rank each condition from the highest potential benefit (greatly increase) to the lowest (greatly decrease).
Definition: the economic and geopolitical dynamics of competition over AI. Cooperation suggests a race to the top and cooperation on development. Isolation is a retreat from globalization (e.g., due to COVID, Ukraine, etc.) and isolated AI development. Monopolization is corporate control over the market and research. AI arms race: an increase in competitiveness, government control, and militarization (a race to the bottom).
Conditions:
- Cooperation (race to the top, broad cooperation)
- Isolation (sharp decline in trade and cooperation)
- Monopolization (tech giants consolidate control)
- AI "Arms-Race"
Risk Class - Rank each condition from the highest potential benefit (greatly increase) to the lowest (greatly decrease).
With the development of more advanced systems, will the primary security risks come from misuse (such as cyber-attacks), from accidents and failure modes (misaligned goals), or from structural risks (e.g., creeping normalization and the erosion of decision autonomy)? Think in terms of both prevalence (unsolved and undefended) and danger. E.g., if alignment research pays off and we're moderately protected from misaligned agents, and we're well-defended against cyber-attacks, is decision erosion a larger concern? Or the other way around?
Conditions:
- Misuse (e.g., cyber, disinformation)
- Accidents or failures (goal alignment, or systemic)
- Structural (value or decision erosion)
AI Safety-Capability Relationship - Rank each condition from the highest potential benefit (greatly increase) to the lowest (greatly decrease).
To control high-level systems, which of the options below is most plausible? Will our current techniques scale to HLMI? Will new safety techniques need to be developed from first principles? Or will new custom methods be required for each instantiation?
Conditions:
- Current techniques scale to HLMI
- New techniques required for advanced systems
- Custom techniques for each new instantiation
Safety risk - Rank each condition from the highest potential benefit (greatly increase) to the lowest (greatly decrease).
As advanced capabilities near, what will remain the most difficult unsolved problem? Will goal alignment, deception/power-seeking, or inner alignment (mesa optimization) remain the primary concern?
Conditions:
- Goal alignment failure
- Power-seeking and deception
- Inner alignment (mesa optimization)
Developer - Rank each condition from the highest potential benefit (greatly increase) to the lowest (greatly decrease).
From your perspective, what entity will plausibly develop the first high-level machine intelligence: a coalition of countries (like the Western vs. Eastern blocs in the Cold War), individual countries, powerful corporations (e.g., Google, Tencent), or individual developers?
Conditions:
- Coalition
- Country
- Corporation/Academia
- Individual
Developer location - Rank each condition from the highest potential benefit (greatly increase) to the lowest (greatly decrease).
Which region do you believe will most likely develop the first high-level systems?
Conditions:
- USA-Western Europe
- Asia-Pacific
- Africa or Latin America/Caribbean
Governance - Rank each condition from the highest potential benefit (greatly increase) to the lowest (greatly decrease).
Definitions/description: Looking forward, from your perspective, when radically advanced capabilities are deployed, do you believe that international governance capacity will be weak, capable, or strong? Weak: preparation stays the same as today (reactive), or we see a decrease in cooperation and a weakening of norms and institutions, possibly due to race dynamics. Capable: a strengthening of international norms and a consolidation of institutions focused on AI. Strong: international safety regimes established (e.g., an IAEA equivalent), with multilateral agreements and verification measures (e.g., IAEA-style nuclear inspections) enacted for states unwilling to sign on to AI safety agreements.
Conditions:
- Weak (norms and institutions stay the same or worsen)
- Capable (norms strengthen, with modest institutional growth)
- Strong (Int. agreements, safety regime, control measures)
Corporate governance - Rank each condition from the highest potential benefit (greatly increase) to the lowest (greatly decrease).
From your perspective, when radically transformative capabilities are developed, what will the safety/governance posture look like for major AI corporations? Decrease: a decline in cooperation due to competition, and weakened safety standards. Moderate improvement: the same as today, with modest improvements in cooperation on safety and standards. Strengthen: full commitments to standards for safe development and use.
Conditions:
- Decrease (standards and cooperation)
- Moderate improvement (increase in coordination & best practices)
- Strengthen (Safety standards established)
How would you rank the non-standard machine intelligence scenarios below?
Description: This section ranks speculative ideas of emergent intelligence: 1) the idea that the internet is an emergent, complex, intelligent system that we're unable to detect; 2) the possibility of an agentized cognitive IoT (IoT plus AI) that could act independently through sensors and systems; or 3) the idea that narrow AI systems will converge into one general intelligence (e.g., the strands-of-DNA idea).
Conditions:
- The internet as emergent intelligence
- Cognitive Internet-of-Things (globally distributed agents)
- Narrow AI systems convergence
How familiar are you with AI safety?
How familiar are you with AI governance?
Thank you very much for your participation! Please leave any comments below.