Responsible AI Investment Framework (RAIIF) - Simple Assessment Submission Form
The Responsible AI Investment Framework is a modular, practical tool for tech investors and venture capitalists at all stages to assess how well target investment prospects align with best practices in Responsible AI governance, development, and deployment. The framework takes a two-tiered approach: (1) defining the company's AI risk category, and (2) assessing the preparedness, stakeholder alignment, and technical know-how of the leadership team. Technical due diligence of the underlying technology should be undertaken separately.

If you're curious about how we work with Responsible AI at J12, or have any other questions/concerns, please contact us at alexander@j12ventures.com.
Email *
Name of assessor  *
Name of assessed company *
Risk classification
Does the AI system or application operate in sectors to be categorised as "Unacceptable Risk" by the EU AI Act:
*
Required
Reasoning *
Please explain why the AI system is/isn't classified in the "Unacceptable Risk" category
Does the AI system or application operate in sectors to be categorised as "High Risk" by the EU AI Act
*
Required
Reasoning *
Please explain why the AI system is/isn't classified in the "High Risk" category
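The two screening questions above can be read as a simple decision procedure. Below is a minimal sketch of that procedure; the EU AI Act category names are real, but the function name, its boolean inputs, and the "Limited/Minimal Risk" fallback label are illustrative assumptions, not part of the official framework.

```python
def classify_risk(unacceptable_risk: bool, high_risk: bool) -> str:
    """Map the two screening answers to an EU AI Act risk category.

    unacceptable_risk: does the system operate in an "Unacceptable Risk" sector?
    high_risk: does the system operate in a "High Risk" sector?
    """
    if unacceptable_risk:
        # Prohibited practices under the EU AI Act; screened out first.
        return "Unacceptable Risk"
    if high_risk:
        # Permitted, but subject to heavier compliance obligations.
        return "High Risk"
    # Everything else falls into the lighter-touch categories.
    return "Limited/Minimal Risk"


print(classify_risk(unacceptable_risk=False, high_risk=True))  # High Risk
```

Checking "Unacceptable Risk" before "High Risk" mirrors the order of the form: the most severe category short-circuits the assessment.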
RAIIF Score
Select the choice that best describes the degree to which the company's AI systems align with each of the following questions

Yes/Positive response (indicates low risk or adherence to best practices) = 2 points

Partial/Neutral response (indicates areas where the company may be in a grey zone, or has some measures in place but not comprehensive ones) = 1 point

No/Negative response (highlights high-risk areas or potential red flags) = 0 points
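The scoring rule above is a straightforward sum. Here is a minimal sketch of that arithmetic; the answer labels ("yes"/"partial"/"no") and the helper name are assumptions for illustration, not part of the published framework.

```python
# Points per answer, as defined in the rubric above.
POINTS = {"yes": 2, "partial": 1, "no": 0}


def raiif_score(answers):
    """Sum the points for a list of Yes/Partial/No answers."""
    return sum(POINTS[a.lower()] for a in answers)


# Example: 18 questions answered with a mix of responses.
answers = ["yes"] * 10 + ["partial"] * 5 + ["no"] * 3
print(raiif_score(answers))  # 25 out of a possible 36
```

With 18 questions, scores range from 0 to 36; a higher total indicates closer alignment with Responsible AI best practices.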
RAIIF Questions
2 points (Yes/Positive alignment)
1 point (Partial/Neutral)
0 points (No/Negative alignment)
Is there a feedback loop in place for continuous improvement in AI ethics and preparedness?
Has the company avoided legal challenges or public backlash over its AI implementation?
Has the company conducted third-party audits of its AI systems?
Are there mechanisms in place to continuously monitor and evaluate the AI's performance and impact?
Is the company transparent about its AI's decision-making processes?
Are stakeholder concerns integrated into the company's product development lifecycle?
Are there dedicated teams or personnel responsible for AI risk management?
Are users informed about how the AI works and the kind of data it uses?
Is the application free of significant concerns raised by stakeholders?
Is the company transparent about potential biases in its AI systems?
Are there dedicated teams for AI research and development?
Does the company ensure its AI systems are resilient against adversarial attacks?
Are there collaborations with tech giants or industry leaders for AI development?
Does the company's technical team have multidisciplinary expertise, including ethics?
Does the company benchmark its AI systems against competitors?
Does the company's technical strategy consider long-term sustainability and scalability of its AI systems?
Does the company have a routine to keep up with the latest advancements in AI to ensure their solutions remain state-of-the-art?
Does the company have processes to ensure its AI models remain accurate and relevant over time?
Questions and comments
A copy of your responses will be emailed to the address you provided.
This form was created inside of J12 Ventures.