What is the impact of the upcoming AI Act on the AI Innovation Power in Europe?
Dear Members and Partners,

AI Cluster Bulgaria kindly asks you to support our work alongside the European AI Forum!

Important: The AI Act (regulation) will drastically affect your business. Now is the time to give feedback to the European Parliament; later will be too late. So please invest this time - it will save you headaches later.

This Pulse Survey (~10 min) is targeted at AI startups and other companies developing AI across Europe. It is hosted by appliedAI and distributed in collaboration with other national AI initiatives. With this data, we intend to create a summary of how AI innovators are thinking about the upcoming AI Regulation by the EU. The results will be discussed with the Parliament and published. No information about individual startups is collected. Details about the Privacy Policy of appliedAI can be found here: https://www.appliedai.de/privacy

Thank you for your participation!
Email *
In which country is your company located?
*
Are you....
*

In a nutshell, the AI Act will regulate the development, marketing and usage of AI in Europe following a risk-based approach, meaning the higher the risk of an AI System (low-risk, high-risk, prohibited), the stricter the regulatory requirements. High-risk AI covers safety-related applications across industries such as cars, airplanes, medical devices, or toys, but also certain areas of usage, including education, employment, law enforcement or critical infrastructure. High-risk AI needs to undergo a conformity assessment to meet requirements, e.g. on data governance, explainability, accuracy, cybersecurity and human oversight. Developers of high-risk AI face comprehensive obligations, too.

Have you heard about the AI Act before?
*
Is your company working on the development of AI System(s)? 

An AI System that
- uses machine learning and/or logic- and knowledge based approaches
- operates with a certain level of autonomy
- produces outputs such as content (generative AI systems), predictions, recommendations or decisions
Considering the flagship use case of your startup/company, into which risk class of the AI Act is it going to fall?

See Annex II and Annex III of the AI Act as reference to determine if your AI is considered high-risk.
*
If you are not sure, what information is missing on your side? What is unclear or ambiguous? Please describe.
High-risk AI Systems have to comply with several requirements as a precondition for usage. 

Please rate how difficult / easy you consider the implementation of these requirements. 
Very difficult
Somewhat difficult
Indifferent
Somewhat easy
Very easy
Risk management (Risk Mgt. along the ML Lifecycle, mitigate risk until acceptable)
Data and data governance (Use high-quality training, validation and testing data, bias monitoring)
Technical documentation (document the AI system before placing on the market)
Record-keeping (logs-tracking, post-market-monitoring, person involved in the verification)
Transparency and provision of information to users (User shall understand the system, provide instructions for use, metrics, limitations for usage)
Human oversight (feature a human-machine interface, avoid automation bias, “stop button”)
Accuracy, robustness and cybersecurity (quality assurance, accuracy to be in instructions for usage, resilience against unauthorised access)
If your organization is developing AI, you are likely to fall into the role of a so-called 'provider' under the AI Act.

Providers of high-risk AI Systems face comprehensive obligations. Please review the obligations below and indicate how difficult / easy you rate their implementation:
Very difficult
Somewhat difficult
Indifferent
Somewhat easy
Very easy
Have a Quality Management System in place
Create technical documentation about the AI System
Control and keep logs of the AI System, e.g. for reproducibility
Conduct a conformity assessment before putting the AI System on the market
Register the AI System in an EU Database
Implement corrective action should the AI System become non-compliant after placing it on the market
Affix the CE-Mark to the AI System
Collaborate with competent authorities to demonstrate compliance with the AI Act
The AI Act is going to pose obligations on the providers/developers of AI Systems, e.g. regarding risk management, data governance, robustness and transparency in the case of High-Risk AI Systems (see chapter 2 for details).

What impact do you foresee for your company and how are you going to respond to those obligations?
*
Required

The EU conducted an Impact assessment of the AI Act and estimated the cost for compliance for an enterprise of 50 employees for one high-risk AI Product (covering the requirements for the AI System and the obligations for the company, e.g. introducing a Quality Management System (QMS)); see this report, Section 5.       

If the company has no QMS, “the set-up of a QMS and the conformity assessment process for one AI product is estimated to cost between EUR 193,000 and EUR 330,050."

If the company has an existing QMS, it would roughly "pay EUR 159,000-EUR 202,000 for upgrading and maintaining the QMS, and bringing one AI product to market."

If you consider all efforts to comply with the requirements mentioned above, what cost of compliance do you estimate for your company?
Do you assume the AI Act will help you in the global competition (e.g. through more trust in your solution), or will it help competitors outside the EU (solution providers outside Europe have to comply too, when their solutions are offered in Europe)?
*
Do you consider your AI System to be of 'general purpose' according to this definition: 

"'General purpose AI system' means an AI system that is able to perform generally applicable functions such as image or speech recognition, audio or video generation, pattern detection, question answering, translation or others; a general purpose AI system may be used in different contexts and may be integrated in a range of other AI systems."

Background: Providers of General Purpose AI would have to foresee whether the user (e.g. a customer) uses the AI System for a purpose that might be considered high-risk, and if so, provide the necessary information for the user to be compliant with the AI Act (e.g. conduct a conformity assessment).
What kind of support or assistance would you like to see for meeting the requirements and obligations from the AI Act?
*
Required
If you selected "other", please describe what kind of support on implementing the AI Act you wish for:
The AI Act foresees the setup of so-called 'Regulatory Sandboxes', which are intended for 'trial and error', i.e. for implementing the AI Act "without being charged" when things go wrong.

What are your expectations for such a Sandbox in your country? What kind of services, actors or waivers would you like to see there? Please elaborate.
If you could change one thing in the AI Act, what would it be?
Your answer

Would you like to receive the results of this survey?

Please enter your email address, if you'd like to receive the results. 
This form was created inside of Cluster Artificial Intelligence Bulgaria.