When the President of the European Commission gave her first speech to the European Parliament in December 2019, she officially recognized ‘Artificial Intelligence’ as an area of strategic importance for the European Union. Nine months later, speaking to the European Parliament in her first State of the Union speech, she switched from spelling out “Artificial Intelligence” to talking about ‘AI’ – by now a very familiar term in the EU bubble. This is not too surprising given that AI is being deployed across most (if not all) sectors of the economy, from diagnosing diseases to minimizing the environmental impact of agriculture, writes Angeliki Dedopoulou, EU public affairs manager at Huawei Technologies.
It is true that much work has been done by the European Commission since President Ursula von der Leyen and her team took office. What was promised in December 2019 as a “legislative proposal” on AI arrived as the AI White Paper in February. Admittedly, this is not a legislative proposal, but it is a document that has kicked off the debate about AI and ethics, the use of Big Data, and how this technology can be used to create wealth for society and businesses.
The Commission’s white paper emphasizes the importance of establishing a unified approach to AI across the 27 member states of the EU, where different countries have begun to take their own approaches to regulation, potentially building up barriers within the EU single market. It also – importantly for Huawei – sets out plans to take a risk-based approach to regulating AI.
At Huawei, we studied the White Paper with interest and, together with more than 1,250 other stakeholders, contributed to the Commission’s public consultation, which closed on June 14, expressing our opinions and ideas as experts working in the field.
Finding the right balance
The main point we emphasized to the Commission is the need to find the right balance between enabling innovation and ensuring adequate protection of citizens.
In particular, we focused on the need for high-risk applications to be regulated within a clear legal framework, and proposed an idea for how AI should be defined. In this regard, we believe the definition of AI should take account of its application, with risk assessments focusing on the intended use and the type of impact caused by the AI functionality. A detailed audit checklist and procedure for companies to carry out their own assessments would reduce the cost of the initial risk assessment – which must be in line with industry-specific requirements.
We asked the Commission to consider a group made up of consumer organizations, academia, member states and businesses to assess whether an AI system qualifies as high-risk – a permanent Technical Committee on high-Risk AI systems (TCRAI). We believe this committee could assess and evaluate AI systems against both legal and technical high-risk criteria. Such a check, combined with a voluntary labeling system, would provide a governance model that:
• reviews the entire supply chain;
• establishes consistent criteria aligned with the intended transparency goals for consumers and businesses;
• fosters responsible AI development and deployment; and
• creates an ecosystem of trust.