
AI in the EU: Balancing interests and controls

When the President of the European Commission gave her first speech to the European Parliament in December 2019, she officially recognized ‘Artificial Intelligence’ as an area of strategic importance for the European Union. Nine months later, addressing the European Parliament in her first ‘State of the Union’ speech, she had switched from spelling out ‘Artificial Intelligence’ to simply saying ‘AI’ – a term now firmly established in the EU bubble. This is not too surprising, given that AI is being deployed across most (if not all) sectors of the economy, from diagnosing diseases to minimizing the environmental impact of farming, writes Angeliki Dedopoulou, EU Public Affairs Manager at Huawei Technologies.

It is true that much work has been done by the European Commission since President Ursula von der Leyen and her team took office. A ‘legislative proposal’ on AI was promised in December 2019; what materialized was the AI White Paper in February. Admittedly, this is not a legislative proposal, but it is a document that has kicked off the debate on AI and ethics, the use of Big Data, and how this technology can be used to create wealth for society and businesses.

The Commission’s White Paper emphasizes the importance of establishing a unified approach to AI across the EU’s 27 member states, where individual countries have begun to take their own approaches to regulation, potentially building up barriers within the EU single market. It also, importantly for Huawei, sets out plans to take a risk-based approach to regulating AI.

At Huawei, we studied the White Paper with interest and, together with more than 1,250 other stakeholders, contributed to the Commission’s public consultation, which closed on June 14, expressing our opinions and ideas as experts working in the field.

Finding the right balance

The main point we emphasized to the Commission is the need to find the right balance between enabling innovation and ensuring adequate protection of citizens.

In particular, we focused on the need for high-risk applications to be regulated within a clear legal framework, and proposed an idea for how AI should be defined. In this regard, we believe the definition of AI should center on its application, with risk assessments focusing on the intended use and the type of impact caused by the AI functionality. A detailed audit checklist and procedure allowing companies to carry out their own assessments would reduce the cost of the initial risk assessment, which must be aligned with industry-specific requirements.

We asked the Commission to consider bringing together consumer organizations, academia, member states and businesses to assess whether an AI system qualifies as high-risk. We proposed a body to deal with this: a permanent Technical Committee for high-Risk AI systems (TCRAI). We believe such a body could evaluate AI systems against both legal and technical high-risk criteria. This kind of oversight, combined with a voluntary labeling system, would provide a governance model that:

• reviews the entire supply chain;

• establishes consistent criteria aligned with the intended transparency goals for consumers and businesses;

• fosters responsible AI development and deployment; and

• creates an ecosystem of trust.

Beyond AI’s high-risk applications, we told the Commission that the existing legal framework based on fault-based and contractual liability is sufficient, even for advanced technologies such as AI, where there may be a concern that new technology requires new rules. Additional regulation is not necessary; it would be too burdensome and would discourage the adoption of AI.

From what we know of current thinking within the Commission, it appears they also plan to take a risk-based approach to regulating AI. Specifically, the Commission recommends a short-term focus on “high-risk” AI applications, meaning either high-risk sectors (such as healthcare) or high-risk uses (for example, those that produce legal or similarly significant effects on an individual’s rights).

So what happens next?

The Commission has a lot of work to do in processing all the consultation responses, taking into account the needs of businesses, civil society, trade associations, NGOs and others. The additional burden of dealing with the coronavirus crisis has not helped, and an official response from the Commission is not expected until Q1 2021.

Of course, coronavirus has changed the game for the use of technology in healthcare and will certainly influence the Commission’s thinking in this area. Terms like ‘telemedicine’ have been discussed for many years, but the crisis made virtual consultations a reality almost overnight.

Beyond healthcare, we see AI implementations continually being rolled out in areas like agriculture and in EU efforts to combat climate change. We at Huawei are proud to be part of this continued digital growth in Europe, a region where we have been working for 20 years. The development of digital skills is at the heart of this: not only equipping future generations with the tools to capture the potential of AI, but also enabling the current workforce to be active and agile in an ever-changing world. A holistic, lifelong-learning and innovative approach to AI education and training is needed to help people transition between jobs seamlessly. The job market has been hit hard by the crisis and needs quick solutions.

As we await the Commission’s official response to the White Paper, what is left to say about AI in Europe? Better healthcare, safer and cleaner transport, more efficient manufacturing, smart farming, and cheaper and more sustainable energy: these are just a few of the benefits AI can bring to our society and to the EU as a whole. Huawei will work with EU policymakers and will strive to ensure the region strikes the right balance: innovation combined with consumer protection.
