EU and US step up AI cooperation amid policy crunch time

Washington and Brussels are stepping up their formal cooperation on artificial intelligence (AI) research at a crucial time for EU regulatory efforts on the emerging technology.

The European Commission and the US administration signed an “Administrative Agreement on Artificial Intelligence for the Public Good” in a virtual ceremony on Friday evening (January 27).

The agreement was signed as part of the EU-US Trade and Technology Council (TTC), launched in 2021 as a permanent platform for transatlantic cooperation in several priority areas, from supply chain security to the sourcing of emerging technologies.

The last high-level meeting of the TTC was held in the United States in December, and artificial intelligence was presented as one of the areas where cooperation is most advanced.

In particular, the two blocs endorsed a joint roadmap toward a common approach on critical aspects of this emerging technology, such as metrics to measure trustworthiness and risk management methods.

“Based on shared values and interests, European and American researchers will join forces to develop societal applications of AI and work with other international partners for a truly global impact,” said Internal Market Commissioner Thierry Breton in a press release.

Research collaboration

Building on the AI Roadmap, the US and EU executive branches are stepping up their collaboration to identify and develop AI research with the potential to address global and societal challenges such as climate change and natural disasters.

Five priority areas have been identified: extreme weather and climate forecasting, emergency response management, improving health and medicine, optimizing the power grid and optimizing agriculture. Until now, this type of collaboration was much narrower and limited to more specific topics.

Although the two partners will build joint models, they will not share training datasets with each other.

Large datasets often contain personal data that is difficult to disentangle from the rest. There is currently no legal framework for sharing personal data across the Atlantic, as the EU Court of Justice found the US surveillance regime disproportionate in its Schrems II ruling.

“US data stays in the US and European data stays there, but we can build a model that speaks to European and US data, because the more data and the more diverse it is, the better the model,” a senior US official told Reuters.
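The article does not specify which technique the partners will use, but the set-up the official describes, training a shared model while each party's data stays local, is commonly achieved with federated learning. The following is a minimal sketch of that idea for a toy linear model, with synthetic stand-in datasets; all names and numbers are illustrative and not drawn from the agreement.

```python
# Minimal federated-averaging sketch with NumPy. Each "site" runs a local
# gradient step on data that never leaves it; only model weights are averaged.
# Purely illustrative: the agreement does not describe any specific method.
import numpy as np

rng = np.random.default_rng(0)

def local_step(weights, X, y, lr=0.1):
    """One gradient step of least-squares regression on a site's local data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(weights, sites):
    """Each site updates the model locally; only the updated weights are pooled."""
    updates = [local_step(weights, X, y) for X, y in sites]
    return np.mean(updates, axis=0)

# Synthetic stand-ins for data held separately on each side of the Atlantic.
true_w = np.array([2.0, -1.0])
us_X, eu_X = rng.normal(size=(100, 2)), rng.normal(size=(100, 2))
us_y, eu_y = us_X @ true_w, eu_X @ true_w

weights = np.zeros(2)
for _ in range(200):
    weights = federated_round(weights, [(us_X, us_y), (eu_X, eu_y)])

print(weights)  # converges toward [2, -1] without either dataset being shared
```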

The Commission stressed that, under the agreement, the two partners would share findings and resources with other international partners who share their values but lack the capacity to address these issues.

Since Washington and Brussels note that the agreement builds on the Declaration for the Future of the Internet, the Declaration's signatories are likely candidates to benefit from the results of this research.

Risk management framework

While EU-US collaboration on AI has taken a so far largely symbolic step forward with the administrative agreement, Washington seems determined to put some of its standards on the map as the EU finalizes the world’s first regulation on artificial intelligence.

Last Thursday, the day before the announcement, the US Department of Commerce’s National Institute of Standards and Technology (NIST) released its Artificial Intelligence Risk Management Framework, which sets out guidelines for AI developers on mapping, measuring and managing risk.

This voluntary framework, developed in consultation with private companies and public administrations, embodies the non-binding American approach to new technologies. Where AI is regulated in the US, this typically happens at state level and for specific sectors such as healthcare.

By contrast, the EU is currently advancing work on the AI Act, horizontal legislation that would regulate all AI use cases according to their level of risk, including a list of high-risk areas such as health, employment and law enforcement.

The AI Act is expected to be highly influential and could set international standards on several regulatory aspects via the so-called Brussels effect. Since most of the major global companies in the field are American, it is no surprise that the US administration has sought to shape it.

In October, EURACTIV revealed that Washington was pushing for the high-risk categorization to be based on a more individualized risk assessment. Importantly, the US administration has argued that compliance with NIST standards should be considered an alternative means of meeting the self-assessment mandated in the EU’s AI bill.

The publication of this framework comes at a critical time for the AI Act, as EU lawmakers are set to finalize their position before entering inter-institutional negotiations with the European Commission and member states.

“The AI Risk Management Framework can help businesses and other organizations across all industries and sizes get started with or improve their AI risk management approaches,” said NIST Director Laurie Locascio in a statement.

[Edited by Nathalie Weatherald]
