Facial recognition and other high-risk artificial intelligence applications will face strict constraints under new rules unveiled by the European Union that threaten hefty fines for companies that don’t comply.
The European Commission, the bloc’s executive body, proposed measures on Wednesday that would ban certain AI applications in the EU, including those that exploit vulnerable groups, deploy subliminal techniques or score people’s social behavior.
The use of facial recognition and other real-time remote biometric identification systems by law enforcement would also be prohibited, unless used to prevent a terror attack, find missing children or tackle other public security emergencies.
Facial recognition is a particularly controversial form of AI. Civil liberties groups warn of the dangers of discrimination or mistaken identities when law enforcement uses the technology, which sometimes misidentifies women and people with darker skin tones.
Digital rights group EDRi has warned that the public-security exceptions could become loopholes that allow widespread use of the technology.
Other high-risk applications that could endanger people's safety or legal status, such as self-driving cars and systems used in employment or asylum decisions, would have to undergo checks before deployment and face other strict obligations.
The measures are the latest attempt by the bloc to leverage the power of its vast, developed market to set global standards that companies around the world are forced to follow, much like with its General Data Protection Regulation.
The U.S. and China are home to the biggest commercial AI companies, including Google and Microsoft Corp., Beijing-based Baidu, and Shenzhen-based Tencent, but if they want to sell to Europe's consumers or businesses, they may be forced to overhaul their operations.
- Fines of 6% of revenue are foreseen for companies that don’t comply with bans or data requirements
- Smaller fines are foreseen for companies that don’t comply with other requirements spelled out in the new rules
- Legislation applies both to developers and users of high-risk AI systems
- Providers of risky AI must subject it to a conformity assessment before deployment
- Other obligations for high-risk AI include the use of high-quality datasets, ensuring traceability of results, and human oversight to minimize risk
- The criteria for ‘high-risk’ applications include intended purpose, the number of potentially affected people, and the irreversibility of harm
- AI applications with minimal risk such as AI-enabled video games or spam filters are not subject to the new rules
- National market surveillance authorities will enforce the new rules
- EU to establish European board of regulators to ensure harmonized enforcement of regulation across Europe
- Rules would still need approval by the European Parliament and the bloc’s member states before becoming law, a process that can take years