(Reuters) — The European Union’s rights watchdog has warned of the risks of using artificial intelligence in predictive policing, medical diagnoses, and targeted advertising as the bloc mulls rules next year to address challenges posed by the technology.
While AI is widely used by law enforcement agencies, rights groups say it is also abused by authoritarian regimes for mass and discriminatory surveillance. Critics also worry about the violation of people’s fundamental rights and data privacy rules.
In a report issued on Monday, the Vienna-based EU Agency for Fundamental Rights (FRA) urged policymakers to provide more guidance on how existing rules apply to AI and ensure that future AI laws protect fundamental rights.
“AI is not infallible, it is made by people — and humans can make mistakes. That is why people need to be aware when AI is used, how it works, and how to challenge automated decisions,” FRA director Michael O’Flaherty said in a statement.
The FRA’s report comes as the European Commission, the EU executive, considers legislation next year to cover “high-risk sectors” such as health care, energy, transport, and parts of the public sector.
The agency said AI rules must respect all fundamental rights, with safeguards to ensure this, a guarantee that people can challenge decisions made by AI, and a requirement that companies be able to explain how their systems reach automated decisions.
It also said there should be more research into the potentially discriminatory effects of AI so Europe can guard against them, and that the bloc must further clarify how data protection rules apply to the technology.
The FRA’s report is based on more than 100 interviews with public and private organizations already using AI, with the analysis based on uses of AI in Estonia, Finland, France, the Netherlands, and Spain.
(Reporting by Foo Yun Chee. Editing by Alex Richardson.)