
Consumer organisations call for regulators’ actions on generative AI


This article is part of our special report AI4TRUST – AI-based-technologies for trustworthy solutions against disinformation


News: Based on facts, either observed and verified directly by the reporter, or reported and verified from knowledgeable sources.

[Koshiro K/Shutterstock]

Luca Bertuzzi, Euractiv's Public Projects, Jun 20, 2023 06:01

Consumer associations from 13 European countries and the United States pointed out the risks of generative AI models like ChatGPT in a report published on Tuesday (20 June) and urged regulators to step in.

Generative AI models can produce sophisticated text, images or video content based on users’ prompts. The technology became increasingly popular with the launch of ChatGPT in November 2022, prompting massive hopes related to its potential and concerns about its risks.

“Generative AI such as ChatGPT has opened up all kinds of possibilities for consumers, but there are serious concerns about how these systems might deceive, manipulate and harm people. They can also be used to spread disinformation, perpetuate existing biases which amplify discrimination, or be used for fraud,” said Ursula Pachl, Deputy Director General of the European Consumer Organisation.

For Pachl, European safety, data protection and consumer authorities should not stand idly by but should immediately launch investigations into how their respective legislation applies to these AI systems before they can cause any harm.

The Italian data protection authority was the first to look into the matter, requesting ChatGPT’s provider OpenAI implement some corrective measures. A task force was consequently established at the EU level to coordinate privacy enforcement actions on this technology.

Italian data protection authority bans ChatGPT citing privacy violations

The Italian privacy watchdog mandated a ban on the popular chatbot ChatGPT and launched an investigation on its provider OpenAI for suspected breaches of EU data protection rules.

Forbrukerrådet, the Norwegian consumer association, published a report detailing the outstanding consumer concerns about ChatGPT.

The accountability of these systems is a top concern as consumer organisations point to the fact that major Big Tech companies have closed off their systems to external scrutiny, making it nearly impossible to understand their data collection practices and decision-making process.

Another crucial threat identified is inaccuracy, namely, the fact that ChatGPT and the like tend to take information out of context or even 'hallucinate', inventing content from non-existent sources that sounds realistic to users.

The creation of manipulative or misleading content is an additional worry, for instance, content designed to emotionally manipulate consumers into buying a certain product.

Impersonation and deepfakes have already been singled out as potential fuel for disinformation campaigns, especially in the context of elections, with the European Commission proposing to introduce labels for AI-generated content in the EU Code of Practice on Disinformation.

Code of Practice on Disinformation signatories regroup with AI focus

Signatories to the Code of Practice on Disinformation gathered in Brussels on Monday to discuss the revised initiative’s first year of progress in the wake of Twitter’s withdrawal from the voluntary programme last week. 

The capacity to accurately impersonate someone can also enable scammers to produce deceptive content, such as phishing emails at scale, a risk already highlighted in a flash report from Europol, the EU law enforcement agency, back in March.

The reproduction of existing social biases is another burning concern with Artificial Intelligence, as these systems tend to replicate the discrimination embedded in their training datasets. For instance, the AI avatar-creator Lensa was found to sexualise women, particularly those of a specific ethnic group.

Under the European Parliament’s position on the AI Act, foundation models, including generative AI, would have to comply with strict requirements regarding risk management, data governance and system robustness.

Brando Benifei, one of the leading MEPs on the file, has proposed bringing forward the AI rulebook’s entry into application for foundation models and generative AI. The matter will be discussed at the next negotiating session on 18 July.

Europol warns against potential criminal uses for ChatGPT and the likes

The EU law enforcement agency published a flash report on Monday (27 March) warning that ChatGPT and other generative AI systems can be employed for online fraud and other cybercrimes.

[Edited by Nathalie Weatherald]
