OpenAI is facing another privacy complaint in Europe over its viral AI chatbot’s tendency to hallucinate false information — and this one might prove tricky for regulators to ignore.

Privacy rights advocacy group Noyb is supporting an individual in Norway who was horrified to find ChatGPT returning made-up information that claimed he’d been convicted for murdering two of his children and attempting to kill the third.

Earlier privacy complaints about ChatGPT generating incorrect personal data have involved issues such as a wrong birth date or inaccurate biographical details. One concern is that OpenAI does not offer a way for individuals to correct false information the AI generates about them; typically, it has offered to block responses to such prompts instead. But under the European Union’s General Data Protection Regulation (GDPR), Europeans have a suite of data access rights that include a right to rectification of personal data.

Another component of this data protection law requires data controllers to make sure that the personal data they produce about individuals is accurate — and that’s a concern Noyb is flagging with its latest ChatGPT complaint.

“The GDPR is clear. Personal data has to be accurate,” said Joakim Söderberg, data protection lawyer at Noyb, in a statement. “If it’s not, users have the right to have it changed to reflect the truth. Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn’t enough. You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.”

Confirmed breaches of the GDPR can lead to penalties of up to 4% of global annual turnover.

Enforcement could also force changes to AI products. Notably, an early GDPR intervention by Italy’s data protection watchdog, which saw ChatGPT access temporarily blocked in the country in spring 2023, led OpenAI to change the information it discloses to users. The watchdog subsequently went on to fine OpenAI €15 million for processing people’s data without a proper legal basis.

Since then, though, it’s fair to say that privacy watchdogs around Europe have adopted a more cautious approach to GenAI as they try to figure out how best to apply the GDPR to these buzzy AI tools.

Two years ago, Ireland’s Data Protection Commission (DPC) — which has a lead GDPR enforcement role on a previous Noyb ChatGPT complaint — urged against rushing to ban GenAI tools, for example, suggesting that regulators should instead take the time to work out how the law applies.

And it’s notable that a privacy complaint against ChatGPT that’s been under investigation by Poland’s data protection watchdog since September 2023 still hasn’t yielded a decision.

Noyb’s new ChatGPT complaint looks intended to shake privacy regulators awake when it comes to the dangers of hallucinating AIs.

The nonprofit shared a screenshot with TechCrunch (below) which shows an interaction with ChatGPT in which the AI responds to the question “who is Arve Hjalmar Holmen?” — the name of the individual bringing the complaint — by producing a tragic fiction that falsely states he was convicted of child murder and sentenced to 21 years in prison for slaying two of his own sons.

While the defamatory claim that Hjalmar Holmen is a child murderer is entirely false, Noyb notes that ChatGPT’s response does include some truths, since the individual in question does have three children. The chatbot also got the genders of his children right. And his home town is correctly named. But that just makes it all the more bizarre and unsettling that the AI hallucinated such gruesome falsehoods on top.

A spokesperson for Noyb said they were unable to determine why the chatbot produced such a specific yet false history for this individual. “We did research to make sure that this wasn’t just a mix-up with another person,” the spokesperson said, noting they’d looked into newspaper archives but hadn’t been able to find an explanation for why the AI fabricated the child slayings.

Large language models such as the one underlying ChatGPT essentially perform next-word prediction on a vast scale, so we could speculate that the datasets used to train the tool contained lots of stories of filicide that influenced the word choices in response to a query about a named man.
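To make that mechanism concrete, here is a minimal sketch of next-word prediction using a toy bigram model — a deliberately simplified stand-in, not OpenAI’s actual architecture. The tiny training corpus is invented for the example:

```python
import random
from collections import Counter, defaultdict

# Toy next-word predictor: a bigram model over a tiny invented corpus.
# Real LLMs use deep neural networks trained on vast datasets, but the
# core task is the same -- choose a plausible next token given context.
corpus = ("the man was convicted . "
          "the man was a musician . "
          "the man had three children").split()

# Count how often each word follows each other word in the corpus.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start: str, length: int) -> list[str]:
    """Extend `start` by sampling next words in proportion to bigram counts."""
    words = [start]
    for _ in range(length):
        counts = bigrams[words[-1]]
        if not counts:
            break  # no continuation observed in the training data
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return words

# Starting from "man", the model may continue "was a musician" or
# "was convicted" -- whichever pattern its training data makes likelier.
print(" ".join(generate("man", 4)))
```

The point of the toy is that the model has no notion of truth, only of which word sequences were statistically common in its training data — which is why over-represented patterns, such as crime stories, can surface in answers about unrelated people.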

Whatever the explanation, it’s clear that such outputs are entirely unacceptable.

Noyb’s contention is also that they are unlawful under EU data protection rules. And while OpenAI does display a tiny disclaimer at the bottom of the screen that says “ChatGPT can make mistakes. Check important info,” it says this cannot absolve the AI developer of its duty under GDPR not to produce egregious falsehoods about people in the first place.

OpenAI has been contacted for a response to the complaint.

While this GDPR complaint pertains to one named individual, Noyb points to other instances of ChatGPT fabricating legally compromising information — such as the Australian mayor who said he was implicated in a bribery and corruption scandal or a German journalist who was falsely named as a child abuser — saying it’s clear that this isn’t an isolated issue for the AI tool.

One important thing to note is that, following an update to the underlying AI model powering ChatGPT, Noyb says the chatbot stopped producing the dangerous falsehoods about Hjalmar Holmen — a change that it links to the tool now searching the internet for information about people when asked who they are (whereas previously, a blank in its data set could, presumably, have encouraged it to hallucinate such a wildly wrong response).

In our own tests asking ChatGPT “who is Arve Hjalmar Holmen?”, the chatbot initially responded with a slightly odd combination: it displayed some photos of different people, apparently sourced from sites including Instagram, SoundCloud, and Discogs, alongside text claiming it “couldn’t find any information” on an individual of that name (see our screenshot below). A second attempt turned up a response that identified Arve Hjalmar Holmen as “a Norwegian musician and songwriter” whose albums include “Honky Tonk Inferno.”

ChatGPT screenshot. Image credits: Natasha Lomas/TechCrunch

While ChatGPT appears to have stopped producing dangerous falsehoods about Hjalmar Holmen, both Noyb and Hjalmar Holmen remain concerned that incorrect and defamatory information about him could have been retained within the AI model.

“Adding a disclaimer that you do not comply with the law does not make the law go away,” noted Kleanthi Sardeli, another data protection lawyer at Noyb, in a statement. “AI companies can also not just ‘hide’ false information from users while they internally still process false information.”

“AI companies should stop acting as if the GDPR does not apply to them, when it clearly does,” she added. “If hallucinations are not stopped, people can easily suffer reputational damage.”

Noyb has filed the complaint against OpenAI with the Norwegian data protection authority — and it’s hoping the watchdog will decide it is competent to investigate, since Noyb is targeting the complaint at OpenAI’s U.S. entity, arguing its Ireland office is not solely responsible for product decisions impacting Europeans.

However, an earlier Noyb-backed GDPR complaint against OpenAI, which was filed in Austria in April 2024, was referred by the regulator to Ireland’s DPC on account of a change OpenAI made earlier that year to name its Irish division as the provider of the ChatGPT service to regional users.

Where is that complaint now? Still sitting on a desk in Ireland.

“Having received the complaint from the Austrian Supervisory Authority in September 2024, the DPC commenced the formal handling of the complaint and it is still ongoing,” Risteard Byrne, assistant principal officer for communications at the DPC, told TechCrunch when asked for an update.

He did not offer any steer on when the DPC’s investigation of ChatGPT’s hallucinations is expected to conclude.
