“The fact that someone could read this output and believe it is true is what scares me the most,” Hjalmar Holmen said in a statement shared by Noyb.
OpenAI has since updated ChatGPT to search the internet for information when asked about individuals, meaning it should, in theory, no longer hallucinate about them, Noyb said. But it added that the incorrect information may still be part of the AI model’s dataset.
In its complaint filed with Norway’s data protection authority (Datatilsynet), Noyb asked the regulator to fine OpenAI and to order the company to delete the defamatory output and fine-tune its model to eliminate inaccurate results.
Noyb said that by knowingly allowing ChatGPT to produce defamatory results, OpenAI is violating the data accuracy principle of the EU’s General Data Protection Regulation (GDPR).
ChatGPT presents users with a disclaimer at the bottom of its main interface warning that the chatbot may produce false results. But Noyb data protection lawyer Joakim Söderberg said that “isn’t enough.”
“You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true,” he said. “The GDPR is clear. Personal data has to be accurate. And if it’s not, users have the right to have it changed to reflect the truth.”