OpenAI Faces Potential Scrutiny Over ChatGPT's Misrepresentations of Individuals
OpenAI's AI chatbot, ChatGPT, is causing quite a stir in the European Union, this time over its "hallucinations" about individuals' personal data. The privacy rights nonprofit group noyb has lodged a complaint against OpenAI with the Austrian Data Protection Authority (DPA), claiming that the company cannot correct the inaccurate information ChatGPT generates.
Hallucinations are common among large language models like ChatGPT, but noyb's complaint centers on the European Union's General Data Protection Regulation (GDPR), which governs how the personal data of individuals in the EU is collected and stored. Despite the GDPR's requirements, OpenAI appears unbothered, according to noyb: the company says it can neither pinpoint the sources of the data nor correct the information ChatGPT generates.
Under the GDPR, individuals in the EU have the right to have incorrect information about them corrected. Because OpenAI cannot correct the data, noyb argues, the company is noncompliant with this rule. A public figure who asked ChatGPT about their birthday was repeatedly given incorrect information, and OpenAI refused their requests to rectify or erase the data. The company allegedly responded that it could only filter or block the data on certain prompts, such as the complainant's name.
noyb is asking the DPA to investigate OpenAI's data processing practices and the measures it takes to keep the personal data used to train its AI models accurate. It is also asking that OpenAI be made to comply with the complainant's GDPR right to access their data, including information about its sources.
OpenAI has not yet commented on the matter. Maartje de Graaf, a data protection lawyer at noyb, pointed out that companies must comply with access requests, and that some companies seem to think their innovative products are exempt from the law.
Failing to comply with the GDPR can lead to fines of up to 4% of a company's global annual turnover or €20 million, whichever is higher, plus additional damages if individuals choose to seek them. OpenAI is already facing data protection cases in the EU member states Italy and Poland.
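To make the "whichever is higher" cap concrete, here is a minimal Python sketch; the turnover figure in the example is purely hypothetical and not OpenAI's actual revenue.

```python
def max_gdpr_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound on a GDPR fine: the greater of €20 million or 4% of global annual turnover."""
    return max(20_000_000.0, 0.04 * global_annual_turnover_eur)

# Illustrative only: a hypothetical company with €2 billion in annual turnover
# would face a cap of €80 million, since 4% of turnover exceeds €20 million.
print(f"Maximum fine: €{max_gdpr_fine(2_000_000_000):,.0f}")
```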
This article was originally published on Quartz.
Under the GDPR, OpenAI could face several consequences for its inability to correct inaccurate personal data generated by ChatGPT: fines, orders to rectify data or restrict its processing, reputational damage, further legal action, and tighter regulatory oversight that could increase its costs.
- The privacy rights nonprofit group noyb asserts that OpenAI's AI chatbot, ChatGPT, is noncompliant with the European Union's General Data Protection Regulation (GDPR) due to its inability to correct inaccurate information about individuals.
- OpenAI allegedly cannot pinpoint the data sources or correct the information generated by ChatGPT, a situation that noyb deems problematic under the GDPR as it violates individuals' right to have incorrect information about them corrected.
- In its complaint to the Austrian Data Protection Authority (DPA), noyb seeks an investigation into OpenAI's data processing methods and the measures it takes to keep the personal data used to train its AI models accurate, as well as an order requiring OpenAI to honor the complainant's GDPR right to access their data and its sources.
- Given the ongoing data protection cases against OpenAI in the EU member states Italy and Poland, and the potential fines under the GDPR (up to 4% of the company's global annual turnover or €20 million, whichever is higher), ChatGPT's "hallucinations" about personal data could have significant implications for the tech company's future and its adherence to AI-related privacy laws.