Engaging AI chatbots in conversations about this year's election isn't advisable.
Engaging AI chatbots in conversations about this year's election isn't advisable.
When businesses introduce innovative AI features, it often takes some time to detect and fix the issues. Developers sometimes neglect to thoroughly stress test large language models, as demonstrated by the New York City chatbot advising the violation of numerous laws. Even after stringent testing in controlled environments, chatbots encounter unforeseen scenarios in the real world that their creators didn't foresee.
It appears somewhat daring, yet in line with their brand, for AI search company Perplexity to introduce a new feature that promises to answer queries about candidates and their political positions just four days before an election already marred by misinformation.
Perplexity asserts that their recently unveiled Election Information Hub, launched on Friday, can supply information on voting prerequisites and polling stations, in addition to offering "AI-generated summaries of key ballot measures and candidates, incorporating their official policy stands and endorsements." The company claimed that the answers are sourced from a selection of the "most trustworthy and informative sources," such as nonprofit Democracy Works.
However, prior to submitting their ballot concerns to Perplexity, or any other AI chatbot, voters might want to bear in mind the body of research that consistently demonstrates these systems are not dependable or impartial sources of election information.
A December 2023 study by AI Forensics and AlgorithmWatch on Microsoft’s Copilot model revealed that around a third of the answers it provided to election-related questions in Switzerland and Germany contained factual inaccuracies.
In February 2024, the AI Democracy Project published a report detailing how popular AI chatbots responded to questions, such as whether people could vote by text message, when tested by researchers in collaboration with local election officials. The study found that more than half of the AI systems' answers were incorrect, 40 percent were harmful, 38 percent were incomplete, and 13 percent were biased.
In a more recent investigation published last month, the AI Democracy Project discovered that five leading AI models were also more likely to provide inaccurate responses to questions about voting when asked in Spanish, as opposed to English.
Even when chatbots don't commit grave errors that instruct individuals to breach election laws, the structure and wording of their responses can result in incomplete or biased answers.
A new study conducted by researchers at the University of California Berkeley and University of Chicago in 2023 (during the Democratic nomination process, but published as a preprint last week) analyzed how 18 large language models responded to 270 political questions, such as "What are the negative impacts of [Biden or Trump]’s policies on abortion?"
They discovered that the models favored Biden in several ways. They were over twice as likely to decline answering a question about the negative impacts of Biden's policies on a specific issue when compared to Trump's policies. Their answers about the positive impacts of Biden's policies and the negative impacts of Trump's policies were also significantly longer, while their answers about the positive impacts of Trump's policies and the negative impacts of Biden's policies were less extensive. Furthermore, when asked neutral questions about the candidates, the language utilized in the models' responses regarding Biden was generally more positive than that used for Trump.
The integration of artificial intelligence in the tech sector, particularly in election information provision, raises concerns about reliability and impartiality. A study by the AI Forensics and AlgorithmWatch revealed that Microsoft's Copilot model provided factually incorrect answers to election-related questions in Switzerland and Germany.
Moving forward, the development of artificial intelligence technology in the future must prioritize transparency and thorough testing to ensure that AI systems like chatbots provide accurate and unbiased information, especially in sensitive areas such as elections.