National: AI chatbots got questions about the 2024 election wrong 27% of the time, study finds | Aaron Franco and Morgan Radford/NBC
If you ask some of the most popular artificial intelligence-powered chatbots how many days are left until the November election, you might want to double-check the answer. A study published by data analytics startup GroundTruthAI found that large language models, including Google’s Gemini 1.0 Pro and OpenAI’s ChatGPT, gave incorrect information 27% of the time when asked about voting and the 2024 election. Between May 21 and May 31, researchers sent 216 unique questions about voting, the 2024 election and the candidates to Google’s Gemini 1.0 Pro and OpenAI’s GPT-3.5 Turbo, GPT-4, GPT-4 Turbo and GPT-4o. Some questions were asked multiple times over that period, generating a total of 2,784 responses. According to the analysis, Google’s Gemini 1.0 Pro initially responded with correct answers just 57% of the time, while OpenAI’s GPT-4o, the latest version of the model, answered correctly 81% of the time. Read Article