Research by the Center for Countering Digital Hate (CCDH) highlights how AI-powered image-generation tools, including those from OpenAI and Microsoft, can be misused to spread election-related disinformation despite policies against creating misleading content. Using these generative AI tools, CCDH produced images depicting scenarios such as President Joe Biden in a hospital bed and election workers destroying voting machines, raising concerns about false claims ahead of the upcoming U.S. presidential election. The tools tested, including OpenAI's ChatGPT Plus and Microsoft's Image Creator, generated misleading images in 41% of tests overall and proved particularly susceptible to prompts related to election fraud. Midjourney performed worst, producing misleading images in 65% of tests, and some of its output has already been used to create deceptive political content. While some companies, such as Stability AI, have updated their policies to prohibit fraud and disinformation, others are still working to prevent abuse of their tools, underscoring the ongoing challenge of preserving election integrity amid rapid technological advancement.