The Rising Threat of Bad-Actor AI in Global Elections: A Comprehensive Scientific Analysis
Category: Machine Learning | Wednesday, January 24, 2024, 13:18 UTC

A new study by George Washington University (GWU) researchers predicts a significant increase in bad-actor AI activity during global elections in mid-2024. The study is the first of its kind to quantitatively analyze this threat, finding that AI bots and sockpuppets can amplify and disseminate disinformation campaigns at an alarming rate. Governments and online platforms must take proactive measures to control and combat this threat.
In recent years, artificial intelligence (AI) has become increasingly prevalent in daily life. The technology is not without its downsides, however: a major concern is its potential for malicious use, particularly in politics and elections. A new study led by researchers at the George Washington University (GW) sheds light on the growing threat of bad-actor AI in global elections, predicting a significant increase in daily bad-actor AI activity by mid-2024.
Published in the journal PNAS Nexus, the paper, titled "Controlling bad-actor-AI activity at scale across online battlefields," is the first of its kind to quantitatively analyze how bad actors will misuse AI in political campaigns around the world. Lead study author Neil Johnson, a GW professor of physics, highlights the importance of understanding the battlefield in order to effectively combat this threat, stating, "Everybody is talking about the dangers of AI, but until our study there was no science of this threat. You cannot win a battle without a deep understanding of the battlefield."
Among the study's key findings, the researchers predict that AI bots and "sockpuppets" (fake online personas controlled by a single entity) will act as a force multiplier, amplifying and disseminating disinformation campaigns 4 to 10 times faster than human actors. Additionally, countries with higher levels of AI knowledge and usage, such as the United States and India, are more likely to be targeted by AI-driven disinformation campaigns.
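To make the force-multiplier idea concrete, the toy sketch below compares the reach of a disinformation campaign when reposts occur at a human baseline rate versus at 4 to 10 times that rate. This is not the study's model; the repost rate, fan-out, and time horizon are all illustrative assumptions chosen only to show how a speed multiplier scales reach.

```python
# Toy force-multiplier sketch: compare disinformation reach when reposts
# happen at a human baseline rate versus an AI-bot rate 4-10x faster.
# All numbers here are illustrative assumptions, not parameters from
# the PNAS Nexus study.

def reach(reposts_per_hour: float, hours: int, fanout: int = 5) -> int:
    """Accounts reached if each repost exposes `fanout` new accounts."""
    total_reposts = reposts_per_hour * hours
    return int(total_reposts * fanout)

HUMAN_RATE = 2.0  # assumed human reposts per hour
HOURS = 24        # assumed campaign window

for multiplier in (4, 10):  # the reported 4-10x speed-up range
    bot_rate = HUMAN_RATE * multiplier
    print(f"{multiplier}x bots: humans reach {reach(HUMAN_RATE, HOURS)}, "
          f"bots reach {reach(bot_rate, HOURS)}")
```

In this linear sketch, a 10x posting speed translates directly into 10x the reach over the same window; in a real network, where each exposed account can repost in turn, the gap compounds further.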
With over 50 countries set to hold national elections in 2024, the potential impact of bad-actor AI on election results is a cause for concern. The researchers emphasize the need for governments and online platforms to take proactive measures to control and combat this threat, stating, "Our study answers the what, where, and when AI will be used by bad actors globally, and how it can be controlled. It is paramount that we take action to prevent the misuse of AI in global elections."
As technology continues to rapidly advance, it is crucial for institutions and individuals to stay vigilant and informed about potential threats like bad-actor AI. The findings of this study serve as a wake-up call for governments and technology companies to prioritize the development of countermeasures and ensure the integrity of future elections.