Introduction
As described by John McCarthy in 1956, artificial intelligence (AI) refers to the science and engineering of making intelligent machines. These machines are programmed to perform tasks as humans would, using human-like intelligence (Singh et al., 2013). Over the years, artificial intelligence has advanced and holds great potential benefits for humanity, such as increasing efficiency and lowering costs. In recent years, we have experienced rapid advancements in the deployment of artificial intelligence tools, which have reached the general public and institutions through mediums such as mobile devices and internet media outlets. For instance, in late 2022, OpenAI, an AI research company, made its generative AI system ChatGPT available to the public for free: a system known to generate human-like responses to specific prompts from users.
As the European Union and United States (EU-US) terminology and taxonomy for artificial intelligence explains, “generative AI systems are usually trained on large language models that use deep learning algorithms to learn and adapt without following explicit instructions and to draw inferences from patterns in data.” Following the introduction of OpenAI’s ChatGPT, similar tools have been designed to provide not just text responses but also images, audio, and even video output in response to specific prompts, including Google’s Gemini, Microsoft’s Bing AI, and remaker.ai. Further, users can generate these outputs from anywhere within seconds, and the results will often seem genuine or authentic (see Fig. 1).
Figure 1: A thread of AI-fabricated image examples. Source: Eliot Higgins
While there have been arguments about AI’s potential to improve the democratic processes of states, there have also been several cases where the results are the opposite, with claims that AI is eroding public trust in the system. Dirk Brand (2023) reckoned that voters have been influenced over the years by many false promises and much misinformation supplied by human political actors seeking their votes. However, the recent growth of generative AI in producing fabricated content has amplified this misinformation and its impact on free and fair elections (Jungherr, 2023), thus increasing the need for discussions regarding AI’s impact on elections.
These discussions about AI threatening democracy, and the global scale of its potential impact, have made it one of the biggest concerns surrounding elections. With elections taking place in 2024 in about 60 countries, representing roughly 50% of the world’s total population (Brand, 2024), the potential for AI to disrupt democracies cannot be overstated.
AI Disinformation in Elections
A UNESCO report outlined certain benefits and opportunities AI offers governments to improve democratic values, stating that AI can aid in summarising citizens’ comments, drafting personalised replies to citizens, analysing large amounts of data, indicating how to manage resource allocation throughout a country, supporting voters abroad (Brand, 2024), and assisting campaign strategy and integrity (Narayanan, 2024). An example comes from the 2016 US presidential election, where the Donald Trump campaign employed a data analytics company called Cambridge Analytica, which used an AI model to influence voters through micro-targeted advertisements.
Joining the conversation, a Yiaga Africa survey of electoral commissions in 22 African nations found that AI is being used to manage voter registers, engage with voters through automated chatbots, authenticate voters, and detect cyber threats in South Africa, Eswatini, Madagascar, and Nigeria. The goal is to eliminate human interaction and inefficiencies in elections by ensuring swift decision-making. Additionally, advanced analytics and machine learning models have been employed to identify irregularities in electoral data and prevent election manipulation (Itodo, 2024).
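The survey’s mention of machine learning models flagging irregularities in electoral data can be illustrated with a minimal sketch. The approach below, a simple z-score outlier check over turnout figures, is an assumption for illustration only; the function name, data, and threshold are hypothetical and do not describe any commission’s actual system.

```python
import statistics

def flag_anomalies(turnout_by_station: dict[str, float],
                   z_cutoff: float = 3.0) -> list[str]:
    """Flag polling stations whose turnout deviates sharply from the mean."""
    values = list(turnout_by_station.values())
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # all stations identical, nothing to flag
    return [station for station, turnout in turnout_by_station.items()
            if abs(turnout - mean) / stdev > z_cutoff]

# Hypothetical turnout percentages; one station reports an implausible 99.8%.
turnout = {f"station_{i}": 55.0 + (i % 10) for i in range(50)}
turnout["station_x"] = 99.8
print(flag_anomalies(turnout))  # ['station_x']
```

Real systems would of course combine many more signals (vote-share distributions, timing of result uploads, historical baselines), but the principle is the same: statistical deviation triggers human review rather than automatic invalidation.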
While these are positive and promising opportunities to use AI to enhance democratic processes in Africa and the world at large, the use of AI for political micro-targeting, as in Donald Trump’s campaign, has raised concerns about how the rapid growth of generative AI has created new opportunities for it to be wielded as a tool for dis- and misinformation (Brand, 2024), triggering tensions that can result in electoral conflict and, in extreme cases, violence. A summary of the European Parliament briefing on AI, democracy, and elections highlighted that AI could generate false information or spread biased opinions that do not capture public sentiment, thus weakening democracies.
Furthermore, Dirk Brand argued that AI-fabricated content such as images, text, audio, and videos has negatively affected the ability to attain free and fair elections, as these fabricated outputs, in whatever form or language, can appear genuine or authentic, misleading voters into supporting a specific candidate or party and thus undermining democracy (Brand, 2024). A field experiment by Kreps & Kriner (2023) illustrates this: using both human writers and GPT-3, OpenAI’s predecessor to the GPT-4 system, they sent messages on six policy areas to over 7,000 state legislators, whose responses indicated that the AI-generated text was just as credible as the human-written messages. Thus, AI can be used as a tool to create an illusion of political agreement (Kreps & Kriner, 2023).
This is echoed in a PBS report titled “AI-generated disinformation poses a threat of misleading voters in 2024 election,” which asserted that:
“…Sophisticated generative AI tools can now create cloned human voices and hyper-realistic images, videos and audio in seconds, at a minimal cost. When strapped to powerful social media algorithms, this fake and digitally created content can spread far and fast and target highly specific audiences, potentially taking campaign dirty tricks to a new low.”
While this assertion may seem far-fetched, an interview conducted during the recently concluded 2024 Senegal elections highlighted how AI tools could have been used by political actors or stakeholders, such as voters, candidates, and political parties, to shape public opinion. Similarly, a recent discussion held by the Centre for Journalism Innovation and Development at the Digital Rights and Inclusion Forum (DRIF, 2024) on safeguarding truth and accountability in digital spaces against electoral violations highlighted how AI-manipulated content such as deepfakes and cheapfakes can threaten electoral integrity.
A report by the Carnegie Endowment for International Peace, Countering Disinformation Effectively by Bateman and Jackson (2024), discussed the use of generative AI in the context of disinformation. The report acknowledged the potential dangers of misusing generative AI, highlighting political deepfake videos and audio clips designed to misinform or mislead people, though it noted these have so far had limited impact. According to the report, studies have argued that people’s willingness to believe false information often depends on factors such as a viewer’s state of mind, their group identity, or who they perceive as an authority, more so than the quality of the fake image. As a countermeasure, the Coalition for Content Provenance and Authenticity (C2PA) offers a digital “signing authority” that certifies the source of content and allows end users to verify its authenticity.
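The signing-authority idea can be sketched in miniature. The real C2PA specification embeds certificate-based signatures in the media file itself; the HMAC-based sketch below is only an assumed simplification, with a hypothetical shared key, to show the core property: any alteration of the content invalidates the provenance tag.

```python
import hashlib
import hmac

# Hypothetical signing-authority key. Real C2PA uses X.509 certificate
# chains and asymmetric signatures, not a shared secret like this.
AUTHORITY_KEY = b"example-signing-authority-key"

def sign_content(content: bytes) -> str:
    """Produce a provenance tag binding the content bytes to the authority."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(AUTHORITY_KEY, digest, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """An end user re-derives the tag; any pixel-level edit breaks the match."""
    return hmac.compare_digest(sign_content(content), tag)

original = b"raw bytes of a campaign photo"
tag = sign_content(original)
print(verify_content(original, tag))            # True: untouched content
print(verify_content(original + b"edit", tag))  # False: manipulated content
```

Note what this does and does not achieve: provenance signing proves that content has not been altered since it was certified, but it cannot by itself tell a viewer whether the certified source was honest in the first place.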
Cases of AI-Generated Disinformation in Elections
As explained, the prevailing advancement in generative AI has sparked significant concern as highly realistic yet fabricated images, audio, and videos flood the global digital landscape. Several reports describe how these deepfakes and other generative AI outputs challenge the authenticity of public figures as well as the democratic processes of states (Cervini & Carro, 2024). Donald Trump was instrumental in popularising the phrase “fake news.” Although individuals can now more readily use generative AI tools to produce content and exploit political division for financial gain, foreign governments remain involved in these activities (Roth, 2024).
According to the report “Countering Disinformation Effectively” by the Carnegie Endowment for International Peace (Bateman & Jackson, 2024), these deepfakes have the potential to sway voter opinion and undermine the integrity of electoral processes in various countries, posing a significant threat to the integrity of online information and citizens’ ability to access reliable, truthful content. Identified use cases include:
- Synthetic Text Generation: The report notes that generative language models can produce large volumes of fake social media posts, comments, and news articles that appear to be created by real people, making it challenging to identify trustworthy sources. Microsoft alleged that social media posts with AI-generated imagery used in Chinese influence operations have gained higher engagement and are shared more widely than those from previous influence operations.
- Video Deepfakes: In Slovakia, fake videos of candidates containing hate speech and disinformation appeared ahead of the country’s recent national election, showing how AI deepfake campaigns could become part of political reality. While there is no way to quantify AI’s impact on the election, journalists noted that the deepfakes favoured the talking points of the populist party that ultimately won most of the votes.
- Targeting Specific Demographics: AI-generated false identities are being used to tailor messages to particular demographic groups. We see an example in the 2024 US election, where MAGA supporters are suspected of being behind AI-generated deepfake images designed to appeal to Black voters and influence the upcoming vote. The images, which show a cheerful Trump surrounded by Black people, have circulated widely online despite clear signs of AI manipulation.
Figure 2: A viral AI-generated image portraying Trump with Black supporters.
- Cloning a Person’s Voice: Here, deepfakes come into play. With advances in AI, deepfakes have become easier for people to make and share, and they can appear genuine. A “deepfake” is a piece of audio or video created to make it appear that people said or did things they never did (VOA, 2024); they often depict public figures. In January 2024, the New Hampshire Justice Department investigated deepfake robocalls, sounding like President Biden, sent to New Hampshire voters urging them not to vote in an upcoming primary election. This was recognised as the first notable use of AI for voter suppression in a US campaign cycle (Verma & Vynck, 2024), and the investigations and lawsuits that followed raised concerns about the potential impact of deepfakes on elections.
- Dodging Responsibility: As sophisticated technology spreads and the public becomes more aware of its possibilities, deepfakes are being invoked to escape accountability for the truth. People try to dodge responsibility for their words or actions by dismissing genuine video or audio evidence as deepfakes. Since the public knows what deepfakes can do, they may become sceptical and primed to doubt the authenticity of such evidence. The former US president, Donald Trump, dismissed an ad on Fox News featuring a video of his public gaffes, attributing the footage to AI generation, even though the events were widely covered and witnessed in real life by many independent observers. Trump has been known for his relentless distrust of media and quickness to label news as fake.
Potential Impact of AI and Possible Actions to be taken
In addition to 2024 being a year of many elections, it also marks a significant year in the development of AI, with multimodal generative models like ChatGPT, Google Gemini, Sora, Midjourney, and Bing AI achieving new heights in technical capability, public recognition, and adoption. With this, threats like deepfakes and fabricated text content are becoming cheaper and more widespread. Hence, these upcoming elections will face the risks AI poses in the context of AI-powered disinformation. Sierra opined that the 2024 elections would serve as a test run for democracy in the artificial intelligence (AI) era. Some of the reasons for this are:
- AI tools are accessible to the public free of charge, and this increasing accessibility widens the pool of potential creators of deepfakes and other fabricated content.
- These GenAI tools can rapidly create deepfakes that go viral, making them harder to find and eradicate before they spread.
- GenAI deepfakes can be generated at vast scale, amplifying their societal impact. Furthermore, social networks facilitate their dissemination and virality, circumventing traditional mass media channels.
Further, as governments representing half the world’s population prepare to host potentially world-changing elections, they are forced to confront the new threat posed by AI deepfakes and other fabricated content, along with its potential to destabilise “the concept of truth itself.” Claims like Trump’s highlight the growing influence of AI in shaping public perception and the challenges it poses to discerning truth from fabrication (Verma & Vynck, 2024). Hence, we look at potential use cases and impacts in addition to those discussed earlier, and the possible actions that could be taken to curtail them.
- The January 2024 deepfake incident involving Biden foreshadows the onset of the “deepfake era” in US politics. Experts caution that deepfakes may be employed to disseminate false information, sway public perception, and disrupt elections.
- Given the cases of dodging responsibility, the popularisation of “fake news” to cover the truth will likely extend to “deep-fake news” in the future. As deepfakes become widespread, the public may have difficulty believing what their eyes or ears tell them, even when the information is real. In turn, the spread of deepfakes threatens to erode the trust necessary for democracy to function effectively (Chesney & Citron, 2019).
- This “believe-nothing-and-nobody” cycle threatens informed decision-making and democratic processes (Barnett, 2024). Without the public’s trust and confidence to discern between authentic and manipulated media, the foundations of informed decisions and democratic discourse will be jeopardised in subsequent elections. We see an example in the case involving the presidential candidate of the Labour Party of Nigeria, who denied a purported audio conversation between him and Bishop David Oyedepo, the founder of one of the largest churches in Nigeria. The audio was denied by the candidate and described as a deepfake by his party.
- Cervini and Carro (2024) reckoned that AI-generated images or videos used to target women in politics or marginalised groups have the potential to reduce the said group’s visibility and influence in political spheres and public discourse.
- There is also the risk of automated disinformation campaigns. Bad actors can use generative AI to automate the creation and dissemination of disinformation at scale, potentially overwhelming fact-checking efforts and manipulating online discourse.
- AI has the potential to be used to create fake profiles and accounts that impersonate legitimate organisations, individuals, or authoritative sources, undermining trust in authentic information.
The Carnegie Endowment for International Peace report “Countering Disinformation Effectively” discusses various strategies for countering disinformation, including counter-messaging, fact-checking, and platform regulation:
- Authoritative Source Prioritisation: Platforms should prioritise content from reputable news outlets and fact-checking organisations by granting them greater visibility and prominence.
- Targeted Counter-Messaging: Platforms need to create focused counter-messaging initiatives that directly tackle and refute particular cases of disinformation. This approach is effective in reaching the people who encounter and believe false assertions.
- Empowering Users: Platforms should provide users with tools and resources to identify and report disinformation. This includes media literacy campaigns to help users evaluate online content critically.
- Cross-Platform Collaboration: The report also highlighted the need for Platforms, governments, and civil society to collaborate to coordinate counter-messaging efforts and share best practices. The key is for platforms to take a proactive, multi-faceted, and collaborative approach to effectively counter the spread of disinformation.
Conclusion
This article highlights the growing concern about the potential impact of AI-generated fake content on elections. We saw how AI technology spreads disinformation during electoral processes, eroding public trust, swaying voter behaviour, and destabilising the concept of objective truth. The proliferation of AI-powered manipulation of media, such as deepfakes, also raises alarms about the ability to discredit candidates and spread misinformation about voting procedures, ultimately threatening core democratic principles. In conclusion, policymakers, technology companies, and the public urgently need to proactively address the challenges posed by AI-generated disinformation in elections to safeguard the integrity of the electoral process and preserve the health of democratic systems.
References
Bateman, J. & Jackson, D. (2024). Countering Disinformation Effectively: An Evidence-Based Policy Guide. Carnegie Endowment for International Peace. DOI: https://doi.org/10.15779/Z38RV0D15J
Barnett, P. T. (2024). The believe-nothing-and-nobody election cycle. https://thomaspmbarnett.substack.com/p/the-believe-nothing-and-nobody-election
Bender, M. L. S. (2023). Algorithmic Elections. Michigan Law Review, Vol. 121, No. 3, 2022
Brand, D. (2023). The use of AI in Elections. Eurac Research. https://www.eurac.edu/en/blogs/eureka/the-use-of-ai-in-elections
Chesney B. & Citron, D. (2019). Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security.
“C2PA Explainer,” Coalition for Content Provenance and Authenticity, accessed March 13, 2023, https://c2pa.org/specifications/specifications/1.0/explainer/Explainer.html.
Cervini, M. E. & Carro V. M. (2024). An Overview of the Impact of GenAI and Deepfakes on Global Electoral Processes. https://www.ispionline.it/en/publication/an-overview-of-the-impact-of-genai-and-deepfakes-on-global-electoral-processes-167584
Council of Europe (2022). Artificial Intelligence and Electoral Integrity. Concept Paper, European Conference of Electoral Management Bodies.
European Parliament (2023). Artificial Intelligence, Democracy and Elections. https://www.europarl.europa.eu/RegData/etudes/BRIE/2023/751478/EPRS_BRI(2023)751478_EN.pdf
Itodo, S. (2024). Artificial Intelligence and the Integrity of African Elections. https://www.idea.int/news/artificial-intelligence-and-integrity-african-elections
Jungherr, A. (2023). Artificial Intelligence and Democracy: A Conceptual Framework. Social Media + Society. https://doi.org/10.1177/20563051231186353
Kreps, S. & Kriner, D. (2023). Commentary: How Generative AI Impacts Democratic Engagement. https://www.brookings.edu/articles/how-generative-ai-impacts-democratic-engagement/
Kulkarni, A. (2024). Fact Check: Video Of Donald Trump Promising To Support Imran Khan Is Deepfake. https://news.abplive.com/fact-check/video-of-donald-trump-promising-to-support-imran-khan-is-a-deepfake-1675464
Microsoft Threat Intelligence. “Sophistication, scope and scale: Digital threats from East Asia increase in breadth and effectiveness.” Microsoft, September 2023. https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW1aFyW
Narayanan, K. M. (2024). Many Elections, AI’s dark dimension. https://www.thehindu.com/opinion/lead/many-elections-ais-dark-dimension/article67961995.ece
Schneier, B., Farrell, H., & Sanders, N. E. (2023). How Artificial Intelligence Can Aid Democracy. https://slate.com/technology/2023/04/ai-public-option.html
Shukla V. (2024), “Deepfakes and Elections: The Risk to Women’s Political Participation” Tech Policy Press. https://www.techpolicy.press/deepfakes-and-elections-the-risk-to-womens-political-participation/
Singh, Gyanendra & Vedrtnam, Ajitanshu & Sagar, Dheeraj. (2013). An Overview of Artificial Intelligence. 10.13140/RG.2.2.20660.19840.
Verma, P. & Vynck, G. (2024). AI destabilizing ‘the concept of Truth’ in 2024 Election. https://www.washingtonpost.com/technology/2024/01/22/ai-deepfake-elections-politicians/
Voice of America, (2024). ‘Deepfake’ of Biden’s Voice Called Early Examples of US Election Disinformation. https://learningenglish.voanews.com/a/deepfake-of-biden-s-voice-called-early-example-of-us-election-disinformation/7455392.html