
Misinformation in the 2024 US Election: The role of AI

With the 2024 US election, misinformation has once again taken center stage, but this time the challenge has grown more complex with the involvement of artificial intelligence (AI).



From manipulated images and deepfake videos to AI-generated texts that blur the lines between truth and fiction, the US election is highlighting both the power and potential dangers of AI-driven content. AI has enabled new methods of spreading misinformation, making it easier to produce convincing but misleading content on a large scale and with a low budget. Key advancements that contribute to this are generative AI models that can create images, videos, and text that mimic real-world scenarios.

Deepfake videos and images

Deepfake technology, which uses AI to create highly realistic but entirely fabricated videos, has reached a level of sophistication that makes it difficult to detect without specialist tools. During elections, such videos are used to alter candidates’ speeches, create false supporters, or portray candidates in compromising situations. These videos can circulate rapidly on social media, making it difficult for viewers to discern the real from the fake. AI is also used to create fake photos to fool opponents and ‘bring them to their senses’, as one Trump supporter put it.

The Dilley Meme Team, led by Brenden Dilley, is a collective of internet users dedicated to creating and disseminating political memes in order to influence public opinion, particularly in swing states where the vote gap is often very small. ‘You are the campaign, don’t forget it’ reflects their strategy of addressing voters directly through hard-hitting viral memes. Their ‘meme factory’ focuses on crucial states such as Arizona, Nevada, Wisconsin, Michigan, Pennsylvania, North Carolina and Georgia, and makes extensive use of social networks, particularly X (formerly Twitter) and Facebook, to reach a large and strategic audience.

Donald Trump and Brenden Dilley, “La Fabrique à mensonges: États-Unis: une élection sous IA”, FranceTV series

AI-generated fake news articles

Large language models are being used to write convincing articles that contain misleading or entirely fabricated information about the election. Some actors are using these tools to produce articles that mimic the tone and style of credible news sources, creating fake stories about candidates’ policies or personal lives. With AI’s ability to generate variations of the same story, disinformation campaigns can flood social media platforms with different but similarly misleading narratives.

Synthetic voices and audio manipulation

In addition to images and videos, AI can now create synthetic voices that sound exactly like real people. This capability has been exploited to create audio clips in which candidates are made to say things they never actually said. For example, AI-generated audio can simulate the voice of a candidate endorsing a controversial point of view or revealing ‘secret plans’, with the potential to influence public perception. In response, a well-known AI platform has banned the creation of voices that resemble those of candidates in the United States; no equivalent ban exists in Europe. Supporters can also exploit a celebrity’s likeness to mislead the public and co-opt his or her audience for votes, as in the case of Taylor Swift, who recently gave her support to candidate Kamala Harris.

Social media platforms and fact-checking organizations have stepped up their efforts to combat misinformation, but AI-generated content poses new challenges.

The US government and international organizations are aware of the impact AI-generated misinformation could have on the election process. New regulations and initiatives are in development to ensure election integrity, including legislators pushing social media companies to be more transparent about their content moderation policies. Moreover, investment in AI detection tools is growing, with both private companies and government agencies developing technology to identify synthetic content.

While technology and regulation are critical, digital literacy is also essential in combating misinformation. Educating voters about the realities of AI-generated misinformation can empower them to question what they see and hear online. Initiatives in media literacy, such as school and community programs, are increasingly focused on helping individuals recognize and critically evaluate the information they encounter.

See also the related CSactu article on Operation Matryoshka: between political destabilization and disinformation.
