How and Why “Fun” AI Generated Spam On Social Media Will Manipulate the 2024 Election
From:
Robert Siciliano -- Cyber Security Expert Speaker
For Immediate Release:
Dateline: Boston, MA
Friday, May 31, 2024

 

The primary intention behind artificial intelligence (AI) generated spam on social media appears to be financial gain through deceptive means. Facebook's algorithms are nudging users to visit, view, and like pages built entirely from AI-generated photos of people, places, and things that simply are not real.


The content includes too-good-to-be-true pictures of everyday people and their projects, which to most of us look "extraordinary" in nature. This might be a crudités platter arranged to look like the face of Jesus, someone crocheting an amazing child's sweater, or something as simple as a 103-year-old woman's birthday celebration. All of it is fake, and all of it is designed to engage us. And that engagement is 100% trickery.

AI Enables High Volume of Engaging Content

AI tools like text and image generators allow spammers to produce large volumes of visually appealing, engaging content cheaply and quickly. This AI-generated content draws attention and interactions (likes, comments, shares) from users, which signals social media algorithms to promote it even further.
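
To see how that feedback loop works, here is a minimal Python sketch of an engagement-weighted ranking heuristic. The weights, the age decay, and the numbers are invented purely for illustration; no platform publishes its real ranking formula, and this is not Facebook's algorithm.

    # Illustrative toy only: the weights and the decay below are made up,
    # not any platform's actual ranking formula.
    from dataclasses import dataclass

    @dataclass
    class Post:
        likes: int
        comments: int
        shares: int
        age_hours: float

    def engagement_score(post: Post) -> float:
        """Weighted interactions, decayed by the age of the post."""
        interactions = post.likes + 3 * post.comments + 5 * post.shares
        return interactions / (1 + post.age_hours)

    # A cheap AI-generated image post that racks up reactions quickly will
    # outrank a slower, genuine post under this kind of heuristic.
    spam_post = Post(likes=4000, comments=900, shares=1200, age_hours=6)
    genuine_post = Post(likes=300, comments=40, shares=10, age_hours=6)
    print(engagement_score(spam_post) > engagement_score(genuine_post))  # True

The exact math does not matter; the incentive does. Whatever racks up the most raw interactions gets shown to more people, and AI makes that kind of bait nearly free to mass-produce.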

Driving Traffic for Monetary Gain

The engaging AI posts often contain links or lead to external websites filled with ads, allowing spammers to generate ad revenue from the traffic. Some spammers use AI images to grab attention, then comment with spam links on those posts. The ultimate goal is to drive traffic to these ad-laden websites or to promote dubious products and services for profit. The same playbook can be aimed at the election process: fake websites stocked with photos, videos, and content designed to manipulate hearts and minds about why, and for whom, people should vote.

Circumventing Detection

AI allows spammers to generate unique content at scale, making it harder for platforms to detect patterns and filter out spam. As AI language models improve, the generated content becomes more human-like, further evading detection.
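
A rough Python sketch shows why. It assumes a naive filter that blocks only exact repeats of text it has already seen; the example posts are made up, and real platform defenses are considerably more sophisticated than this.

    # Naive duplicate filter: blocks a post only if its exact text was seen before.
    import hashlib

    seen_hashes = set()

    def is_repeat_spam(text: str) -> bool:
        """Flag a post only when its normalized text has been posted before."""
        digest = hashlib.sha256(text.lower().strip().encode()).hexdigest()
        if digest in seen_hashes:
            return True
        seen_hashes.add(digest)
        return False

    original = "Grandma turned 103 today, like and share her amazing cake!"
    paraphrase = "She just hit 103 years young, please share her incredible cake!"

    print(is_repeat_spam(original))    # False (first sighting, now recorded)
    print(is_repeat_spam(original))    # True  (an exact repost is caught)
    print(is_repeat_spam(paraphrase))  # False (same scam, fresh AI wording, missed)

Because a language model can rewrite the same pitch endlessly, every copy looks "new" to a check like this, and that is exactly the gap spammers exploit.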

Spreading Misinformation

While profit is the primary motive behind social media spam, AI-generated spam can also be leveraged to spread misinformation and false narratives on social media. Automated AI bots can amplify misinformation campaigns by flooding platforms with synthetic content.

In essence, AI provides spammers with powerful tools to create deceptive, viral content that circumvents detection while enabling them to monetize through dubious means like ad farms, product promotion, or even misinformation in election campaigns.

And spreading misinformation is exactly how AI-generated spam "socializes" the process of election manipulation. Over decades and decades, we have come to believe most, if not all, of what we see and read, and so we slide ever deeper into the rabbit hole of fakery.

Joe Biden Deepfake in New Hampshire

In May 2024, a man was fined $6 million by the Federal Communications Commission for creating and distributing a deepfake audio clip that falsely portrayed President Joe Biden making controversial statements to New Hampshire voters.

The man used AI technology to generate a synthetic version of Biden's voice, making it appear the President said things he never actually said. The deepfake audio was pushed out in robocalls just days before the state's presidential primary and quickly spread across social media.

The FCC determined the man's actions constituted an "expensive virtual disinformation campaign" aimed at undermining the election process. His $6 million fine is the largest the agency has levied for a violation involving disinformation and deepfakes intended to sway voters.

This case highlights the growing threat of deepfake technology being weaponized to mislead the public and interfere in U.S. elections. It has prompted calls for stricter regulations around the creation and dissemination of synthetic media.

Is There Any Way to Stop It?

There are several measures that can be taken to prevent AI from being used to spread misinformation during elections:

AI System Design

· Implement robust fact-checking and verification processes in AI systems to ensure they do not generate or amplify false or misleading information.

· Train AI models on high-quality, fact-based data from reliable sources to reduce the risk of learning and propagating misinformation.

· Build in safeguards and filters to flag potential misinformation and disinformation attempts (see the sketch below).
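
As a rough illustration of that last bullet, here is a minimal Python sketch of a pre-publication screening step. The phrase list, the function name, and the example text are all hypothetical stand-ins; a production safeguard would rely on trained classifiers and human fact-checkers rather than simple keyword matching.

    # Hypothetical screening step: AI output is checked before it is published.
    # The phrase list is a crude stand-in for a real misinformation classifier.
    SUSPECT_ELECTION_CLAIMS = (
        "election is postponed",
        "vote by text",
        "polling places are closed",
    )

    def screen_output(text: str) -> dict:
        """Return the text plus a flag telling the pipeline to hold it for review."""
        lowered = text.lower()
        hits = [claim for claim in SUSPECT_ELECTION_CLAIMS if claim in lowered]
        return {"text": text, "hold_for_review": bool(hits), "matched_claims": hits}

    result = screen_output("Breaking: the election is postponed until next week.")
    print(result["hold_for_review"])  # True -> routed to fact-checking, not posted

The design point is simple: flagged output gets held for human review instead of being published automatically.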

Regulation and Oversight

· Enact laws and regulations governing the use of AI in elections and political campaigns to prohibit manipulative tactics.

· Establish independent oversight bodies to audit AI systems for fairness, accuracy, and resistance to misinformation.

Public Awareness

· Increase public education about AI capabilities and limitations to raise awareness of how artificial intelligence and deepfakes can be misused.

· Promote media literacy to help people identify misinformation and verify information sources.

Collaboration

· Foster collaboration between AI developers, election officials, fact-checkers, and civil society to share best practices.

· Support research into AI-powered misinformation detection and prevention methods.

Ultimately, a multi-stakeholder approach involving responsible AI development, strong governance, public engagement and cross-sector partnerships will be crucial to mitigating the risks of AI-enabled misinformation during elections.

Robert Siciliano CSP, CSI, CITRMS is a security expert and private investigator with 30+ years of experience, a #1 best-selling Amazon author of five books, and the architect of the CSI Protection certification, a Cyber Social Identity and Personal Protection security awareness training program. He is a frequent speaker and media commentator, and the CEO of Safr.Me and Head Trainer at ProtectNowLLC.com.

News Media Interview Contact
Name: Robert Siciliano
Title: Cyber Security Expert Speaker
Group: Cyber Security Expert Speaker
Dateline: Boston, MA United States
Direct Phone: (617)329-1182