Taylor Swift's endorsement of Harris makes headlines, but her AI concerns are real

Moments after Wednesday's presidential debate, superstar artist Taylor Swift came out with her most anticipated new release since "The Tortured Poets Department": an endorsement.

But before Swift announced her support for Vice President Kamala Harris, there was another message: a warning about the dangers of AI. 

The warning comes weeks after former President Donald Trump posted an image purporting to show Swift, clad in Uncle Sam-style garb, urging people to vote for him. One problem: the image was AI-generated and fake.

"Recently I was made aware that AI of ‘me’ falsely endorsing Donald Trump’s presidential run was posted to his site. It really conjured up my fears around AI, and the dangers of spreading misinformation," Swift wrote on Instagram. "It brought me to the conclusion that I need to be very transparent about my actual plans for this election as a voter. The simplest way to combat misinformation is with the truth." 

In a recent interview on "The Final 5" with Jim Lokay, Jessica Hetrick, Vice President of Federal Services at Optiv + ClearShark, said AI has amplified both opportunities and risks, especially as the 2024 election approaches.

"AI, in this generation, is such a beneficial thing in so many ways," Hetrick noted. "But it’s also adding so many new capabilities and enhancements to malicious cyber attacks, disinformation, and misinformation campaigns." 

She emphasized that AI is being weaponized to manipulate public perception, shaping narratives through fabricated articles and web content. This, she explained, makes it harder for people to trust the authenticity of what they consume online.

Hetrick pointed out that AI has blurred the line between fact and fabrication, making it increasingly difficult to tell what's real.

"The quality of AI deep fakes is getting so good that it’s sometimes hard to differentiate. We’re seeing fake messages, articles, satirical messaging, and robo calls all increasing the threat landscape," she said.

One of the trends Hetrick highlighted is the rise of AI-generated content aimed at influencing elections. She cited examples of AI-generated audio messages designed to dissuade people from voting, including a robocall made to New Hampshire voters earlier this year that purported to be from President Joe Biden and urged them not to vote.

"This isn’t just limited to the U.S.," she warned. "It’s broad, and it’s going to continue to grow."

And while Hetrick hopes Americans heed cybersecurity experts' calls for vigilance in these uncharted waters, she acknowledged that the public awareness raised by figures like Swift is critical.

"We’re not going to argue the political stance of any celebrity figures, but the reality is we have to think very critically and ask the right questions," she said.

Hetrick also stressed the need for policies and regulations to keep pace with AI advancements. 

"AI and machine learning can be leveraged for good, like increasing threat detection and monitoring capabilities," she said. "But we need decision-makers to focus on policies that critically examine how AI is used, especially in elections."

With deepfakes and other forms of AI manipulation gaining traction, Hetrick urged the public to question the sources of the information they consume.

"Ask the right questions. Does it promote an emotional response? Is it exaggerated or distorted? Are there links embedded? Is it clickbait?" she said.

As the conversation wrapped up, Hetrick reflected on the global impact of AI on election integrity, emphasizing the need for greater awareness. 

"I think awareness drives action," she added. "But it also provokes critical thinking and how we approach the information we’re consuming."
