As the intensity of a critical election year builds, the greatest threat isn't necessarily the security of ballot machines. Instead, it lies in misinformation, amplified by social media, biased algorithms, and the spread of fake news. During the 2020 U.S. presidential election, researchers found that 19% of Americans encountered misinformation about the candidates, and that figure is expected to rise in the upcoming election. The advent of generative AI has heightened the threat, especially with the emergence of sophisticated deepfakes. Voters, campaign workers, and media professionals must recognize these risks and take proactive measures, including cybersecurity training and effective detection tools, to protect the integrity of our elections.

Deepfakes: Easy to Create, Easy to Believe

Deepfakes, artificially generated media in which someone appears to say or do something they never did, have become alarmingly easy to produce. With advanced AI tools and freely available apps, almost anyone can create a believable deepfake video or audio sample, and the results are increasingly convincing. The impact of this threat goes beyond election security: deepfakes can spread fake news, enable blackmail, and serve as a tactic in advanced phishing campaigns. For example, a campaign email could carry a malicious link labeled something like, "Click here to see a video from the candidate." That link could easily look realistic enough for the average citizen to click, downloading malware that compromises the device along with the victim's online privacy and identity.

Educating voters to be skeptical of unusual communications is crucial. Everyone, from the voting booth to campaign advisors, should feel empowered to question the authenticity of what they see or hear, especially if it seems out of character or unrealistic.
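To make the link-spoofing tactic concrete, here is a minimal Python sketch showing how the text an email link displays can differ entirely from where it actually points. The sample HTML, the domains, and the LinkAuditor helper are all hypothetical and invented for illustration; this is a teaching aid, not a phishing detector.

```python
# Illustrative only: the sample email HTML and all domains below are invented.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collect each anchor's real destination alongside the text it displays."""

    def __init__(self):
        super().__init__()
        self._pending_href = None
        self.links = []  # list of (display_text, href) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._pending_href = dict(attrs).get("href")

    def handle_data(self, data):
        if self._pending_href is not None:
            self.links.append((data.strip(), self._pending_href))
            self._pending_href = None

# A hypothetical snippet from a phishing email: the visible text names one
# domain, while the href quietly points somewhere else.
email_html = (
    '<a href="http://candidate-videos.typo-example.net/update.exe">'
    "Click here to see a video from the candidate (janedoe2024.example.org)"
    "</a>"
)

auditor = LinkAuditor()
auditor.feed(email_html)
for display_text, href in auditor.links:
    print(f"displayed : {display_text!r}")
    print(f"points to : {urlparse(href).netloc}")
    # A mismatch between the two is exactly the red flag the article
    # urges voters to look for before clicking.
```

Mail clients and secure email gateways perform far more sophisticated versions of this check, but the underlying skepticism, asking whether a link really goes where it claims, is something any voter can apply by hovering over a link before clicking.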
Influence on Voter Decisions

Deepfakes are powerful because they exploit the multifaceted nature of election issues. Candidates debate personal topics such as healthcare, the economy, and education, leaving each issue open to misinterpretation. A well-made deepfake can sway voter opinions by presenting false statements or actions from a candidate on hot-button issues. For instance, a malicious actor could produce a video showing a candidate making disparaging remarks about K-12 educators, potentially causing supporters with a vested interest in education to rethink their vote. Today's social media-driven political environment makes such scenarios far less far-fetched than they might seem.

Imagine a video showing a prominent candidate declaring an end to a widely supported policy. Without proper verification, it could spread rapidly and simultaneously across platforms such as Facebook, Instagram, and X, and that ubiquity alone would lend credence to the video, however fake it is. To combat this, campaign leaders can set up rapid response teams to verify and debunk misinformation swiftly. Social media platforms should enhance their algorithms to detect and flag deepfakes promptly. Training voters to cross-check social media claims against multiple credible sources can also reduce the spread of false information.

Misinformation Targeting

Deepfakes can be tailored to exploit the fears and biases of specific demographic groups, potentially swaying public opinion against a candidate. By targeting individuals based on age, race, or orientation, malicious actors can craft videos designed to strike at a target's deepest fears, making deepfakes a potent tool for misinformation. For example, a fake video could show a candidate making derogatory comments about a particular community, aiming to alienate that group's support.

This manipulation isn't limited to elections; it can affect organizations, businesses, and even celebrities. A deepfake of a CEO making negative statements about their company's future, for instance, could harm stock prices and economic stability.

Detection and Prevention

Given the difficulty of spotting deepfakes, everyone, from average citizens to media professionals, must stay vigilant. Traditionally, the media plays a crucial role in verifying reported information, and campaign organizations can raise awareness by urging the public and technology companies to review and filter unverified videos. However, the average person must also bear responsibility for vetting the campaign ads, videos, and other media they encounter on social platforms and elsewhere. Just as people have learned to be cautious of phishing scams, it is becoming equally necessary to question the authenticity of the photos and videos they see. An organization is only as strong as its weakest link, meaning security is ultimately everyone's job; the same can be said for monitoring deepfakes, but on a far more expansive scale.

Legislative Measures

To combat the misuse of AI, several nations are developing or rolling out protective legislation. In the US, the Federal Artificial Intelligence Risk Management Act of 2023 directs federal agencies to follow guidelines for managing AI-related risks. States like California and New York are also enacting laws to regulate AI systems and ensure ethical conduct. Internationally, the EU AI Act represents the world's first comprehensive AI law, setting regulations that vary with the risk level of an AI system. These legislative efforts are crucial in creating a framework to prevent the malicious use of AI technologies like deepfakes, and encouraging international cooperation on AI regulation can help create a unified approach to combating this and other rapidly proliferating threats.

Deepfake Detection Tools

Beyond legislation, security leaders can develop and leverage practical tools to help organizations and individuals differentiate between real and fake media. These tools use machine learning to analyze videos for manipulation indicators such as irregularities in lighting, shadows, and facial movements. Popular options include Intel's FakeCatcher, Microsoft's Video Authenticator, and Deepware. Public awareness campaigns are also essential to educate voters about deepfakes and offer tips on identifying them.

While these tools can be effective, deepfake technology is constantly evolving, and some deepfakes will still evade detection. Human expertise therefore remains a critical component: human-led threat-hunting teams and trained professionals can analyze videos for inconsistencies in lighting, skin texture, blinking patterns, and other subtle signs of manipulation.
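As a toy illustration of the blink-pattern signal mentioned above, the sketch below estimates a speaker's blink rate using OpenCV's stock Haar cascades. Treat it as a rough heuristic under loudly stated assumptions (a single face of interest per frame, and "no detected eyes" standing in for a closed-eye frame); it is not how FakeCatcher, Video Authenticator, or any other named product works, and real detectors combine many such signals with trained models.

```python
# A rough, assumption-laden heuristic: early deepfakes often blinked
# unnaturally rarely, so an implausible blink rate can flag a clip for
# closer human review. Requires opencv-python; "clip.mp4" is a placeholder.
import cv2

FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
EYE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def estimate_blink_rate(video_path: str) -> float:
    """Approximate blinks per minute of on-screen face time."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS is unknown
    face_frames = 0
    blinks = 0
    prev_closed = False
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = FACE_CASCADE.detectMultiScale(gray, 1.3, 5)
        if len(faces) == 0:
            continue  # assumption: only frames with a visible face count
        face_frames += 1
        x, y, w, h = faces[0]  # assumption: one face of interest per frame
        upper_face = gray[y:y + h // 2, x:x + w]  # eyes sit in the upper half
        eyes = EYE_CASCADE.detectMultiScale(upper_face, 1.1, 4)
        closed = len(eyes) == 0  # crude proxy for "eyes closed this frame"
        if closed and not prev_closed:
            blinks += 1  # count each open-to-closed transition as one blink
        prev_closed = closed
    cap.release()
    minutes_of_face_time = face_frames / fps / 60.0
    return blinks / minutes_of_face_time if minutes_of_face_time else 0.0

if __name__ == "__main__":
    rate = estimate_blink_rate("clip.mp4")
    # People typically blink around 15 to 20 times per minute; a clip far
    # outside that range is worth the human scrutiny the article recommends.
    print(f"estimated blink rate: {rate:.1f} per minute")
```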
As we navigate an era in which misinformation can significantly impact elections, understanding and addressing the threat of deepfakes is paramount. Voters, campaign workers, media professionals, and technology companies must work together to enhance awareness and verification processes. With robust legislative frameworks and advanced detection tools, combined with vigilant citizens, we can mitigate the risks posed by deepfakes and protect the integrity of our elections.

A version of this blog originally appeared in CPO Magazine.