As Election Day approaches, concerns are growing about artificial intelligence being used to manipulate videos and audio clips to mislead the public. A manipulated video mimicking the voice of Vice President Kamala Harris has gained attention after being shared by tech billionaire Elon Musk on social media platform X. In the video, an impersonator claims Harris is a “diversity hire” and questions her ability to run the country. The video, which uses authentic past clips of Harris and retains her campaign branding, raises questions about the regulation of AI tools in politics and how to handle content that blurs the lines of appropriate use, especially when it comes to satire.

Although the user who originally posted the video disclosed that it was a parody, Musk’s post on X, which has been viewed more than 123 million times, carried no such label. Some users have questioned whether Musk’s post violates X’s policies against sharing synthetic, manipulated, or out-of-context media that could deceive or confuse people. While the policy includes exceptions for memes and satire, there are concerns about the potential harm such content can cause by spreading misinformation or influencing voters. Experts in AI-generated media confirmed that much of the fake ad’s audio was produced using AI technology, highlighting the power of generative AI and deepfakes to create convincing impersonations.

Hany Farid, a digital forensics expert, emphasized that generative AI companies must ensure their tools are not used in ways that could harm individuals or democracy. Rob Weissman, co-president of the advocacy group Public Citizen, argued that many people could be fooled by the video because it feeds into preexisting themes about Harris and could be perceived as real. The absence of federal regulation of AI in politics has left the rules guiding its use largely to states and social media platforms; more than one-third of states have passed laws regulating AI in campaigns and elections.

Similar instances of generative AI deepfakes being used to influence voters with misinformation, humor, or both have been reported in the U.S. and elsewhere. Congress has yet to pass legislation on AI in politics, and federal agencies have taken only limited steps to address the issue. Social media companies such as YouTube have implemented policies requiring users to disclose when they have used generative artificial intelligence to create videos, or face suspension. The concerns raised by the manipulated video of Harris underscore the need for greater regulation and oversight of AI tools in politics to prevent the spread of misleading or deceptive content that could affect elections and democracy.