TrueMedia, a Seattle-based nonpartisan nonprofit, is using AI to detect deepfakes and combat disinformation. The organization has made its technology available to the public as a free, web-based tool that analyzes images, videos, and audio files from social media posts for evidence of manipulation. The tool, first released to journalists and fact-checkers earlier this year, is now accessible to everyone ahead of the U.S. elections. It combines TrueMedia's own AI models with other deepfake-detection tools to provide real-time analysis of content, helping users verify the authenticity of what they see online.
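To make the workflow concrete, here is a minimal sketch of how a client might submit a social media post to a web-based detection service of this kind. The endpoint URL, API key, request fields, and response format below are hypothetical placeholders chosen for illustration; they are not TrueMedia's published API.

```python
# Minimal sketch of querying a web-based deepfake-detection service.
# All names here (endpoint, key, response fields) are hypothetical,
# not TrueMedia's actual API.
import requests

DETECTION_ENDPOINT = "https://api.example.org/v1/analyze"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"  # hypothetical credential


def analyze_media(post_url: str) -> dict:
    """Submit the URL of a social media post and return the service's verdict."""
    response = requests.post(
        DETECTION_ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"url": post_url},  # post containing an image, video, or audio clip
        timeout=30,
    )
    response.raise_for_status()
    # Example (hypothetical) response: {"verdict": "likely manipulated", "score": 0.92}
    return response.json()


if __name__ == "__main__":
    result = analyze_media("https://example.com/some-viral-video")
    print(result)
```

In practice, a service like this would run the submitted media through one or more detection models and return an aggregate assessment; the single score shown here is only a simplified stand-in for that output.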
Oren Etzioni, TrueMedia's founder and a veteran computer scientist and AI researcher, emphasized the importance of making deepfake detection available to the public, especially during an election cycle rife with disinformation. TrueMedia aims to give individuals the tools they need to identify and push back against manipulated online content. A recent incident in which AI-generated images of pop star Taylor Swift spread widely online underscored the dangers of AI-generated misinformation and raised fresh concerns about technology's role in spreading fake news.
TrueMedia has identified a range of deepfakes during major global events, demonstrating the effectiveness of its technology in detecting manipulated content. The organization helped identify several AI content farm accounts that published thousands of videos and accumulated a large number of views. By applying its deepfake detection tools at scale, TrueMedia helps counter the spread of manipulated media and gives users a better chance of finding reliable, accurate content online.
To raise awareness of the risks posed by deepfakes, TrueMedia released a quiz that tests whether people can spot fake images, videos, and audio clips, gauging their "deepfake IQ" and their ability to tell authentic content from manipulated content. Etzioni warned against the widespread use of deepfakes to manipulate voters, describing it as a form of "disinformation terrorism" that is now accessible to anyone. A collaboration with Microsoft's AI for Good Lab further strengthens TrueMedia's detection capabilities, underscoring how AI itself can be leveraged to counter the harms of manipulated content.
Microsoft President Brad Smith praised TrueMedia's efforts to use AI against the rise of deepfakes, stressing the importance of using good AI to counteract bad AI. As deepfakes grow more sophisticated and harder to spot, tools like TrueMedia's give people a practical way to check the accuracy of online content. By democratizing access to deepfake detection, TrueMedia is equipping the public to push back against disinformation and assess the credibility of information shared online.
Overall, TrueMedia's use of AI for deepfake detection represents a proactive approach to combating misinformation in the digital space. By giving the public accessible tools to verify the authenticity of online content, the organization plays an important role in safeguarding the integrity of information shared on social media platforms. As deepfakes continue to threaten the reliability of online information, initiatives like TrueMedia's partnership with Microsoft's AI for Good Lab show the kind of collaborative effort this growing challenge demands.