The FINANCIAL — Microsoft announced two new technologies to combat disinformation, new work to help educate the public about the problem, and partnerships to help advance these technologies and educational efforts quickly. Video Authenticator can analyze a still photo or video to provide a percentage chance, or confidence score, that the media has been artificially manipulated. In the case of a video, it can provide this percentage in real time on each frame as the video plays. The company also announced new technology that can both detect manipulated content and assure people that the media they’re viewing is authentic.
Deepfakes came to prominence in early 2018 after a developer adapted cutting-edge artificial intelligence techniques to create software that swapped one person’s face for another. The process worked by feeding a computer lots of still images of one person and video footage of another. Software then used this to generate a new video featuring the former’s face in place of the latter’s, with matching expressions, lip-sync and other movements. Since then, the process has been simplified, opening it up to more users, and now requires fewer photos to work. Some apps require only a single selfie to substitute a film star’s face with the user’s in clips from Hollywood movies. But there are concerns that the process can also be abused to create misleading clips, the BBC reported.
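To make the mechanics above concrete, here is a minimal, illustrative sketch of the shared-encoder, two-decoder autoencoder design that early face-swap software popularized. It is an assumption about the general technique, not the code of any app mentioned in this article; the class name, 64x64 input size and 256-dimensional latent space are all hypothetical choices.

```python
# Illustrative sketch of the classic face-swap design: one shared encoder
# learns identity-agnostic face structure; a separate decoder per identity
# learns to render that person's face. Swapping a face means encoding a
# frame of person B and decoding it with person A's decoder.
import torch
import torch.nn as nn

class FaceSwapAutoencoder(nn.Module):
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        # Shared encoder (assumes 64x64 RGB inputs; 64 -> 32 -> 16).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, latent_dim),
        )
        # One decoder per identity, trained on that person's images.
        self.decoder_a = self._make_decoder(latent_dim)
        self.decoder_b = self._make_decoder(latent_dim)

    @staticmethod
    def _make_decoder(latent_dim: int) -> nn.Module:
        return nn.Sequential(
            nn.Linear(latent_dim, 128 * 16 * 16),
            nn.Unflatten(1, (128, 16, 16)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def swap(self, frame_of_b: torch.Tensor) -> torch.Tensor:
        # Decode B's expression and pose through A's decoder:
        # the output shows A's face with B's movements.
        return self.decoder_a(self.encoder(frame_of_b))
```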
Microsoft stated that disinformation comes in many forms, and no single technology will solve the challenge of helping people decipher what is true and accurate. Microsoft has been working on two separate technologies to address different aspects of the problem. One major issue, the company said, is deepfakes, or synthetic media: they can make people appear to say things they didn’t say or to be in places they weren’t, and the fact that they’re generated by AI that can continue to learn makes it inevitable that they will beat conventional detection technology. However, in the short run, such as during the upcoming U.S. election, advanced detection technologies can be a useful tool to help discerning users identify deepfakes.
Microsoft’s tech looks at the tiny imperfections at the blending boundary of a deepfaked image, such as subtle fading or grayscale elements that may not be detectable by the human eye. The company also said it was releasing tech that will help flag manipulated content. It works by allowing reliable content producers, such as news organizations, to affix hashes (a kind of digital watermark) to their content. Then, a browser extension called a reader checks for hashes on digital content and matches them against the originals to determine whether the content has been altered, according to Business Insider.
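A minimal sketch of the hash-matching idea, assuming a plain SHA-256 digest stands in for the watermarking format (which the article does not specify): the publisher registers a digest for each asset, and the reader recomputes it to detect any alteration of the bytes.

```python
# Hypothetical sketch, not Microsoft's implementation: a publisher records
# a SHA-256 digest of each asset, and a "reader" recomputes the digest to
# detect tampering. Any change to the content changes the digest.
import hashlib

def publish_hash(content: bytes) -> str:
    """Publisher side: compute a digest to register alongside the asset."""
    return hashlib.sha256(content).hexdigest()

def reader_check(content: bytes, registered_hash: str) -> bool:
    """Reader side: recompute and compare against the registered digest."""
    return hashlib.sha256(content).hexdigest() == registered_hash

original = b"...video bytes..."
registered = publish_hash(original)
assert reader_check(original, registered)              # untouched: matches
assert not reader_check(original + b"x", registered)   # altered: flagged
```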
The tool, dubbed Microsoft Video Authenticator, is rolling out ahead of the US presidential election and is meant to help stop the spread of digitally altered videos. Video Authenticator will be able to “analyze a still photo or video” and give it a “confidence score” that provides the percentage chance that it has been artificially manipulated. When it is analyzing a video, the tool “can provide this percentage in real-time on each frame as the video plays,” which will allow it to identify exactly which parts of a clip have been manipulated, The New York Post wrote.
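As an illustration of what per-frame scoring could look like, here is a hedged sketch: `score_frame` is a hypothetical placeholder for the detector itself, and the loop simply reports a confidence for each decoded frame as the video plays, as the article describes.

```python
# Hypothetical sketch of per-frame confidence scoring, not Video
# Authenticator's code. `score_frame` stands in for whatever classifier
# is used; here it returns a dummy score.
import cv2  # OpenCV, used only for frame decoding

def score_frame(frame) -> float:
    """Placeholder: a real detector would inspect blending boundaries
    and fading/grayscale artifacts. This dummy always returns 0.0."""
    return 0.0

def scan_video(path: str) -> None:
    cap = cv2.VideoCapture(path)
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        confidence = score_frame(frame)
        # Report, frame by frame, which parts of the clip look manipulated.
        print(f"frame {frame_idx}: {confidence:.0%} chance manipulated")
        frame_idx += 1
    cap.release()
```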
Microsoft has also launched another new technology that can detect manipulated content and assure people that what they are viewing is authentic. Content producers will be able to add a certificate attesting that the content is genuine, and the certificate travels with the content wherever it goes online. A browser extension will be able to read the certificate and tell people whether they are seeing the authentic version of a video. Even with new technology to ‘catch and warn’ people about deepfakes, it is impossible to prevent everyone from being fooled or to stop every deepfake from getting through. Microsoft has also worked with the University of Washington to improve media literacy and help people sort disinformation from genuine facts. Microsoft has launched an interactive quiz for voters in the upcoming US election to help them learn about synthetic media, develop critical media literacy skills and gain a deeper understanding of the impact deepfakes can have on democracy, Daily Mail reported.
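One way to picture the certificate mechanism is as a digital signature over the content’s digest. The following is an assumption-laden illustration using Ed25519 signatures from the `cryptography` package, not Microsoft’s actual scheme: the producer signs the content, and a reader (such as a browser extension) verifies it with the producer’s public key.

```python
# Hedged sketch of the "certificate travels with the content" idea.
# Producer signs a digest of the bytes; any later alteration makes
# signature verification fail.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Producer side: sign the content so the "certificate" can travel with it.
producer_key = ed25519.Ed25519PrivateKey.generate()
content = b"...authentic video bytes..."
certificate = producer_key.sign(hashlib.sha256(content).digest())

# Reader side (e.g. a browser extension): verify with the public key.
def is_authentic(content: bytes, certificate: bytes,
                 public_key: ed25519.Ed25519PublicKey) -> bool:
    try:
        public_key.verify(certificate, hashlib.sha256(content).digest())
        return True
    except InvalidSignature:
        return False

pub = producer_key.public_key()
print(is_authentic(content, certificate, pub))         # True: unaltered
print(is_authentic(content + b"!", certificate, pub))  # False: tampered
```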
Lawmakers and experts have raised alarms about the emergence of manipulated media, especially media forged using artificial intelligence and machine learning, as a tool for spreading political misinformation. The technology has not advanced far enough for forged videos to be indistinguishable from real ones, but even imperfect edits can be damaging. Simpler edits, such as slowing down footage or selective cropping, are also being deployed by political campaigns to score points on social media. In theory, wider use of digital hashes could address those fakes, according to The Hill.
It is interesting to note that, in line with its congressional mandate, the GEC (the U.S. State Department’s Global Engagement Center) released a special report that provides an overview of Russia’s disinformation and propaganda ecosystem. The report outlines the five pillars of that ecosystem and how they work together to create a media multiplier effect. In particular, it details how the tactics of one pillar, proxy sources, interact with one another to elevate malicious content and create an illusion of credibility.