What is deep fake news?
Deep fake news refers mainly to artificial audiovisual content – most notably videos and audio clips – produced using machine learning and AI technology. A well-known example is the now-infamous deep fake video that appears to show Barack Obama badmouthing Donald Trump: its producers used AI trained on hours of footage to simulate Obama, then added a voiceover by Jordan Peele. Right now it’s fairly easy to spot a fake video, but experts reckon that within just a couple of years it will be very difficult to distinguish real from fake.
What are the risks of deep fake news?
Deep fake news really took off last winter when the Reddit user “deepfakes” released an algorithm for producing fake porn videos using machine learning and open-source code. But the risks of deep fake news go way beyond the porn panic that initially swept Reddit. In this era of pointing fingers – at Trump, at the Russians, at the Middle East – disinformation campaigns that make use of deep fake news could cause serious political turmoil. We’ve already seen how social media can be leveraged to sway elections. Deep fake news can take this one step further, using inflammatory fabricated stories to spark political unrest within and across borders. We would no longer be able to trust anything we see. And notably, fake news – thanks to its sensationalist nature – seems to spread much faster than real news.
Is there any point in putting a brake on AI innovation of deep fake news?
Although deepfakes was quickly banned from Reddit, the technology was picked up by several other sites and is now easily available to anyone who wants to produce a fake video. This example shows how futile it is to try to stop people from developing deep fake technology. So if we can’t stop it, how can we regulate it?
How can we counteract the spread of deep fake news?
Does the responsibility for quashing fake news lie with the viewer, the disseminator or the state? How do we fight fake news without sacrificing independent news sources or freedom of speech? Professor Madeleine de Cock Buning suggests a variety of methods to fight against fake news while avoiding all-out censorship or totalitarian control of news sources. She points to fact-checking technology, transparency in algorithms, clear identification of sources and sponsored content, and finally legislation as ways to minimize the impact of fake news.
There are plenty of tools already out there, albeit imperfect ones. Websites such as Snopes have traditionally been used to debunk fake news stories, but these are slow, and studies have found that people are often unlikely to change their opinions based on the findings of these websites. Fact-checking “guardians” and tools such as Full Fact are being developed as a quicker way to combat fake news stories, although, as Lucas Graves points out, it will be difficult to fully automate fact-checking. Meanwhile, researchers recently published a study on the effectiveness of assessing the reliability of news sources, rather than individual news articles, as a way of verifying accuracy. Some have also floated the idea of using digital signatures to authenticate images as having been recorded on a specific device.
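The digital-signature idea can be sketched as follows: the capturing device signs a cryptographic hash of the image at capture time, and anyone can later verify that the footage is bit-for-bit what the device recorded. The sketch below uses an HMAC with a hypothetical device key as a simplified stand-in for a true public-key signature scheme; a real camera would instead sign with a private key held in secure hardware, and verifiers would use the matching public key.

```python
import hashlib
import hmac

# Hypothetical device key, invented for this sketch. In practice this
# would be a private signing key embedded in the camera's secure hardware.
DEVICE_KEY = b"secret-key-burned-into-device"

def sign_image(image_bytes: bytes) -> str:
    """Sign the SHA-256 digest of an image at capture time.

    HMAC is used here as a simplified stand-in for a public-key
    signature scheme such as Ed25519.
    """
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, signature: str) -> bool:
    """Check that the image still matches its capture-time signature."""
    return hmac.compare_digest(sign_image(image_bytes), signature)

original = b"\x89PNG...raw pixel data..."
tag = sign_image(original)
print(verify_image(original, tag))          # the unaltered image verifies
print(verify_image(original + b"x", tag))   # any tampering breaks the check
```

Even a one-bit edit to the image changes its hash, so the signature no longer verifies – which is exactly the property that would make manipulated footage detectable.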
Using AI to detect deep fake news
Currently, social media websites are employing human fact checkers in the hunt for fake news. But artificial intelligence may also be able to play a role in monitoring itself. Facebook recently acquired the London-based startup Bloomsbury AI and has developed a machine-learning model to tackle fake news stories. A joint project by researchers from the University of Amsterdam and the University of Michigan, as well as a similar project by researchers at Sofia University and the Qatar Computing Research Institute, have had some success using linguistic cues to detect fake news. The question is whether AI will ever be able to fully replace human judgment.
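As a toy illustration of the linguistic-cue approach, a detector can assign a suspicion score to a headline based on surface features such as all-caps words, exclamation marks, and sensational vocabulary. The cue words and weights below are invented for this sketch and are far cruder than the learned features used in the research mentioned above.

```python
import re

# Invented cue list for illustration only; real systems learn much
# richer linguistic features from large labeled corpora.
SENSATIONAL = {"shocking", "secret", "exposed", "unbelievable", "destroyed"}

def suspicion_score(headline: str) -> float:
    """Score a headline from 0.0 to 1.0 using surface-level linguistic cues."""
    words = re.findall(r"[A-Za-z']+", headline)
    if not words:
        return 0.0
    all_caps = sum(1 for w in words if len(w) > 2 and w.isupper())
    sensational = sum(1 for w in words if w.lower() in SENSATIONAL)
    exclamations = headline.count("!")
    # Arbitrary weighted sum, capped at 1.0.
    return min(1.0, 0.2 * exclamations + 0.25 * all_caps + 0.25 * sensational)

print(suspicion_score("SHOCKING: Secret video DESTROYED him!!!"))  # 1.0
print(suspicion_score("Local council approves new budget"))        # 0.0
```

A scorer this simple is trivially fooled by a calmly worded fabrication, which is precisely why the researchers combine many cues and why human judgment remains in the loop.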
Technology knows no borders, and we can be sure that the spread of deep fake news will only increase over the next few years. If fake news spreads just as fast as real news and becomes ever harder to distinguish, then ultimately people will need to be equipped with the tools to view the news through their own critical lens. Ironically, these tools will likely be heavily reliant on AI – drawing on signals such as the linguistic cues and news-source reliability discussed above.
Does the solution to fighting fake news lie in a combination of AI and human guile?