Deepfakes: Fake news on steroids

No longer mostly textual, the fake news era now brings digitally altered video and audio, also known as deepfakes. These have real potential to further erode already undermined public trust in journalism, and to cause serious security threats.

The term deepfake was coined on the online platform Reddit in 2017 by an anonymous user who called himself ‘deepfakes’. A blend of ‘deep learning’ and ‘fake’, it denotes a technique for human image synthesis based on artificial intelligence. It is used to combine and superimpose existing images and videos onto source images or videos using a machine learning technique known as a generative adversarial network (GAN).

Generative adversarial networks, invented in 2014, are the artificial intelligence technique that powers deepfake videos and audio. In short, a GAN consists of two rival neural networks: a generator (synthesiser) that produces fake content and a discriminator (detector) that tries to tell it apart from real content. Trained against each other, the two networks push the fakes to become ever more convincing.
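
To make the generator-versus-discriminator dynamic concrete, here is a minimal illustrative sketch in PyTorch. The layer sizes, learning rates and the flattened 28x28 image shape are assumptions chosen for brevity; real deepfake tools use far larger convolutional networks, but the adversarial training loop follows the same pattern.

```python
import torch
import torch.nn as nn

# Generator: turns 100-dim random noise into a flattened 28x28 "image".
generator = nn.Sequential(
    nn.Linear(100, 256), nn.ReLU(),
    nn.Linear(256, 784), nn.Tanh())

# Discriminator: scores how real a flattened image looks (1 = real, 0 = fake).
discriminator = nn.Sequential(
    nn.Linear(784, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid())

loss = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images):
    """One adversarial round: each network improves against the other."""
    batch = real_images.size(0)
    real, fake = torch.ones(batch, 1), torch.zeros(batch, 1)
    fakes = generator(torch.randn(batch, 100))

    # Discriminator step: learn to separate real images from generated ones.
    d_opt.zero_grad()
    d_loss = (loss(discriminator(real_images), real) +
              loss(discriminator(fakes.detach()), fake))
    d_loss.backward()
    d_opt.step()

    # Generator step: learn to make the discriminator call fakes "real".
    g_opt.zero_grad()
    g_loss = loss(discriminator(fakes), real)
    g_loss.backward()
    g_opt.step()
```

Each call to train_step() nudges the discriminator to spot the fakes and the generator to evade it, which is precisely the arms race that makes deepfake detection so difficult.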

Experts estimate that about 10,000 deepfake videos are circulating on the Internet. Deepfake apps are proliferating and their use is rising exponentially. For example, the cybersecurity firm Deeptrace reported that the four leading deepfake-themed pornography websites, supported by advertising, had attracted 134 million views for their videos since February 2018. Some reports found that the number of deepfake videos circulating online has nearly doubled in less than a year, jumping from 7,964 in December 2018 to 14,698 this year so far.

There is also evidence that the production of these videos is becoming a lucrative business. In August, The Wall Street Journal reported on one of the first known cases of synthetic media becoming part of a classic identity fraud scheme. The Financial Times added that scammers are believed to have used commercially available voice-changing technology to pose as a chief executive in order to swindle funds.

Moreover, the quality of deepfake videos is rapidly increasing. “In January 2019, deepfakes were buggy and flickery. Nine months later, I’ve never seen anything like how fast they’re going. This is the tip of the iceberg,” said Hany Farid, a professor at the University of California, Berkeley. Meanwhile, the BBC reported that audio deepfakes are also on the rise.

Why do deepfakes flourish?

The success of deepfakes is attributed to their multi-sensory (audio and visual) nature. Many studies show that we tend to trust broadcast images, particularly moving images, much more than written text. Videos have also been shown to instantly evoke emotions.

“It’s now become possible to create a passable deepfake with only a small amount of input material – the algorithms need smaller and smaller amounts of video or picture footage to train on,” explained Katja Bego, principal researcher at innovation foundation Nesta. One such portal required 250 photos of the target subject and two days of processing to generate a video. Deeptrace says the prices charged for deepfakes vary but can be as little as USD 2.99 per video.

While the evidence shows that pornography accounts for the overwhelming majority of deepfake clips, the report from Deeptrace highlights the potential for the use of deepfake technology in political campaigns.

It is well known that disinformation is as old as politics, but its practitioners have kept pace with technological changes. Where written fake news was the hallmark of the most recent election cycle in the US and UK, images and videos are increasingly the new focus of propaganda, says Vidya Narayanan, a researcher at the Oxford Internet Institute.

Hollywood movies show that manipulating video is nothing new. Although it has been possible to alter video footage for decades, it used to take a long time, highly skilled artists, and a lot of money.

However, deepfake technology is rapidly changing the game. It is now so commonly accessible that almost anybody can make a convincing fake video. For example, a simple search on the GitHub platform for free deepfake software returns over 100 results, although most of them offer a modified ‘face swap’ technique. Furthermore, it is easy to find people offering deepfake services for as little as USD 20 per request.

Can ordinary people be a target?

The simple answer is: yes. This is due to the plentiful photo, video and audio material available on social media platforms such as Facebook or Instagram. Basically, anyone can become a potential target.

Unfortunately, the most imminent threat of deepfakes currently comes from their weaponisation against women. Experience so far shows that deepfake makers use women’s faces without consent and paste them onto pornographic content. This humiliating practice is described as ‘revenge porn’. A viral deepfake video can reach an audience of millions and make headlines within a matter of hours, and proving after the fact that the video was altered might be too little, too late.

In any case, it appears that in the short term the real victims of malicious deepfake creators will not be governments and corporations but individuals, most of them women. It is unlikely that they will be able to afford to hire specialists to combat the abusers, believes Rory Cellan-Jones of the BBC.

Tom Van de Weghe, a Stanford University researcher, explains that “deepfake creators only have to download these pictures and train their models if they want to use it for identity theft, blackmailing or spreading negative news about anyone – not only politicians, CEOs or other influential people. This could be used for information warfare, misleading public opinion, manipulating stock prices or getting electoral support”.

Yet it could get worse. Imagine if a single Facebook profile picture were sufficient to create a deepfake video. Researchers at the Samsung AI Center in Moscow are already experimenting with this. They recently developed a way to create ‘living portraits’ from a very small dataset (a single picture or portrait) and generated animations of cultural icons such as Leonardo da Vinci’s Mona Lisa, Albert Einstein and Marilyn Monroe. This new algorithm goes beyond what other algorithms using generative adversarial networks can accomplish.

The dangers of deepfakes are real indeed. They can be used to create ‘digital wildfires’. They can be used by any autocratic regime to discredit dissidents. They can be used to convince people that a dead leader is still alive. They can generate false statements.

The Japanese start-up Deepfakes Web charges USD 2 per hour of processing time to create videos. On Fiverr, an online marketplace connecting freelancers with customers, sellers offer to put customers’ faces into movie clips.

While it gets cheaper to create deepfake videos, the cost of the consequences could be sweeping. Deceitful videos of business leaders could sink companies, while fake videos of politicians could instigate political turmoil even in fairly stable countries.

The Financial Times cautioned that false audio of central bankers could swing markets. Small businesses and individuals could face crippling reputational or financial risk. The news outlet also warned that, as elections approach in the US, UK and elsewhere, deepfakes could raise the stakes once more in the electorate’s struggle to know the truth.

The list of examples is almost endless, but we can genuinely call deepfakes fake news on steroids because this dangerous practice can create disbelief by default: deepfakes call the veracity of real videos into question, undermining credibility and casting doubt. This can further erode trust in journalism and create havoc in societies.

Vidya Narayanan believes that deepfake videos and audio may be particularly destructive in countries such as India or Brazil, where the WhatsApp platform is heavily used for sharing videos, images and voice messages. The perceived security of this closed platform can mislead users, and in countries with large populations and limited basic literacy, it is difficult to instil even basic media literacy.

The Power in Powerlessness

Unfortunately, there are no commercially available tools to reliably detect deepfakes as yet. However, it is clear that any technological solution must involve artificial intelligence as the deepfake creators are quick to catch up with the latest detection techniques.

Not so reliable indicators

While reliable solutions for countering deepfakes are still far off, there are some measures that can and should be considered. One indicator people can use to spot a false video is to observe the area of the subject’s face between the chin and the nose, looking for blurring, dropped frames or discolouration.

Almost motionless eyes (with little blinking) are also a sign of a deepfake, as is speech out of sync with the movement of the lips. An emotional mismatch, jittering and blurring in unexpected places, the appearance of strange objects, and inconsistent light angles can likewise point to a fake video.
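
As a minimal illustration of the blurring indicator above, the sketch below (in Python with OpenCV) flags frames whose lower-face region is markedly blurrier than the frame as a whole. The 0.4 ratio threshold and the chin-to-nose region approximation are assumptions for illustration; a crude heuristic like this is far less reliable than a trained detector.

```python
import cv2

# Haar face detector bundled with OpenCV.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def sharpness(gray_region):
    # Variance of the Laplacian: low values indicate blur.
    return cv2.Laplacian(gray_region, cv2.CV_64F).var()

def suspicious_frames(video_path, ratio=0.4):
    """Yield indices of frames whose lower-face band looks oddly blurry."""
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            # Lower half of the detected face: roughly the chin-to-nose area.
            band = gray[y + h // 2 : y + h, x : x + w]
            if band.size and sharpness(band) < ratio * sharpness(gray):
                yield idx
                break
        idx += 1
    cap.release()
```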

Watching and scrutinising suspicious videos should raise three questions: are there any glitches or inconsistencies in the video or audio, can the footage be corroborated, and is the source trustworthy? Of course, the accuracy of these observations depends on the observer’s training.

Preferable solutions

With deepfake technology rapidly evolving, it will become harder and harder for an ordinary person to recognise false videos, images and audio recordings. This inevitably sets up a machine-versus-machine battle: artificial intelligence that creates against artificial intelligence that detects.

In a study from the University of California, Berkeley and the University of Southern California, researchers used machine learning to analyse politicians’ style of speech and facial and body movement. The resulting detection model distinguished deepfakes with 92% accuracy.
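
The study’s exact features and model are not reproduced here; the sketch below only illustrates the general recipe with scikit-learn, using synthetic placeholder features in place of the real behavioural measurements.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Placeholder data: one row per clip, columns standing in for behavioural
# features (head-movement statistics, blink rate, lip/audio sync offset...).
# Random features yield only chance-level accuracy; the 92% figure comes
# from the study's real features, which are not reproduced here.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 16))
y = rng.integers(0, 2, size=400)  # 1 = authentic clip, 0 = deepfake

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```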

It seems, however, that major social media sites are still struggling to contain misinformation campaigns. Facebook CEO Mark Zuckerberg admitted in June this year that the company’s systems were too slow in detecting and removing a false video.

But it also appears that the tech giants are catching up. While YouTube announced that it was aware of the issue and working on it, Google reported that it uses deepfakes to fight malicious deepfakes. Meanwhile, Facebook has put USD 10 million into an effort to spot deepfake videos, and in September it announced the launch of the Deepfake Detection Challenge alongside Microsoft.

Blockchain storage is probably the most promising approach to restoring trust in video and audio, believes a group of students at Stanford’s Design School. They have designed a decentralised prototype that tracks the origin of digital imagery by providing proof of authenticity. By design, this model does not rely on intermediaries or trusted third parties.
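
The prototype’s implementation details are not public, so the sketch below illustrates only the underlying idea: fingerprint footage at capture time, append the fingerprint to a tamper-evident, hash-chained log (a stand-in for an actual blockchain), and check candidate files against it later. All names in the sketch are hypothetical.

```python
import hashlib, json, time

def fingerprint(path):
    """SHA-256 digest of the raw file; any edit changes the digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

class ProvenanceLog:
    """Append-only log in which each record hashes the one before it,
    so past records cannot be silently rewritten."""
    def __init__(self):
        self.records = []

    def register(self, path, creator):
        prev = self.records[-1]["record_hash"] if self.records else "0" * 64
        record = {"file_hash": fingerprint(path), "creator": creator,
                  "timestamp": time.time(), "prev": prev}
        record["record_hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.records.append(record)

    def is_registered(self, path):
        # A file counts as "authentic" here only if its digest was logged
        # at capture time and the file has not changed since.
        return any(r["file_hash"] == fingerprint(path) for r in self.records)
```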

Stanford computer science students Cheerla and Suri let content creators embed inerasable digital watermarks in their media using deep neural networks. Even if malicious attackers modify a video, distort the audio or swap in another person’s face, they cannot remove the digital signature burned into the content. The creators say this signature should always point back to the original, unmodified content.
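
This is not the Cheerla-Suri system: their watermarks are embedded by deep neural networks and designed to survive editing. The toy below only demonstrates the general embed-and-verify workflow, hiding a signature in pixel least-significant bits; unlike a neural watermark, it would be destroyed by any re-encoding.

```python
import numpy as np

def embed(frame, signature_bits):
    """Hide signature bits in the least-significant bits of the first pixels."""
    flat = frame.flatten().copy()
    n = len(signature_bits)
    flat[:n] = (flat[:n] & 0xFE) | signature_bits
    return flat.reshape(frame.shape)

def extract(frame, n_bits):
    """Read the hidden bits back out."""
    return frame.flatten()[:n_bits] & 1

frame = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in frame
sig = np.random.randint(0, 2, 128, dtype=np.uint8)           # creator's signature
marked = embed(frame, sig)
assert (extract(marked, 128) == sig).all()
```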

However, some researchers believe that watermarking will itself change the video. Hence, trying to prove that something is a fake without reference to the original footage would be extremely hard. Watermarking can also lead to false positives; in other words, real videos can be flagged as deepfakes.

As fighting deepfakes and fake news is not only a technological problem, governments must also play their role. “As the technology is advancing so rapidly, it is important for policymakers to now think about possible responses. This means looking at developing detection tools and raising public awareness, but also [to] consider the underlying social and political dynamics that make deepfakes potentially so dangerous”, advises the Deeptrace report.

China and the US recently announced legislative initiatives, and California has made it illegal to create or distribute deepfakes in a move meant to protect voters from misinformation. It is, however, still not clear how difficult such legislation might be to enforce. On the other hand, caution must be exercised with legislation targeting deepfakes, as such laws and regulations could be misused against journalists, dissidents or human rights activists. In that regard, deepfakes should not be treated like other forms of misinformation or fake news.
