Deepfake: media manipulation at its peak

It was 2016 when the former president of the United States, Barack Obama, appeared to utter a phrase bound to spark a major media controversy: “President Trump is a total and complete idiot.” Social networks polarized. The problem is that it was not true. He never said it. Those words did not come out of his mouth, although it certainly looked as if they had. It was a simple experiment from the University of Washington, named Synthesizing Obama, but it was an early demonstration of how video can be manipulated through artificial intelligence (AI) algorithms. The result was so realistic it was frightening.

 

Identity impersonation is one of the biggest concerns of citizens. In an era in which anyone can open a social network profile to impersonate someone else, artificial intelligence algorithms have reached a remarkable level of perfection that, applied to multimedia content, has given rise to a worrying phenomenon already known as deepfakes. The problem is that not only can videos be manipulated with a surprising degree of realism, but the human voice as well.

 

Deepfakes are an AI-based human image synthesis technique; the term blends the words deep learning and fake. The underlying AI is powerful enough to replace one person's face with another's, not in a static photo but in a video, with the new face overlapping and adapting in real time to the real face of the person in question.
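The classic face-swap approach behind early deepfake tools rests on a pair of autoencoders that share one encoder: the encoder learns a compact code describing pose and expression, and each person gets their own decoder that renders that code as their face. The sketch below is purely illustrative and not from the article; real systems use deep convolutional networks, while here plain matrices with arbitrary, untrained random weights stand in.

```python
import numpy as np

# Illustrative sketch of the shared-encoder / per-identity-decoder idea.
# All sizes and weights are assumptions for demonstration only.

rng = np.random.default_rng(1)
D, K = 256, 16                    # flattened face-image size and latent-code size

encoder   = rng.standard_normal((K, D)) * 0.05  # shared by both identities
decoder_a = rng.standard_normal((D, K)) * 0.05  # would reconstruct person A's face
decoder_b = rng.standard_normal((D, K)) * 0.05  # would reconstruct person B's face

face_a = rng.random(D)            # one frame of person A (flattened pixels)

code = encoder @ face_a           # pose/expression, stripped of identity
swap = decoder_b @ code           # decoded as person B: B's face, A's expression

print(code.shape, swap.shape)
```

In a trained system, the swapped frame would then be blended back into each video frame, which is what lets the new face track the original one in motion.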

 

Deep learning is a branch of artificial intelligence that tries to imitate the human brain, drawing on large amounts of data and recognizing patterns through neural networks in a layered process. The first layers combine raw data to reach simple conclusions, and each subsequent layer adds complexity and abstraction to the results, so that the system learns by itself.
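The layered idea can be sketched in a few lines. The toy network below (an assumption for illustration, with arbitrary layer sizes and random weights) shows raw inputs being combined into simple features, then into more abstract ones, then into a single score; in real training, the weights would be adjusted from data rather than left random.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple nonlinearity applied between layers
    return np.maximum(0.0, x)

# Input: say, 64 raw pixel values from a small image patch
x = rng.random(64)

# Early layer: combines raw data into 32 simple features
W1 = rng.standard_normal((32, 64)) * 0.1
h1 = relu(W1 @ x)

# Deeper layer: combines simple features into 8 more abstract ones
W2 = rng.standard_normal((8, 32)) * 0.1
h2 = relu(W2 @ h1)

# Output layer: one score, e.g. "how likely is this face synthetic?"
w3 = rng.standard_normal(8) * 0.1
score = 1.0 / (1.0 + np.exp(-(w3 @ h2)))  # sigmoid squashes to (0, 1)

print(h1.shape, h2.shape, float(score))
```

The "learning by itself" the article mentions would consist of repeatedly nudging W1, W2 and w3 by gradient descent so the score matches labeled examples; with random weights, the output here is meaningless.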

 

Deep learning is used in multiple fields today: improving diagnostic accuracy when analyzing medical images; fighting credit card fraud; powering the voice dictation on mobile phones that recognizes what we say; and enabling autonomous driving and facial and image recognition.

 

Previously, the process was something like assembling a model from its different views in AutoCAD; but today, thanks to artificial intelligence and the vast amount of data users have shared on social networks, it has been simplified to the point that some applications running on our smartphones can do it without difficulty.

 

The issue of deepfakes could become so critical that perhaps only the Amish and similar communities (who have had no contact with selfies, social networks or the facial recognition systems of airports, and whose photos, above all, have only ever been produced by the traditional process of developing film) could feel confident that their face will not star in a fake porn video, a political speech delivered against their will, a scam call or some other piece of fake multimedia content.

 

 

In this sense, a disturbing case has raised the alarm about a trend that, according to experts, has every attribute needed to become widespread in the future. As The Wall Street Journal revealed at the end of August, a group of cybercriminals used AI-based software to impersonate a chief executive and fraudulently transfer about 220,000 euros from a British energy company. The scam took place in March, but it only recently came to light.

 

Meanwhile, the Pentagon, through the Defense Advanced Research Projects Agency (DARPA), is working with several research institutions to get ahead of these manipulated videos. The objective of these new mechanisms is to anticipate the phenomenon, which first requires understanding how it works.

 

Last year, Google introduced its own technology, Duplex, which is able to chat with people over the phone. It was born with the goal of saving users the effort of making calls themselves. The software can mimic the main attributes of the human voice, from timbre and accent to intonation and modulation.

 

Experts believe these techniques will grow in the coming years and, amplified by the virality of social networks, could herald the arrival of manipulated audio capable of changing the interpretation of news events. However, they point out that, unlike video deepfakes, this requires advanced knowledge and very sophisticated technology that is not within everyone's reach.

 

Faced with this threat, on September 5 several large technology groups, including the famous GAFAM companies (Google, Amazon, Facebook, Apple, Microsoft), together with leading American universities, launched the Deepfake Detection Challenge, which promises a million-dollar prize to whoever develops detection tools for this technology.

 

On the positive side, this initiative shows that these organizations recognize the risks linked to the technology. But from a more pessimistic perspective, it can also be read as an admission of their own limitations: it is worrying that they have to summon engineers from outside their own research centers to find a way to counter it.

 

The process will consist, precisely, in creating deepfakes: a group of researchers will produce very realistic fake videos in order to generate a good volume of data on which to test the detection tools currently available.

 

Facebook has indicated in a statement that it will hire actors who consent to the use of their image for this purpose. A group of developers will then work on improving platforms for detecting manipulated videos.

 

 
