Deepfakes will soon make truth indistinguishable from fiction

Artificial intelligence, or rather its weak (narrow) form, gives humanity a great many benefits: economic, scientific, social, and others. But AI has a downside: deepfakes and everything connected with them.

High-quality fakes already abound: seemingly genuine videos of politicians, actors, and other celebrities. For now, experts can still distinguish a deepfake from real footage, but the day when that becomes impossible may not be far off.

What is the danger?

Back in July 2019, Symantec reported several cases of large-scale fraud in which cybercriminals deceived representatives of major businesses. The audio-fake scams cost the victim companies millions of dollars. In each case, the forgery was a cloned voice of a company executive instructing an employee to transfer money to a specific account.

As for video, fakes still do not look entirely real. Yes, a person unfamiliar with deepfake technology may fail to spot a forgery, but a specialist can do so without difficulty. Audio, unlike video, is already a serious problem: a fake can only be detected after careful analysis, and even then there is always a chance that the “original” is itself a fake. In a year or two, such analysis may no longer help at all.

How does it work?

Deepfakes are most often built on deep learning, in particular generative adversarial networks (GANs). A GAN consists of two competing neural networks. They play a game against each other, and this competition drives the creation of increasingly convincing synthesized audio.

The same technology is used in image processing. For example, a GAN designed to create fake photos consists of two coupled deep neural networks. The first network, called the “generator”, produces images. The second, trained on a dataset of real photographs, is called the “discriminator” and learns to tell real images from generated ones.
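
To make this generator-versus-discriminator game concrete, here is a minimal GAN sketch in PyTorch. It is a toy that learns a simple 2-D point distribution rather than faces or audio; the layer sizes, learning rates, and synthetic “real” data are illustrative assumptions, not the architecture of any actual deepfake model.

```python
# A toy GAN in PyTorch: the generator learns to mimic a fixed 2-D Gaussian.
# All sizes and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

latent_dim = 8

# Generator: maps random noise to fake 2-D "data" points.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 2))

# Discriminator: outputs the probability that a point came from the real data.
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 2) * 0.5 + torch.tensor([2.0, 2.0])  # "real" samples
    fake = G(torch.randn(64, latent_dim))                       # generated samples

    # Discriminator step: learn to label real as 1 and fake as 0.
    opt_d.zero_grad()
    loss_d = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    loss_d.backward()
    opt_d.step()

    # Generator step: learn to make the discriminator label fakes as real.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()
```

Scaled up to convolutional networks and image datasets, this same adversarial loop is what produces photorealistic fake faces.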

This pairing produces amazingly realistic images. Try searching for “GAN fake faces” and you will immediately see what we mean. Most of these images are generated from scratch and depict people who do not exist.

GANs can also be used to train a car’s autopilot or to build highly reliable face recognition systems.

GANs can also generate a voice for people who have lost the ability to speak, or even for people who are no longer with us. This is already being done: for several living celebrities with voice problems, a synthetic voice has been created that is as close as possible to their own.

So what’s the danger?

Part of the answer was given above, in the example of the defrauded companies. A politician’s face can also be superimposed onto an actor who performs something reprehensible or says something outrageous. On the eve of an election, that is a great way to sink a candidate’s rating.

These technologies are actively used. In 2019, for example, at least 15,000 deepfakes were found online, 84% more than in 2018. Back then, admittedly, most of them were adult content. The situation is gradually changing, and these are far from harmless jokes.

A sufficiently realistic deepfake can literally change the course of history, and such tools can end up in the hands of very unscrupulous people. After all, why would an honest person need them, unless they create content for films, animation, or advertising?

For scammers, though, all this is a bonanza. Besides compromising videos and voice recordings, deepfakes can be used for blackmail, financial and insurance fraud, and stock market manipulation. We have already mentioned sinking a politician’s rating; in exactly the same way, one could crash the share price of any company.

For the judiciary, deepfakes are simply a nightmare. If there is no reliable way to distinguish fiction from reality, evidence presented in court can simply be thrown out. To prevent this, software is already emerging that acts as a kind of antivirus, identifying deepfakes, and there will be more of it in the future.
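
To give a rough idea of what sits at the core of such software, here is a hedged sketch of a frame-level detector in PyTorch: a small convolutional network that scores a face crop as real or fake. The architecture, the 64x64 input size, and the random (untrained) weights are all assumptions for illustration; production detectors are far larger and are trained on extensive datasets of real and synthetic faces.

```python
# A skeletal frame-level deepfake detector: a tiny CNN scoring a face crop.
# Architecture and input size are illustrative assumptions; weights untrained.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),  # single logit: higher means "more likely fake"
)

face_crop = torch.randn(1, 3, 64, 64)  # stand-in for a preprocessed face crop
p_fake = torch.sigmoid(detector(face_crop)).item()
print(f"Estimated probability the frame is fake: {p_fake:.2f}")
```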

How serious is this problem?

Ian Goodfellow, the creator of GANs, claims it is unlikely we will be able to tell a real image from a fake one just by looking at it. We will have to rely on authentication mechanisms for original photos and videos: something like watermarks.

Cameras and mobile phones may well start embedding their own digital signatures into content in the near future. The startup Truepic already offers technology for this, and its clients, including large insurance companies, are satisfied with it.
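
As a minimal sketch of how such device-side signing could work, the example below uses Ed25519 signatures from the Python cryptography package. The key handling and the photo_bytes placeholder are illustrative assumptions; this is not Truepic’s actual scheme.

```python
# Sign content at capture time, verify it later. Requires: pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In a real device the private key would live in secure hardware.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

photo_bytes = b"...raw image bytes straight from the sensor..."  # placeholder
signature = private_key.sign(photo_bytes)  # would be attached as metadata

# Later, anyone holding the device's public key can check integrity.
try:
    public_key.verify(signature, photo_bytes)
    print("Signature valid: content matches what the camera captured.")
except InvalidSignature:
    print("Signature invalid: content was modified after capture.")
```

A verifier holding the device’s public key can thus confirm that the bytes have not been altered since capture; any edit, including a deepfake substitution, invalidates the signature.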

But if deepfakes reach a new level, it will be impossible to distinguish them from originals. We will then find ourselves in a new reality, one where everything we see and hear may turn out to be an illusion. All of this threatens both the social order and the economic system.

The risks are very high, so they should be taken into account when new AI technologies are developed. How to do that is another question: perhaps through a legislative framework, perhaps through some new technology. Either way, it must be done now, before a truly serious problem emerges.
