How to protect your voice?

Recently, the issue of voice deepfakes came to the fore when professional voice actors discovered that their vocal tracks had been secretly recreated using generative artificial intelligence (AI). The resulting synthetic voices were made to say things for which the actors were not paid and to which they might have moral, professional, or personal objections. In one particularly disturbing example, a video game modder (i.e., a person who independently creates new scenes for a video game based on existing content) used a public generative AI website to create and distribute deepfake pornographic content based on an actress's vocal tracks without her consent.

What about in practice?

The case of Alena Andronova against Tinkoff Bank is particularly resonant. In 2019, the actress and the bank concluded an agreement to record her voice for the bank’s internal tasks and for voicing its voice assistant. A few years later, the actress discovered that her voice was being used in advertising. Andronova concluded that a speech synthesis tool available in Tinkoff’s software, called “Alena’s Voice,” had been trained on her voice. The actress claims that the contract did not provide for the use of her voice to train neural networks. As a way of putting pressure on the company, she created a petition on Change.org demanding that a person’s voice be legally recognized as an intangible good, so that it cannot be alienated under a contract without an explicit indication that it will be used for synthesis. Abroad, there are also several professional associations, such as SAG‑AFTRA and the National Voice Actors Association, that can assist interested individuals in protecting a violated “right to voice.”

However, this approach does not seem to reflect the legal relationship entirely accurately, since the contractual relations between the parties may already include mechanisms that prevent misuse of the voice, including its processing by artificial intelligence. The question of using an already-published voice, or a voice generated without its owner’s consent, nevertheless remains open.

Violations of the right to one’s voice can occur through alteration of a sound recording: distorting the sound, changing the voice with software, adding new sounds and audio effects, shortening the recording to the detriment of its meaning (content), and so on. One proposed method of protection against such violations is to grant the authorized citizen the right to demand that the integrity of the recording be preserved, and to demand the cessation of any use of a voice recording in a manner or form that harms the honor, dignity, or business reputation of the voice’s owner.

Domestic law enforcement practice has not yet formed a clear position on voice as an object of legal protection, and foreign practice is likewise ambiguous.

In Midler v. Ford Motor Co., the court found that “the voice is as distinctive and personal as a face. The human voice is one of the most palpable ways identity is manifested.” The court ruled that not every commercial use of someone else’s voice violates the law; in particular, a person whose voice is recognizable and widely known receives protection through the right of publicity, as a safeguard against invasion of privacy by appropriation. This protects public figures and celebrities from misappropriation of their identity and its potential exploitation for commercial purposes.

Legal regulation of the posthumous use of intangible goods is relevant for the same reason. Modern technology makes it easy to recreate voices and images. For example, deepfake technology is used to generate “digital clones” of deceased celebrities, who “tour” with concerts, “participate” in television shows, “advertise” products, and actively “run” pages on social networks. Thanks to this technology, Bruce Lee “starred” in an advertisement for Johnnie Walker whisky, and James Dean “received” a role in a feature film. In 2020, with the consent of his heirs, the image and voice of actor Leonid Kuravlev were used to portray the famous character Georges Miloslavsky in an advertisement for Sberbank.

Since the right to one’s voice is inextricably linked with a person’s image and is an integral part of how that person is perceived, regulation should be approached comprehensively. The legislation of many countries uses the concept of protecting a person’s “digital image” as a whole, treating a person’s digital footprint as a single object of legal protection.

The voice, as an integral part of a person, can be falsified using deepfake technology. The term “deepfake,” a blend of “deep learning” and “fake,” denotes the artificial creation, manipulation, or modification of data to create a false impression of an existing person, object, place, or subject.

What to do?

Given the obvious problems posed by this technology, many countries are introducing regulation aimed at minimizing the risks of misusing a person’s “digital image.”

During sessions at the XI St. Petersburg International Legal Forum in May 2023, opinions were voiced on the need for special legislation that takes into account the peculiarities of using AI technologies in creative work. Efforts have also been made to improve the technical capabilities for detecting deepfakes: the ANO “Dialogue Regions” presented a real-time deepfake monitoring system called “Zephyr.”

Most recently, a bill to combat manipulation on the Internet was introduced in the State Duma. As Anton Gorelkin, Deputy Chairman of the State Duma Committee on Information Policy, Information Technologies and Communications, noted, “This is the first bill in Russia to regulate an artificial intelligence product. In the fall, we will introduce a number of bills, because with the development of artificial intelligence, new entities are appearing that also need regulation. In particular, deepfakes and a number of others.” First of all, the bill should enshrine the concept of “deepfake” in legislation.

Work is also underway to amend current legislation to provide punishment for the use of digital identity substitution technology (deepfakes) for illegal purposes. Proposals on this matter are being prepared by the Ministry of Digital Development together with the Ministry of Internal Affairs and Roskomnadzor, and are planned to be presented on November 1 to the government commission on crime prevention, chaired by the head of the Ministry of Internal Affairs, Vladimir Kolokoltsev.

Thus, to guarantee the rights of citizens whose professional activity relies on a recognizable voice, the specifics of voice use should be set out in the relevant contracts and agreements with the employer or customer. Domestic legislation already provides tools for protecting one’s rights, including protection of the results of work involving the recording and reproduction of a person’s voice. Establishing an exclusive right to the voice, or defining the voice as an intangible good, seems inexpedient, since this may create more gaps than it would provide a working tool for preventing violations of rights.
