Machine content cannot be detected by the machines themselves. And that’s the problem

The content-generation capabilities of neural networks are being used more and more widely. Marketers, copywriters, PR specialists, and members of other creative (and not-so-creative) professions work with neural networks every day. Over the past couple of years, many services have appeared that generate text, images, and video for free or for a small fee.
All of this is fine, but in some cases it is important to know what was written or drawn “by hand” and what was generated (yes, we expect sparkling jokes in the comments about this very article). Why does it matter? It matters for scientific papers, student theses, and artwork submitted to various competitions. There do seem to be services for identifying machine-generated content. But here’s the problem: they don’t work. Why, and what can be done about it?
Texts and GPTZero

Just recently, the OpenAI team published an article for representatives of the education sector. It was devoted to techniques for working with ChatGPT, and the same article stated that machine-content detection services do not work. This is despite the fact that university teachers are trying to use such services to flag scientific (and not-so-scientific) texts that were produced by a neural network rather than written by the students themselves.
The article from OpenAI states, in part: “In short, machine content detection services are broken. Although several companies, including OpenAI, have introduced tools designed to detect generated content, none of these tools have proven particularly effective. Machines simply don’t see much difference between content created by a neural network and content created by a person.”
One of the well-known services, GPTZero, often produces false positives, flagging “human” text as machine-generated, while machine-written text is most often classified as written by a person. OpenAI itself recently launched its own service for detecting machine content, AI Classifier, which worked exclusively with texts. As it turned out, the accuracy of this service does not even reach 30%, so it is easier to guess which text is generated than to determine it with the service.
Such services are also unable to perform fact-checking, even though verifying what is written in an article, note, or news item is important. We all know that ChatGPT tends to invent details of its own if it cannot find information on a topic online. This tendency of the neural network has let down many authors and even lawyers.
So what are we left with? Either wait for a more reliable service for detecting machine-made content, or identify the author yourself. If you don’t know the person, it is hard to say who wrote a text, especially a well-written one. But if a teacher is well acquainted with the peculiarities of their students’ “handwritten” work, they will have no trouble determining whether a paper was written by the student or by someone else.
Images and Google DeepMind

Google has developed a service of a different kind: it can identify and watermark graphic content. With one caveat: it works not with just any images, but only with those created by Google’s Imagen image generator. Unfortunately, recognizing arbitrary synthetic images is not yet on the agenda.
As with texts, the problem grows more pressing every day. There are more and more generated images: deepfakes, competition entries, and so on. Yet it is difficult to tell what kind of content you are looking at if it was created by a high-quality neural network; even a specialist cannot always do it.
According to many experts, the time has come to standardize and regulate machine-generated graphic content. The problem of deepfakes is especially acute right now, and they can easily serve as a political or social tool. In addition, neural networks draw their source material from the web, including copyright-protected graphics. Courts are currently considering many lawsuits filed by graphic designers, artists, and photographers against the companies that developed generative neural networks.
So Google DeepMind is trying to solve this problem with watermarks. When generating an image, the neural network embeds watermarks that are invisible to humans but perfectly visible to a machine. These are not ordinary watermarks that can be cropped out or erased; here everything is more complicated: the markings are “baked” into the image itself, so a service trained to identify generated content by its watermarks can do so immediately, no matter how the graphic is cropped.
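To make the idea concrete, below is a minimal sketch of a frequency-domain watermark, assuming NumPy and SciPy are available. It is emphatically not DeepMind’s actual algorithm (which has not been published); it only demonstrates the general principle that a key-dependent mark can live in the image statistics rather than in a visible overlay.

```python
# Toy frequency-domain watermark: a key-derived pseudo-random pattern is
# added to mid-frequency DCT coefficients, then detected by correlation.
# Illustrative only -- NOT DeepMind's (unpublished) method.
import numpy as np
from scipy.fft import dctn, idctn

KEY = 42  # secret key shared by the embedder and the detector

def pattern_for(shape, key=KEY):
    """Pseudo-random +/-1 pattern derived from the secret key."""
    rng = np.random.default_rng(key)
    return rng.choice([-1.0, 1.0], size=shape)

def mid_band(shape):
    """Slice selecting mid frequencies: changes there are hard to see,
    but are not wiped out by mild compression like the highest ones."""
    h, w = shape
    return np.s_[h // 8 : h // 2, w // 8 : w // 2]

def embed(image, strength=5.0, key=KEY):
    """Return a watermarked copy of a grayscale image."""
    coeffs = dctn(image.astype(float), norm="ortho")
    band = mid_band(coeffs.shape)
    coeffs[band] += strength * pattern_for(coeffs.shape, key)[band]
    return idctn(coeffs, norm="ortho")

def score(image, key=KEY):
    """Correlation with the key's pattern: ~strength if marked, ~0 if not."""
    coeffs = dctn(image.astype(float), norm="ortho")
    band = mid_band(coeffs.shape)
    return float(np.mean(coeffs[band] * pattern_for(coeffs.shape, key)[band]))

img = np.random.default_rng(0).uniform(0, 255, (256, 256))
print(f"unmarked: {score(img):+.2f}, marked: {score(embed(img)):+.2f}")
```

Note that this toy version is fragile: cropping changes the coefficient grid and breaks the correlation. Making the mark survive cropping, rescaling, and recompression is exactly the hard part a production system has to solve.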
New rules for everyone

In July 2023, Google and six other renowned AI developers signed a joint agreement on the safe development and use of AI. In practice, this will take the form of marking generated content with marks invisible to humans. A machine will be able to identify them immediately, even after the graphic has been altered in an image editor.
All this is just the first step. Experts believe that common standards for labeling generated content are needed. For now, Google DeepMind can only identify images that it created itself; it is not yet able to identify pictures from Midjourney, Kandinsky, or Stable Diffusion.
Accordingly, we need unified tools both for marking and for identifying watermarks, and for any kind of content, from graphics to text. With the latter, things are quite complicated, since it is unclear how texts can be marked: while it is fairly easy to embed information into an image, you cannot do the same with text (though there are early research ideas, as sketched below).
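That said, researchers have been experimenting with statistical watermarks for text: instead of hiding bits in the characters themselves, the generator is nudged toward a secret, key-dependent “green” subset of the vocabulary, and a detector with the same key counts how often green words appear. Below is a minimal sketch of that idea, with a toy ten-word vocabulary standing in for a real language model; it illustrates the research direction, not a production scheme.

```python
# Minimal sketch of a statistical text watermark ("green list" biasing).
# A real scheme operates on an LLM's token logits; here a toy random
# generator over a ten-word vocabulary stands in for the model.
import hashlib
import random

VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "under", "mat", "rug"]
KEY = "secret"  # shared by the generator and the detector

def is_green(prev_word: str, word: str) -> bool:
    """Put roughly half the vocabulary on a 'green list', keyed by the
    secret and the previous word so the split shifts at every step."""
    digest = hashlib.sha256(f"{KEY}:{prev_word}:{word}".encode()).digest()
    return digest[0] % 2 == 0

def generate(length: int, seed: int = 0) -> list[str]:
    """Stand-in for an LLM: always prefers green words when possible."""
    rng = random.Random(seed)
    words, prev = [], "<s>"
    for _ in range(length):
        greens = [w for w in VOCAB if is_green(prev, w)] or VOCAB
        prev = rng.choice(greens)
        words.append(prev)
    return words

def green_fraction(words: list[str]) -> float:
    """Detector: share of green words (~0.5 for unmarked text)."""
    prev, hits = "<s>", 0
    for w in words:
        hits += is_green(prev, w)
        prev = w
    return hits / len(words)

rng = random.Random(1)
unmarked = [rng.choice(VOCAB) for _ in range(200)]
print(f"marked:   {green_fraction(generate(200)):.2f}")  # close to 1.0
print(f"unmarked: {green_fraction(unmarked):.2f}")       # close to 0.5
```

The catch is that paraphrasing or light editing dilutes this statistical signal far more easily than cropping removes a pixel-level watermark, which is one reason text marking remains an open problem.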
In the EU, by the way, lawmakers are already thinking about new laws on labeling generated content. The legislative bodies of the European Union have proposed introducing mandatory labeling of any content (text, images, video, and audio files) created by artificial intelligence. The goal of this initiative is to protect society from attempts at manipulation through fake content.