New tool for Asian newsrooms to spot fake images

Hello everyone. Today we are sharing a translation of an article prepared on the eve of the launch of a new OTUS course, “Computer Vision”.


Journalists and fact-checkers face enormous difficulties in separating reliable information from rapidly spreading misinformation. And this applies not only to the text we read. Viral images and memes fill our news feeds and chats, and they often distort context or are outright fakes. In Asia, where there are eight times more social network users than in North America, the scale of the problem is far more serious.

There are tools that Asian journalists can use to determine the origin and reliability of news images, but they are relatively old, unreliable, and for the most part available only on desktop computers. This is an obstacle for fact-checkers and journalists in countries where most people connect to the Internet using their mobile phones.

Over the past two years, the Google News Initiative has been working with journalists on technology for identifying manipulated images. In 2018, at the Trusted Media Summit in Singapore, a team of experts from Google, Storyful, and a wide range of news industry representatives joined forces to develop a new tool optimized for mobile devices and built on advances in artificial intelligence. Supported by the Google News Initiative, the GNI Cloud Program, and volunteer engineers from Google, the resulting prototype grew into an application called Source, powered by Storyful.

Now that the application is in use by journalists across the region, we asked Eamonn Kennedy, Storyful’s product director, to tell us a little more about it.

How does Storyful see the problems faced by journalists and fact-checkers around the world, and in Asia in particular?

[Eamonn Kennedy] Posting on a social network is often the result of an emotional impulse rather than a full rational analysis. Anyone can share a story with thousands of other people before they have even finished reading it themselves. Bad actors know this and play on people’s emotions. They want to abuse free access to social platforms and pollute conversations with false facts and stories, including extremist ones. For fact-checkers, this means that any conversation is vulnerable to lies and manipulation from anywhere in the world at any time.

Could you tell us a little about the Source development process and how AI helped to cope with some of the tasks?

[EK] At Storyful, we observe how old, inaccurate, or manipulated images are shared to promote misleading narratives in news cycles large and small.

The usual way for journalists to tackle this is reverse image search, which can prove that an image is old and has been reused, but this approach has a couple of problems. First, these recycled images are often also manipulated, so the journalist needs to be able to spot the manipulation to have a better chance of finding the original. Second, search results are sorted by date with the newest results first, while journalists are usually interested in the oldest results, which means lengthy scrolling to find the original.
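One common technique behind this kind of image matching is perceptual hashing: near-duplicate images produce near-identical hashes, so an old, recycled image can be matched even after light edits. The sketch below (my own illustration, not Source’s actual algorithm) implements a minimal “difference hash” in pure Python on a tiny grayscale image represented as a list of rows.

```python
def dhash(pixels):
    """Difference hash of a grayscale image given as a list of rows
    (lists of 0-255 ints). Each bit records whether a pixel is
    brighter than its right-hand neighbour."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    # Pack the bits into a single integer hash.
    return int("".join(map(str, bits)), 2)

def hamming(a, b):
    """Number of differing bits: a small distance suggests the images
    are probably the same picture, possibly lightly edited."""
    return bin(a ^ b).count("1")

# A tiny 4x5 "image" and a uniformly brightened copy of it.
original = [[10, 60, 20, 80, 30],
            [90, 40, 70, 20, 50],
            [30, 30, 90, 10, 60],
            [80, 20, 40, 70, 10]]
brightened = [[p + 5 for p in row] for row in original]

print(hamming(dhash(original), dhash(brightened)))  # 0: brightening preserves the hash
```

Because the hash only encodes brightness comparisons between neighbouring pixels, global edits such as brightening leave it unchanged, which is exactly what makes recycled images findable.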

Source uses Google’s AI technology to provide instant access to the public history of an image, allowing you to sort, analyze, and understand its origin, including any manipulation. That alone is already quite useful, but we go even further: Source also detects and translates text in images, which is especially useful for journalists cataloging or analyzing memes on the Internet.


The Source application improves journalists’ ability to verify the origin or authenticity of a particular image and to track the source and evolution of memes.

How do newsrooms use Source and what are their plans for 2020?
[EK] So far, 130 people from 17 different countries have used the application to check the origin of images on social networks, messaging apps, and news sites. It is especially nice to see that 30 percent of Source users access the site from a mobile phone, and that our largest user base is in India, where members of the Digital News Publishers Association – a coalition of leading media companies fighting disinformation – provide us with important feedback.

Looking ahead, we listen to fact-checkers when thinking about what the next version of the application will be. We know that Source has been used, for example, to study frames from video, which shows us the potential to grow beyond text and images. The ultimate goal is to create a “toolbox” of publicly available fact-checking resources, with Source at the center, using Google AI to help journalists around the world.


That concludes the translation, but we asked the course leader, Arthur Kadurin, to comment on the article:

One of the current “hot” topics in computer vision is “adversarial attacks”: methods of “deceiving” modern algorithms for recognizing and processing visual information using new, specially crafted images. In recent years, applications that process photos and videos in special ways (FaceApp, Deepfake, etc.) have received wide publicity, and one of the key questions is whether we can use neural networks to distinguish real images from processed ones. One of the topics of the “Computer Vision” course is devoted to this issue: in the lesson we will analyze modern approaches both to correctly detecting such “deception” with neural networks and to successfully “deceiving” them.
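To make the idea of an adversarial attack concrete, here is a toy sketch (my own illustration, not course material) of the classic “fast gradient sign method”: for a simple logistic “classifier” with fixed weights, we nudge each input feature by epsilon in the direction that increases the loss, and the prediction flips.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability that input x belongs to class 1."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, epsilon):
    """Fast gradient sign method. For the logistic loss, the gradient
    of the loss w.r.t. the input x is (p - y) * w, so each feature is
    moved by epsilon in the sign of that gradient."""
    p = predict(w, b, x)
    grad = [(p - y) * wi for wi in w]
    return [xi + epsilon * (1 if g > 0 else -1) for xi, g in zip(x, grad)]

# Fixed "trained" weights and an input the model classifies correctly.
w, b = [2.0, -1.0, 0.5], 0.1
x, y = [1.0, 0.2, 0.5], 1            # true label: 1
print(predict(w, b, x) > 0.5)        # True: correctly classified

x_adv = fgsm(w, b, x, y, epsilon=1.5)
print(predict(w, b, x_adv) > 0.5)    # False: the perturbation flips the prediction
```

Real attacks on image classifiers work the same way, except the gradient is taken through a deep network and the perturbation is kept small enough to be invisible to a human viewer.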

Learn more about the course
