The initiative is being promoted under the guise of combating prohibited content on social networks and instant messengers. Human rights activists see it as a serious threat to privacy and information security. Let’s talk about what’s going on.
Encryption-related issues have long concerned Western legislators. Some countries – Australia, for example – have obliged application developers to provide decrypted user data at the request of law enforcement, essentially banning end-to-end encryption. A similar bill began to be discussed in the UK two years ago and is now being considered at the parliamentary level.
The largest messengers have already come out against the politicians’ proposal. Company representatives said they would not weaken their protection systems or introduce backdoors, and would accept the consequences of being blocked in the country. Some developers may simply close access to their services for UK users so as not to break the law.
Yet despite the criticism such initiatives regularly draw from human rights organizations, telecoms, and IT companies, the EU has decided to follow the same path. Last May, the European Commission proposed obliging the largest services to look for prohibited content by scanning users’ devices – emails, files uploaded to the Internet, and even messages in computer game chats.
The proposal is that before a message or file is sent, its hash be compared against a special database of prohibited content. There are no specifics on this yet, but given the volume of data involved, it can be assumed that machine learning and AI systems would have to be used for the purpose.
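The matching step described above can be sketched in a few lines. This is only a minimal illustration under assumptions: the blocklist, its contents, and the use of plain SHA-256 are all hypothetical – real scanning systems typically rely on perceptual hashes rather than exact cryptographic digests.

```python
import hashlib

# Hypothetical blocklist: SHA-256 digests of known prohibited files.
# Production systems use perceptual hashing, not exact digests;
# this only illustrates the comparison step itself.
BLOCKLIST = {
    hashlib.sha256(b"known prohibited content").hexdigest(),
}

def should_flag(payload: bytes) -> bool:
    """Return True if the payload's digest appears in the blocklist."""
    digest = hashlib.sha256(payload).hexdigest()
    return digest in BLOCKLIST

print(should_flag(b"known prohibited content"))  # True: exact match
print(should_flag(b"harmless message"))          # False: no match
```

Exact-match lookups like this are cheap and scale well, which is part of their appeal; their weaknesses are discussed below.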
The idea was almost immediately criticized in the Council of the EU and by the European Data Protection Supervisor, who considered that the proposed measures pose a threat to privacy and are disproportionate to the potential threats. In March 2023, the German parliament also opposed the initiative, declaring that it would not support an invasion of citizens’ privacy.
One of the main problems of the proposed law is the extremely vague formulation of the criteria for prohibited content and the methods for finding it. This vagueness creates opportunities for abuse and raises the risk of IT services and user devices being compromised by hackers.
At the same time, automated search systems based on artificial intelligence make mistakes: experts estimate the error rate at 10–20%. To weed out false positives, flagged content would have to be checked manually – an approach that, again, compromises privacy.
Experts also question the effectiveness of the approach. If the software finds violators by hashes, attackers will eventually learn to bypass the algorithms (for example, using special hash filters), while tightening the matching may lead to an increase in the number of false positives.
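The bypass problem is easy to see with exact cryptographic hashes: changing even a single byte of a file produces a completely different digest, so a naive exact-match blocklist no longer recognizes it. A small sketch (the file contents here are placeholders):

```python
import hashlib

original = b"prohibited file contents"
modified = original + b" "  # append a single byte

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(modified).hexdigest()

# A cryptographic hash changes completely on any modification,
# so the modified file no longer matches the blocklist entry.
print(h1 == h2)  # False
```

This is why scanning systems tend toward fuzzier perceptual hashes that tolerate small changes – at the cost of exactly the false positives described above.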
Finally, proponents of the new bill claim that encryption covers only a small portion of Internet interactions, and that algorithms are already analyzing open data for violations. However, they overlook that as the screws tighten, messenger developers may simply stop operating officially in the EU – as the major messengers in the UK have promised to do.
Some experts see the use of AI tools as a continuation of existing trends in moderation. Services already have to strike a compromise between user privacy and compliance with the law, and this is unlikely to change in the near future.
The new bill may still be revised or scrapped altogether, but so far it does not look like politicians are planning to abandon the idea. There is a chance the new norm will be adopted with the technical details finalized later. That is what happened in Australia in 2018 with the ban on end-to-end encryption: there, local services now work together with the government to stay within the legal framework.