
Chat control: incompatible with fundamental rights

The Chat Control Regulation aims to combat sexual violence against children. In this blog post, we explain why the draft is a threat to fundamental rights.

The EU Commission has presented a draft regulation intended to lay down rules for preventing and combating sexual violence against children (Chat Control Regulation). The planned regulation raises such significant fundamental rights concerns that the GFF is joining the debate while the draft is still being deliberated at EU level. Here are the most important points of criticism at a glance.

Video: Felix Reda explains the five biggest threats to our fundamental rights

The EU Commission's draft regulation on chat control is currently being negotiated in the European Parliament and the Council of Ministers. In fighting sexual violence against children, the draft pursues an objective that is essential for the protection of children and their rights and that can, in principle, justify restrictions on fundamental rights. However, there are considerable doubts about the effectiveness of the proposed measures. We are convinced that the draft violates the EU Charter of Fundamental Rights on crucial points. We have summarised the five most important fundamental rights objections to the chat control proposal here.

1. Chat control violates the right to privacy

The EU Commission's proposal provides for a whole range of obligations for certain online services such as internet access providers, app stores, hosting platforms and interpersonal communications services. Interpersonal communications services are, for example, email services such as Gmail or instant messaging services such as WhatsApp. The term “chat control” is often used colloquially to refer to the EU Commission’s draft regulation as a whole. Chat control in the narrower sense is the part of the draft according to which authorities can oblige providers of communications services such as WhatsApp to monitor private communications. This is a particularly serious restriction on the right to privacy and the protection of personal data (Art. 7 and 8 of the EU Charter of Fundamental Rights): the monitoring is not limited to persons specifically suspected of having committed a crime. Additionally, unlike data retention, which is also incompatible with the Charter but is limited to metadata – i.e. information about who communicated with whom at what time – chat control includes the surveillance of the contents of private messages.

Authorities can impose so-called "detection orders" against providers of interpersonal communications services. This means that authorities can, for example, oblige messenger services to monitor the communications of all their users. It is sufficient that the authority has identified a significant risk that the service in question is being used for the dissemination of depictions of sexual violence against children. Detection orders do not have to be limited to monitoring the communications of specific users who are under suspicion. Instead, authorities can order that the content of all communications of all users of the service be monitored preventively. This is therefore a form of mass surveillance without probable cause.

Such a detection order can oblige service providers to filter content for known as well as unknown depictions of sexual violence against children. In addition, they can include an obligation to detect attempts by adults to solicit minors (grooming). Content detected in this way must be forwarded by the service providers to a newly created EU centre, which will pass the information on to the law enforcement authorities of the member states after a plausibility check. Although service providers are free to choose which technologies they use to comply with the detection order, these technologies must in any case be able to analyse the contents of communications. In order to detect known depictions of sexual violence against children, an automated comparison of sent media files with a reference database may be sufficient. To detect unknown depictions of sexual violence and grooming, machine learning must be used to analyse the semantic content of chats. These methods are particularly prone to error: they only make an assumption about the meaning of the content based on patterns in the analysed communication - without actually understanding the content or the context of the conversation. In its case law on data retention, the European Court of Justice has indicated that indiscriminate mass surveillance of the contents of communications would violate the essence of the right to privacy.
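The difference between the two detection approaches can be made concrete with a short sketch. The following Python snippet is not taken from the draft regulation or from any real scanning system; the reference hash, the keyword list and the threshold are invented for illustration. It only shows why matching known material can work as an exact lookup, while detecting unknown material or grooming necessarily amounts to a statistical guess about meaning, which is where false positives come from.

```python
# Illustrative sketch only: the reference hash, keyword list and threshold
# are invented and do not stem from the draft regulation or any real system.
import hashlib

# Known material: an exact lookup of a file's hash in a reference database.
# No interpretation of the content itself is involved.
REFERENCE_HASHES = {
    "9e107d9d372bb6826bd81d3542a419d6",  # hypothetical database entry
}

def matches_known_material(file_bytes: bytes) -> bool:
    return hashlib.md5(file_bytes).hexdigest() in REFERENCE_HASHES

# Unknown material / grooming: a score derived from patterns in the text.
# A trivial keyword count stands in for a machine-learning classifier; the
# point is that the output is a guess about meaning, not an understanding
# of the conversation's context.
SUSPICIOUS_PATTERNS = ("keep this a secret", "don't tell your parents")

def grooming_score(message: str) -> float:
    hits = sum(pattern in message.lower() for pattern in SUSPICIOUS_PATTERNS)
    return hits / len(SUSPICIOUS_PATTERNS)

def flag_message(message: str, threshold: float = 0.5) -> bool:
    # A harmless sentence can cross the threshold and a harmful one can stay
    # below it: the structural source of false positives and false negatives.
    return grooming_score(message) >= threshold
```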

Indiscriminate mass surveillance is incompatible with the fundamental rights to privacy and data protection under the EU Charter, whether it involves encrypted or unencrypted communications. At the centre of public criticism of chat control, however, is the fact that the draft regulation does not exempt end-to-end encrypted communication services from detection orders. These services ensure that only the people involved in a private conversation can read the communication content – neither the service provider nor third parties can decrypt it. More and more people are specifically choosing end-to-end encrypted messengers to protect themselves. If the provider of such a messenger receives a detection order, it cannot reject it on the grounds that it is unable to access the contents of its users’ communications. The EU Commission's draft pays lip service to the importance of end-to-end encryption. At the same time, it states that service providers may only choose between technologies that allow them to detect illegal content in private communications. In other words, service providers who offer end-to-end encryption without backdoors will not be able to implement any detection orders they may receive from authorities and will thus come into conflict with the law. This attack on end-to-end encryption increases the intensity of the restriction of fundamental rights caused by indiscriminate mass surveillance.

2. Threat of chilling effects for communication freedoms

The European Court of Justice has already warned on several occasions that indiscriminate mass surveillance has an indirect negative impact on freedom of expression (Article 11 of the EU Charter of Fundamental Rights): communication participants are prevented from freely expressing their opinions if they cannot be sure of the confidentiality of their communications. This particularly affects professional secrecy holders, such as journalists communicating with their sources, whistleblowers and opposition activists. This danger will be exacerbated if the Chat Control Regulation, as proposed by the EU Commission, attacks the end-to-end encryption of messenger services. The aforementioned groups of people use such messengers for good reason. If this possibility is taken away from them because service providers have to weaken end-to-end encryption, considerable "chilling effects", i.e. a deterrent effect for the exercise of the fundamental right to freedom of expression and information, can be expected.

This effect occurs regardless of whether service providers monitor the contents of private communications through a backdoor in the encryption technology or by scanning the content on the user's device before it is encrypted (client-side scanning). The communication participants expect their communication to remain confidential from the moment when they enter a message into the chat programme on their mobile phone – not only at the moment when this message is delivered to its addressee. The decisive factor is that the expectation of confidentiality and integrity of the communication process is shaken to such an extent that those affected feel compelled to restrict the exercise of their freedom of communication themselves.
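A minimal sketch can make the client-side scanning scenario concrete. It assumes the PyNaCl library as a stand-in for a messenger's end-to-end encryption; the scan hook and its trigger phrase are invented for illustration. The decisive detail is the ordering: the message is analysed on the sender's device before it is encrypted, so end-to-end encryption of the transport no longer guarantees that the content stays confidential.

```python
# Illustrative sketch, assuming PyNaCl (pip install pynacl). The scan hook
# and its trigger phrase are hypothetical; only the ordering matters:
# the plaintext is inspected on the device before encryption takes place.
from nacl.public import PrivateKey, Box

def scan_before_send(plaintext: bytes) -> bool:
    """Hypothetical on-device detection hook: it sees the full plaintext."""
    return b"flagged-example" in plaintext

def send_message(plaintext: bytes, box: Box) -> bytes:
    if scan_before_send(plaintext):      # analysis happens pre-encryption
        pass                             # a real deployment would report here
    return box.encrypt(plaintext)        # encryption only protects the transport

# Usage: Alice sends to Bob. The transport is end-to-end encrypted, but the
# scan hook above has already seen the plaintext on Alice's device.
alice, bob = PrivateKey.generate(), PrivateKey.generate()
ciphertext = send_message(b"hello", Box(alice, bob.public_key))
assert Box(bob, alice.public_key).decrypt(ciphertext) == b"hello"
```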

3. De facto filtering obligations for hosting providers without safeguards

Public criticism of the proposal has concentrated on the phrase “chat control”, which highlights the planned obligations on messengers to scan private chats. But the planned obligations for hosting services that store third-party content on behalf of their users do not stand up to fundamental rights scrutiny either. Hosting services include those that make third-party content publicly available (platforms such as YouTube, hosting services of public websites) as well as those that offer their customers private cloud storage (Dropbox, iCloud Drive). They also include services where content is only accessible to a certain closed group of people (private accounts on Twitter, closed groups on Facebook, hosting providers of company websites with restricted access). Insofar as the planned obligations for hosting providers relate to non-public content, the threats to privacy and freedom of expression described under 1. and 2. are also relevant for hosting services. In addition, there are specific problems: many of the envisaged procedural fundamental rights safeguards for detection orders may end up being evaded entirely in the case of hosting services. This is due to the different privacy rules for communications on messengers on the one hand and hosting services on the other.

Hosting services (including private cloud storage providers such as Google Drive or Dropbox) can not only be required to scan private content under the Chat Control Regulation, they may also scan content voluntarily. The Chat Control Regulation stipulates that all service providers must first carry out their own risk analysis as to whether their services pose a risk of being abused for sexual violence against children. Only if, in the view of the authorities, a service provider responds to this risk analysis with insufficient voluntary measures will they impose a detection order. In the context of these self-selected measures, hosting service providers may resort to error-prone filters to monitor private user uploads. In this scenario, there is no public scrutiny of the impact of such measures on the fundamental rights of users.

In this respect, hosting services differ from messenger services: messenger and email programmes such as WhatsApp, Signal or ProtonMail fall under the e-Privacy Directive, which in principle prohibits these service providers from monitoring the private communication content of their users. A temporary derogation from this prohibition, which itself raises serious fundamental rights concerns, is to be replaced by the Chat Control Regulation. Once the Chat Control Regulation comes into force, messenger and email service providers may only access the contents of private communications on the basis of a detection order. For hosting providers such as private cloud storage, on the other hand, the e-Privacy Directive with its ban on monitoring private communications does not apply.

For hosting providers, it will regularly be attractive to avoid a looming detection order through "voluntary" measures. In this way, the companies retain more control – also over the costs. There is a strong incentive to avoid costly measures to protect users' fundamental rights. A likely scenario, then, is that hosting services will 'voluntarily' deploy error-prone filtering programmes without the procedural safeguards foreseen for authorities’ detection orders.

Before imposing a detection order, an authority must weigh the risk posed by the service against the interference with the users' fundamental rights. In this regard, the European Court of Justice has set narrow limits for the mandatory use of filtering systems. These are only compatible with the prohibition of general monitoring obligations if the filters function so faultlessly that the service providers do not have to perform an "independent assessment of the content" in order to rule out false positives. At least in the case of unknown depictions of sexual violence against children and grooming, the filter systems are incapable of meeting the Court’s standards. If a hosting service "voluntarily" filters content as part of its duty to minimise risk, there is no public assessment of whether the filtering systems are compatible with users’ fundamental rights. As a result, innocent users may inadvertently be locked out of their accounts or even falsely reported to law enforcement authorities.

4. Website blocking obligations require surveillance of Internet users

The draft regulation provides for blocking obligations on internet access providers relating to individual websites (URLs). Before an authority issues a blocking order, it must require internet access providers to supply it with information about users' access to the URL in question. To collect the necessary information about access to individual URLs and pass it on to the authorities, internet access providers would have to monitor the surfing behaviour of all their customers preventively and comprehensively. Such surveillance, however, would be incompatible with the prohibition on general monitoring obligations and with the fundamental right to privacy. Additionally, this information is technically inaccessible to internet access providers when a website is accessed via the encrypted https protocol, because the provider can then see only the domain, not the full URL. Almost all websites now use https to ensure that, for example, address or credit card data that users enter into web forms is transmitted in encrypted form. The widespread use of https is recommended by the Federal Office for Information Security (BSI).

The targeted blocking of individual URLs is equally impossible for internet access providers without abandoning https encryption and monitoring the contents of their users’ online activities. DNS-based website blocking is not suitable for the planned blocking of individual URLs, because DNS blocking always affects entire domains. A DNS block directed against an individual file on a file-hosting platform would also affect all other content hosted on the same platform and would thus not meet the European Court of Justice's requirement that website blocking must be strictly targeted. In practice, there is therefore a considerable danger that internet access providers will either over-comply with blocking orders to the detriment of users’ freedom of expression and information, by using DNS blocking to block access to an entire domain, or attempt more targeted blocking and monitor the surfing behaviour of their customers, sacrificing the security of online communications via https encryption in the process.
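The dilemma can be illustrated with a short sketch, using invented hostnames and a hypothetical domain-level blocklist: when a site is accessed via https, the access provider can only observe which host is contacted (via the DNS lookup or the TLS connection), not the path of the URL, so a DNS-level block inevitably hits every file on the same host.

```python
# Illustrative sketch: the blocklist and URLs are invented. It shows why a
# provider that only sees hostnames cannot block a single URL and why a
# DNS-level block affects an entire domain.
from urllib.parse import urlparse

DNS_BLOCKLIST = {"filehoster.example"}   # hypothetical domain-level blocklist

def visible_to_provider(url: str) -> str:
    # With https, only the hostname is observable to the access provider;
    # the path after the domain is transmitted encrypted.
    return urlparse(url).hostname

def dns_blocked(url: str) -> bool:
    return visible_to_provider(url) in DNS_BLOCKLIST

# Blocking one file on the host inevitably blocks every other file there:
print(dns_blocked("https://filehoster.example/files/illegal-item"))    # True
print(dns_blocked("https://filehoster.example/files/holiday-photos"))  # True (over-blocking)
```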

5. Age verification endangers freedom of communication

The draft regulation stipulates that all providers of messenger and email services that are at risk of being used for grooming must verify the age of their users. The risk identified does not have to be significant – the obligation to implement age verification would therefore apply in principle to all email and messaging services that enable communication between minors and adults. In addition, the age verification obligation also applies to all app store providers. They must also prevent underage users from downloading apps that pose a significant risk of being used for grooming.

Service providers may choose between age assessment methods (for example, AI-based facial analysis, as already used by Instagram) and age verification methods (using an identity document or digital proof of identity). Both procedures are extremely intrusive for users. Age verification via identity documents comes close to banning anonymous internet use. AI-supported facial analysis, on the other hand, is often outsourced by service providers to external companies, leaving users with little control over the handling of this particularly sensitive personal data. If the technology makes a wrong assessment, young-looking adults can also be excluded from using certain apps. Those who do not possess identification documents or do not want to entrust their biometric data to a company are excluded from crucial communication technologies. Using a modern smartphone without an app store is hardly possible. Doing without messenger services is also unreasonable, especially for people who, for good reason, attach particular importance to anonymous internet use (whistleblowers, victims of stalking, politically persecuted people). In contrast to service providers, users cannot always choose between different age verification procedures.

For underage users (especially teenagers), their fundamental rights to freedom of expression and information are severely restricted if app stores categorically refuse to allow them to install certain apps without weighing these rights against the risk the app poses to underage users. Due to the strong market concentration in this area, the possibilities to switch to an alternative app store are limited.

