In an attempt to reduce the impact of the terrorist use of the Internet, the European Commission is funding the Clean IT project. The aim of this project is to examine the question “if we can reduce the impact of the use of the Internet for terrorist purposes, without affecting our online freedom”. However, a recently leaked document reveals that the project seems to have lost track of this original question.
The goal of the project is of course an admirable one. Nobody in his right mind wants to give terrorists free use of the Internet. However, the question quoted above is based on the assumption that it is possible to both:
- distinguish between terrorists’ communication and the communication of regular Internet users, and
- do so without violating regular Internet users’ privacy and rights.
While I am perfectly willing to accept that it could be possible to distinguish between the two kinds of traffic, I have a hard time believing that it is possible to do so without violating the privacy and rights of regular Internet users. Assuming that 99% of the people using the Internet are not terrorists, how do you find the 1% of traffic that originates from as-yet-unknown terrorists without violating the freedom and privacy of the 99%? I am assuming that known terrorists are already under scrutiny, as we should hope they are! The only way to distinguish between the two would be to search traffic against certain criteria. That means you would have to identify characteristics that traffic from terrorists might possibly have and check existing traffic for those characteristics. That method is flawed, ineffective and dangerous. It is flawed because a lot of legitimate traffic could meet those criteria (for instance the use of encryption), ineffective because there are ways for terrorists to circumvent this kind of monitoring (by using private networks or pre-arranged code), and dangerous because it would necessitate a dragnet approach to traffic monitoring, meaning that the general population would be under constant surveillance.
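The base-rate problem behind this argument can be put into rough numbers. The figures below are my own assumptions for illustration only (they appear nowhere in the Clean IT documents): even a filter that is right 99% of the time, applied to a population that is overwhelmingly innocent, flags almost exclusively innocent people.

```python
# Hypothetical base-rate sketch; all numbers are illustrative assumptions.
population = 100_000_000     # Internet users being scanned
terrorist_rate = 0.0001      # assume 1 in 10,000 users is a terrorist (generous)
true_positive_rate = 0.99    # the filter catches 99% of actual terrorist traffic
false_positive_rate = 0.01   # and wrongly flags only 1% of innocent traffic

terrorists = population * terrorist_rate
innocents = population - terrorists

flagged_terrorists = terrorists * true_positive_rate
flagged_innocents = innocents * false_positive_rate

share_innocent = flagged_innocents / (flagged_innocents + flagged_terrorists)
print(f"{share_innocent:.1%} of flagged users are innocent")  # prints "99.0% of flagged users are innocent"
```

Under these assumptions, roughly a million innocent people get flagged to catch fewer than ten thousand actual suspects; about 99% of everyone the dragnet flags is innocent. The precise percentages depend on the assumed rates, but the shape of the problem does not.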
Now let’s take a look at some of the proposed measures in the leaked document of the Clean IT project.
Removal of any legislation preventing filtering/surveillance of employees’ Internet connection.
Imagine the can of worms this opens, just for a second. It would mean that any employer, government or otherwise, would be allowed to monitor all his employees’ Internet communication. Now imagine the amount of spying an unscrupulous boss would be allowed to do under the protection of this arrangement: seeing who uses the Internet for private use, who is using Facebook to complain about his salary, who is looking online for another job; the list goes on and on. Workplace privacy, which is well protected now, would be all but annihilated. And all in the hope of maybe catching a few terrorists. In my opinion, this is far too large a sacrifice to make.
Law enforcement authorities should be able to have content removed “without following the more labour-intensive and formal procedures for ‘notice and action'”.
Putting that another way, it reads: “Law enforcement authorities should be able to pick up the phone, call a hosting provider and demand they remove objectionable material from their servers, without any form of due process and without informing the poster of said material.” This is censorship, pure and simple.
“Knowingly” providing links to “terrorist content” (the draft does not refer to content which has been ruled to be illegal by a court, but undefined “terrorist content” in general) will be an offence “just like” the terrorist.
Another can of worms. What exactly is “terrorist content” and who determines this? Also, the word “knowingly” is suspect, for how would it be established beyond a shadow of a doubt that the person making the link knew, at that time, that the “terrorist content” was indeed “terrorist content”? This is much too vaguely worded.
Legal underpinning of “real name” rules to prevent anonymous use of online services.
And yet again a can of worms. What exactly are “online services”? Does that refer to any online service? Would it mean that I may no longer sign up to Twitter under an assumed identity? And how would this affect the terrorists? Presumably, these men and women have false identities already. And how would Twitter verify that I am me? Will they require a copy of my passport? Or a government-issued digital certificate? Just how trustworthy would that be, with the DigiNotar débâcle in mind?
ISPs to be held liable for not making “reasonable” efforts to use technological surveillance to identify (undefined) “terrorist” use of the Internet.
In other words, ISPs will be forced to monitor their customers’ activities, whether they like it or not. If they don’t, they themselves will be liable for making “terrorist content” available or passing along “terrorist communication”. In effect, this will mean that ISPs turn into a private “Internet police”, monitoring everyone who signs up for their service. Or do you think that any ISP will run the risk of being held liable and choose to protect their customers’ privacy? And what exactly are “reasonable efforts”? Does it mean storing and analysing all traffic? Does it mean Deep Packet Inspection? More? Less?
Companies providing end-user filtering systems and their customers should be liable for failing to report “illegal” activity identified by the filter.
So not only will ISPs be held liable, but companies who provide filtering services will also be required to snoop on and tell on their customers. The customers in turn would be expected to snoop on their users. Without trying to invoke Godwin’s Law, this sounds almost like the situation in Nazi Germany, where everybody would tell on everybody, for fear they themselves would get a visit from Herr Flick.
But it gets worse, because:
Customers should also be held liable for “knowingly” sending a report of content which is not illegal.
So not only will customers have to make sure that the content they are reporting is in fact illegal, but if they report something which turns out to be perfectly okay (according to whom?), they may get in trouble themselves for “knowingly” sending a false report. And how does one make sure that content is illegal? Well, you would have to examine it one way or another. And suddenly, said customer could be in possession of illegal material. I wonder what the penalties for that will be. And why is it suddenly called “illegal content”? What happened to “terrorist content”? Surely, illegal content is a far broader definition than terrorist content.
Governments should use the helpfulness of ISPs as a criterion for awarding public contracts.
What a terribly wordy way to say “blackmail”. For the ISPs it will be: “snoop on everyone and tell us what you find, or you will no longer be eligible for public contracts”. While you are at it, negotiate with a loaded gun on the table, why don’t you?
Blocking or “warning” systems should be implemented by social media platforms – somehow it will be both illegal to provide (undefined) “Internet services” to “terrorist persons” and legal to knowingly provide access to illegal content, while “warning” the end-user that they are accessing illegal content.
I know the Internet can be a confusing place for non-technical autocrats, but come on, make up your minds, will you? So it is illegal to link to “terrorist content”, illegal to provide Internet services to terrorist persons but if you have a warning message in front of it, you’re okay? Surely you must be joking?
The anonymity of individuals reporting (possibly) illegal content must be preserved… yet their IP address must be logged to permit them to be prosecuted if it is suspected that they are reporting legal content deliberately and to permit reliable informants’ reports to be processed more quickly.
I wonder how they will pull this one off. So if I report something to them that might be illegal, I have nothing to fear because I can do so anonymously. Yet, when my report turns out to be false and they believe I knew it to be false, suddenly they have my information to prosecute me. How? Are they going to pull it out of a hat? Of course not; the first part is obviously bogus. If my IP address is logged and I am no longer allowed to make anonymous use of the Internet (see above), how on Earth could I have even a shred of anonymity here?
Companies should implement upload filters to monitor uploaded content to make sure that content that is removed – or content that is similar to what is removed – is not re-uploaded.
And pray tell, how would companies know which content has been removed (from where, the entire Internet?)? Probably because they will all have to tap into some kind of government- or EU-run filtering service, I suspect. There would be no other way of implementing this.
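To see why such a shared list would be unavoidable, consider the simplest imaginable form of re-upload filtering: comparing a fingerprint of each new upload against a blocklist of fingerprints of previously removed content. The sketch below is purely my own illustration (nothing in the leaked document specifies a mechanism; all names here are hypothetical):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Exact-match fingerprint of an uploaded file (SHA-256 hex digest)."""
    return hashlib.sha256(data).hexdigest()

class UploadFilter:
    """Hypothetical re-upload filter: blocks uploads whose fingerprint
    appears on a blocklist of previously removed content."""

    def __init__(self, blocklist):
        # The blocklist must contain fingerprints of content removed
        # *elsewhere* -- which is exactly why every company would need
        # access to some centrally maintained list.
        self.blocklist = set(blocklist)

    def allow(self, data: bytes) -> bool:
        return fingerprint(data) not in self.blocklist

removed = [fingerprint(b"previously removed file")]
f = UploadFilter(removed)
print(f.allow(b"previously removed file"))    # False: exact re-upload is blocked
print(f.allow(b"previously removed file!"))   # True: changing a single byte slips through
```

Note that exact hashing is trivially defeated by altering a single byte, so the proposal’s “content that is similar to what is removed” implies fuzzy or perceptual matching, which requires inspecting the content itself rather than just comparing fingerprints, and is therefore far more invasive still.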
It proposes that content should not be removed in all cases but “blocked” (i.e. made inaccessible by the hosting provider – not “blocked” in the access-provider sense) and, in other cases, left available online but with the domain name removed.
Why? If it is illegal, what would be the point of merely blocking it? So that, if the powers-that-be make a mistake, it can easily be reverted? This makes no sense whatsoever.
The list above was gleaned from Clean IT – Leak shows plans for large-scale, undemocratic surveillance of all communications, an Edri.org publication. The leaked document from which the above was taken can also be downloaded there. It should be clear from the above, though, that rather than dive into the question “can we reduce the impact of the use of the Internet for terrorist purposes, without affecting our online freedom?”, the Clean IT project has somehow developed a set of proposals and ideas that are, in my view, a lot worse than ACTA. They would effectively turn the mass of the people, filtering companies and ISPs into government snitches and would give the EU unprecedented monitoring and censoring powers. It is high time that we raise the alarm and expose the Clean IT project for what it seems to be: a group of people with no regard for the rights and liberties of European citizens.
Since EDRI published the leaked document online, the Clean IT project has posted a short message online, explaining the supposed status of the document. In the statement, they claim the following:
However EDRI suggests otherwise, a posted document on their website does not provide concrete proposal to tackle terrorism on the internet. The document is food for discussion only, and summarizes possible solutions and ideas that have to be evaluated by all partners, public and private. While taking into account that any measure taken should not affect our online freedom, the advantages and disadvantages of the possible measures will be discussed in next meetings.
The glaring textual errors in this statement suggest that it was published in rather a hurry. What is worse, though, is that they seem to admit that they have not been busy trying to answer the question whether it is possible to reduce the impact of the use of the Internet for terrorist purposes without affecting online freedom, but rather have been busy thinking up solutions. Solutions which are far, far worse than the terrorism they are supposed to help prevent. Many of these proposed solutions should never, ever have made it into this document in the first place because they are terrible ideas, as I have tried to show above. Most, if not all, of the above measures do affect our online freedom, for the worse. Perhaps someone should tell these people that “1984” is not an instruction manual?