It is not just the seasons, or my attempts to show up at the office in an outfit other than holey conference shirts, shorts and Birkenstock sandals, that are cyclical. Politicians' desire for a "government trojan", i.e. the surveillance of digital communication, seems to follow a constant rhythm as well – and apparently it's that time again. Federal Chancellor Karl Nehammer is making the surveillance of digital communication a fixed condition for a future coalition.
Government-sanctioned malware is thus back on the political table as well. What the government has in mind for such a digital trojan horse could be seen clearly in a legislative draft circulating in the media. This draft was allegedly meant to comply with the restrictions that the Constitutional Court placed on digital surveillance back in 2019. Although the draft was, fortunately, rejected, I would still like to discuss some parts of it – mainly because I assume that we will be having this very same conversation again in a few months' time.
Specifically, the draft talks about the monitoring of messages that are "sent, transmitted or received in encrypted form" by "introducing a program into a computer system of the person concerned". I find the stated requirement to "technically ensure that only messages sent or received within a specified, pre-authorized time period are monitored" particularly interesting.
According to legal experts, this would enable constitutionally compliant surveillance because the surveillance software available on the market is already “much more focused on chat messages” and no longer “applicable to the entire cell phone”. Whatever that means.
Personally, I wish that not only lawyers were consulted on this topic, but that security experts and technicians were also allowed to contribute their part. Letting legal experts make technical judgments is like letting me make legal assessments – not exactly optimal.
As a technician, I must clearly disagree with the lawyers here. The current solutions from the various providers of commercial spyware (which would most likely be used – I do not assume that the responsible authorities in Austria would develop their own) are simply not designed to monitor only certain applications. As far as I know, there is no spyware that monitors (for example) only Telegram or only Viber.
Of course, it would be possible for the provider to configure its system in such a way that only messages from (for example) Telegram or Viber are displayed to customers. However, this is an organizational safeguard, not a technical guarantee that only relevant messages are monitored. The same applies to the requirement that only messages within a monitoring period specified in an order may be targeted.
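To make that difference concrete, here is a minimal sketch in Python – with entirely hypothetical names, since no spyware vendor publishes such an interface – of what a provider-side filter amounts to: the capture on the device is complete, and only the presentation layer is narrowed down to what the order allows.

from datetime import datetime

# Hypothetical illustration, not any vendor's real API: the implant captures
# everything it can reach, regardless of which app or time period the order
# actually covers.
captured = [
    {"app": "Telegram", "ts": datetime(2024, 8, 1, 10, 0), "text": "hello"},
    {"app": "Viber",    "ts": datetime(2024, 8, 2, 11, 0), "text": "hi"},
    {"app": "Signal",   "ts": datetime(2024, 7, 1, 9, 0),  "text": "hey"},
]

# Only the display layer filters the data down to what the order allows.
# The full capture has already happened; nothing technical prevents access
# to the rest of it.
def filter_for_order(messages, allowed_apps, start, end):
    return [m for m in messages
            if m["app"] in allowed_apps and start <= m["ts"] <= end]

shown_to_customer = filter_for_order(
    captured, {"Telegram"}, datetime(2024, 8, 1), datetime(2024, 8, 31))

A technical guarantee would have to prevent the data outside the order from ever being collected in the first place – which is exactly what current commercial spyware does not do.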
Compromising an end-user device with commercial spyware always means that the privacy of the person concerned is compromised completely. Speaking of compromising: I assume that "introducing a program into a computer system of the person concerned" does not mean that the persons concerned go to their local police station, hand in their cell phone or computer (including the necessary credentials to access them) and go for a coffee around the corner while the officers "introduce" the "program". At least I hope that's not what is meant.
No, what is being talked about here is that security vulnerabilities are exploited to install malware on the devices. For security gaps to be exploitable, the vulnerabilities must remain unpatched. At the same time, however, this means that the system remains insecure in the broadest sense – and the vulnerabilities in question can also be abused by other actors with (depending on how you look at it, "even") more malicious intentions.
This means, especially in view of the upcoming implementation of NIS-2, that the state is putting itself in a dilemma here:
The state wants IT systems to be secure so that citizens, organizations, companies and authorities can communicate through them confidentially and exchange data securely. This requires, among other things, that the systems are protected according to the best available standards and technology. If there are vulnerabilities, these must be patched as quickly as possible or reported to the manufacturer so that they can provide patches.
The state wants to gain insight into the communications and data of suspects in order to prevent and/or solve crimes, terrorism or espionage. This presupposes, among other things, that the systems used by the suspects have vulnerabilities that can be exploited to place surveillance software on their devices and enable those insights.
It will not be possible to completely fulfill both requirements. My colleague Otmar Lendl already pointed this out seven years ago in an article on a very similar topic.
In principle, I understand the authorities’ desire to gain insight into the communications of suspects (the FBI even felt this desire so strongly that they simply made their own messenger available to the criminals). But the way politicians envision it – clean, clearly defined, secure, safeguarding fundamental rights – is simply not possible. No matter how often they wish it were.
Even though I am not a criminal investigator, detective or terrorism expert, I can think of measures off the top of my head that are very likely to be more promising and are also much easier to reconcile with constitutional law.
I know of colleagues in federal employment who had to wait several months for the peripherals they needed for their work computers because of outdated processes and the miserably slow grind of bureaucracy. Or cases in which one department is almost drowning in work while another team with the same technical expertise is bored to the point of boreout, but for whatever reason is not allowed to help out.
In contact with CSIRTs and law enforcement agencies from other countries, we hear time and again that cooperation with institutions from Austria works well – when it does come about, because motivated people throw in the towel in frustration with alarming regularity.
When I talk to people I know who work in social and probation services, the massive lack of resources is a regular topic. I'm going to go out on a limb and say that with all the resources, time and energy (and probably a certain amount of budget) that have gone into the issue of messenger surveillance and state malware, many other things could have been improved that would have had a more lasting positive effect on our security.
To conclude with a very personal example: although temporal correlation naturally does not imply causation, I cannot help suspecting that the topic is being brought to the boil again in connection with the foiled attack plot surrounding the Taylor Swift concerts in Vienna.
In my (still) younger years, I worked for some time as a security guard at major events. Even back then, the only requirement for employment was a willingness to work nights in the pouring rain for €6.50 an hour. As a result, I guarded the back entrance of a well-known cultural institution with two colleagues, one of whom practiced team kickboxing on cinder tracks in his spare time in the orbit of a Viennese soccer club, while the other had to cover up some of his tattoos to avoid coming into conflict with various sections of the VbtG. That seemed strange to me even at the time.
My eyes widened even more a few years ago when it became known that during the committee of inquiry into the BVT affair, a security employee with close links to a right-wing extremist who had been known to the authorities for decades was working in parliament. Apparently, in this case too, no real checks were carried out on who was employed for €6.50 per hour (hopefully adjusted for inflation).
At least this led to "clear and binding security standards" for security companies being written into the Turquoise-Green government program of 2020. As part of the investigations following the cancellation of Taylor Swift's concerts in Vienna a few weeks ago, it emerged that eight of the security staff employed at the concerts had previously been convicted of jihadism. It seems that those security standards haven't quite worked out so far.
The point I want to make is that government malware is not the solution. It is not even one of several possible solutions, like the examples I mentioned in the previous paragraphs. The “Bundestrojaner” is a problem.
Targeted monitoring of individual conversations, or of just certain chat applications, that at the same time avoids excessive invasion of privacy and preserves the technical security of the monitored devices is simply not possible in the way decision-makers imagine it. I wish this didn't have to be explained anew every few years.