In my previous post I talked about why consent matters when it comes to privacy; and yet, privacy is only one of the areas where tech companies take advantage of users without their consent. Recently, tech companies have come to a troubling consensus: that they can change your computer remotely (and often silently), without your knowledge or permission.
Below you will find some examples of this mentality, its origins, the risks and harm that arise from it, and what it says about who really owns a computer.
Anyone who has ever worked for a large company in the computer age has experienced first-hand the authoritarian, controlling, and restrictive policies that IT employs to manage company computers. Starting with centralized systems like Active Directory, IT teams were able to create policies that controlled what sorts of passwords employees could use and whether employees could install applications, access printers, and even, in some cases, insert USB drives.
These centralized tools have evolved over the years: they can now add and remove files, install new software and software updates, and remotely control machines over the network to view what’s on their screens and access their local files. This control extends to the Active Management Technology (AMT) features embedded in the Intel Management Engine, which let administrators manage computers remotely even when they are powered off. Now that smartphones are critical tools in many organizations, MDM (Mobile Device Management) tools are also often employed in enterprises to bring those devices under a similar level of control, with the added benefit of using GPS to track employee phones even outside the office.
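To make that architecture concrete, here is a minimal sketch, in Python, of the kind of agent these management tools install on every endpoint. It is a hypothetical illustration, not any vendor’s actual product: the server URL, the policy format, and the actions are all invented. The point is the shape of the system: instructions flow one way, from a central server down to the machine, and the person at the keyboard is never asked.

```python
# Hypothetical sketch of a centralized management agent (not any real product).
# Each endpoint runs a loop like this: fetch the current policy from a central
# server, apply whatever it says, and repeat. The user is never prompted.
import json
import time
import urllib.request

POLICY_SERVER = "https://mdm.example.internal/policy"  # invented endpoint
CHECK_INTERVAL = 3600  # poll once an hour

def fetch_policy() -> dict:
    """Download the policy the administrators have published."""
    with urllib.request.urlopen(POLICY_SERVER) as response:
        return json.load(response)

def apply_policy(policy: dict) -> None:
    """Apply the policy silently; stand-in prints mark the real actions."""
    for package in policy.get("install", []):
        print(f"installing {package} (no user prompt)")
    for package in policy.get("remove", []):
        print(f"removing {package} (no user prompt)")
    if policy.get("disable_usb_storage", False):
        print("disabling USB storage devices")

if __name__ == "__main__":
    while True:
        apply_policy(fetch_policy())
        time.sleep(CHECK_INTERVAL)
```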
The most common justification for these policies is convenience. If you are an IT department with thousands of employees to support, each with at least one computer and one smartphone, one of the ways to make sure the appropriate software is on every system, and that updates get applied, is to push them from a central location. Companies often have custom in-house software their employees rely on to do their jobs, and throughout the life of the company more tools are added to that toolbox. You can’t expect the IT team to go desk-by-desk installing software by hand when thousands of employees work at offices all over the world. And when an employee’s computer breaks, these same tools make it easy for IT to replace it so the employee can get back to work quickly.
The main justification for the strictest and most controlling IT policies isn’t convenience, though: it’s security. IT pushes software updates to protect against security bugs. They push anti-virus, anti-malware, and remote monitoring tools to protect both employee and company from dangerous email attachments and from software employees might download in their web browser. IT removes local administrative privileges from employees in the name of protecting them from installing malware (and, practically speaking, from installing games and other time-wasting apps). They disable USB storage devices so employees can’t insert disks containing malware or copy off sensitive company documents. Each of these practices has valid reasons behind it for companies facing certain threats.
Information security professionals spend much of their time solving problems in the enterprise IT space; as a result, they often take on some of the same patronizing views of users you find in IT. Many view themselves as parents and users as children, their role being to wrap the hard corners of the digital world in foam so users don’t hurt themselves. This patronizing view leads them to pick security measures that remove control and autonomy from end users and centralize that power in the hands of IT or information security. The repeated refrain is “just trust us”: to be safe, users must place full trust in the internal security team or the third-party enterprise security vendor.
Most users tend to bristle at these kinds of security policies, especially as generations who grew up with computers enter the workforce, increasingly savvy and knowledgeable about how to use them. All the same, in the workplace employees have grown accustomed to giving up much of their autonomy, control, and privacy for the sake of the company. Yet you can tell this approach runs against our nature, because so many companies have had to spell out these policies in new-hire documents and require that employees agree to and sign them when they are hired. These documents inform employees that the computers they use and the documents they access are company property, and that the company is authorized to monitor and control its property at all times.
You could make a convincing argument that, since companies have paid for, and do own, all of the computers they provide to their employees, and pay IT teams to maintain them, it’s their right to set up software to control those computers remotely. As draconian and privacy-invading as some corporate policies are, you can still argue that employees consented to this level of control when they signed their employment contract. The problem is that this patronizing, authoritarian approach to enterprise IT has now found its way into consumer devices as well, because it’s in a tech company’s interest to have as much power over its customers as possible. Unlike in the enterprise, though, this remote control is on by default and comes without explicit consent.
More and more tech companies are appointing themselves their customers’ IT staff, granting themselves remote control over their customers’ computers, always in the name of convenience and security. The most common form of remote control is the automatic update. On the surface, automatic security updates make sense: people can’t be expected to know about all of the security vulnerabilities in all of their software, so it makes sense to make patching easier for them.
The problem is that many companies now set this behavior as the default, without user consent, and don’t limit themselves to security updates: they also push whatever other changes they want, including normal feature updates, new advertising in the OS, and automatically logging users into their Google accounts. These updates often have critical bugs of their own, but since they go along for the ride with security updates, people are left with a false choice between security and stability.
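As a rough illustration of that false choice, imagine an update manifest like the one below (a made-up format, not any vendor’s real one): the security patch and the unrelated feature changes ship as one indivisible unit, so refusing the advertising also means refusing the fix.

```python
# Hypothetical bundled update (invented format, for illustration only).
# The security fix and unrelated feature changes arrive as a single unit,
# so there is no way to take the patch without taking everything else.
bundled_update = {
    "version": "2.4.0",
    "changes": [
        {"type": "security", "description": "patch a remote code execution bug"},
        {"type": "feature", "description": "add promoted apps to the start menu"},
        {"type": "feature", "description": "sign the user in to their account automatically"},
    ],
    "apply": "all-or-nothing",  # no option to install only the security change
}

def apply_update(update: dict) -> None:
    # Either every change is applied, or the user stays vulnerable.
    for change in update["changes"]:
        print(f"applying {change['type']} change: {change['description']}")

apply_update(bundled_update)
```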
Because these updates happen behind the scenes, without any prompt or notice, users have little to no control over whether, or when, they happen. On phones, this control can also extend to whether a user is allowed to install an application or keep using it after it’s installed; in the famous example of Google and Huawei getting caught up in the US/China trade war, customers lost the ability to update their phones. Most recently, Adobe told its customers they could be sued if they don’t upgrade, since using older versions of the software they bought is apparently against the licensing agreement!
The irony is that decades ago, when the average person had minimal experience with computers, those inexperienced users had much more control and autonomy over them. Today, many people have grown up with computers and smartphones, and technology is second nature to them. Many switch between operating systems, laptops, and phone vendors as effortlessly as if they were switching between car brands. Yet at a time when individuals are much more capable of using computers, and computers are simpler to use than ever before, tech companies have decided that people can’t be trusted to manage their own devices and that the vendor should have more control than ever before.
In the case of enterprise IT, it’s clear that the company owns employee computers and exercises its rightful control over its own property. But what does it mean when a tech company exercises the same kind of control over consumer computers or phones? If hardware vendors have the power to change your computer silently, without your consent, including third-party applications you installed yourself, is the computer really yours? If phone vendors decide which applications you can install, can remotely disable applications from running, and can stop you from getting updates, is the phone really yours? If software vendors can install major feature changes without your permission, force you to update, even sue you if you don’t update to their latest version, is the software really yours?
The solution to this problem of remote control is pretty simple: consent. While many people in security circles believe the ends justify the means, there are many examples where the same action, leading to the same result, takes on a completely different tone depending on whether or not the actor got consent.
Some people may be more than happy to put their hardware or software vendor, or an IT department, in charge of their devices, but the vendor should still get permission first. While many vendors will point to their click-through agreements as proof of consent, customers can’t reasonably be expected to read (or understand) these agreements, so they are no more valid a form of consent than a click-through privacy policy. If you have to accept a license agreement before you can use a computer or software, it’s not really consent; it’s an ultimatum.
Consent doesn’t need to mean users will be at risk from malware or security bugs; it just means they give permission before a company changes files on their computer. Vendors can add a simple prompt that explains what’s about to happen so the customer can approve it. Customers who don’t care, or who fully trust the vendor, will still click Accept regardless; customers who do care retain control over their computer and can investigate and approve the change first. The problem with removing everyone’s power because you assume most people are apathetic is that many people are apathetic precisely because they feel powerless in the face of Big Tech companies.
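As a sketch of what that could look like in practice (a hypothetical flow, not a description of how any particular vendor implements updates), the only change from the silent model is one extra step: describe the change and wait for a yes.

```python
# Hypothetical consent-first update flow (illustration only).
# The vendor describes exactly what would change; nothing is touched
# until the owner of the machine agrees.
def describe_update(update: dict) -> str:
    lines = [f"Update {update['version']} would make these changes:"]
    lines += [f"  - {change}" for change in update["changes"]]
    return "\n".join(lines)

def apply_with_consent(update: dict) -> None:
    print(describe_update(update))
    answer = input("Apply this update now? [y/N] ").strip().lower()
    if answer == "y":
        print("applying update...")
    else:
        print("leaving your computer exactly as it is")

apply_with_consent({
    "version": "2.4.1",
    "changes": ["patch a remote code execution bug", "update spell-check dictionaries"],
})
```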
All of Purism’s products are aimed at removing control from tech vendors (including ourselves) and giving freedom back to users. This is true of the free software we use throughout our hardware, of the open standards (and, again, free software) we use for our services, and of our approach to moderation for Mail, Chat, and Social. We ask for your permission before we update software on your computer and explain exactly what’s being updated and why. You shouldn’t have to outsource all of your trust and control to a vendor to be secure. With Purism products, you are in control.