All indications are that software supply chain security will be the biggest issue for the security industry in 2021. The largest security story of 2020 was the supply chain compromise of SolarWinds Orion, which allowed attackers to ship backdoored updates to Orion customers with perfectly valid signatures. Once those updates were applied, attackers used their foothold inside these networks to launch a large-scale attack on government agencies and tech and security companies, perhaps one of the largest attacks on US networks in history. In some cases the compromise ran so deep, including stolen administrator credentials, that the general guidance has been for victims to rebuild their infrastructure from the ground up.
Supply chain security is not a new concept (I wrote about how Purism protects the digital supply chain over two years ago) and many researchers have recognized it as a legitimate threat for a long time. Yet the industry overall has been slow to recognize the risk, and perverse incentives have led many in the industry to double down on security solutions that rely heavily (in many cases entirely) on the exact kind of security measures supply chain hacks defeat.
The proprietary software industry can’t fix the software supply chain problem because it largely created that problem and depends on it to maintain control over customers. In this article I’m going to explain how this happened and what the future of supply chain security looks like.
The core problem with the security industry is the perverse incentives that drive security architects to design solutions where security is a secondary effect, or sometimes even a marketing excuse, while the main priority is to increase a customer’s dependence on the vendor. The majority of professional security architects use the same playbook and are unable to design secure software without falling back on chains of binaries signed with vendor keys.
There’s nothing wrong with code signing as a security measure when it’s limited to its intended purpose: a “seal of approval” assuring a customer that software has not been changed after it left the vendor. This seal is especially important when you ship software in binary form, since binaries can’t be audited for malicious changes as easily as source code can. Code signing is a widespread practice, and even Linux distributions use it as a way for users to verify that software packages came from that project.
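To make the “seal of approval” concrete, here is a minimal sketch of sign-and-verify using Ed25519 signatures via the Python cryptography library. The key handling and package contents are illustrative assumptions, not any particular vendor’s process:

```python
# Minimal sketch of code signing: a vendor signs a release artifact,
# and a customer verifies it hasn't changed since signing.
# Illustrative only; real vendors keep signing keys in HSMs.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Vendor side: generate a signing key and sign the release bytes.
signing_key = Ed25519PrivateKey.generate()
release = b"contents of the software package"
signature = signing_key.sign(release)

# Customer side: verify the downloaded bytes against the published
# signature using the vendor's public key.
public_key = signing_key.public_key()
try:
    public_key.verify(signature, release)
    print("Signature valid: package unchanged since the vendor signed it")
except InvalidSignature:
    print("Signature invalid: package was modified after signing")
```

Note what this check does and doesn’t prove: it proves the bytes are exactly what the key holder signed, but it says nothing about whether those bytes were trustworthy before signing, which is precisely the gap a factory-side compromise like SolarWinds exploits.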
The problem with code signing is in how it has been extended to exert control. In addition to verifying whether software has been modified, those signatures are also used to enforce policies that only allow software the vendor explicitly approves to be installed or run. The proprietary software industry is dependent on code signing with vendor keys as the foundation for most if not all of its security, because it enables vendors to exert this control over their customers in the name of security.
Nowhere do these perverse incentives have a stronger impact than in the smartphone industry, which has become the test bed for the most advanced applications of code signing to exert control, with Apple at the forefront. In the name of security, every piece of software you install or run on an iPhone must be approved by Apple. Apple acts as the gatekeeper over what’s allowed in the App Store and can revoke previously approved applications from competitors, which has led to lawsuits and antitrust hearings.
In the beginning this control was enforced by comparing code signatures in software, but as customers have gotten more sophisticated at bypassing it (literally called jailbreaking, because these controlled environments are called jails), vendors have doubled down on code signatures backed by specialized hardware. From the moment the computer starts, code is sent to this hardware for approval; only if the signature matches one the vendor approves does the hardware allow the code to run.
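As a rough model of that boot-time enforcement (a deliberate simplification for illustration, not Apple’s or any vendor’s actual implementation), each stage only runs if its signature verifies against a vendor key embedded in hardware:

```python
# Simplified model of hardware-enforced verified boot: each boot stage
# runs only if its signature checks out against a vendor key embedded
# in the hardware. Illustrative only, not any real boot ROM.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

vendor_key = Ed25519PrivateKey.generate()    # stands in for the vendor's signing key
VENDOR_PUBLIC_KEY = vendor_key.public_key()  # "burned into" the device

# Each boot stage ships as (code, vendor_signature).
boot_chain = [
    (b"bootloader code", vendor_key.sign(b"bootloader code")),
    (b"kernel code",     vendor_key.sign(b"kernel code")),
    (b"os code",         vendor_key.sign(b"os code")),
]

for stage, signature in boot_chain:
    try:
        VENDOR_PUBLIC_KEY.verify(signature, stage)  # the hardware check
    except InvalidSignature:
        raise SystemExit("Refusing to boot: stage not vendor-approved")
    # ...hand control to the verified stage...
print("All stages vendor-approved; device boots")
```

Notice that the same check that blocks malware blocks anything else the vendor hasn’t signed, including a replacement OS or a competitor’s software.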
While the explanation for these sophisticated measures is security (stopping hackers and even governments from breaking into your computer), the reality is that the majority of the time these measures just prevent end users and competitors from doing something the vendor doesn’t like. Worse, this approach anchors all security and all trust in the vendor and their signing keys. Compromise a signing key and the whole house of cards falls down.
Most security experts agree that end-to-end (e2e) encryption (where only the two endpoints control the keys) is the best way to secure communication between two people. Experts also almost universally agree that adding an encryption backdoor, an extra key controlled by the vendor or handed over to authorities that can unlock e2e-encrypted messages, cannot be done securely. This is because there is no such thing as a backdoor only authorities know about. Even if you trusted authorities with a key, attackers will eventually gain access to or otherwise compromise that key, and then the security of all of those previously secure messages is defeated.
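For the curious, here is a toy sketch of the e2e property using X25519 key agreement and an AEAD cipher via the Python cryptography library. It ignores key authentication, ratcheting, and everything else a real messenger needs, but it shows that only the two endpoints ever hold material that can decrypt the message:

```python
# Toy end-to-end encryption: Alice and Bob each hold a private key,
# derive the same shared secret, and no third party (vendor or
# authority) ever holds a key that can decrypt the message.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

alice = X25519PrivateKey.generate()
bob = X25519PrivateKey.generate()

def session_key(private_key, peer_public_key):
    # Both sides derive the same symmetric key from the DH shared secret.
    shared = private_key.exchange(peer_public_key)
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=b"e2e demo").derive(shared)

key = session_key(alice, bob.public_key())
nonce = os.urandom(12)
ciphertext = ChaCha20Poly1305(key).encrypt(nonce, b"meet at noon", None)

# Only Bob can derive the same key; there is no third key to hand over.
plaintext = ChaCha20Poly1305(session_key(bob, alice.public_key())).decrypt(
    nonce, ciphertext, None)
assert plaintext == b"meet at noon"
```

A backdoor would mean also encrypting the session key to an escrow key held by the vendor or authorities; compromise that one key and every message it ever escrowed is exposed.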
This, by the way, is why the NSA is known to store encrypted communication automatically and indefinitely. Even if they can’t decrypt it today, they might be able to eventually, whether due to a flaw later discovered in the encryption or the disclosure of the key.
Ironically, many of the same experts who speak out against encryption backdoors design security systems that anchor all trust in their company’s signing key. Little effort is spent designing systems that can detect and respond when a signing key gets compromised. Yet we know these keys get compromised, and between the Stuxnet malware and the SolarWinds Orion supply chain compromise we have two large-scale, global examples of how high-security systems that blindly trust code signatures can be compromised for months without anyone knowing.
This contradiction between what security experts say is secure and what they design for their companies illustrates how perverse incentives compromise secure design in favor of control. Improving supply chain security requires giving up some or all of this control, which is why you will likely not see real solutions come from proprietary software vendors.
We could learn a lot about how to secure the software supply chain from how we secure the food supply chain, and in my article Protecting the Digital Supply Chain I draw many analogies between them:
The food supply chain is important. Food is sealed not just so that it will keep longer, but also so that you can trust that no one has tampered with it between the time it left the supplier to the time it goes in your grocery bag. Some food goes even further and provides a tamper-evident seal that makes it obvious if someone else opened it before you. Again, the concern isn’t just about food freshness, or even someone stealing food from a package, it’s about the supplier protecting you from a malicious person who might go as far as poisoning the food.
The supply chain ultimately comes down to trust and your ability to audit that trust. You trust the grocery and the supplier to protect the food you buy, but you still check the expiry date and whether it’s been opened before you buy it. The grocery then trusts and audits their suppliers and so on down the line until you get to a farm that produces the raw materials that go into your food. Of course it doesn’t stop there. In the case of organic farming, the farmer is also audited for the processes they use to fertilize and remove pests in their crops, and in the case of livestock this even extends to the supply chain behind the food the livestock eats.
If the food supply chain worked like the proprietary software supply chain, we’d buy food in opaque jars with a factory tamper seal on them, but without expiration dates, ingredient lists, food allergy warnings, or nutritional information. The factories would never get inspected for cleanliness or audited to see if they use spoiled ingredients or processed peanuts in the same facility. Most importantly, we wouldn’t be able to check the food ourselves beyond that tamper seal–we wouldn’t have a sense of smell, taste, or sight. The only way we’d know if the food was tainted is by eating it and waiting to see if we get sick.
To improve software supply chain security we need the ability to audit software like we audit food, and this requires much more transparency, beyond what proprietary software vendors allow. Tamper seals (code signing) are important, but nowhere near sufficient to catch tainted software. As the SolarWinds Orion hack shows, food can be tainted at the factory before it ever gets into those tamper-sealed jars.
The software supply chain will get attacked, and third parties and motivated customers must be able to detect tainted code quickly, instead of simply relying on their vendor to notice, looking at a tamper seal, or waiting to see if their network gets sick. The best hope we have of improving supply chain security is the combination of free software and Reproducible Builds.
At the first level, free software and proprietary software use similar security measures to protect against supply chain attacks. A software repository is owned by a limited list of maintainers who control what source code and files are allowed in the repository and approve all changes. Both free and proprietary software developers these days typically sign their code changes with a personal key, verifying that a change came from them. When the software gets packaged, the binary package is also typically signed with a key owned by the company or software project, so the end user can verify the package hasn’t been modified by anyone else before they install it.
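As a sketch of how that package-level chain works, loosely modeled on how Debian-style archives chain a signed release manifest to package checksums (the file names and layout here are hypothetical):

```python
# Sketch of distribution-style package verification: the project signs a
# manifest of package hashes; the user verifies the manifest signature,
# then checks each downloaded package against its listed hash.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

project_key = Ed25519PrivateKey.generate()  # stands in for the project's archive key

# Project side: publish packages plus a signed manifest of their hashes.
packages = {"editor_1.0.deb": b"editor binary bytes"}
manifest = "\n".join(
    f"{hashlib.sha256(data).hexdigest()}  {name}"
    for name, data in sorted(packages.items())
).encode()
manifest_sig = project_key.sign(manifest)

# User side: one signature check covers the whole manifest...
project_key.public_key().verify(manifest_sig, manifest)  # raises if tampered

# ...then each downloaded package is checked against its listed hash.
listed = {}
for line in manifest.decode().splitlines():
    digest, name = line.split("  ")
    listed[name] = digest
for name, data in packages.items():
    assert hashlib.sha256(data).hexdigest() == listed[name], f"{name} was modified"
print("Manifest signature and all package hashes verified")
```

The design choice to sign one manifest rather than every file individually means a single key operation protects the whole archive snapshot, but it also means everything still hinges on that one project key.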
Free software adds a layer of supply chain security that proprietary software simply can’t, due to the freedom of the code. An attacker can try to sneak malicious code into the source itself, but it’s much harder to hide that code long-term, because code changes are audited not only by the software maintainers themselves but also by any interested third party, from security researchers to regular end users. While some security researchers are just as comfortable auditing binaries as source code, for most it’s far easier and faster to audit code for backdoors when the code is freely available.
This is one reason why Purism offers a 100% free software operating system, PureOS, on our computers. Because it only includes free software, all of the source code in the operating system can be audited by anyone for backdoors or other malicious code. Just as processed food must be made only from organic ingredients to be labeled organic, having our operating system certified as 100% free software means you can trust the software supply chain all the way to the source.
Unlike proprietary software, free software can also address the risk of an attacker injecting malicious code somewhere in the build process before it’s signed. With Reproducible Builds you can download the source code used to build your software, build it yourself, and compare your output with the output you get from the vendor. If the outputs match, you can be assured that no malicious code was injected somewhere in the software supply chain and that the binary 100% matches the public code, which can be audited for backdoors. Think of it as a food safety inspector and an independent lab that verifies the nutrition claims on a box of cereal, all rolled into one.
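The verification step itself can be as simple as a hash comparison, as in this sketch (both paths are hypothetical; you would first build the package locally from the audited public source using the project’s documented build steps):

```python
# Sketch of the reproducible-build check: hash the binary you built from
# the public source and the binary the vendor shipped. If the digests
# match, the shipped binary was produced from that exact source.
import hashlib
import sys

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

local = sha256_of("my-build/package.bin")    # built locally from audited source
vendor = sha256_of("downloads/package.bin")  # binary shipped by the vendor

if local == vendor:
    print("Match: the shipped binary corresponds to the public source")
else:
    sys.exit("MISMATCH: shipped binary was not built from the public source")
```

The hard part isn’t the comparison but making the build deterministic in the first place, pinning toolchains and normalizing things like timestamps and build paths so outputs are bit-for-bit identical, which is exactly what the Reproducible Builds project works on.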
Much of PureOS is already reproducibly built, and we are working toward the point where all software in PureOS can be reproducibly built, starting with the base install and expanding from there. We intend to publish not only our own reproducible build results, but also tools and guidance so third parties and customers can perform their own audits. That way customers aren’t limited to learning about supply chain attacks from us; they can audit for and detect attacks themselves.
While free software and Reproducible Builds don’t prevent supply chain hacks entirely, they make those attacks much more difficult to hide and provide valuable methods of detection you can’t find anywhere else. In the case of the SolarWinds Orion supply chain attack, for instance, if Orion had been free, reproducibly built software, third parties could have compared the tainted binary against their own audit infrastructure and detected the compromised update within hours. Instead, the attack was only noticed over a year later, when FireEye investigated a breach in which its own internal tools were stolen.
If critical software were free and reproducibly built, companies that didn’t audit every binary they get from a vendor might at least audit their highest-risk third-party software, the software with the most access inside their networks. Given the cost of repairing the damage from these kinds of supply chain attacks on government and private infrastructure, building this audit infrastructure for critical software seems like a wise investment. The load could also be distributed among public and private agencies across the world, starting with critical software projects and expanding from there as resources allow.
Over the next year or two you will likely see many vendors touting proprietary solutions for supply chain security that, coincidentally, require you to anchor all trust in them. Solutions to this problem won’t come from proprietary software and can’t come from any one vendor. They require a collaborative approach that gives customers more control over their software and grants them, along with independent third parties, the ability to audit the supply chain themselves.