A blog of the Science and Technology Innovation Program
Open Source Software and Cybersecurity: How unique is this problem?
Open systems aren’t inherently less secure than their proprietary counterparts, and open source code is not inherently less secure than proprietary code. Instead, Open Source Software (OSS) poses familiar cybersecurity challenges. Even so, focusing on the security of OSS is broadly beneficial.
Introduction
The recent widespread Log4Shell vulnerability reinvigorated debates over security and Open Source Software (OSS), software that is published under a license that allows anyone to freely use, study, copy, modify, and redistribute computer programs. To some, Log4Shell reinforced a widespread public perception that OSS is, by definition, a uniquely concerning cybersecurity problem. They pointed, for example, to volunteer maintainers “desperately trying to deal with a massive vulnerability that has put billions of machines at risk.” Other security experts were quick to defend OSS, citing equivalent security concerns in proprietary software and calling for more attention to cybersecurity best practices across the board. For them, these vulnerabilities and the efforts to address them were largely “business as usual.”
Which is it? Does OSS pose a uniquely challenging cybersecurity problem for industry and policymakers? Or do security concerns with OSS largely mirror broader cybersecurity concerns across the board? Our answer: it depends.
Defining the “Open” in Open Source Software
“Open Source Software” is defined and used in many ways: in software communities, by security experts, and more broadly. At the most basic level, the Open Source Initiative defines OSS as software that is published under a license that allows anyone to freely use, study, copy, modify, and redistribute computer programs. OSS can refer to code, software, packages, and systems. The “code” forms the basic building blocks, much like individual Lego bricks, and “software” is simply usable code. A “package” is code in a distributable form that has meaning, analogous to a house built of Legos. A “system” holds the different packages together so they can interact and deliver business value, just as a Lego village connects separate houses with sidewalks.
Because of the permissive licensing, code is often modified and combined in new ways to create new applications, programs, and even businesses.
The permissive licensing of OSS creates a model of software development that encourages, and ultimately relies on, collaboration by many players. Many contributors work both independently and in tandem, producing modular pieces of code that are combined and recombined by many users and for many purposes. This is true for software development and, importantly, for the continued maintenance of software throughout its lifecycle.
The openness and relative ease of use have led OSS to be widely adopted, as code, packages, and systems, in both open and proprietary systems. A recent report from the Linux Foundation and the Laboratory for Innovation Science at Harvard estimated that OSS makes up 80-90% of any given piece of software, and this share is likely to continue to grow. Red Hat’s “The State of Enterprise Open Source” report found that “79% of respondents expect that over the next two years, their organization will increase use of enterprise open source software for emerging technologies.” Over the past two decades, companies have used open source code with increasing frequency, and they are increasingly contributing to the open source projects they use, even collaborating with competitors. Among many examples, SoundCloud and Facebook illustrate this paradigm with their projects Prometheus, which allows for easy monitoring of applications, and React, a JavaScript library for creating user interfaces, respectively.
A Familiar Cybersecurity Challenge
At its core, OSS is not fundamentally different from other software, and arguments for or against its inherent security can be distracting; these arguments do not, by themselves, drive the conversation forward or help establish more secure systems.
Both open source and proprietary models of producing software will inevitably contain vulnerabilities. Vulnerabilities can stem from anything from dependency management (what, how, and which software packages are pulled into a new software project) to bad-faith actors (people who intentionally break into systems, or contributors who intentionally change the software to be exploitable), whether the software is developed internally or in the open.
Best practices for enhancing security in software already exist, and they apply to open and proprietary code, packages, and systems alike. No matter the model of development, these best practices can guide developers through every stage of the life cycle, from developing software to architecting a system. The National Institute of Standards and Technology (NIST) has published guidelines on best practices for secure software: code reviews, scanning for vulnerabilities, visibility into the system, knowing the attack surface, adopting zero-trust architecture, and red teaming are just some of the ways that code, packages, and systems can be evaluated for security. These best practices and evaluation methods do not change whether all, part, or none of the system is open; security is ultimately about the system and knowing how all the pieces interact.
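To make one of these practices concrete: “visibility into the system” starts with knowing which third-party components are actually present. The short Python sketch below is our own illustration (the output file name and format are arbitrary assumptions, not a standard); it records the packages installed in an environment so a later review or scanning step can compare them against published advisories. It is a minimal sketch of the idea, not a substitute for the NIST guidance above.

```python
# A minimal, illustrative inventory step: record which third-party packages
# are installed in a Python environment so they can later be compared against
# published vulnerability advisories. The file name and output format are
# illustrative choices, not a standard.
import json
from importlib.metadata import distributions

def build_inventory() -> list[dict]:
    """Collect the name and version of every installed distribution."""
    inventory = [
        {"name": dist.metadata["Name"], "version": dist.version}
        for dist in distributions()
    ]
    return sorted(inventory, key=lambda entry: (entry["name"] or "").lower())

if __name__ == "__main__":
    inventory = build_inventory()
    with open("dependency-inventory.json", "w") as handle:
        json.dump(inventory, handle, indent=2)
    print(f"Recorded {len(inventory)} installed packages.")
```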
The Responsibility Model Challenge Posed by OSS
There is, however, one important difference worth noting, and it is where government intervention can play an important role. In OSS, each stage of development can belong to many people or organizations with no central governance. Individuals, private companies, governments, and others create, use, and contribute to OSS. The result is an ill-defined responsibility model when things go wrong.
Openness exists on a spectrum
More than just code, packages, and systems, “open source” is used in practice as an umbrella term for a range of development and governance arrangements, each with different security concerns.
Prometheus and React, created by SoundCloud and Facebook, respectively, sit at one end of the openness spectrum. Both projects are open for use, modification, and distribution under an open license, and anyone in the community can submit, request, or comment on changes. However, these projects are steered by the thought leadership and engineers of the companies that created them, and company-controlled projects often prioritize features and improvements that mainly benefit those companies.
The collaborative model inherent in OSS fosters creativity and innovation and is unique from a business and development perspective. However, the use of open source code in proprietary systems does not pose unique security concerns. In these systems, just as in all proprietary systems, there is “one throat to choke” when things go wrong.
One of the best-known open source projects, however, exemplifies the other end of the openness spectrum. Linux is the poster child for open and free software; it helped move the OSS ideology forward and has proven its usefulness time and again. More than just open source code, Linux is a whole system, with every part open to view, collaborate on, and modify. Most importantly for this discussion, Linux is not controlled by a company but is owned entirely by its community of contributors. An open source system controlled completely by the OSS community requires a different approach than OSS used in a proprietary system.
The success of OSS makes cybersecurity more challenging
OSS is everywhere; it is widely adopted in both open and proprietary systems, resulting in decentralized use of code that can contain vulnerabilities. Because OSS is used far more widely than any given piece of proprietary code, it is difficult to track these vulnerabilities, and in turn it can be harder to identify and remediate them. This is not due to the nature of OSS itself, but to its widespread success and use.
Looking at Log4j as an example
Log4j2 is an open source logging framework that is pulled into systems and other projects to create easily readable and maintainable logs; it is an example of the use of small OSS packages in a broad array of systems, both open and proprietary. The vulnerability was announced to the public on December 10th, 2021; it had been unknowingly introduced in 2013 and allowed attackers to run code on the underlying servers running the affected software.
The vulnerability found in this package demonstrated that a closed system can inherit vulnerabilities from the open packages it uses and be exploited in the same way as an open one. Adding to the issue, many companies did not know whether Log4j was running in their systems, and if so, where.
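The difficulty of answering “is Log4j running here, and at what version?” can be sketched in a few lines. The hypothetical Python script below is our own illustration (the scan root and the jar naming pattern are assumptions); it walks a directory tree looking for log4j-core jar files and reports the version embedded in each file name, leaving the comparison against current advisories to the reader. Renamed, shaded, or nested jars would not be found this way, which is part of why the inventory problem was so hard in practice.

```python
# Illustrative only: look for log4j-core jars by file name and report the
# version embedded in the name. Real environments also contain renamed,
# shaded, or nested jars that this simple scan would miss.
import re
from pathlib import Path

# Assumed naming convention, e.g. "log4j-core-2.14.1.jar".
LOG4J_JAR = re.compile(r"log4j-core-(\d+\.\d+\.\d+)\.jar$")

def find_log4j_jars(root: Path) -> list[tuple[Path, str]]:
    """Return (path, version) pairs for jar files matching the assumed pattern."""
    hits = []
    for path in root.rglob("*.jar"):
        match = LOG4J_JAR.search(path.name)
        if match:
            hits.append((path, match.group(1)))
    return hits

if __name__ == "__main__":
    scan_root = Path("/opt")  # arbitrary starting point for the example
    for path, version in find_log4j_jars(scan_root):
        print(f"{path}: log4j-core {version} -- check against current advisories")
```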
This vulnerability was incredibly disruptive and concerning; experts estimate that versions of the software with the known vulnerabilities will live on in systems for years to come, and the vulnerability will continue to be leveraged by attackers. However, the use of Log4j2 in proprietary systems did not introduce unique security concerns. The difference between open packages like Log4j and proprietary systems comes down to funding and governance.
What Do We Do from Here? Lessons for Policymakers
Because open and proprietary code, packages, and systems require the same technical and tactical defenses, lessons for policymakers also follow a familiar pattern. Although the particularities of each cybersecurity incident may differ, the broader categories of concern, ranging from software supply chain attacks to malware, remain largely the same. These similarities also open the opportunity to address concerns by leveraging existing lessons in cybersecurity writ large. Moreover, cyber threats and vulnerabilities are continuously evolving; the cybersecurity community regularly updates recommended best practices based on new information, requiring continuous vigilance to stay up to date.
The debate is not whether open systems are more or less secure than closed ones. The question that should be asked and answered is how we ensure that systems, especially critical systems, are secure. The governance model and pervasiveness of OSS mean that one important place to focus is the security of OSS.
Supporting and incentivizing best practices for security is in everyone’s best interest
OSS has another benefit for cybersecurity writ large: unlike proprietary code and packages, OSS has an active community to flag, fix, and mitigate exploits. However, the maintainers of OSS (the people who patch and update the code to keep it current), with few exceptions, are not working under a contract or receiving direct payment. This complicates long-term deployment and maintenance, as the owners of proprietary systems sometimes expect OSS creators and maintainers to solve cybersecurity problems related to OSS in their proprietary systems, without compensation.
Who is responsible for creating a patch once a vulnerability is found? How can community contributors be educated on security best practices? Security needs to be prioritized throughout the software lifecycle, not added as an afterthought. Finding and fixing security vulnerabilities should be incentivized in this ecosystem, along with providing support both monetarily and through human resources, such as partnerships.
Pivotal infrastructure to make all systems more secure
There are current initiatives to improve security in OSS specifically, as well as research into improving computer security generally. The White House has made improving cybersecurity a priority; to aid this effort, encouraging and incentivizing employees to contribute to the OSS projects their organizations use, with an emphasis on security fixes and enhancements (such as creating security tooling or patching vulnerabilities), would have far-reaching impact. Many private companies and OSS communities are already focusing on these issues, creating working groups, security best practices, certifications, and more. Investing in security through these existing avenues (e.g., OpenSSF, Secure Open Source Rewards, Open Source Technology Improvement Fund) would further their impact and leverage ongoing, effective work.
To address security issues related to OSS, it is important not to reinvent the wheel; in most ways, these are not novel cybersecurity challenges. However, government attention is best directed at the challenges associated with different ways of thinking about governance. A wide range of responsibility models is possible here, but there does need to be greater clarity about what responsibility looks like in open ecosystems.
OSS is pivotal infrastructure; improvements in its security will make all systems more secure. Because it is largely accepted as the norm, OSS creates a focal point for cyber attacks; it also creates an opportunity to build more secure software that is widely distributed. Investing in secure OSS means greater security for companies, governments, and every technology user.
Recent Action
In late September, a bill (S. 4913) titled the Securing Open Source Software Act of 2022 was introduced in the Senate and referred to the Committee on Homeland Security and Governmental Affairs. The bill could have a critical impact on the US’s preparedness to respond to threats related to open source software and open source products used within the US government.
The bill contains a plethora of meaningful changes, with a focus on vesting the director of the Cybersecurity and Infrastructure Security Agency (CISA) with more power and authority over securing OSS across government. This includes implementing a framework within government, coordinating with federal Chief Information Officers to establish open source program officers, and generally encouraging the whole of government to pay closer attention to the open source community and its products.
On September 28th, the Committee on Homeland Security and Governmental Affairs ordered the bill to be reported favorably to the Senate without amendment.
Acknowledgement
The authors wish to acknowledge Melissa Griffith for insights and feedback throughout the research process.
About the Authors
Science and Technology Innovation Program
The Science and Technology Innovation Program (STIP) serves as the bridge between technologists, policymakers, industry, and global stakeholders.