The modern IT space has completely changed the way developers approach writing code. The demand for faster releases and updates leads many to harness public repositories like GitHub.
GitHub lets users store and manage source code for easy access, and its dashboard makes it simple to search for code and applications. Unfortunately, public repositories also introduce a variety of new vulnerabilities.
Due to GitHub’s popularity, we’ve decided to focus on it for this article, but these tips apply to any public repository.
Many issues stem from developers being too casual while using GitHub. Because of its public nature, organizations often deal with unauthorized access. Some developers sign in with personal email accounts, which are often easier to compromise — and which also pose an access risk when an employee leaves the company.
Additionally, access management becomes lax when developers are granted access to every repository rather than only the ones they work on. Identity and access management (IAM) tools help enforce this kind of least-privilege access.
Managed service providers (MSPs) help customers set up who can access what. They can also provide users with monitoring tools, fully managed by the MSP.
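As a rough illustration of what auditing "who can access what" looks like, here is a minimal sketch that flags collaborators with access to repositories outside their assigned scope. The data shapes, names, and the `find_excess_access` helper are all hypothetical — real audits would pull this inventory from your repository host's API.

```python
# Hypothetical sketch: flag collaborators who hold access to repositories
# outside their assigned team scope. The data shape loosely mirrors what a
# collaborator-listing API might return; all names are illustrative.

def find_excess_access(repo_collaborators, team_assignments):
    """Return (user, repo) pairs where a user can reach a repo
    that is not in their assigned scope."""
    excess = []
    for repo, users in repo_collaborators.items():
        for user in users:
            if repo not in team_assignments.get(user, set()):
                excess.append((user, repo))
    return excess

# Example inventory (illustrative data, not real accounts)
repo_collaborators = {
    "payments-api": ["alice", "bob"],
    "marketing-site": ["alice", "carol"],
}
team_assignments = {
    "alice": {"payments-api"},   # alice should not see marketing-site
    "bob": {"payments-api"},
    "carol": {"marketing-site"},
}

print(find_excess_access(repo_collaborators, team_assignments))
# → [('alice', 'marketing-site')]
```

Running a check like this on a schedule — rather than once at onboarding — is what keeps access from drifting back toward "everyone can see everything."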
“Public repositories tend to introduce a long tail of inherited vulnerabilities that increase a customer’s attack surface. DevOps and security teams should run internal and external vulnerability scans and reports to monitor on-premises, hosted and cloud environments with continuous updates to more than 92,000 Common Vulnerabilities and Exposures (CVEs) in software and certain network components.”
GitHub users must always maintain security best practices. It cannot be a simple one-step process where code is checked and approved. Consistent monitoring of stored code alerts users to logins from unusual locations, abnormal changes in stored code, and more.
GitHub logs must be collected to keep that monitoring consistent, and users should develop a multistep process for maintaining security.
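One monitoring step the paragraph above describes — alerting on logins from unusual locations — can be sketched as a simple filter over collected log events. The event fields (`user`, `country`) and the per-user "usual locations" baseline are illustrative stand-ins for whatever your log pipeline actually emits.

```python
# Hedged sketch: scan collected login events for sign-ins from locations
# outside a user's usual set. Field names are illustrative.

def unusual_logins(events, usual_locations):
    """Return events whose country is not in the user's usual set."""
    return [
        e for e in events
        if e["country"] not in usual_locations.get(e["user"], set())
    ]

events = [
    {"user": "alice", "country": "US"},
    {"user": "alice", "country": "RO"},   # not in alice's usual set
    {"user": "bob",   "country": "DE"},
]
usual_locations = {"alice": {"US"}, "bob": {"DE"}}

for e in unusual_logins(events, usual_locations):
    print(f"alert: {e['user']} logged in from {e['country']}")
```

A real deployment would feed this from centralized log collection and raise alerts through your existing incident tooling rather than `print`.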
“First, you need to be sure you’re confident in what you’re getting. And don’t just do these checks up front, they need to be built into your automation so that they keep happening as updates happen and things change.
Second, you should trust but verify. There are occurrences throughout the history of the industry where despite all the checks, a compromised executable makes it into a “verified” payload.
You don’t want to think of your checks in binary terms. “This passed the checks,” dust off your hands, “done deal.” Instead, you should think of it in layers and probabilities. We verified the image, so that lowers the probability of compromise. Then we’re going to watch the behavior, and that lowers the probability some more. You never get it completely to zero. But the more things you do, the lower the probability gets.”
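One concrete verification layer of the kind the quote describes is checking a downloaded artifact's SHA-256 digest against the publisher's stated value before use. The sketch below assumes a local file and a published digest; the file name and digest in the usage comment are placeholders.

```python
# Hedged sketch of one verification layer: compare a downloaded artifact's
# SHA-256 digest against the publisher's stated value before use.

import hashlib

def sha256_of(path):
    """Stream the file in chunks so large artifacts don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, expected_digest):
    """One layer, not a guarantee: a match lowers the probability of
    compromise but never eliminates it."""
    return sha256_of(path) == expected_digest

# Usage (placeholder values):
# if not verify("payload.tar.gz", "e3b0c442..."):
#     raise SystemExit("digest mismatch: refusing to deploy")
```

In the spirit of "layers and probabilities," this check belongs in automation — run on every update, and paired with runtime behavior monitoring rather than treated as a one-time gate.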
Many IT teams see containers and Kubernetes as necessary tools for optimizing workloads. Containers simplify the development pipeline considerably, and because Kubernetes is open source, developers also gain an extensive community and components spread across other development libraries.
Containers are notoriously difficult to manage internally, and without the right expertise they can be difficult to implement at all. Using public repositories increases both the difficulty and the risk.
Managed service providers can build personalized container management tools so enterprises don’t have to manage them in-house. Each enterprise wants different functionality from its container workloads, but all of them must keep those workloads secure for them to retain their value. MSPs handle security maintenance so developers can focus on developing.
“If the concern is that a container in a public repository has been compromised, developers can create sha256 hash sums of their containers and share the hash through their website. If the worry is that a docker “private” repository would be accessible to someone who doesn’t have access, then I would highly recommend exploring two-factor options.”
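The hashing step the quote describes can be sketched in a few lines. In practice the input would be the tarball produced by `docker save`; here a local file path stands in, and the publishing step in the trailing comment is illustrative.

```python
# Sketch of the hashing step from the quote: compute a sha256 hash sum for
# an exported container image so it can be published alongside the image.
# In practice the input is the tarball from `docker save`; a local file
# path stands in here.

import hashlib

def container_hash(tarball_path):
    h = hashlib.sha256()
    with open(tarball_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Publish the result, e.g. in a CHECKSUMS file on your website:
#   myimage-1.0.tar  <digest>
# Consumers recompute the hash after downloading and compare the values.
```

Anyone pulling the container can then recompute the hash locally; a mismatch means the image they received is not the one the developers published.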