10 steps to automating security in Kubernetes pipelines
Kubernetes pipelines face an ever-increasing range of threats that demand more integrated and automated security across the application lifecycle. Making things more complex, critical vulnerabilities can make their way into any stage of the pipeline: from build to registry to test-and-staging to (especially damaging) production environments.
One of the biggest roadblocks to effective Kubernetes pipeline security has been investing the time to get it right. The purpose of using containers is to increase the velocity of release cycles, enabling more up-to-date code and better features with better resource utilization. Any manual effort to inject security into this pipeline risks slowing that speed and preventing the benefits of a container strategy from being fully realized.
DevOps teams simply can’t afford to slow down the pipeline. This is why automation is not just crucial, but also the most realistic way to ensure container security.
Kubernetes pipeline overview
Taking a step back, here is a simplified view of the Kubernetes pipeline and some of the top threats at each stage:
New vulnerabilities can be introduced as early as the build phase. (Open source tools, in many cases, have been the culprits, adding previously unknown attack surfaces.) In the registry, even when you’ve successfully removed vulnerabilities in the build phase and stored a clean image, a critical vulnerability affecting that image might be discovered later. The same thing can (and regularly does) happen with containers running in production.
In the production environment, containers, critical tools, or Kubernetes itself could be attacked, as we all saw in last year’s critical API server vulnerability. All of this infrastructure presents an attack surface that needs to be monitored and protected automatically. And even when you do the best possible job of removing vulnerabilities, there’s still the danger of zero-day attacks, unknown vulnerabilities, and insider attacks.
On the positive side, security strategy can be integrated and automated throughout the Kubernetes pipeline.
10 steps to securing the container lifecycle
Here are 10 specific ways DevOps teams can integrate and automate security across the full lifecycle of their Kubernetes pipeline:
Step 1. Scan during the build phase
Scanning during the build phase is fairly simple to automate: if the scan finds vulnerabilities or other issues, the build fails and is sent back to the developers for remediation.
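As a sketch, a CI job can run an image scanner with a severity threshold and let a nonzero exit code fail the build. The fragment below uses GitLab CI syntax with the open source Trivy scanner; the job name, image name, and registry are assumptions, and any scanner that can fail on a severity threshold works the same way.

```yaml
# .gitlab-ci.yml fragment (illustrative names and registry)
scan-image:
  stage: test
  script:
    - docker build -t registry.example.com/myapp:$CI_COMMIT_SHA .
    # Fail the job (exit code 1) if any HIGH or CRITICAL CVE is found.
    - trivy image --severity HIGH,CRITICAL --exit-code 1 registry.example.com/myapp:$CI_COMMIT_SHA
    # Push only runs if the scan passed.
    - docker push registry.example.com/myapp:$CI_COMMIT_SHA
```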
Step 2. Scan the registry
If new vulnerabilities are discovered in images stored in the registry, alerts can be generated automatically and sent back to the dev team for remediation. One thing to understand is that there’s not always a fix for a vulnerability, so you may have to allow an affected image to be deployed into production. That’s why production vulnerability scanning and compliance assessment are critical as well.
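One simple way to catch newly published CVEs against an already-stored "clean" image is a scheduled rescan. The sketch below uses a Kubernetes CronJob running Trivy; the image name and schedule are assumptions, and a failed Job can be wired to whatever alerting you already use.

```yaml
# Nightly rescan of a pushed image; a nonzero scanner exit marks the
# Job failed, which can drive an alert to the dev team.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: registry-rescan
spec:
  schedule: "0 3 * * *"          # every night at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: trivy
            image: aquasec/trivy:latest
            args: ["image", "--severity", "HIGH,CRITICAL",
                   "--exit-code", "1",
                   "registry.example.com/myapp:v1.2.3"]
```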
Step 3. Scan during runtime and run compliance checks
Vulnerability scanning in production doesn’t just encompass containers. You always need to be scanning the host and the orchestration platform (Kubernetes, OpenShift, etc.) for vulnerabilities as well. The Docker Bench and Kubernetes CIS benchmark checks are simple to run automatically and continuously, and they can alert you any time a new container is deployed or a new host is added or patched.
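For example, the CIS Kubernetes Benchmark can be run in-cluster with the open source kube-bench tool as a one-shot Job, which a CronJob can then repeat on a schedule. This is a simplified sketch; the kube-bench project publishes a fuller manifest that also mounts host configuration directories for deeper checks.

```yaml
# Simplified kube-bench Job (see the kube-bench project for the
# complete manifest with host path mounts).
apiVersion: batch/v1
kind: Job
metadata:
  name: kube-bench
spec:
  template:
    spec:
      hostPID: true              # kube-bench inspects host processes
      restartPolicy: Never
      containers:
      - name: kube-bench
        image: docker.io/aquasec/kube-bench:latest
        command: ["kube-bench"]
```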
For many enterprises it’s also crucial to be able to do compliance reporting and management, including the automated segmentation that some industry compliance regimes mandate. This isn’t something you’d want to do manually anyway, and many runtime scanning and compliance checks must themselves be automated to satisfy compliance requirements.
For example, the payment card industry’s PCI security standards require segmentation and firewalling between in-scope cardholder data environment (CDE) traffic and out-of-scope traffic. As in-scope pods are deployed, scaled, and rescheduled across hosts, you can’t be changing firewall rules manually. Therefore PCI compliance (among others) necessitates automated network segmentation.
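At the Kubernetes layer, a baseline for this kind of segmentation can be declared with a NetworkPolicy, which follows pods automatically wherever they are scheduled. The sketch below assumes a `cde` namespace and a `pci: in-scope` label (both illustrative), and requires a CNI plugin that enforces NetworkPolicy (Calico, Cilium, and others do).

```yaml
# Only pods labeled pci=in-scope may reach pods in the CDE namespace;
# all other ingress is denied. Labels and namespace are assumptions.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: cde-isolation
  namespace: cde
spec:
  podSelector: {}                # applies to every pod in the namespace
  policyTypes: ["Ingress"]
  ingress:
  - from:
    - podSelector:
        matchLabels:
          pci: in-scope
```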
Step 4. Run risk reports
As any DevOps team knows, risk reports are critical to managing the entire end-to-end vulnerability protection process. Automating them only makes the process that much faster, speeding remediation whenever necessary.
Step 5. Set security policy as code, and
Step 6. Implement behavioral learning
Security policy as code and behavioral learning should be used in tandem. Doing so enables DevOps teams to create and deploy workload security policies very early in the development process and carry them through to production environments. This essential piece of automation eliminates the need to manually create rules for protecting new applications (and the slowdowns that manual rule creation causes).
Here’s a possible workflow. The DevOps and QA teams deploy an application into a testing/QA environment that contains a container firewall capable of behavioral learning. The container firewall learns all of the application’s container processes and normal file activity, and creates policy rules that can be exported to the dev team to review and edit as needed.
For example, there may need to be edits to the protocols the app uses, to the connections needed, or to the processes that will run in the app. Those rules can then be captured as a security manifest, which can be retested before production deployment. In this way you can define not just workload-specific policies but also global security policies, such as “No SSH is allowed in a container.”
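Such a security manifest might look something like the following. This format is purely hypothetical (it is not a real CRD, and the `example.com` API group and all field names are invented for illustration); the point is that learned behavior becomes reviewable, versionable policy.

```yaml
# Hypothetical policy-as-code manifest: rules exported from behavioral
# learning, edited by the dev team, and applied before production.
apiVersion: example.com/v1
kind: WorkloadSecurityPolicy
metadata:
  name: payments-app
spec:
  processes:
    allow: ["node", "nginx"]     # learned as normal for this workload
    deny:  ["sshd"]              # global rule: no SSH inside containers
  network:
    allow:
    - protocol: TCP
      port: 8443
      from: "frontend"
  fileAccess:
    readOnly: ["/etc", "/usr"]
```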
Step 7. Set admission controls
Integrating admission controls into your pipeline is a key step in ensuring automated security. Be sure to implement admission control rules that block unauthorized or vulnerable deployments from reaching the production environment.
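One built-in way to express such a rule is a Kubernetes ValidatingAdmissionPolicy (generally available in Kubernetes v1.30; dedicated admission webhooks or security tools can do the same). The sketch below rejects Deployments pulling images from outside a trusted registry; the registry name is an assumption, and a companion ValidatingAdmissionPolicyBinding is also required for the policy to take effect.

```yaml
# Reject Deployments whose containers use images from outside the
# trusted internal registry (registry name is illustrative).
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: trusted-registry-only
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups:   ["apps"]
      apiVersions: ["v1"]
      operations:  ["CREATE", "UPDATE"]
      resources:   ["deployments"]
  validations:
  - expression: "object.spec.template.spec.containers.all(c, c.image.startsWith('registry.example.com/'))"
    message: "All images must come from the trusted internal registry."
```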
Step 8. Put up a container firewall, and
Step 9. Automate container workload and host security
As discussed above, implementing a container firewall and applying it to enforce container workload and host security provide essential automated protections for your pipeline.
This automation needs to include the ability to enforce protections at runtime: automatically blocking unauthorized network connections, processes, or file activity, whether in a container or on the host. These measures can also leverage network DLP to inspect container traffic for PII, credit card data, account data, and other sensitive information, and to block any attempt to send unencrypted sensitive data (preventing it from being stolen).
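Runtime blocking itself is handled by the firewall or security tool, but some workload protections can also be declared directly in the pod spec so Kubernetes enforces them on every deploy. A minimal hardening sketch (names and image are illustrative):

```yaml
# Declarative per-pod hardening: no privilege escalation, no extra
# capabilities, read-only root filesystem so unauthorized file
# writes fail outright.
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  containers:
  - name: app
    image: registry.example.com/myapp:v1.2.3
    securityContext:
      runAsNonRoot: true
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]
```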
Step 10. Set alerts and forensics
Finally, it’s valuable to integrate and automate forensic capabilities and alert responses. This could mean initiating an automated packet capture on a suspicious pod that could have been hacked, or quarantining that container to block all network traffic in or out of it. Doing so can be accomplished by setting policy rules that specify the conditions under which a packet capture or quarantine is initiated on a container.
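As one concrete quarantine mechanism, an automated response rule could label a suspicious pod and rely on a pre-created deny-all NetworkPolicy to cut it off while forensics run. The `quarantine` label below is an assumption, and enforcement again requires a NetworkPolicy-capable CNI plugin.

```yaml
# Deny-all policy matching any pod labeled quarantine=true: listing the
# Ingress and Egress policy types with no rules blocks all traffic
# in and out of the pod.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: quarantine
spec:
  podSelector:
    matchLabels:
      quarantine: "true"
  policyTypes: ["Ingress", "Egress"]
```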
Every stage of the Kubernetes pipeline is vulnerable without thorough safeguards. But DevOps teams don’t need to sacrifice the speed of containerized development if they know what can be automated, why it’s important, and how to do it.
Gary Duan is co-founder and CTO of NeuVector. He has over 15 years of experience in networking, security, cloud, and data center software. He was the architect of Fortinet’s award winning DPI product and has managed development teams at vArmour, Fortinet, Cisco, and Altigen. His technology expertise includes IDS/IPS, OpenStack, NSX, and orchestration systems. He holds several patents in security and data center technology.
New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to [email protected].
Copyright © 2020 IDG Communications, Inc.