This article is a follow-on from our recent Practice Safe DevOps event (slides here), where we demonstrated development tooling capabilities focused on the Enterprise, with enough extensibility to bring corporate security requirements INTO the development process rather than bolting them on AFTER the deployment process.
When we are asked what “Integrated Security” means to us, we typically respond with the following:
- Security (as a business requirement) must be addressed at every stage
- When automating build and deployment processes, security must be included in the pipeline automation
- Once deployed, automated systems must continuously monitor the security posture of the application
- The depth of automated security tooling and processes must align with the business needs and any formal requirements
Let’s use a real-world example:
If I am headed to the mall to purchase the desirable R2-D2 Coffee Press, I don’t often consider the standard security measures I may encounter during the trip:
- Leaving the house, I lock the door and set the alarm
- Leaving the garage, I close and lock the door behind me
- Parking the car at the mall, I lock the car and set the alarm
- Entering the mall, I walk past security guards (at many points)
- With the item in hand, I use my mobile phone to validate the price and quality of the item
- Purchasing the item, I remove my wallet from a hidden place
- I remove my Credit Card from the RFID protective sleeve
- I use a PIN to confirm my identity during the purchase
- I walk out of the store, passing security gates that alarm if I have stolen anything
- I unlock my car, and we know the rest - “eventually - I drink coffee!”
Those are 10 simple security measures that most people follow without much thought. While they may not guarantee that nothing is stolen, my chances of losing something are much greater if these measures are ignored.
Building Security into the Pipeline
When we are building automated pipelines within an OpenShift environment, we can easily integrate security tooling as well. The following diagram shows a simple pipeline flow that includes Black Duck Hub and OpenSCAP, at different stages of the process, to begin automating the security validations of our application.
The first couple of steps are pretty straightforward. We have a “pipeline” style build configured in OpenShift with GitHub tied in through a webhook. When a user pushes code, a build kicks off. See our previous article for more details on pipeline configuration.
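As a rough sketch, a pipeline build of this style is defined by a Jenkinsfile. The stage names and the `myapp` BuildConfig/DeploymentConfig names below are placeholders, not values from this lab:

```groovy
// Sketch of a Jenkinsfile for an OpenShift "pipeline" style build.
// Assumes the OpenShift Pipeline plugin, which is pre-installed on the
// stock OpenShift Jenkins image. 'myapp' is a placeholder name.
node('maven') {
  stage('Build') {
    // The GitHub webhook on the BuildConfig triggers this pipeline;
    // here we kick off the application image build in OpenShift.
    openshiftBuild(bldCfg: 'myapp', showBuildLogs: 'true')
  }
  stage('Deploy') {
    openshiftDeploy(depCfg: 'myapp')
  }
}
```

The webhook-triggered build and the deployment stages are the scaffolding that the security stages below plug into.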
The Jenkins Setup
The OpenShift and Jenkins integration is built in with the Jenkins templates provided in OpenShift.
This pipeline (as shown above) is fully synchronized with our Jenkins setup, which is configured with the following plugins:
- OpenShift Pipeline plugin
- pre-configured on OpenShift Jenkins templates
- Black Duck Hub plugin (installed from here)
- configured in Jenkins; Manage Jenkins; Configure System
The Jenkins job defines the pipeline stages, which is where we can add additional steps. Here we add the steps necessary to download and scan the source code in the “Take a peek and analyze the OSS packages” stage.
Not sure about the syntax? Don’t worry, Jenkins has a built-in snippet generator to help. You will notice two of the Black Duck Hub items listed there.
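For example, a scan stage using the Black Duck Hub plugin might look roughly like the following. The step name and parameters here are our assumptions, not confirmed plugin syntax; use the Jenkins snippet generator to get the exact form for your plugin version:

```groovy
// Hypothetical sketch only: the 'hub_scan' step name and its parameters
// should be verified with the Jenkins snippet generator for your
// installed version of the Black Duck Hub plugin.
stage('Take a peek and analyze the OSS packages') {
  hub_scan(
    projectName: 'practicesafedevops',  // project in the Black Duck Hub portal
    projectVersion: '1.0',              // placeholder version
    scans: [[scanTarget: '']]           // empty target scans the workspace
  )
}
```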
Once the job has completed, we can see that the Black Duck Hub report is also directly visible from within the Jenkins project:
Black Duck Hub Setup
The Black Duck Hub setup is straightforward (a single VM in this lab with a Web UI). Once the maven instance (spawned by Jenkins) executes the necessary code scan stage, the Black Duck CLI scanner is downloaded into the build container and delivers a full status report to the corresponding project on the Black Duck Hub server. The Hub then reaches out to the Black Duck KnowledgeBase online service for more detail, and the reports are generated.
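Conceptually, that CLI-based flow can also be driven as a plain shell step inside the pipeline. The Hub hostname, download path, version, and credential variables below are placeholders, not values from this lab:

```groovy
// Illustrative only: download the Black Duck scan client from the Hub and
// scan the checked-out source tree. hub.example.com, the release number,
// and the HUB_USER/HUB_PASS environment variables are all placeholders.
stage('Scan source with the Black Duck CLI') {
  sh '''
    curl -skO https://hub.example.com/download/scan.cli.zip
    unzip -oq scan.cli.zip -d /tmp/blackduck
    /tmp/blackduck/scan.cli-*/bin/scan.cli.sh \
      --host hub.example.com \
      --username "$HUB_USER" --password "$HUB_PASS" \
      --project practicesafedevops --release 1.0 \
      .
  '''
}
```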
Bill of Materials
In this event, we are focused on the “practicesafedevops” project created in the Black Duck Hub portal.
When we view the project components, we get a component view (a Bill of Materials, in Black Duck terms) of each OSS component used in the application. Each component has associated Security and Operational risk ratings.
Security Remediation Management
We can also bring up a list of CVEs and flag each one with the necessary remediation details.
In addition to the scanning capabilities, the Black Duck Hub also provides a flexible Policy Management engine that can send alerts or fail a build if the requirements aren’t met.
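One way to act on that policy engine from the pipeline is a gate stage that fails the build when the scanned project version is in violation. The REST endpoint, host, and credentials below are hypothetical; consult the Hub API documentation for the real path:

```groovy
// Hypothetical policy gate: query the Hub for the project's policy status
// after the scan and fail the Jenkins build on a violation. The endpoint
// path, hostname, and credential variables are placeholders.
stage('Enforce Black Duck policy') {
  def status = sh(
    returnStdout: true,
    script: 'curl -sk -u "$HUB_USER:$HUB_PASS" "https://hub.example.com/api/policy-status"'
  ).trim()
  if (status.contains('IN_VIOLATION')) {
    error('Black Duck policy violation detected - failing the build')
  }
}
```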
Operational Scanning with the OpenSCAP Scanner
Following the OpenShift deployment, an automated rule within CloudForms/ManageIQ triggers an OpenSCAP scan of the image inside the registry. This task is managed outside of Jenkins and is better aligned with the security scanning requirements of Day 2 operations (i.e. once the application has completed the initial build/deployment stages).
The following policy is built into CloudForms/ManageIQ and simply needs to be applied to the container provider.
What’s next for the lab with Black Duck?
There is lots more to explore when adding security tooling and policies to a complete pipeline, and Arctiq will continue to explore these capabilities. Some ideas include:
- Actively failing a build within Jenkins
- A container-level scan as a Jenkins post-deployment stage
- Forcing a container out of production if it fails an active scan (integration with OpenShift or CloudForms/ManageIQ)
We have illustrated two simple ways to include automated security scanning of code deployed into a container. Including security tooling directly in the pipeline helps new development activities flow through the enterprise, while ensuring that operational and security teams have an up-to-date view of the security posture of each application. While there is plenty of opportunity to build on these capabilities, including these foundational security tools in the up-front application requirements discussion is a great start.