Author: Shea Stewart


As container technology becomes prevalent in organizations, teams move faster, applications are deployed more quickly, and the inevitable questions arise around the security posture of the container contents. Container technology has enabled developers and operations teams to gain rapid feedback and insight into application health, and security should not be excluded from this capability. This blog article addresses the first of two methods to integrate application security scanning into the development pipeline. We will also introduce you to Aqua Security and their container security solution running on OpenShift, designed to ensure vulnerabilities don’t make it into production.

Existing Security Controls in OpenShift

The OpenShift PaaS (Platform-as-a-Service) is a container platform that is targeted at providing a streamlined developer experience with a focus on security and multi-tenancy. Some of the core security features in the OpenShift Container Platform include:

  • Multi-tenant API access to resources
  • Multi-tenant network isolation
  • Security Context Constraints
    • Disallowing containers running as root
    • Dropping risky linux capabilities
    • Disabling access to the underlying system resources like host directories and volumes
    • Applying SELinux contexts
    • Applying unique UIDs
  • OpenSCAP content scanning with CloudForms/ManageIQ
  • and more…

While there are a lot of capabilities built into OpenShift that focus on security, many of these features operate at the platform level rather than on the content of the applications deployed on top of it. Luckily, OpenShift is an open and extensible platform that provides a robust API which can be leveraged by tooling providers within the container platform ecosystem (read: Aqua Security).

Security Controls in Aqua Enterprise Security Platform

The Aqua Container Security Platform is a full lifecycle security solution built to protect containerized applications throughout the pipeline and runtime environments. From image scanning and host hardening, through access management and secrets delivery, to runtime protection, Aqua provides security management and enforcement capabilities without hindering development efforts.

This blog covers the first of two methods of utilizing the Image Scanning capabilities of Aqua in an OpenShift environment:

  • Jenkins integrated image scanning

Part II of this blog will cover the second method:

  • S2I integrated image scanning

The Environment

The environment used for this blog includes:

  • OpenShift Container Platform 3.7
  • Aqua Enterprise Security Platform 3.0

Jenkins Integrated Image Scanning

OpenShift provides a custom Jenkins image preconfigured with OpenShift plugins and the necessary permissions. This feature makes it extremely easy for new project teams to deploy their own Jenkins instance, fully containerized, without a dependency on other teams or shared systems.

A simple application development flow would look as follows: code is committed, Jenkins builds the container image, the image is scanned by Aqua, and, if it passes the Image Assurance policy, the application is deployed to OpenShift.

Aqua provides a Jenkins plugin to interact with the Aqua Enterprise Security Platform. This plugin requires a system that can pull and run the scanner-cli docker image provided by Aqua. This container image scans the built image and feeds the results to the Aqua Enterprise Security Platform, which analyzes the content and applies the necessary Image Assurance Policy to the image.

Extending Jenkins with an External Docker Host

In the scenario where Jenkins is fully containerized, utilizing the scanner-cli container image presents a bit of a challenge:

  • The OpenShift Jenkins image does not provide access to docker tooling
  • Containerized Jenkins images cannot natively run docker containers (and no, we aren’t exploring dind)

In order to solve for these challenges, two steps can be performed:

  • Extend the Jenkins image with the docker cli
  • Build a dedicated docker host for remote scanning of container images
    • Why? Providing access to the OpenShift docker daemon requires the Jenkins pod to run as privileged, which is a huge no-no for any pod that a regular user is given access to

Extending Jenkins with the Docker CLI

In order to install the docker-cli into Jenkins, the base image can be extended. An example Dockerfile could be created as follows:

FROM openshift/jenkins-2-centos7

### Install Docker (switch to root, since the base image runs as a non-root user)
USER root
RUN curl -fsSL https://get.docker.com/builds/Linux/x86_64/docker-latest.tgz | tar xvz -C /tmp/ && mv /tmp/docker/docker /usr/bin/docker
USER 1001

In order to build and deploy the Jenkins instance, the oc new-app command can be used:

oc new-app https://github.com/stewartshea/jenkins2-with-docker

This image will need appropriate permissions on the project. These permissions could be reduced as required, but for the sake of testing the admin role can be utilized. In addition, to preserve the data in Jenkins, a persistent volume should be added to the Jenkins instance, and a route needs to be present for external access.

oc policy add-role-to-user admin -z default
oc volume dc/jenkins2-with-docker --add --type=persistentVolumeClaim --claim-name=jenkins  --mount-path=/var/lib/jenkins --overwrite=true
oc expose svc jenkins2-with-docker --port=8080
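If needed, the hostname assigned to the route can be confirmed with:

oc get route jenkins2-with-docker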

Once up and running, the Aqua plugin can be installed with the standard Jenkins plugin manager:

Once installed, it can be configured as follows:

Building a Dedicated Docker Scanner Host

With the docker cli installed in the Jenkins instance, a dedicated docker host with remote execution capabilities is required. The following Ansible playbook was used to install and configure docker on a fresh CentOS server:

---
- hosts: docker-scanner

  vars:
    pip_install_packages:
      - name: docker
    docker_edition: 'ce'
    docker_package: "docker-{{ docker_edition }}"
    docker_package_state: present

  roles:
    - geerlingguy.repo-epel
    - geerlingguy.pip
    - geerlingguy.docker

  tasks:
    # Expose the Docker API on TCP so Jenkins can reach it remotely.
    # Note: port 2375 is unauthenticated; restrict network access to this host accordingly.
    - name: Modify docker systemd
      lineinfile:
        path: /etc/systemd/system/multi-user.target.wants/docker.service
        regexp: "ExecStart=/usr/bin/dockerd"
        line: "ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock"
        state: present
        backrefs: yes
        backup: yes
    - name: restart docker
      service:
        name: docker
        state: restarted
        enabled: yes
Once built, the Jenkins image can use the environment variable DOCKER_HOST to specify the remote host, in this case tcp://docker-scanner.cloud.lab:2375.
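One way to set this is directly on the Jenkins deployment configuration (the variable could equally be set in the Jenkins global configuration or per pipeline):

oc set env dc/jenkins2-with-docker DOCKER_HOST=tcp://docker-scanner.cloud.lab:2375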

If all is well, the docker cli in the Jenkins instance should provide details about the remote docker host:
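For example, running docker info through the Jenkins deployment should return details about docker-scanner rather than a local daemon:

oc rsh dc/jenkins2-with-docker docker info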

Getting Images Into the Aqua Enterprise Security Platform

If you’re using the default Atomic registry provided by Red Hat, the _catalog feature has not yet been implemented, which means that Aqua cannot search the registry for new images dynamically. See the following for tracking this RFE:

For this blog entry, the image will be manually added to the Aqua Enterprise Security Platform. This, however, could be automated with the Import feature in the platform until the Red Hat Atomic registry supports searching of the catalog.

Add the Atomic registry from System → Integrations. In this instance, the public registry URL is utilized since the remote scanner will need to access the registry through this URL:

In order to utilize an account with access to all of the images, the preconfigured inspector-admin account was used from the management-infra project. The token can be obtained with the following command:

    oc sa get-token inspector-admin -n management-infra

To manually add the image, navigate to Images and select the + Add Images button. Input the full path, including the tag:
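For example, a full path for the image built later in this article might look like the following (the registry hostname is illustrative and should be replaced with the route or service address of your registry):

docker-registry.example.com/aqua-jenkins-sample-with-plugin/random-beer-selector:latest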

Once added, the image will automatically be scanned:

Setting a Policy in Aqua Enterprise Security Platform

In order to have a “pass/fail” action in Aqua, an Image Assurance policy can be set to take action based on a series of conditions. In this example, the Default policy is simply configured to disallow the image from running if a High severity vulnerability is detected. The Image Assurance configuration can combine a number of conditions with a drag-and-drop action.

Putting It into a Jenkins Pipeline

The Jenkins pipeline is a build construct that is tightly integrated into OpenShift and promotes pipeline as code by storing the full pipeline configuration with the source code.

In this example we have a Jenkinsfile in our random-beer-selector application that provides a pipeline as follows:

The code is hosted here: https://github.com/stewartshea/random-beer-selector

In the pipeline directory, there is a Jenkinsfile with the following content:

node {
    // Build the application image via the OpenShift BuildConfig
    stage 'build'
        openshiftBuild bldCfg: 'random-beer-selector'

    // Log in to the registry, then scan the built image with the Aqua plugin
    stage 'scan image'
        withCredentials([usernamePassword(credentialsId: '4adb7468-19de-48b2-ba8a-9b177ae6d8e9', passwordVariable: 'password', usernameVariable: 'username')]) {
            sh '''
            # <registry-route> is the externally accessible route of the registry
            docker login <registry-route> -u=$username -p=$password
            '''
        }
        aqua hideBase: false, hostedImage: 'aqua-jenkins-sample-with-plugin/random-beer-selector', localImage: '', locationType: 'hosted', notCompliesCmd: '', onDisallowed: 'fail', registry: 'OpenShift', showNegligible: false

    // Wait for manual approval before rolling out
    stage 'ask for deployment'
        input id: 'Approve01', message: 'Looks Good! Wanna deploy?????', ok: 'HANG TIGHT!'

    stage 'deploy'
        openshiftDeploy depCfg: 'random-beer-selector', verbose: 'true'

    stage 'verify'
        openshiftVerifyDeployment depCfg: 'random-beer-selector', verbose: 'true', verifyReplicaCount: 'false'
}
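For completeness, one way to register this Jenkinsfile as an OpenShift pipeline build is with oc new-build and the pipeline strategy, pointing at the directory that contains the Jenkinsfile (a sketch; the exact build configuration used here may differ):

oc new-build https://github.com/stewartshea/random-beer-selector --strategy=pipeline --context-dir=pipeline --name=random-beer-selector-pipeline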

With this configuration in place, both Jenkins and OpenShift visually display the status of the pipeline runs. In this instance the application is approved to run since it only has medium severity vulnerabilities.

If the Image Assurance policy is changed to fail at medium or above, the job will fail as expected.

Part I Summary

The Aqua Enterprise Security Platform is a powerful solution that plays a key role in securing many aspects of the OpenShift Container Platform, from the underlying host through to the running container. This article has identified how to include Aqua in a containerized Jenkins pipeline running on OpenShift.

Stay tuned for Part II for more integration with OpenShift when S2I (Source-to-Image) is utilized.
