In this blog post, you’ll learn how to set up image scanning for Azure Pipelines using the Sysdig Secure DevOps Platform.
Azure DevOps gives teams tools like version control, reporting, project management, automated builds, lab management, testing, and release management. Azure Pipelines automates the execution of CI/CD tasks, like building the container images when a commit is pushed to your git repository or performing vulnerability scanning on the container image.
Image scanning allows DevOps teams to shift left security, detecting known vulnerabilities and validating container build configuration early in their pipelines, before the containers are deployed in production or the images are pushed to any container registry. This allows you to detect and fix issues faster, shortening the time to production.
In detail, the image scanning process with Sysdig includes:
Analyzing the Dockerfile and image metadata to detect security-sensitive configurations, like:
- Running as a privileged (root) user, without a USER instruction.
- Using base images tagged as “latest”, rather than specific versions with full traceability.
- Many other Dockerfile checks.
Validating software packages against well known vulnerabilities databases:
- OS packages.
- 3rd party libraries installed on top, such as Python pip, Ruby gems, Node npm, Java jar, etc.
And user defined policies or compliance requirements that you want to check for every image, like:
- Software package blacklists.
- Base image whitelists.
- Image scanning policies that match compliance policies like NIST 800-190 or PCI.
One of the benefits of Sysdig’s local scanning approach is that you don’t lose control over your images, as they don’t need to be sent to the backend or exposed to any staging repository. The image will be scanned within the node where it’s built, or where you run the pipeline, and only the scanning results will be sent to Sysdig Secure backend, whether it be our SaaS or your self-hosted instance.
Planning the Azure Pipeline for image scanning
An Azure pipeline defines a set of tasks, written in a YAML file, that are executed automatically when an event occurs, typically a new commit in a linked repository.
This allows you to automatically build and push images into registries (like Azure Container Registry), and then deploy them into Kubernetes.
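For reference, the trigger can also be declared explicitly in the pipeline YAML. This is just a minimal sketch, assuming the default branch is named master; if no trigger block is present, Azure Pipelines runs on pushes to any branch by default:

```yaml
# Hypothetical explicit trigger: only run the pipeline on pushes to master
trigger:
  branches:
    include:
    - master
```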
In the example that we use to illustrate this blog post, we will be pushing commits to a GitHub repository that will trigger an Azure Pipeline. Then the pipeline will build our project into a local image and scan it for vulnerabilities.
Giving Azure Pipeline access to GitHub repositories
Azure will access our GitHub repository to download the code needed to build our project and generate the container image. It will also fetch the azure-pipelines.yaml file, hosted in the same repository, which contains the tasks that make up the pipeline.
Before we start, we need to grant Azure a few permissions on our GitHub repository. We’ll do so from the Personal Access Token settings in the GitHub web interface:
From here, you can create a new token: provide a name for your token, grant the permissions marked on the image, and save it. You can read more about the permissions needed in Azure’s permissions guidelines.
Azure Pipeline setup for image scanning
Now, we’ll have to log into the Azure web console. Here, we’ll see our projects list. If you don’t have a project for this pipeline already, create a new one.
Next, we’ll create a service connection to GitHub to give the pipeline access to the GitHub repository. For this, go to: Project Settings > Pipelines (Service connections) > New Service Connection > GitHub:
Here, insert the Personal Access Token we created before:
We will also need to create the credentials so we can access the container registry:
We can add the container registry we want:
Don’t forget the connection name. We will use it later to access the registry.
At this point, we’re ready to create a pipeline. Go to the Pipelines section and create a new one:
The pipeline wizard will ask for the code of our project. In this example we’ll select GitHub:
Next, we’ll select the repository:
Here, we can either create a new file with “Starter pipeline” or select an existing YAML file:
In our case, we already have the pipeline created as a file, so we will select Existing Azure Pipelines YAML file and specify the azure-pipelines.yml file in the master branch.
Azure Pipeline YAML definition for image scanning with Sysdig Secure
Now we’re ready to define our pipeline. You can edit it in Azure itself, or you can do it by modifying the azure-pipelines.yaml file inside your GitHub repository. This is the content of the file:
```yaml
pool:
  vmImage: 'ubuntu-16.04'

container:
  image: sysdiglabs/secure-inline-scan:latest
  options: -v /usr/bin/docker:/usr/bin/docker -v /var/run/docker.sock:/var/run/docker.sock

variables:
  containerRegistryConnection: containerRegistry
  imageName: 'sysdiglabs/dummy-vuln-app'
  tags: |
    latest

steps:
- task: Docker@2
  displayName: Build image
  inputs:
    repository: $(imageName)
    command: build
    tags: $(tags)

- script: inline_scan analyze -s https://secure.sysdig.com -k $(secureApiKey) $(imageName):latest
  displayName: Scan image

- task: Docker@2
  inputs:
    command: 'login'
    containerRegistry: $(containerRegistryConnection)

- task: Docker@2
  inputs:
    command: 'push'
    tags: $(tags)
    containerRegistry: $(containerRegistryConnection)
```
Don’t forget to save and commit this file. Let’s go through the different steps in the pipeline:
The pipeline runner will execute an Ubuntu 16.04 LTS image and will spawn a container where the CI/CD tasks will be executed:
```yaml
pool:
  vmImage: 'ubuntu-16.04'
```
The image scanning process requires a container that includes the [inline_scan script](https://github.com/sysdiglabs/secure-inline-scan). As we will build the application container here, we prepare the environment with access to the docker binary and the Docker Engine socket.
```yaml
container:
  image: sysdiglabs/secure-inline-scan:latest
  options: -v /usr/bin/docker:/usr/bin/docker -v /var/run/docker.sock:/var/run/docker.sock
```
We also define a couple of variables: the name of the connection to the container registry, and the name and tags of the image, as we’ll use these values in several places in the file:
```yaml
variables:
  containerRegistryConnection: containerRegistry
  imageName: 'sysdiglabs/dummy-vuln-app'
  tags: |
    latest
```
Next come the steps that the runner has to execute.
The first step is building the image and assigning the tags:
```yaml
steps:
- task: Docker@2
  displayName: Build image
  inputs:
    repository: $(imageName)
    command: build
    tags: $(tags)
```
The second step is scanning the image:
```yaml
- script: inline_scan analyze -s https://secure.sysdig.com -k $(secureApiKey) $(imageName):latest
  displayName: Scan image
```
Our build process generates one container image per tag. We could build more than one image per commit with different tags, like the commit hash or commit tags, but we are keeping things simple for now.
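For reference, here is a sketch of how the build step could tag the image with both latest and the commit hash, using the predefined Build.SourceVersion variable (this variation is not part of the pipeline above):

```yaml
# Hypothetical variation: tag each build with "latest" and the commit SHA
# (Build.SourceVersion is a predefined Azure Pipelines variable)
- task: Docker@2
  displayName: Build image
  inputs:
    repository: $(imageName)
    command: build
    tags: |
      latest
      $(Build.SourceVersion)
```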
The benefit of “inline scan” is that the process will be performed within the runner in the same Azure machine where the image was built, all without having the container image ever leave your Azure DevOps account.
Within Sysdig Secure, we can configure multiple scanning policies with different checks to define what is considered dangerous and should be warned or blocked.
Assigning scanning policies for our image is as easy as linking the registry, repository or tag to a policy.
If the scanning process finds a vulnerability or any STOP condition in our scanning policy, the inline_scan script will return a non-zero exit code and the pipeline execution will be aborted.
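If you’d rather keep the pipeline running and just flag the failed scan, for example while you are still tuning your scanning policies, Azure Pipelines lets you mark the step with continueOnError. A sketch of that variation, not part of the pipeline above:

```yaml
# Hypothetical variation: report a failed scan as "succeeded with issues"
# instead of aborting the rest of the pipeline
- script: inline_scan analyze -s https://secure.sysdig.com -k $(secureApiKey) $(imageName):latest
  displayName: Scan image
  continueOnError: true
```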
The inline_scan script needs the Sysdig Secure endpoint and an access token. Don’t put the token in the file stored in your GitHub repository because it would be accessible to anyone who can access the repo. Instead, we’ll set a hidden variable. To do so, navigate to Edit pipeline:
And from here, go into Variables:
Use the + button to add a new variable. Name it secureApiKey and fill in the value with your Sysdig Secure API Token. Don’t forget to select the Keep this value secret option so its contents won’t be visible in the pipeline logs after execution:
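Note that the $(secureApiKey) macro in the script line is expanded before the script runs, but secret variables are not exposed to scripts as environment variables unless you map them explicitly. If you prefer that approach, here is a sketch; the SYSDIG_SECURE_TOKEN name is just illustrative:

```yaml
# Hypothetical variation: pass the secret through an explicitly mapped
# environment variable instead of expanding it in the command line
- script: inline_scan analyze -s https://secure.sysdig.com -k "$SYSDIG_SECURE_TOKEN" $(imageName):latest
  displayName: Scan image
  env:
    SYSDIG_SECURE_TOKEN: $(secureApiKey)
```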
That’s it. If everything is correct, the pipeline will continue by logging in and pushing the image to the Docker Registry we configured before:
```yaml
- task: Docker@2
  inputs:
    command: 'login'
    containerRegistry: $(containerRegistryConnection)

- task: Docker@2
  inputs:
    command: 'push'
    tags: $(tags)
    containerRegistry: $(containerRegistryConnection)
```
Finally, we are ready to execute the pipeline and see it in action in Azure!
After pushing a commit to our GitHub repository, we can see the pipeline working in our Azure web console. Here are the results of our pipeline. As you can see, it failed in the step Scan image:
By clicking into this step, we will see the full trace of the step and the reason why it stopped:
Image scanning results within Sysdig Secure
Back in Sysdig Secure, we can further analyze these results and even download a full report.
Here are some highlights of our particular image:
- The Dockerfile exposes port 22.
- There are some non-OS packages for Python that contain HIGH and CRITICAL-rated vulnerabilities.
- The OS package linux-libc-dev contains some HIGH-rated vulnerabilities.
Sysdig Secure also provides other features that will enhance the security of your CI/CD pipeline and DevOps workflow:
- Trigger notifications whenever there’s a new image version in your repository, either a new tag or an existing tag that has been overwritten.
- Re-evaluate images against the scanning policy whenever the policy changes, using the stored metadata, and notify you if the scan results change.
- Notify when a new CVE is published that affects any image.
Conclusions
Using Sysdig, we can scan the images we create in an Azure DevOps Pipeline. Thanks to its local scanning capabilities, you can scan your images without having them leave your infrastructure, and even scan images that are built locally.
By detecting issues earlier in the CI/CD pipeline, image scanning allows DevOps teams to shift left security, shorten the time to production, and raise their confidence in the images they run in production.