I built my own keyless solution for Docker image identity – DIA

The idea of signing and verifying Docker images has been on my mind for a long time. For a while I did not appreciate how much it reduces risk and improves real application security. Eventually I understood one thing: it protects against a single but important vector – an attack on your image registry and the injection of malicious code into images. Everything else has to be protected by other means. If you run everything pinned by checksum (digest), you have little to worry about. But if the images in your Kubernetes manifests are referenced by tag rather than by digest, Kubernetes runs whatever the registry serves, blindly. And with imagePullPolicy: Always set, a compromised registry means your cluster will launch the malware on the next restart.

containers:
  - image: registry.local/my-awesome-app:alpine
    imagePullPolicy: Always
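By contrast, pinning the image by digest makes the reference immutable. A minimal sketch (the digest value is fabricated for illustration):

```yaml
containers:
  - image: registry.local/my-awesome-app@sha256:0addcc1de26ee0f660d21b01c1afdff9f59efb0123456789abcdef0123456789
    # With a digest reference, the kubelet verifies the content hash itself,
    # so a compromised registry cannot silently substitute a different image.
```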

I had explored various solutions, from Notary to Cosign, more than once. One way or another, all of them were cumbersome: they required separate storage for the cryptographic material, and on top of that clients had to keep a pile of private keys somewhere. Cosign is easier to use, but it still required storing private keys somewhere, rotating them, and all the joys that come with that. I also wanted the signature to work like an ordinary X.509 certificate, just without validity-period checks: the image was signed by a trusted CA (a certificate was issued for it) – that's it, the image is valid. I started developing this idea further, and everything turned out to be straightforward, though not without nuances.

The idea was as follows: take the checksum (digest) of the image and issue a certificate whose commonName equals the checksum (or at least contains it). But the certificate had to be stored somewhere. Putting it inside the image changes the checksum, which invalidates the certificate. Then I realized the certificate could go, for example, into a LABEL of the image, and during verification the LABEL would be removed and the checksum recomputed. Far too complicated. Then I saw how Cosign stores image signatures and decided to do the same (yes, I borrowed it): Cosign pushes the signature material under an additional tag with a sig- prefix. I did the same, only with the tag dia-<sha256sum>, where sha256sum is the checksum of the image in the registry. The puzzle came together, but again, not everything was so simple.
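The tag scheme above can be sketched in a few lines of shell. The digest value here is fabricated for illustration; in practice it comes from the registry (for example, from the RepoDigests field after a push):

```shell
# A pushed image is identified in the registry by its sha256 digest:
REPO_DIGEST="registry.local/my-awesome-app@sha256:0addcc1de26ee0f660d21b01c1afdff9f59efb0123456789abcdef0123456789"
# Strip everything up to "sha256:" to get the bare hex digest:
DIGEST="${REPO_DIGEST##*sha256:}"
# The certificate is then pushed under an extra tag with the dia- prefix:
CERT_TAG="dia-${DIGEST}"
echo "${CERT_TAG}"
```

So the certificate lives next to the image in the same registry, addressed by a tag derived from the image's own digest, and the image bytes themselves stay untouched.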

Out of some ignorance on my part, I had missed that the common name field of an X.509 DN (OID 2.5.4.3, as defined in ASN.1 notation) is limited to 64 characters. I solved this problem rather impolitely: I truncate the checksum. That is, validation can be performed against a slice of the checksum's characters; I called this parameter digestSlice.

openssl x509 -noout -subject -in image.crt
subject=C=RU, ST=Moscow, L=Moscow, O=company, OU=local, CN=dia-0addcc1de26ee0f660d21b01c1afdff9f59efb
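The truncation itself can be sketched like this. The full digest below is fabricated for illustration, and a slice length of 38 is an assumption that matches the CN in the example above:

```shell
# Full sha256 digest of the image (64 hex characters; fabricated example):
DIGEST="0addcc1de26ee0f660d21b01c1afdff9f59efb0123456789abcdef0123456789"
# digestSlice: how many leading characters of the digest go into the CN.
DIGEST_SLICE=38
# Build the CN so it fits the 64-character X.509 commonName limit:
CN="dia-$(printf '%s' "${DIGEST}" | cut -c1-${DIGEST_SLICE})"
echo "${CN}"     # dia-0addcc1de26ee0f660d21b01c1afdff9f59efb
echo "${#CN}"    # 42 characters, well under the 64-character limit
```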

What remained was to develop a validating webhook for Kubernetes (ValidatingWebhook) and to provide a convenient tool for requesting certificates after the image is built in CI. The architecture is as follows:

The webhook for k8s is written in Go, as one would expect, and there is a Helm chart to install it: https://github.com/spanarek/dia/tree/master/chart. For a namespace to run only signed images, you label it diwah=enabled. The principle of operation: the chart has an attestor_ca parameter, into which you put the CA certificate (as a base64 string, of course) that issues certificates for images and is to be trusted. When a pod is created, the webhook takes the image reference, looks in the registry for the tag holding the certificate (dia-), checks the CN and membership in the trusted CA (attestor_ca), and returns a verdict. Images declared in the pod by checksum rather than by tag are skipped and not validated (it would make no sense).
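Wiring this up might look roughly like the following; the release name and namespace are assumptions for illustration, while the attestor_ca value and the diwah=enabled label come from the description above:

```shell
# Install the chart, passing the trusted CA certificate as a base64 string:
helm install dia ./chart --set attestor_ca="$(base64 -w0 ca.crt)"

# Opt a namespace into validation; pods with unsigned images will be rejected there:
kubectl label namespace my-apps diwah=enabled
```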

As a signing tool, I wrote a script for manually adding an image certificate, as well as a manifest for issuing a certificate in GitLab CI using HashiCorp Vault. Since we already had experience and usage patterns with Vault in GitLab, HashiCorp completely solves automated certificate issuance for your applications. In total, for GitLab, developers just need to add one line to the image build job script:

dia-sign.sh ${DOCKER_REG_IMAGE}
The full job in my case (with Vault) looks like this:
make-test-image:
  image:
    name: ghcr.io/spanarek/dia/dia-dind:hashicorp0.1.0
  stage: build
  variables:
    VAULT_AUTH_ROLE: any
    VAULT_AUTH_PATH: auth/jwt/gitlab
    VAULT_ADDR: https://vault.local:8200
    VAULT_CAPATH: vault.pem
    VAULT_PKI_PATH: pki/issue/by-gitlab-id
    DOCKER_REG_IMAGE: registry.local/test-app
  script:
    - docker login -u "${REGISTRY_USER}" -p "${REGISTRY_PASSWORD}" "${REGISTRY_URL}"
    - docker build -t ${DOCKER_REG_IMAGE}:latest .
    - docker push ${DOCKER_REG_IMAGE}:latest
    - dia-sign.sh ${DOCKER_REG_IMAGE}

What we end up with:

  • signing based on the standard X.509 format, independent of any particular solution: I used Vault's PKI to obtain certificates, but that is down to the specifics of the company's infrastructure – you are free to use any PKI

  • ease of signing for developers and devops (one extra step that writes the certificate to a separate tag)

  • no costs for storing private keys and the like: the keys are not stored anywhere – we forget about them right after receiving the certificate

Obvious, and in my opinion acceptable, cons:

  • reduced cryptographic strength (a consequence of the truncated hash)

  • no way to revoke a certificate from an image (I have not implemented it yet, but maybe I will)

And you can read what DIA means here 🙂
