Terraform, mono repositories and compliance as code

Hello! OTUS has opened enrollment for a new group of the course “Kubernetes-based infrastructure platform”. In this regard, we have prepared a translation of an interesting article on the topic.


You may be one of those who use Terraform for Infrastructure as Code and wonder how to use it more productively and safely. Lately this has bothered many of us. We all write configuration and code using different tools and languages, and we spend a significant amount of time making it more readable, extensible and scalable.


Maybe the problem is us?

The code we write should create value or solve a problem, and it should be reusable to avoid duplication. This kind of discussion usually ends with “Let’s use modules”. We all use Terraform modules, right? I could tell many stories about problems caused by excessive modularity, but that is a completely different story, and I will not.


No, I will not. Don’t insist, no … Okay, maybe later.

There is a well-known practice of tagging module code and pinning the root module to a tag, which guarantees it keeps working even when the module code changes. This approach should be a team principle: modules are tagged and referenced by the appropriate tags.

… but what about dependencies? What if I have 120 modules in 120 different repositories, and changing one module affects 20 other modules? Does this mean we need to make 20 + 1 pull requests? If the minimum number of reviewers is 2, that means 21 x 2 = 42 reviews. Really! We simply paralyze the team by “changing one module”, everyone starts sending Lord of the Rings memes and gifs, and the rest of the day is lost.


One PR to rule them all, one PR to find them, one PR to bring them all and in the darkness bind them

Is this how it should work? Should we reduce the number of reviewers? Or maybe make an exception for modules and not require a PR when the change has a big impact? Really? Do you want to walk blindly through a deep dark forest? Or bring them all together and in the darkness bind them?

No, do not change your review process. If you think that working with PRs is right, stick to it. If you have smart pipelines or you push straight to master, stay with that approach.

In this case the problem is not “how you work” but “how your git repositories are structured”.

This is similar to what I felt when I first applied the suggestion below.

Back to basics. What are the general requirements for a repository with Terraform modules?

  1. It must be tagged so that there are no breaking changes.
  2. Any change must be testable.
  3. Changes must go through peer review.

So I suggest the following: do not use micro-repositories for Terraform modules. Use one mono repository.

  • You can tag the entire repository when there is a change / requirement.
  • Any change, PR or push can be tested.
  • Any change can go through review.


I have the power!

OK, but what should the structure of this repository look like? Over the past four years I have had many failures with this, and I came to the conclusion that one directory per module is the best solution.


Example directory structure for a mono repository. See the tags_override change?
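
For illustration, a layout along these lines, with one directory per top-level module (the module names here are hypothetical):

.
├── aws-iam-role/
│   ├── main.tf
│   ├── variables.tf
│   └── outputs.tf
├── aws-s3-bucket/
│   ├── main.tf
│   ├── variables.tf
│   └── outputs.tf
├── aws-vpc/
│   ├── main.tf
│   ├── variables.tf
│   └── outputs.tf
└── README.md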

So a module change that affects 20 other modules is just 1 PR! Even if you add 5 reviewers to this PR, the review will be very fast compared to micro-repositories. If you use GitHub, it is even better: you can use CODEOWNERS for modules that have maintainers / owners, so that any change to those modules MUST be approved by their owner, for example:
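
A minimal CODEOWNERS sketch (the directory and team names are hypothetical):

# every change under a module directory must be approved by the team that maintains it
aws-s3-bucket/   @my-org/storage-team
aws-vpc/         @my-org/network-team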

Great, but how do you use a module that lives in a directory of the mono repository?

Easy:

module "from_mono_repo" {
 source = "git::ssh://...//.git//"
 ...
}
module "from_mono_repo_with_tags" {
  source = "git::ssh://....//.git//?ref=1.2.4"
  ...
}
module "from_micro_repo" {
  source = "git::ssh://...//.git"
  ...
}
module "from_micro_repo_with_tags" {
  source = "git::ssh://...//.git?ref=1.2.4"
  ...
}

What are the disadvantages of this kind of structure? Well, if you try to test “every module” on every PR / change, you can end up with CI pipelines that run for 1.5 hours. You need to detect which modules were actually modified in the PR. I do it like this:

# list the top-level module directories (prefixed with "aws-") that differ from origin/master
changed_modules=$(git diff --name-only $(git rev-parse origin/master) HEAD | cut -d "/" -f1 | grep ^aws- | uniq)
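
With that list in hand you can, for example, run checks only for the affected modules. A minimal sketch, assuming each module directory is self-contained (adjust the commands to your own pipeline):

for module in $changed_modules; do
  (
    cd "$module" || exit 1
    # validate the module without configuring any remote state backend
    terraform init -backend=false -input=false
    terraform validate
  )
done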

There is another drawback: whenever you run “terraform init”, it downloads the entire repository into the .terraform directory. I have never had a problem with this, since I run my pipelines in scalable AWS CodeBuild containers, but if you use Jenkins with persistent Jenkins slaves, you may run into it.


Do not make us cry.

With a mono repository you still have all the advantages of micro-repositories, and as a bonus you reduce the cost of maintaining your modules.

Honestly, after working in this mode, proposing micro-repositories for Terraform modules should be regarded as a crime.

Great, but what about unit testing? Do you really need it? … What exactly do you mean by unit testing, anyway? Are you really going to check whether an AWS resource is created correctly? Whose responsibility is that: Terraform’s or the API that handles resource creation? Perhaps we should focus more on negative testing and idempotency.

For idempotency, Terraform provides an excellent flag: -detailed-exitcode. Just run:

> terraform plan -detailed-exitcode

Run it after terraform apply and that’s it. At least you will be sure that your code is idempotent and does not create new resources because of a random string or something else.
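
A minimal sketch of such an idempotency check in CI (the apply step and its flags are assumptions, adjust them to your own pipeline). With -detailed-exitcode, terraform plan exits with 0 when there are no changes, 2 when there is a pending diff, and 1 on error:

terraform apply -auto-approve
terraform plan -detailed-exitcode
case $? in
  0) echo "Idempotent: plan shows no changes right after apply." ;;
  2) echo "Not idempotent: plan still wants to change something." ; exit 1 ;;
  *) echo "terraform plan failed." ; exit 1 ;;
esac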

What about negative testing? What is negative testing, anyway? In fact, it is not much different from unit testing, except that you focus on negative cases.

For instance, nobody is allowed to create an unencrypted or public S3 bucket.

Thus, instead of checking whether an S3 bucket is actually created, you check, against a set of policies, what kind of resources your code creates. How do you do that? Terraform Enterprise provides a great tool for this: Sentinel.

… but there are also open-source alternatives. There are currently many tools for static analysis of HCL code. Based on common best practices, they will not let you do anything unwanted, but what if they do not have the check you need… or, even worse, what if your situation is slightly different? For example, you want to allow some S3 buckets to be public under certain conditions, which these tools would treat as a security error.

This is where terraform-compliance comes in. This tool not only lets you write your own tests, in which you define WHAT you want as your company’s policy, but also helps you separate security from development by shifting security to the left. Sounds pretty controversial, right? No. How so?


The terraform-compliance logo

First of all, terraform-compliance uses Behavior Driven Development (BDD).

Feature: Ensure that we have encryption everywhere.

    Scenario: Reject if an S3 bucket is not encrypted
        Given I have aws_s3_bucket defined
        Then it must contain server_side_encryption_configuration

Check if encryption is enabled

If this is not enough for you, you can write in more detail:

Feature: Ensure that we have encryption everywhere.

    Scenario: Reject if an S3 bucket is not encrypted with KMS
        Given I have aws_s3_bucket defined
        Then it must contain server_side_encryption_configuration
        And it must contain rule
        And it must contain apply_server_side_encryption_by_default
        And it must contain sse_algorithm
        And its value must match the "aws:kms" regex

We go deeper and verify that KMS is used for encryption

The terraform code for this test is:

resource "aws_kms_key" "mykey" {
 description             = "This key is used to encrypt bucket objects"
 deletion_window_in_days = 10
}

resource "aws_s3_bucket" "mybucket" {
 bucket = "mybucket"

 server_side_encryption_configuration {
   rule {
     apply_server_side_encryption_by_default {
       kms_master_key_id = "${aws_kms_key.mykey.arn}"
       sse_algorithm     = "aws:kms"
     }
   }
 }
}

Thus, the tests can be understood by literally EVERYONE in your organization. You can delegate writing them to the security team or to developers with sufficient security knowledge. The tool also allows you to keep the BDD feature files in a separate repository, which helps split responsibility: changes to the code and changes to the security policies associated with that code become two different things, owned by different teams with different life cycles. Amazing, right? Well, at least it was for me.
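
A minimal sketch of how such a check can be wired into a pipeline (the file names and the features location are placeholders; depending on your terraform-compliance version you may be able to pass the plan file directly instead of converting it to JSON first):

terraform plan -out=plan.out
terraform show -json plan.out > plan.out.json

# run the BDD features against the plan
terraform-compliance -p plan.out.json -f ./features

# or pull the policies from a dedicated repository owned by the security team
# terraform-compliance -p plan.out.json -f git:https://example.com/security/compliance-features.git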

For more information about terraform-compliance, take a look at this presentation.

We solved a lot of problems with terraform-compliance, especially where the security team is quite distant from the development teams and may not understand what the developers are doing. You can already guess what happens in such organizations: the security team starts blocking everything that looks suspicious to them and builds security around the perimeter. Oh my God…

In many situations, simply using terraform and terraform-compliance in the teams that design (and/or maintain) the infrastructure helped bring these two different teams to the same table. When your security team starts developing something and gets immediate feedback from all the development pipelines, they usually get motivated to do more and more. Well, usually…

Therefore, when using terraform, we structure git repositories as follows:

Of course, this is quite opinionated. But I was lucky (or unlucky?) to work with a more granular structure in several organizations, and unfortunately none of it ended well. The happy ending is hidden in number 3.

Let me know if you have any success stories with micro repositories, I’m really interested!


We invite you to a free lesson, in which we will look at the components of the future infrastructure platform and see how to deliver our application correctly.
