Embracing GitOps for Network Security and Compliance

Explore GitOps for AWS: merge DevOps with cloud control for enhanced security and compliance. Leverage Git for precise tracking and proactive measures.

Brandon Carroll
Amazon Employee
Published Feb 29, 2024
Are you a cloud networking and security professional feeling overwhelmed by the complexities of securing your AWS environment? You're not alone. Many in our field face challenges in maintaining security while managing sprawling cloud infrastructures. This article is designed specifically for you. The aim is to shift your perspective and help you think more like a developer by adopting GitOps practices for your cloud architectures. By embracing GitOps, you can streamline the deployment process, enhance security, and ensure compliance within your AWS environments.
I encourage you to follow the GitOps pipeline example outlined later in this guide. Start by implementing it in your test environment. Once you're comfortable, integrate these practices into your production environment. This hands-on approach will help solidify your understanding and skills in GitOps.

What is GitOps?

Figure 1. What is GitOps?
GitOps is an approach to managing and automating cloud infrastructure that has evolved from the principles of DevOps and DevSecOps. It uses Git, a version control system that's become a staple in software development, as the cornerstone for infrastructure management. This strategy extends the version control, collaboration, and continuous integration/continuous deployment (CI/CD) practices typical of software development to infrastructure automation, making your cloud environment more manageable, secure, and compliant.

Historical Context and Evolution

The concept of GitOps originated from the DevOps movement, which aims to unify software development (Dev) and software operation (Ops). DevOps introduced practices like automation, continuous integration (CI), and continuous delivery (CD) to improve the speed and reliability of software releases. You can read about the essentials of DevOps here.
Building on DevOps, the DevSecOps approach integrates security into the lifecycle, emphasizing that security is a shared responsibility across the development and operational phases. GitOps can be seen as a further evolution, focusing specifically on using Git to manage infrastructure changes. This ensures that infrastructure automation benefits from the same rigor, speed, and efficiency as software development. When you have time, check out this article on how you can use generative AI to build a DevSecOps chatbot.

Similarities and Differences

Like DevOps and DevSecOps, GitOps emphasizes collaboration, automation, and fast feedback loops. However, GitOps distinguishes itself by treating Git repositories as the source of truth for both application and infrastructure code. This has several implications:
  • Transparency and Collaboration: Changes to infrastructure are as visible and reviewable as changes to application code, enhancing collaboration between teams.
  • Security and Compliance: By managing infrastructure as code in Git, GitOps enables better audit trails, versioning, and compliance tracking.
  • Stability and Reliability: GitOps promotes immutable infrastructure and declarative configurations, leading to more predictable and error-free deployments.
GitOps can be seen as a natural extension of DevOps and DevSecOps principles, tailored specifically for the challenges of cloud infrastructure management. It leverages the power of Git to make infrastructure changes more manageable, secure, and transparent, aligning closely with the needs of cloud networking and security professionals.
By understanding the roots of GitOps in the broader context of DevOps and DevSecOps, you can better appreciate its value and rationale. Embracing GitOps means applying proven software development practices to the realm of infrastructure, resulting in more secure, efficient, and reliable cloud environments.

The AWS GitOps Stack

Figure 2. The AWS GitOps Stack
So what is the AWS GitOps stack? Embracing the GitOps methodology on AWS involves a suite of services, shown in Figure 2, designed to streamline the deployment, management, and scaling of your applications and infrastructure. The core components of the AWS GitOps stack include AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, and an IaC solution to express your Infrastructure as Code. Let's look at what each of these services is used for.

AWS CodeCommit: Your Git Repository in the Cloud

AWS CodeCommit is a managed source control service that hosts private Git repositories. It's a foundational block for GitOps, offering a secure, scalable place for your code, binaries, and configuration files. With CodeCommit, you can collaborate on code with team members, track and manage your code changes, and maintain a comprehensive version history of your infrastructure as code.
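Since the repository is the starting point for everything else in the workflow, you can manage it as code as well. Here's a minimal sketch of a CloudFormation template that creates a CodeCommit repository; the repository name "network-iac" is a placeholder, not a name from the walkthrough:
```yaml
# Minimal sketch: the CodeCommit repository itself, defined as code.
# The repository name "network-iac" is a hypothetical example.
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  IacRepo:
    Type: AWS::CodeCommit::Repository
    Properties:
      RepositoryName: network-iac
      RepositoryDescription: Infrastructure-as-code templates for the GitOps pipeline
Outputs:
  CloneUrlHttp:
    # HTTPS clone URL your team uses to push template changes
    Value: !GetAtt IacRepo.CloneUrlHttp
```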

AWS CodeBuild and CodePipeline: Continuous Integration and Delivery

AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages ready to deploy. It fits seamlessly into the GitOps model by automating the build and test phase of your software release process.
AWS CodePipeline automates the stages of your release process for fast and reliable application and infrastructure updates. It integrates with CodeCommit, CodeBuild, and CloudFormation, allowing you to visualize and automate the end-to-end flow of updates from code commit to deployment.
By integrating these services, AWS offers a cohesive GitOps stack that supports the principles of infrastructure as code, immutable infrastructure, and declarative configuration. Adopting these tools within your GitOps workflows can significantly enhance the reliability, security, and efficiency of managing and deploying your cloud infrastructure and applications.

Infrastructure-as-Code

There are several Infrastructure as Code (IaC) solutions to consider if you're shifting to a GitOps mindset. Terraform stands out for its ability to manage multi-cloud environments, allowing teams to define infrastructure through declarative configuration files, which can be versioned and reviewed as part of the GitOps process. Pulumi, on the other hand, offers a unique approach by allowing developers to define infrastructure using general-purpose programming languages, providing a familiar syntax and richer logic capabilities. Crossplane extends the Kubernetes API to manage and compose infrastructure from multiple vendors and sources, aligning with GitOps practices by integrating infrastructure management directly into the Kubernetes environment.
Within the AWS ecosystem, CloudFormation provides a native, integrated solution for defining AWS resources using JSON or YAML templates, enabling seamless automation and integration with AWS services. The AWS Cloud Development Kit (CDK) further enhances the IaC landscape by allowing developers to define infrastructure using familiar programming languages, abstracting away the verbose syntax of traditional templates. For my examples I use either Terraform or CloudFormation.
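To make the walkthrough that follows concrete, here is a minimal sketch of the kind of CloudFormation template you might manage this way. It isn't the exact template from the example; the security group, port, and CIDR values are illustrative assumptions:
```yaml
# Illustrative CloudFormation template: a simple security group managed
# through the GitOps pipeline. Port and CIDR values are example choices.
AWSTemplateFormatVersion: "2010-09-09"
Description: Example security group managed through the GitOps pipeline
Parameters:
  VpcId:
    Type: AWS::EC2::VPC::Id
Resources:
  WebSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow inbound HTTPS only
      VpcId: !Ref VpcId
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 443
          ToPort: 443
          CidrIp: 10.0.0.0/16
```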

A Simple GitOps Example

Now let's look at a fairly basic example. In the figure below you can see the process flow, including the services and tools we use to deploy a CloudFormation template, first to a development account and then, after a manual approval, to a production account. The idea here is to think through each step along the way.
Figure 3. Using the GitOps process to deploy CloudFormation templates
Let's talk through the process and configuration elements.
First, you create your IaC, in this case a CloudFormation template, and commit and push it to the CodeCommit repo. At this point CodePipeline sees the change and leaps into action.
CodePipeline gets its configuration from a JSON file. Here is a trimmed sketch of what the JSON file for this project looks like (the account ID, role ARN, bucket, and repository names are placeholders, and the DestroyOnDev and DeployToProd stages, which follow the same pattern as the build stage, are omitted for brevity):
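```json
{
  "pipeline": {
    "name": "GitOpsCfnPipeline",
    "roleArn": "arn:aws:iam::111111111111:role/CodePipelineServiceRole",
    "artifactStore": {
      "type": "S3",
      "location": "gitops-pipeline-artifacts"
    },
    "stages": [
      {
        "name": "Source",
        "actions": [
          {
            "name": "Source",
            "actionTypeId": {
              "category": "Source",
              "owner": "AWS",
              "provider": "CodeCommit",
              "version": "1"
            },
            "configuration": {
              "RepositoryName": "network-iac",
              "BranchName": "main"
            },
            "outputArtifacts": [{ "name": "SourceOutput" }]
          }
        ]
      },
      {
        "name": "BuildAndDeployToDev",
        "actions": [
          {
            "name": "BuildAndDeployToDev",
            "actionTypeId": {
              "category": "Build",
              "owner": "AWS",
              "provider": "CodeBuild",
              "version": "1"
            },
            "configuration": { "ProjectName": "BuildAndDeployToDev" },
            "inputArtifacts": [{ "name": "SourceOutput" }],
            "outputArtifacts": [{ "name": "DevBuildOutput" }]
          }
        ]
      },
      {
        "name": "DevApproval",
        "actions": [
          {
            "name": "DevApproval",
            "actionTypeId": {
              "category": "Approval",
              "owner": "AWS",
              "provider": "Manual",
              "version": "1"
            }
          }
        ]
      }
    ]
  }
}
```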
This file defines a few important elements of the pipeline.
  1. It defines the artifact store, an S3 bucket that the code is moved into while the pipeline runs.
  2. It defines the stages of the pipeline. In our pipeline we have the Source, BuildAndDeployToDev, DevApproval, DestroyOnDev, and DeployToProd stages. In Figure 4 you can see the pipeline as it appears in the AWS Console.
  3. It defines the artifacts that are passed into each stage (inputArtifacts) and out of each stage (outputArtifacts). This is important. You can inspect the artifact a stage is using by downloading it from the S3 bucket and unzipping it.
  4. It points to the name of the CodeBuild project that runs in each stage that includes one.
Figure 4. The GitOps Pipeline in the AWS Console
OK, so let's keep moving. After the code is moved to an artifact, we move into the BuildAndDeployToDev stage. This stage references the CodeBuild project named "BuildAndDeployToDev", which you can see in the pipeline definition above. What happens when a CodeBuild project runs?
First, CodeBuild looks at a file for the project configuration. This is what's known as a buildspec file, and it's a YAML document. Below is a sketch of the buildspec file for the BuildAndDeployToDev project (the role ARN, stack name, and template filename are placeholders).
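```yaml
# Sketch of the BuildAndDeployToDev buildspec, reconstructed from the
# description below. The role ARN, stack name, and template filename
# are placeholders.
version: 0.2

phases:
  install:
    commands:
      # Install the linting and security-scanning tools
      - pip install cfn-lint
      - gem install cfn-nag
      # Assume a cross-account role so we deploy into the dev account
      - >
        CREDS=$(aws sts assume-role
        --role-arn arn:aws:iam::222222222222:role/DevDeployRole
        --role-session-name gitops-dev-deploy
        --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]'
        --output text)
      - export AWS_ACCESS_KEY_ID=$(echo $CREDS | cut -d' ' -f1)
      - export AWS_SECRET_ACCESS_KEY=$(echo $CREDS | cut -d' ' -f2)
      - export AWS_SESSION_TOKEN=$(echo $CREDS | cut -d' ' -f3)
  pre_build:
    commands:
      # Lint the template, then scan it for security concerns
      - cfn-lint template.yaml
      - cfn_nag_scan --input-path template.yaml
  build:
    commands:
      # If linting and scanning pass, deploy to the dev account
      - aws cloudformation deploy --template-file template.yaml --stack-name gitops-dev-stack --no-fail-on-empty-changeset
```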
So what exactly does our project do? Well, it runs a container. When the container boots up, it goes through the install phase, where it installs the packages I need, like cfn-lint and cfn-nag, and it gets the credentials for my dev account based on a role that is assumed through cross-account permissions (this happens in the primary account, but we deploy to the dev account). Next, in the pre_build phase, it runs the linting and the cfn-nag scan for security concerns. If that passes, I let it deploy to the dev account.
But what happens once the template is deployed in the dev environment? The next stage defined in the CodePipeline JSON file is a manual approval. From here we simply repeat the process for each step in the flow, destroying the stack in the development environment so we aren't charged for the test resources, and then deploying to prod so it enters our production environment.

Integrating Generative AI into the GitOps Process

Incorporating Generative AI into the GitOps process can significantly enhance automation and efficiency. Amazon Bedrock and Amazon Q are powerful tools in this regard. Amazon Bedrock provides access to several large language models. How might you integrate Amazon Bedrock into your GitOps process? One idea would be to run your code through the Claude v2 LLM over PrivateLink so that it can be documented automatically. Another would be to ask the LLM whether the code can be streamlined and deliver those recommendations to the person who made the commit via an SNS topic and EventBridge.
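As a rough sketch of the event plumbing for that second idea (one possible wiring, not a prescribed pattern), the following CloudFormation snippet creates an SNS topic and an EventBridge rule that fires when a branch in the repository is updated. A subscriber such as a Lambda function (not shown) could then call Amazon Bedrock and route its suggestions back to the committer. All names here are placeholders, and the SNS topic policy that allows EventBridge to publish is omitted for brevity:
```yaml
# Hedged sketch: notify an SNS topic whenever a branch is updated in
# CodeCommit. A Lambda subscriber (not shown) could call Amazon Bedrock
# to generate documentation or review suggestions. Names are placeholders,
# and the AWS::SNS::TopicPolicy granting events.amazonaws.com permission
# to publish is omitted for brevity.
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  CommitReviewTopic:
    Type: AWS::SNS::Topic
    Properties:
      TopicName: gitops-commit-review
  CommitEventRule:
    Type: AWS::Events::Rule
    Properties:
      Description: Fire on new commits to the IaC repository
      EventPattern:
        source:
          - aws.codecommit
        detail-type:
          - CodeCommit Repository State Change
        detail:
          event:
            - referenceUpdated
      Targets:
        - Arn: !Ref CommitReviewTopic
          Id: CommitReviewTopicTarget
```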
The integration of Generative AI into the GitOps process enables a more proactive approach to infrastructure management. It can speed up the development and documentation cycle and also enhance the security and reliability of your cloud infrastructure.

Conclusion

The journey to adopting GitOps practices, particularly in the AWS ecosystem, represents a real shift in thinking, and it may not be easy for network security professionals. I recommend taking your time with it, testing and learning in a demo environment. Then you can start integrating these practices and tools into your workflows. Experiment with Amazon Bedrock in your GitOps processes and observe the improvements in speed, efficiency, and security. Remember, the transition to GitOps is a journey that involves continuous learning and adaptation.
Join the conversation right here in our community, and share your challenges and successes with applying the GitOps mindset to your organization. By sharing our experiences, we can all grow and improve together. Your insights could greatly benefit others facing similar challenges. Let's collaborate and transform the future of cloud infrastructure management together.

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
