What is CI/CD?¶
CI and CD stand for Continuous Integration and Continuous Delivery/Deployment. In simple terms, CI is a modern software development practice in which small code changes are made frequently and reliably: automated build and test steps triggered by CI verify each change before it is merged into the repository. The code is then delivered quickly and seamlessly as part of the CD process. In the software world, a CI/CD pipeline refers to the automation that allows small code changes from developers to be delivered to production quickly and reliably.
Why is CI/CD important?¶
The modern IT industry is rapidly evolving, and it's important to keep up with these trends. CI/CD allows you to automate the process of building, testing, and deploying applications, which reduces development time and improves code quality. Today, you can use various tools for CI/CD, such as Jenkins, GitLab CI, CircleCI, Travis CI, GitHub Actions, and others. But besides knowing how to work with these tools, you also need to understand the practices and principles underlying CI/CD, and you need to create infrastructure for each pipeline.
We will briefly review some popular and commonly used tools for CI/CD.
GitLab CI/CD Pipeline¶
GitLab CI/CD is a powerful tool that automates the processes of building, testing, and deploying applications using a YAML configuration file, .gitlab-ci.yml, located in each repository. This file defines the "recipe" that GitLab CI/CD uses to perform various tasks in different environments, ensuring continuous integration and delivery.
The main components of GitLab CI/CD include:

- Pipelines: A sequence of automated processes triggered to perform tasks such as building, testing, and deploying. A pipeline consists of several stages and jobs, defining the overall CI/CD process.
- Stages: Logical phases within a pipeline, such as build, test, and deploy. Jobs within one stage are executed in parallel, and the next stage starts only after successful completion of all jobs in the previous stage.
- Jobs: Individual tasks executed within a specific stage. Each job may include commands for building, testing, or deploying the application. Jobs can be executed in parallel or sequentially, depending on their configuration.
- Runners: Special servers or machines that execute jobs in pipelines. Runners can be provided by GitLab (shared runners) or hosted on your own server (self-hosted runners), allowing you to run jobs in environments like Docker containers or virtual machines.
How GitLab Pipelines Work:

- Triggers: Pipelines are automatically triggered by certain events, such as commits to the repository, creation of a merge request, adding tags, or releasing a new version. They can also be started manually or on a schedule (cron).
- Runners: Pipeline jobs are executed on runners, which can be provided by GitLab or set up on your own servers for greater control over the execution environment.
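The structure above maps directly onto the .gitlab-ci.yml file. A minimal sketch is shown below; the stage and job names, and the echo placeholders, are illustrative examples, not a required convention:

```yaml
# Illustrative .gitlab-ci.yml; stage and job names are examples.
stages:
  - build
  - test
  - deploy

build-job:
  stage: build
  script:
    - echo "Compiling the application..."

unit-test-job:
  stage: test
  script:
    - echo "Running unit tests..."

deploy-job:
  stage: deploy
  script:
    - echo "Deploying to production..."
  environment: production
  rules:
    # Run the deploy job only for commits to the main branch
    - if: $CI_COMMIT_BRANCH == "main"
```

Jobs in the same stage (here each stage has one job) run in parallel; the deploy stage starts only after the test stage succeeds.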
GitHub Actions¶
GitHub Actions is a built-in automation platform in GitHub that allows developers to automate processes such as building, testing, and deploying applications. Workflows in GitHub Actions are defined using YAML files located in the .github/workflows directory of your repository. These workflows are triggered in response to various events, such as commits, pull requests, releases, or scheduled tasks.
The main components of GitHub Actions include:

- Workflows: Sequences of actions that are automatically executed in response to events (e.g., a push or creation of a pull request).
- Jobs: A set of tasks executed within a single workflow. Jobs can run in parallel or sequentially, depending on the configuration.
- Steps: Individual steps within a job that execute commands or scripts. These can be custom scripts or pre-configured actions.
- Actions: Reusable, pre-configured automation blocks that can perform tasks such as setting up environments or running tests. You can use actions from the GitHub Marketplace or create your own for specific needs.
How GitHub Actions Work:

- Triggers: Workflows are triggered by certain events (e.g., a push to the repository, creation of a pull request, or a schedule).
- Runners: Jobs are executed on virtual machines (runners) that can be provided by GitHub or hosted on your own servers for greater control and customization.
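A minimal workflow file illustrating these pieces might look like the sketch below (the file name, job name, and step commands are examples; actions/checkout is a standard action from the GitHub Marketplace):

```yaml
# Illustrative .github/workflows/ci.yml
name: CI
on:
  push:
    branches: [main]
  pull_request:

jobs:
  build-and-test:
    runs-on: ubuntu-latest   # GitHub-hosted runner
    steps:
      - uses: actions/checkout@v4      # check out the repository
      - name: Build
        run: echo "Building the application..."
      - name: Test
        run: echo "Running tests..."
```

Each push to main and each pull request triggers the workflow; the two steps run sequentially on a GitHub-hosted Ubuntu runner.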
Jenkins¶
Jenkins is an open-source continuous integration (CI) server. It manages and controls multiple stages of the software delivery process, including building, documenting, automated testing, packaging, and static code analysis.
Jenkins is more complex to use than GitLab CI/CD and GitHub Actions, but it offers more options for customization and extension. Jenkins must be self-hosted, which means provisioning a server and managing it yourself. This article briefly describes its installation and main concepts.
Jenkins can be installed in various ways depending on your operating system and preferences:
- On Ubuntu/Debian: installed via the package manager (APT) after adding the official Jenkins repository. After installation, Jenkins runs as a system service, accessible through the web interface on port 8080 by default.
- On Windows: installed using the standard installer available on the Jenkins website. Installation is done via a graphical interface, and after completion Jenkins runs as a Windows service, also accessed through a web interface.
- Using Docker: Jenkins can be deployed in a Docker container. This simplifies installation, letting you run Jenkins without installing it on a local server. The container can be started with default settings, and Jenkins will be accessible through the web interface.
- On CentOS/RHEL: installed via the package manager (YUM/DNF) after adding the official Jenkins repository. Jenkins runs as a system service and is accessible through the browser on port 8080 by default.
- In Kubernetes: Jenkins can be deployed in a Kubernetes cluster using Helm charts or YAML manifests. This method is useful for scaling Jenkins in a cloud environment using containers and load management.
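For example, the Docker-based installation can be started with the official Jenkins LTS image (the container and volume names here are arbitrary examples):

```shell
# Run the official Jenkins LTS image.
# 8080  = web interface, 50000 = agent connections;
# the named volume persists Jenkins data across restarts.
docker run -d \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  --name jenkins \
  jenkins/jenkins:lts
```

Once the container is up, the web interface is available at http://localhost:8080.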
After installing Jenkins, you will need to configure initial settings, such as installing plugins and creating an administrator account, to start using it for automating the building, testing, and deployment of applications.
Main Concepts of Jenkins:

- Jenkins Controller (formerly Master): The controller manages distributed builds and coordinates the work of agents. It is responsible for storing configurations, managing plugins, and coordinating builds. The controller can perform builds itself, but for scalability it is better to use agents.
- Jenkins Agent: An agent connects to the controller and executes build tasks. Agents can be installed on physical machines, virtual machines, Docker containers, or Kubernetes clusters, helping to balance load and improve performance.
- Jenkins Node: A node is a general term for controllers and agents. It's a server that executes build tasks and pipelines. Jenkins monitors the health of nodes and takes them offline when performance decreases.
- Jenkins Project (formerly Job): A project is an automated task created by the user to perform building, testing, or deployment. Jenkins supports various task types, many of which can be extended with plugins.
- Jenkins Plugins: Plugins are additional modules that extend Jenkins functionality. They can be installed via the control panel to add new features, such as support for new version control systems or testing tools.
- Jenkins Pipeline: A pipeline is an automation model that includes building, testing, and deployment. Pipelines can be created through the user interface or described using a Jenkinsfile, a Groovy file that defines the pipeline process as code.
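A Jenkinsfile defining a pipeline as code might look like the declarative-syntax sketch below; the stage names and echo placeholders are illustrative examples:

```groovy
// Illustrative declarative Jenkinsfile; stage names are examples.
pipeline {
    agent any   // run on any available node
    stages {
        stage('Build') {
            steps {
                echo 'Building the application...'
            }
        }
        stage('Test') {
            steps {
                echo 'Running tests...'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying the application...'
            }
        }
    }
}
```

The stages run sequentially; if one fails, subsequent stages are skipped.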
Comparison of GitLab CI/CD, GitHub Actions, and Jenkins¶
GitLab CI/CD, GitHub Actions, and Jenkins are popular tools for automating CI/CD processes. Here are some key differences between them:
GitLab CI/CD:

- Integrated with GitLab and provides a simple way to set up CI/CD for GitLab repositories.
- Has built-in functions for building, testing, and deploying applications.
- Supports pipelines, stages, and jobs to organize CI/CD processes.
- Provides cloud runners to execute tasks.
- Supports Docker and Kubernetes for application deployment.
GitHub Actions:

- Integrated with GitHub and provides a simple way to set up CI/CD for GitHub repositories.
- Has built-in actions for building, testing, and deploying applications.
- Supports workflows, jobs, and steps to organize CI/CD processes.
- Provides virtual machines to execute tasks.
- Supports Docker and GitHub Packages for application deployment.
Jenkins:

- An independent CI/CD server that requires self-deployment and configuration.
- Has numerous plugins for building, testing, and deploying applications.
- Supports controllers, agents, and nodes to organize CI/CD processes.
- Offers customization and extension through plugins.
- Supports various execution environments, including physical machines, virtual machines, Docker, and Kubernetes.

Even from such a brief overview, it's clear that these CI/CD tools are powerful and widely used in the software development industry.
These tools also share some drawbacks:

- Limits on runner usage. Hosted runners give you a limited number of minutes per month for your pipelines, and with Jenkins you need to purchase and set up a server from the start.
- Creating infrastructure for deploying your application. Writing a pipeline is not enough; you also need infrastructure such as a database and a server to run the application on, and you need to set up containerization of the application.
- Studying the documentation to set up the pipeline. Each tool has its own features and practices, even when the underlying principles are the same.
- Time. Setting all of this up takes time you could spend developing your application.
Modern technologies provide ready-made CI/CD solutions. You don't need to delve into the details of writing CI/CD, creating infrastructure for your application, or managing your runners; you can use the saved time for development.
One such ready-made solution is Amsdal.
What We Offer¶
Amsdal provides a complete set of tools for deploying your application without the need to set up infrastructure and CI/CD pipelines.
How It Works¶
You develop your application. When you're ready to deploy, you specify the necessary dependencies and the required environment variables, execute the command amsdal cloud deploys new, and the process of deploying your application starts. We take full responsibility for the entire deployment process.
After you invoke the deployment command, AWS CodeBuild, AWS's managed service for automating code builds, is launched.
During the deployment process, the following occurs:

- A Docker image of your application is built, with the dependencies you specified installed.
- The Docker image is uploaded to the AWS ECR repository created for your deployment.
- A PostgreSQL database, a RabbitMQ host, and a secret store for all your passwords and access keys are created.
- An AWS IAM role is created with access only to the resources created for your application.
- Your application is isolated in Kubernetes under your unique namespace.
- Monitoring of your application is set up using Grafana.
- Your application is deployed in Kubernetes, and an Ingress with an SSL certificate is set up for access to it.
After the deployment is complete, you will receive a link to your application and can start using it.
Security¶
We understand the importance of your application's security, so your application is deployed in an isolated environment.
Database¶
The database is deployed in AWS RDS. When deploying your application, a database is created in an already existing RDS cluster, and the rights to the database are configured so that only your application has access to it. No other users will be able to access your database from their applications or externally.
RabbitMQ¶
RabbitMQ is deployed in AWS. When deploying your application, a virtual host (vhost) with a unique name is created in an existing RabbitMQ instance, and permissions are configured so that only your application has access to it. Any queues created in RabbitMQ are accessible only to your application and can be managed only through it.
Secret Manager¶
Before your application is deployed in Kubernetes, a secret is created in AWS Secrets Manager to store all passwords and access keys to the AWS resources your application needs.
Kubernetes¶
Your application is deployed in Kubernetes. During deployment, the following Kubernetes entities will be created:
- A unique namespace where your application will be deployed.
- A service account to which an AWS IAM role with access to your resources will be attached.
- A pod with your application.
- A pod with Grafana for monitoring your application.
- An Ingress with an SSL certificate for access to your application.
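For orientation, the first two entities in the list above might look roughly like the manifests below. This is an illustrative sketch only: the namespace, names, and IAM role ARN are hypothetical, not Amsdal's actual manifests.

```yaml
# Hypothetical example of a per-application namespace and service account;
# all names and the role ARN are placeholders.
apiVersion: v1
kind: Namespace
metadata:
  name: my-app-namespace
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app-sa
  namespace: my-app-namespace
  annotations:
    # IRSA-style annotation attaching an AWS IAM role to the service account
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/my-app-role
```

Pods that run under this service account can then access only the AWS resources the attached role permits.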
Based on the above, we can safely say that your application will be deployed in a secure environment, and no one except you will be able to access your application or your data.
Application Removal¶
After you stop using your application, you can delete it by invoking the delete command. Executing this command removes all resources created for your application, including the database, RabbitMQ host, secrets, Docker image, and Kubernetes namespace.