
DevOps Tools to Watch in 2023

We are almost at the end of 2022, and you may already be preparing your reading list for 2023. If not, this post may help you plan it. So far, we have seen tools like Kubernetes, Jenkins, Git, Terraform, Grafana, Prometheus, Gradle, Maven, Docker, etc. We hope you are getting familiar with those; if not, please check them out first.

In the near future, the current toolset, and the way we use it, will change, as changes are arriving faster nowadays. One of the main reasons is the adoption of GitOps and everything-as-code. With that in mind, we have listed the top tools to watch in 2023.

These tools were picked based on recent adoption by multiple teams and organizations, or the interest they have shown in exploring them. Let's check the list of DevOps tools to watch in 2023.

1. Pulumi

Pulumi is an open-source infrastructure as code tool that utilizes the most popular programming languages to simplify provisioning and managing cloud resources.

Unlike Terraform, which has its own domain-specific language and syntax (HCL) for defining infrastructure as code, Pulumi uses real programming languages. You can write your infrastructure code in Python, JavaScript, or TypeScript. In other words, you are not forced to learn a new programming language only to manage infrastructure.

If you are already familiar with a programming language such as TypeScript, Python, Go, C#, or Java, but you don't really want to learn yet another language (HCL), Pulumi might be for you. If you are using AWS, you could technically use the AWS CDK instead, but if you plan to orchestrate a hybrid or multi-cloud architecture, Pulumi makes more sense.
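For a feel of what this looks like in practice, here is a minimal sketch of a Pulumi project that provisions an S3 bucket. It uses Pulumi's YAML runtime to stay consistent with the other examples in this post; the same resource could equally be written in Python or TypeScript, and the project and bucket names are placeholders.

```yaml
# Pulumi.yaml — a minimal Pulumi project using the YAML runtime
name: demo-infra
runtime: yaml
description: Provision a single S3 bucket

resources:
  # Logical name "site-bucket"; Pulumi generates the physical bucket name
  site-bucket:
    type: aws:s3:Bucket

outputs:
  # Export the generated bucket name for other stacks or scripts
  bucketName: ${site-bucket.id}
```

Running `pulumi up` previews and applies the change, much like `terraform plan` and `terraform apply`.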

2. Crossplane

Crossplane is developed as a Kubernetes add-on and extends any Kubernetes cluster with the flexibility to provision and manage cloud infrastructure, services, and applications. Crossplane uses Kubernetes-styled declarative and API-driven configuration and management of infrastructure, on-premises or within the cloud.

Because it is implemented as a Kubernetes add-on, Crossplane provides all of its functionality through custom resources.

Like Pulumi, Crossplane approaches infrastructure provisioning in a different way. If you run everything on Kubernetes and would like to manage everything from Kubernetes, Crossplane would be a great choice. It still needs to mature, but it is worth getting some exposure to it.
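As an illustration, once a Crossplane AWS provider is installed and configured, a cloud resource becomes just another Kubernetes manifest. This is a hedged sketch; the exact API group and version depend on the provider package you install, and the names are placeholders.

```yaml
# An S3 bucket declared as a Crossplane managed resource
apiVersion: s3.aws.upbound.io/v1beta1
kind: Bucket
metadata:
  name: crossplane-demo-bucket
spec:
  forProvider:
    region: us-east-1
  providerConfigRef:
    name: default   # references a ProviderConfig holding AWS credentials
```

Applying this with `kubectl apply` lets the Crossplane controller create the bucket and continuously reconcile it against the declared state.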

To learn more about it, you can refer to our Crossplane series.

3. SOPS

SOPS, short for Secrets OPerationS, is an open-source text file editor that encrypts/decrypts files automagically.

Typically, when you want to encrypt a text file, this is what you do:

  • Use your favorite editor for writing, editing, and manipulating the text data, and save it as a file.
  • Use an encryption/decryption tool to encrypt the whole file.

When you need to read the encrypted file:

  • First, you must decrypt the file using an encryption/decryption tool.
  • Open the decrypted file (now it’s a regular text file) with a text editor of your choice.

The drawback of this “normal” process is obvious: you need two tools (an editor and an encryption/decryption tool) for one job. You probably see where I’m going with this, and you are right: SOPS is for that.

In short, it can be integrated with many encryption services (like HashiCorp Vault, AWS KMS, etc.) to encrypt your secret files automatically, making using a git repo to store secrets possible and easy for collaboration.
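A typical setup is a small `.sops.yaml` file at the root of the repository that tells SOPS which files to encrypt and with which key. A minimal sketch, assuming an AWS KMS key (the ARN below is a placeholder; age or PGP keys work the same way):

```yaml
# .sops.yaml — encryption rules for this repository
creation_rules:
  - path_regex: secrets/.*\.yaml$
    kms: arn:aws:kms:us-east-1:111122223333:key/REPLACE-WITH-YOUR-KEY-ID
    # Only encrypt the secret values of Kubernetes Secret manifests,
    # leaving keys and metadata readable for code review.
    encrypted_regex: ^(data|stringData)$
```

With that in place, `sops secrets/app.yaml` opens the decrypted content in your editor and re-encrypts it on save, so the file stored in Git only ever contains ciphertext.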

4. External Secrets Operator

External Secrets Operator is a Kubernetes operator that integrates external secret management systems like AWS Secrets Manager, HashiCorp Vault, Google Secret Manager, Azure Key Vault, IBM Cloud Secrets Manager, Akeyless, and many more. The operator reads information from external APIs and automatically injects the values into a Kubernetes Secret.

The goal of External Secrets Operator is to synchronize secrets from external APIs into Kubernetes. ESO is a collection of custom API resources – ExternalSecret, SecretStore, and ClusterSecretStore – that provide a user-friendly abstraction for the external API that stores secrets and manages their lifecycle for you.

The External Secrets Operator extends Kubernetes with Custom Resources, which define where secrets live and how to synchronize them. The controller fetches secrets from an external API and creates Kubernetes secrets. If the secret from the external API changes, the controller will reconcile the state in the cluster and update the secrets accordingly.
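A hedged sketch of what this looks like in practice, assuming a `SecretStore` named `aws-secrets-manager` has already been configured and `prod/db` is a placeholder path in the external store:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
spec:
  refreshInterval: 1h              # how often to re-sync from the external API
  secretStoreRef:
    name: aws-secrets-manager      # existing SecretStore with provider credentials
    kind: SecretStore
  target:
    name: db-credentials           # the Kubernetes Secret the operator creates
  data:
    - secretKey: password          # key inside the Kubernetes Secret
      remoteRef:
        key: prod/db               # secret path in the external store
        property: password         # field to extract from that secret
```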

5. Trivy

Containerization and 12-factor apps have become so popular nowadays that they are your first thoughts when you want to build/deploy an app. Since we rely on container images heavily for our cloud-native workload, the importance of container image security is rising all the time: any container created from an image inherits all its characteristics — including security vulnerabilities, misconfigurations, or even malware.

Trivy is a security scanner. It is reliable, fast, effortless, and works wherever you need it. Trivy has different scanners that look for various security issues, and its most famous use case is scanning container images for known vulnerabilities (CVEs).

You can run it as a CLI tool locally to scan your container images and other artifacts before pushing them to a container registry or deploying your application.

Moreover, Trivy is designed to be used in CI and can be easily integrated with your CI pipelines, thus perfectly fitting in the “continuous everything” DevOps mindset.
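For instance, a common pattern is to scan every image as a CI step and fail the build on serious findings. A hedged sketch using the community `aquasecurity/trivy-action` in a GitHub Actions workflow (the image reference is a placeholder, and the action's inputs may evolve between releases):

```yaml
name: image-scan
on: [push]

jobs:
  trivy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Scan container image with Trivy
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: registry.example.com/app:latest   # image to scan
          severity: HIGH,CRITICAL                      # only report serious issues
          exit-code: '1'                               # fail the job if any are found
```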

6. ArgoCD

Designed and built for deploying applications on Kubernetes, ArgoCD also enables GitOps workflows and deploys applications on demand via automation at scale.

ArgoCD achieves this using established GitOps principles. Once set up, connected to a Kubernetes cluster, and authenticated to one or more public or private Git repositories, ArgoCD reads the application manifests directly from those repositories. It then deploys cloud-native applications and Kubernetes objects, such as RBAC policies, onto the connected clusters based on their definition in the Git repository.

Once deployed, ArgoCD continuously monitors the current state of the application or Kubernetes object in the cluster for any change in configuration. It also monitors the Git repository for any updates to the manifests, i.e. the desired state. If either changes, ArgoCD can automatically, or manually, resolve the discrepancy to ensure that the application or Kubernetes objects always match the desired state as defined in the Git repository.

Resolving discrepancies in this manner leverages the ability to track changes, which can then be extended to continuously deploy changes to an application's configuration. If a new container image is built and referenced in the manifests, the upgraded version of the application is rolled out in near real time.

ArgoCD understands Kubernetes application manifests and supports all the commonly used application packaging options (Kustomize, Helm, Ksonnet, Jsonnet, or just plain YAML). Using ArgoCD in conjunction with Kubernetes ensures consistent deployment of applications across the various stages of the application development lifecycle.
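The central object is the `Application` custom resource, which ties a Git path to a target cluster and namespace. A minimal sketch, with a placeholder repository URL:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app-config.git   # Git repo holding the manifests
    targetRevision: main
    path: manifests                                          # folder with Kustomize/Helm/plain YAML
  destination:
    server: https://kubernetes.default.svc                   # deploy into the same cluster
    namespace: guestbook
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual changes made in the cluster
```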

Check more about ArgoCD on our ArgoCD Series.

7. Linkerd

Linkerd is an open-source, lightweight service mesh developed mainly for Kubernetes. It adds reliability, security, and visibility to cloud-native applications, providing observability for all microservices running in the cluster without requiring any code changes in the microservices.

Users can monitor success rates, request volumes, and latency for individual services. It also provides live traffic analysis, which helps in diagnosing failures. The best part about Linkerd is that it works out of the box without any complicated configuration. It can easily handle tens of thousands of requests per second, and it works perfectly with Kubernetes.
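To give an idea of how little configuration is needed: meshing an application is typically just an annotation that tells Linkerd to inject its sidecar proxy. A minimal sketch with a placeholder namespace name:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-app
  annotations:
    linkerd.io/inject: enabled   # Linkerd injects its proxy into every pod created in this namespace
```

From there, the `linkerd viz` extension surfaces per-service success rates and latencies.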

Benefits of using Linkerd:

  • It's completely open-source and has a very active community.
  • It is easy to add as a service mesh to an existing cluster of microservices.
  • It helps both developers and DevOps engineers find bottlenecks in applications.
  • It is an excellent choice of service mesh for monitoring service-to-service communication across multiple applications running in a cluster.
  • It is battle-tested in production by many big enterprises.
  • It's one of the most widely used service meshes, along with Istio.
  • It works independently of any application libraries or languages.

8. Kaniko

Kaniko is a tool to build container images from a Dockerfile. Unlike Docker, Kaniko doesn’t require the Docker daemon.

Since there’s no dependency on the daemon process, this can be run in any environment where the user doesn’t have root access like a Kubernetes cluster.

Kaniko executes each command within the Dockerfile completely in userspace using an executor image (gcr.io/kaniko-project/executor) which runs inside a container, for instance a Kubernetes pod. It executes each command inside the Dockerfile in order and takes a snapshot of the file system after each command.

If there are changes to the file system, the executor takes a snapshot of the filesystem change as a “diff” layer and updates the image metadata.
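A hedged sketch of running the Kaniko executor as a plain Kubernetes Pod; the Git context, destination image, and registry-credentials Secret (`regcred`) are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build
spec:
  restartPolicy: Never
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --dockerfile=Dockerfile
        - --context=git://github.com/example/app.git#refs/heads/main   # build context from Git
        - --destination=registry.example.com/app:latest                # image to push
      volumeMounts:
        - name: docker-config
          mountPath: /kaniko/.docker   # Kaniko reads registry credentials from here
  volumes:
    - name: docker-config
      secret:
        secretName: regcred
        items:
          - key: .dockerconfigjson
            path: config.json
```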

9. GitHub Actions

GitHub Actions is yet another CI system. CI interacts with your code a lot, and by nature, GitHub Actions integrates with your GitHub repos easily. No more trouble integrating your CI with your code repos.

Another benefit for start-ups is that GitHub Actions has a free quota, so when you have just launched a new product, the free quota might be more than enough, making it completely free. You probably won't need to register extra self-hosted runners for quite a long time, and you save the cost of running VMs in some cloud just for the CI part.
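Workflows live in `.github/workflows/` as YAML files alongside the code. A minimal sketch of a Node.js build-and-test workflow (the steps are placeholders for whatever your project needs):

```yaml
# .github/workflows/ci.yaml
name: ci
on:
  push:
    branches: [main]
  pull_request:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4        # fetch the repository
      - uses: actions/setup-node@v4      # install Node.js on the runner
        with:
          node-version: 20
      - run: npm ci                      # install dependencies
      - run: npm test                    # run the test suite
```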

10. Tekton

Tekton is a Kubernetes-native open-source framework for creating Continuous Integration and Continuous Delivery (CI/CD) systems. It is highly optimized for building, testing, and deploying applications across multiple cloud providers or hybrid environments.

Tekton is simple to use since it abstracts away complex Kubernetes concepts and implementation details. It comes with a number of Kubernetes custom resources – such as Task, TaskRun, Pipeline, and PipelineRun – to declare and run Tekton CI/CD pipelines, as in the sketch below.
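A hedged sketch of the smallest unit of work, a `Task` plus a `TaskRun` that executes it (using the `tekton.dev/v1beta1` API; newer releases also serve `tekton.dev/v1`):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: echo-hello
spec:
  steps:
    - name: echo
      image: alpine:3.19
      script: |
        echo "Hello from Tekton"
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: echo-hello-run
spec:
  taskRef:
    name: echo-hello   # run the Task defined above
```

Tasks are composed into `Pipeline` resources, and each run executes as ordinary pods in the cluster.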

One of the greatest benefits of using Tekton is that it standardizes continuous delivery tooling and processes across many vendors, languages, and deployment environments. It works well with Jenkins, Jenkins X, Skaffold, Knative, GitHub Actions, Argo CD, and many other popular continuous delivery tools. This helps your teams create pipelines and standardize software releases with ease.

Tekton is simple to use, and it can be adapted to suit your team's workflow. It provides scalable, serverless, cloud-native execution out of the box, so there's less preparation time and more action time for your teams. You can find more information in the Tekton CD GitHub repository.

11. Harness

Harness is yet another CI platform, but it's more than that; it combines a few things into one:

  • CI
  • CD/GitOps
  • feature flags
  • cloud costs

Harness offers hosted virtual machines (VMs) to run your builds. With Harness Cloud, you can build your code worry-free on the infrastructure that Harness provides. You can spend less time and effort maintaining infrastructure and focus on developing great software instead.

In Harness, Continuous Delivery is modelled using Pipelines and Stages. In each stage, you define what you want to deploy using Services, where you want to deploy it using Environments, and how you want to deploy it using Execution steps.

Harness GitOps lets you perform GitOps deployments in Harness. You define the desired state of the service you want to deploy in your Git manifest and then use Harness GitOps to sync the state with your live Kubernetes cluster.

Harness Feature Flags (FF) is a feature management solution that lets you change your software’s functionality without deploying new code. It allows you to hide code or behaviour without shipping new software versions. A feature flag is like a powerful `if` statement. In short, if you want a SaaS CI/CD/FeatureFlags all in one place, this is the one to look at.

12. Thanos

Thanos is an open-source project, built as a set of components that can be composed into a highly available metric system with unlimited storage capacity, which can be added seamlessly on top of existing Prometheus deployments.

Thanos leverages the Prometheus storage format to cost-efficiently store historical metric data in any object storage while retaining fast query latency. Additionally, it provides a global query view across all Prometheus installations.
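The object storage connection is described in a small config file passed to the Thanos components (e.g. via `--objstore.config-file`). A minimal sketch for S3, with placeholder bucket and endpoint values and credentials supplied via the environment or IAM:

```yaml
# bucket.yaml — object storage configuration shared by Thanos components
type: S3
config:
  bucket: thanos-metrics
  endpoint: s3.us-east-1.amazonaws.com
  region: us-east-1
```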

Thanos's main components are:

  • Sidecar: connects to Prometheus and exposes it for real-time queries by the Query Gateway, and/or uploads its data to cloud storage for longer-term retention
  • Query Gateway: implements Prometheus' API to aggregate data from the underlying components (such as Sidecar or Store Gateway)
  • Store Gateway: exposes the contents of a cloud storage bucket
  • Compactor: compacts and down-samples data stored in cloud storage
  • Receiver: receives data from Prometheus' remote-write WAL, exposes it, and/or uploads it to cloud storage
  • Ruler: evaluates recording and alerting rules against data in Thanos for exposition and/or upload

13. Kyverno

Kyverno makes it possible to set up Kubernetes-native policies inside your cloud-native infrastructure to govern the resources that get deployed. The policies are themselves deployed as Kubernetes resources and can validate, mutate, or generate other Kubernetes resources.

The benefits of defining policies as Kubernetes manifests are:

  1. If everything is a Kubernetes resource, you can use the same tools and processes to manage every aspect of your application. For instance, the Kyverno CLI can be installed as a `kubectl` plugin through `krew` and works directly with `kubectl` and your kubeconfig.
  2. Policies can be enforced from within the cluster, making enforcement more reliable.
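As an example, a validation policy is just another manifest applied to the cluster. A hedged sketch that requires a `team` label on every Deployment (field names follow recent Kyverno releases; older versions use lowercase `enforce`):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label
spec:
  validationFailureAction: Enforce   # reject non-compliant resources instead of just auditing
  rules:
    - name: check-team-label
      match:
        any:
          - resources:
              kinds:
                - Deployment
      validate:
        message: "The label 'team' is required on Deployments."
        pattern:
          metadata:
            labels:
              team: "?*"             # any non-empty value is accepted
```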

14. Jaeger

Jaeger is based on the vendor-neutral OpenTracing APIs and instrumentation.

Jaeger presents execution requests as traces. A trace shows the data/execution path through a system. A trace is made up of one or more spans. A span is a logical unit of work in Jaeger. Each span includes the operation name, start time, and duration. Spans may be nested and ordered.

Jaeger includes several components that work together to collect, store and visualize spans and traces.

  • Jaeger Client includes language-specific implementations of the OpenTracing API for distributed tracing. These can be used manually or with a variety of open-source frameworks.
  • Jaeger Agent is a network daemon that listens for spans sent over the User Datagram Protocol. The agent is meant to be placed on the same host as the instrumented application; in container environments like Kubernetes, this is usually implemented as a sidecar.
  • Jaeger Collector receives spans and places them in a queue for processing.
  • Collectors require a persistent storage backend, so Jaeger also has a pluggable mechanism for span storage.
  • Jaeger Query is a service that retrieves traces from storage.
  • Jaeger Console is a user interface that lets you visualize your distributed tracing data.
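If the Jaeger Operator is installed in the cluster, a complete all-in-one instance (suitable for experimentation, with in-memory storage) can be requested with a tiny manifest; this is a hedged sketch and the name is a placeholder:

```yaml
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: simplest   # the operator deploys an all-in-one Jaeger with in-memory storage by default
```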

Summary

Here is a quick summary of the tools we have discussed:

  • Infrastructure-as-Code: Pulumi, Crossplane
  • Security: SOPS, Trivy, external-secrets
  • GitOps: ArgoCD
  • Service mesh: Linkerd
  • Container image build: Kaniko
  • CI/CD: GitHub Actions, Tekton, Harness
  • Monitoring: Thanos
  • Tracing: Jaeger
  • Policy-as-Code: Kyverno