
How to create Azure Kubernetes Service using Terraform

We recently saw how to set up a Kubernetes cluster on bare metal; now let's see how to set one up on cloud providers like AWS and Azure. As part of that, this post covers how to create Azure Kubernetes Service using Terraform. You could create it via the UI as well, but that would be too easy; here we want to cover the automated way.

What is AKS?

Azure Kubernetes Service (AKS) simplifies deploying a managed Kubernetes cluster in Azure by offloading the operational overhead to Azure. As a hosted Kubernetes service, Azure handles critical tasks like health monitoring and maintenance. Since the Kubernetes masters are managed by Azure, you only manage and maintain the agent nodes. The AKS control plane itself is free; you pay only for the agent nodes within your clusters, not for the masters.

Azure Kubernetes Architecture

Broadly, the AKS cluster architecture comprises two major components: the control plane, which provides the core Kubernetes services and orchestration, and a set of nodes, which actually run your applications.

In this post, let's perform the following actions: log in to Azure and get the Azure credentials, provision the AKS cluster with Terraform, and access it via kubectl.


Pre-requisites

  • Azure CLI
  • Terraform
  • Kubectl

Install Terraform

Follow the official HashiCorp instructions to install Terraform. When you're done, you should be able to run the terraform command:

# terraform
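For example, a quick way to confirm the installation is to print the installed version:

# terraform version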

Create your Azure Service Principal

Before we start, we need to create a Service Principal. First, log in with the following command and follow the instructions it prints.

# az login
To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code A9F39EFUE to authenticate.
[
  {
    "cloudName": "AzureCloud",
    "homeTenantId": "xxxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "id": "xxxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "isDefault": true,
    "managedByTenants": [],
    "name": "azureftworkspace",
    "state": "Enabled",
    "tenantId": "xxxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "user": {
      "name": "cloud@foxutech.com",
      "type": "user"
    }
  }
]

In some cases we may need to manage more than one subscription, so I suggest you set the subscription you want to use, or declare it correctly in the Terraform variables. If you want to set it via the Azure CLI, use the following command.

# az account set --subscription="SUBSCRIPTION_ID"
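To confirm which subscription is now active, you can check the account details:

# az account show --output table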

We can now create the Service Principal which will have permissions to manage resources in the specified subscription using the following command:

# az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/SUBSCRIPTION_ID"
Creating a role assignment under the scope of "/subscriptions/xxxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
{
  "appId": "xxxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "displayName": "azure-cli-2022-04-27-16-16-09",
  "password": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "tenant": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}

These values map to the Terraform variables as follows:

  • appId is the CLIENT_ID.
  • password is the CLIENT_SECRET.
  • tenant is the TENANT_ID.

To verify the Service Principal, log in with it:

# az login --service-principal -u CLIENT_ID -p CLIENT_SECRET --tenant TENANT_ID
[
  {
    "cloudName": "AzureCloud",
    "id": "xxxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "isDefault": true,
    "name": "Azureftworkspace",
    "state": "Enabled",
    "tenantId": "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "user": {
      "name": "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
      "type": "servicePrincipal"
    }
  }
]

Configure Azure storage to store Terraform state

Terraform tracks state locally in the terraform.tfstate file by default. This pattern works well in a single-person environment, but in a more practical multi-person environment you need to track state remotely, for example in Azure storage. In this section, you create the storage account and container that will hold the state; Terraform then stores its state information in that container.

You can create the Azure storage account and container via the Azure CLI, Azure PowerShell, or the Azure portal.
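A minimal Azure CLI sketch follows; the resource group, storage account, and container names match the backend block we will define later in providers.tf. Storage account names must be globally unique, so adjust them to your environment:

# az group create --name foxutech --location eastus
# az storage account create --name foxtfstate --resource-group foxutech --location eastus --sku Standard_LRS
# az storage container create --name tfstate --account-name foxtfstate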

With this, we should have all the required resources in place:

  • Terraform installed on the machine
  • An Azure Service Principal, to create the Azure Kubernetes cluster
  • An Azure storage account and container, to store the tfstate file
  • kubectl installed, for managing the cluster resources

All set! Let’s deploy it.

Let’s Start Terraform

Before we start with Terraform, let's create a separate working folder, or use your Git repository. Once you have the required folder in place, create the following files.

providers.tf

terraform {

  required_version = ">=0.12"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~>2.0"
    }
  }

  backend "azurerm" {
    resource_group_name  = "foxutech"
    storage_account_name = "foxtfstate"
    container_name       = "tfstate"
    key                  = "foxutech.microsoft.tfstate"
  }
}

provider "azurerm" {
  subscription_id = var.subscription_id
  client_id       = var.client_id
  client_secret   = var.client_secret
  tenant_id       = var.tenant_id
  features {}
}

This file contains the subscription details and the backend configuration that tells Terraform where to store the tfstate file.

main.tf

resource "azurerm_resource_group" "rg" {
  name = var.resource_group_name
  location = var.location
}

resource "random_id" "log_analytics_workspace_name_suffix" {
  byte_length = 8
}

resource "azurerm_log_analytics_workspace" "foxulogs" {
  name                = "${var.log_analytics_workspace_name}-${random_id.log_analytics_workspace_name_suffix.dec}"
  location            = var.location
  resource_group_name = azurerm_resource_group.rg.name
  sku                 = var.log_analytics_workspace_sku
}

resource "azurerm_log_analytics_solution" "foxulogs" {
  solution_name         = "ContainerInsights"
  location              = var.location
  resource_group_name   = azurerm_resource_group.rg.name
  workspace_resource_id = azurerm_log_analytics_workspace.foxulogs.id
  workspace_name        = azurerm_log_analytics_workspace.foxulogs.name

  plan {
    publisher = "Microsoft"
    product   = "OMSGallery/ContainerInsights"
  }
}

resource "azurerm_kubernetes_cluster" "ak8s" {
  name                = var.cluster_name
  location            = var.location
  resource_group_name = azurerm_resource_group.rg.name
  dns_prefix          = var.dns_prefix

  default_node_pool {
    name       = "agentpool"
    node_count = var.agent_count
    vm_size    = "Standard_D2_v2"
  }

  service_principal {
    client_id     = var.client_id
    client_secret = var.client_secret
  }

  addon_profile {
    oms_agent {
      enabled                    = true
      log_analytics_workspace_id = azurerm_log_analytics_workspace.foxulogs.id
    }
  }

  network_profile {
    load_balancer_sku = "Standard"
    network_plugin    = "kubenet"
  }

  tags = {
    Environment = "Staging"
  }
}

Here we create the AKS cluster along with a Log Analytics workspace and the Container Insights solution for monitoring.

variables.tf

variable "subscription_id" {
  default = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}

variable "client_id" {
  default = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}

variable "client_secret" {
  default = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}

variable "tenant_id" {
  default = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}

variable "resource_group_name" {
  default = "foxutech-rg"
}

variable "agent_count" {
  default = 3
}

variable "dns_prefix" {
  default = "ak8s"
}

variable "cluster_name" {
  default = "ak8s"
}

variable "location" {
  default = "eastus"
}

variable "log_analytics_workspace_name" {
  default = "foxLogAnalyticsWorkspaceName"
}

variable "log_analytics_workspace_sku" {
  default = "PerGB2018"
}

This file declares all the input variables and their default values. Replace the placeholder IDs with your own subscription and Service Principal details.
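Keeping client_secret as a plain-text default is fine for a quick demo but risky in a shared repository. Terraform also reads variables from environment variables prefixed with TF_VAR_, so you could drop the sensitive defaults and export the values instead, for example:

# export TF_VAR_client_id="xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
# export TF_VAR_client_secret="xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"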

outputs.tf

output "resource_group_name" {
  value = azurerm_resource_group.rg.name
}

output "client_key" {
  value = azurerm_kubernetes_cluster.ak8s.kube_config.0.client_key
}

output "client_certificate" {
  value = azurerm_kubernetes_cluster.ak8s.kube_config.0.client_certificate
}

output "cluster_ca_certificate" {
  value = azurerm_kubernetes_cluster.ak8s.kube_config.0.cluster_ca_certificate
}

output "cluster_username" {
  value = azurerm_kubernetes_cluster.ak8s.kube_config.0.username
}

output "cluster_password" {
  value = azurerm_kubernetes_cluster.ak8s.kube_config.0.password
}

output "kube_config" {
  value     = azurerm_kubernetes_cluster.ak8s.kube_config_raw
  sensitive = true
}

output "host" {
  value = azurerm_kubernetes_cluster.ak8s.kube_config.0.host
}

output "fqdn" {
  value = azurerm_kubernetes_cluster.ak8s.fqdn

Once the cluster is created, we need some details to access it; the outputs.tf file exposes those values so we can retrieve them after the apply.

Once all the files are created, let's start running Terraform.

# terraform init

Run terraform init to initialize the Terraform deployment. This command downloads the Azure provider plugin required to manage your Azure resources and configures the azurerm backend.

# terraform plan

Run terraform plan to create an execution plan. This command doesn't execute anything; it determines what actions are necessary to create the configuration specified in your configuration files, so you can verify that the plan matches your expectations before making any changes to actual resources.

# terraform apply

Run terraform apply to apply the execution plan to your cloud infrastructure.
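Optionally, you can save the plan to a file and then apply exactly that saved plan, which guarantees that apply performs only the actions shown by plan:

# terraform plan -out=main.tfplan
# terraform apply main.tfplan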

With this, we have created the following resources, which you can find under the resource group:

  • Log Analytics Solution
  • Log Analytics Workspace
  • Kubernetes service

Get the Kubernetes configuration from the Terraform state and store it in a file that kubectl can read.

# echo "$(terraform output kube_config)" > ./foxk8s

Verify that the previous command didn't add ASCII EOT markers; if it did, open the foxk8s file and delete the EOT text from the start and end of the file.
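On Terraform 0.14 and later, the -raw flag writes the output value without the heredoc wrapping, so no cleanup is needed:

# terraform output -raw kube_config > ./foxk8s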

Set an environment variable so that kubectl picks up the correct config.

# export KUBECONFIG=./foxk8s

Verify the health of the cluster.

# kubectl get nodes
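Alternatively, you can skip the kubeconfig file handling entirely and merge the cluster credentials into your default kubeconfig with the Azure CLI; the names below are the defaults from variables.tf:

# az aks get-credentials --resource-group foxutech-rg --name ak8s
# kubectl get nodes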


You can follow us on social media to get short pieces of knowledge regularly.
