ECS is the AWS Docker container service that handles the orchestration and provisioning of Docker containers. This is a beginner-level introduction to AWS ECS. I’ve seen some nightmare posts and some glowing reviews about the ECS service, so I knew it was going to be interesting to get my hands dirty and see what ECS was all about.
Summary of the ECS Terms
First we need to cover ECS terminology:
- Task Definition — This is a blueprint that describes how a Docker container should launch. If you are already familiar with AWS, it is like a LaunchConfig, except it is for a Docker container instead of an instance. It contains settings like exposed ports, Docker image, CPU shares, memory requirement, command to run and environment variables.
- Task — This is a running container with the settings defined in the Task Definition. It can be thought of as an “instance” of a Task Definition.
- Service — Defines long-running tasks from the same Task Definition. This can be one running container or multiple running containers, all using the same Task Definition.
- Cluster — A logical group of EC2 instances. When an instance launches, the ecs-agent software on the server registers the instance to an ECS Cluster. This is easily configurable by setting the ECS_CLUSTER variable in /etc/ecs/ecs.config described here (a minimal example follows this list).
- Container Instance — This is just an EC2 instance that is part of an ECS Cluster and has Docker and the ecs-agent running on it.
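For reference, the /etc/ecs/ecs.config file mentioned above can be as small as a single line; the cluster name below is just a placeholder, substitute your own:
ECS_CLUSTER=my-cluster-name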
Some of the AWS terms can be confusing when you first start working with ECS; the detailed diagrams AWS provides help explain them.
There are three ways to create a cluster and its instances: the ecs-cli, the AWS CLI, and the AWS Console. The ecs-cli creates everything through a CloudFormation stack; here we will use the AWS CLI.
To create everything via the AWS Console instead, refer to the AWS Console documentation.
In this tutorial we will create a sample Flask service on ECS using the AWS CLI.
First, we create a Back-End Service (using Flask)
Step1: Create a Cluster for Back-End Service
First, let's create a cluster for the service:
# aws ecs create-cluster --cluster-name $CLUSTER_FLASK
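Optionally, confirm the cluster was created and is active (describe-clusters is a standard AWS CLI call; the --query filter below is just one way to trim the output):
# aws ecs describe-clusters --clusters $CLUSTER_FLASK --query 'clusters[0].status'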
Step2: Create an instance for Cluster
- Create user-data.txt to install ecs-init package
user-data.txt:
#!/bin/bash
sudo yum update -y
sudo yum install -y ecs-init
sudo service docker start
sudo echo ECS_CLUSTER=$CLUSTER_FLASK >> /etc/ecs/ecs.config
sudo start ecs
- Launch an Instance and use ecsInstanceRole as well as user-data.txt
# aws ec2 run-instances --image-id ami-7f43f307 --count 1 \
  --instance-type t2.micro --key-name foxutech \
  --user-data file://user-data.txt --subnet-id $SUBNET --security-group-ids $SECURITY_GROUP \
  --iam-instance-profile Arn=$ecsInstanceRole_ARN \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=ECS-Instance-CLUSTER_FLASK}]'
Where:
- image-id ami-7f43f307 : Amazon Linux AMI
- iam-instance-profile Arn : use ecsInstanceRole so the instance can register with the cluster; create this role if it doesn't exist (a sketch follows this list)
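If ecsInstanceRole does not exist in your account yet, a minimal sketch for creating it with the AWS CLI looks like this; it attaches the AWS managed policy AmazonEC2ContainerServiceforEC2Role and wraps the role in an instance profile of the same name:
# aws iam create-role --role-name ecsInstanceRole \
  --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
# aws iam attach-role-policy --role-name ecsInstanceRole \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role
# aws iam create-instance-profile --instance-profile-name ecsInstanceRole
# aws iam add-role-to-instance-profile --instance-profile-name ecsInstanceRole \
  --role-name ecsInstanceRole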
Step3. Create a target group for ALB
Let's create the target group with a name, the VPC, and the port number for the Application Load Balancer.
# aws elbv2 create-target-group --name $CLUSTER_FLASK --protocol HTTP \
  --port 8080 --vpc-id $VPC
Note:
- port : the Back-End service's port (Flask uses 8080)
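The next steps need the target group ARN; one way to capture it into $TARGET_GROUP_ARN (the variable name used in the commands below) is:
# TARGET_GROUP_ARN=$(aws elbv2 describe-target-groups --names $CLUSTER_FLASK \
  --query 'TargetGroups[0].TargetGroupArn' --output text)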
Step4. Register the instance to the target group once the EC2 instance is running
Now add the instance to the target group:
# aws elbv2 register-targets --target-group-arn $TARGET_GROUP_ARN \
  --targets Id=$INSTANCE_ID_FLASK
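If the instance is still booting, aws ec2 wait instance-running will block until it reaches the running state, and describe-target-health then shows whether the registered target is passing health checks (both are standard AWS CLI calls):
# aws ec2 wait instance-running --instance-ids $INSTANCE_ID_FLASK
# aws elbv2 describe-target-health --target-group-arn $TARGET_GROUP_ARN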
Step5. Create an Application Load Balancer (ALB) and Listener for Back-End service cluster
- Create ALB
# aws elbv2 create-load-balancer --name $ALB_NAME --subnets $SUBNET_A $SUBNET_B \
  --security-groups $SECURITY_GROUP
- Create Listener
# aws elbv2 create-listener --load-balancer-arn $ALB_ARN --protocol HTTP \
  --port 8080 --default-actions Type=forward,TargetGroupArn=$TARGET_GROUP_ARN
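If you have not already captured $ALB_ARN for the listener command above, one way to look it up, along with the ALB DNS name we will need later for testing and for the Nginx proxy_pass, is (the $ALB_DNS_NAME variable is just a convenience for later steps):
# ALB_ARN=$(aws elbv2 describe-load-balancers --names $ALB_NAME \
  --query 'LoadBalancers[0].LoadBalancerArn' --output text)
# ALB_DNS_NAME=$(aws elbv2 describe-load-balancers --names $ALB_NAME \
  --query 'LoadBalancers[0].DNSName' --output text)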
Step6. Create a task definition for Flask cluster
# aws ecs register-task-definition --network-mode bridge \
  --family $TASK_NAME \
  --volumes "name=test-flask,host={sourcePath=/tmp/container_log}" \
  --container-definitions "[{\"name\":\"$TASK_NAME\",\"image\":\"$FLASK_DOCKER_IMAGE\", \
  \"cpu\":50,\"memory\":500,\"essential\":true, \
  \"portMappings\":[{\"hostPort\":8080,\"containerPort\":8080,\"protocol\":\"tcp\"}], \
  \"logConfiguration\":{\"logDriver\":\"awslogs\", \
  \"options\":{\"awslogs-group\":\"/ecs/foxutech-flask\",\"awslogs-region\":\"ap-southeast-1\", \
  \"awslogs-stream-prefix\":\"flask\"}}, \
  \"mountPoints\":[{\"sourceVolume\":\"test-flask\",\"containerPath\":\"/var/log\",\"readOnly\":false}]}]"
Note:
- volumes name : the volume name; host={sourcePath=$YOUR_INSTANCE_PATH} is the path on the instance
- container-definitions [{"name":"$TASK_NAME"}] : the container name must be the same as the --family name
- logConfiguration : the awslogs driver sends the container's stdout to CloudWatch Logs (the log group must exist; see the sketch after this list)
- mountPoints : sourceVolume must be the same as the volume name
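The awslogs driver does not normally create the log group for you, so if /ecs/foxutech-flask (and, later, /ecs/foxutech-nginx for the Nginx task) does not exist yet, create it up front:
# aws logs create-log-group --log-group-name /ecs/foxutech-flask
# aws logs create-log-group --log-group-name /ecs/foxutech-nginx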
Step7. Create a Service for Back-End service cluster
# aws ecs create-service --service-name $SERVICE_NAME --cluster $CLUSTER_FLASK \
  --task-definition $FLASK_TASK_DEFINITION --desired-count 1 \
  --load-balancers "[{\"targetGroupArn\":\"$TARGET_GROUP_ARN\", \
  \"containerName\":\"$CONTAINER_NAME\",\"containerPort\":8080}]" \
  --role ecsServiceRole
Note:
- load-balancers : use the target group of the ALB created in Step5; containerName must be the same as the container name you defined in Step6
- role ecsServiceRole : use ecsServiceRole to let the cluster access EC2, S3 and CloudWatch (this can be modified in IAM settings)
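To check that the service has placed its task, describe-services shows the desired and running counts (the --query filter is optional):
# aws ecs describe-services --cluster $CLUSTER_FLASK --services $SERVICE_NAME \
  --query 'services[0].{status:status,desired:desiredCount,running:runningCount}'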
We have now created a Flask service in ECS; it can be checked with a GET request.
# wget http://ALB-DNS-NAME:8080
Now we create a Front-End Service (using Nginx)
Step8. Create a cluster for Nginx, just like the Flask cluster in Step1
# aws ecs create-cluster --cluster-name $CLUSTER_NGINX
Step9. Create an instance for the Nginx cluster, just like Step2
- Create user-data.txt to install ecs-init package
user-data.txt:
#!/bin/bash
sudo yum update -y
sudo yum install -y ecs-init
sudo service docker start
sudo echo ECS_CLUSTER=$CLUSTER_NGINX >> /etc/ecs/ecs.config
sudo start ecs
- Launch an Instance and use ecsInstanceRole as well as user-data.txt
# aws ec2 run-instances --image-id ami-7f43f307 --count 1 \
  --instance-type t2.micro --key-name foxutech --user-data file://user-data.txt \
  --subnet-id $SUBNET --security-group-ids $SECURITY_GROUP \
  --iam-instance-profile Arn=arn:aws:iam::$ACCOUNT:instance-profile/ecsInstanceRole \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=ECS-Instance-CLUSTER_NGINX}]'
Note:
- image-id ami-7f43f307 : Amazon Linux AMI
- iam-instance-profile Arn : use ecsInstanceRole so the instance can register with the cluster; create this role if it doesn't exist (see the sketch in Step2)
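Once the instance is up, you can confirm the ecs-agent registered it with the cluster:
# aws ecs list-container-instances --cluster $CLUSTER_NGINX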
Step10. Create a Classic Load Balancer (ELB) for the Nginx cluster
# aws elb create-load-balancer --load-balancer-name $ELB_NAME \
  --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
  --subnets $SUBNET_A $SUBNET_B --security-groups $SECURITY_GROUP
Note:
With a Classic ELB, we don't have to specify a target group.
Step11. Register instance to ELB
# aws elb register-instances-with-load-balancer \
  --load-balancer-name $ELB_NAME --instances $INSTANCE_ID_NGINX
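To check that the ELB sees the instance (it will typically report OutOfService until the Nginx container is running and passing health checks):
# aws elb describe-instance-health --load-balancer-name $ELB_NAME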
Step12. Build a new version of the Nginx image
We need to point the proxy_pass setting in the Nginx config to the Back-End service's ALB DNS name.
Dockerfile:
FROM $ACCOUNT.dkr.ecr.ap-southeast-1.amazonaws.com/foxutech-nginx:latest
RUN sed -i '52c\ proxy_pass http://$ALB_DNS_NAME:8080;' /etc/nginx/nginx.conf
EXPOSE 80
CMD (tail -F /var/log/nginx/access.log &) && exec nginx -g "daemon off;"
Save the Dockerfile, build a new version, then push it to ECR:
# docker build -t foxutech-nginx:latest .
# docker tag foxutech-nginx:latest $ACCOUNT.dkr.ecr.ap-southeast-1.amazonaws.com/foxutech-nginx:latest
# docker push $ACCOUNT.dkr.ecr.ap-southeast-1.amazonaws.com/foxutech-nginx:latest
Where;
- We should use exec nginx -g "daemon off;" to start Nginx, or the Nginx process will crash when ECS starts the container.
- Run aws ecr get-login to log in first if pushing the image to ECR fails (a sketch follows).
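A minimal login sketch, assuming AWS CLI v1 with the get-login subcommand; on newer CLI versions, aws ecr get-login-password piped into docker login does the same job:
# $(aws ecr get-login --no-include-email --region ap-southeast-1)
# aws ecr get-login-password --region ap-southeast-1 | docker login --username AWS \
  --password-stdin $ACCOUNT.dkr.ecr.ap-southeast-1.amazonaws.com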
Step13. Create a task definition for Nginx Service
# aws ecs register-task-definition --network-mode host \
  --family $TASK_NAME \
  --volumes "name=test-nginx,host={sourcePath=/tmp/container_tmp}" \
  --container-definitions "[{\"name\":\"task-nginx\",\"image\":\"$NGINX_DOCKER_IMAGE\",\"cpu\":50, \
  \"memory\":500,\"essential\":true, \
  \"portMappings\":[{\"hostPort\":80,\"containerPort\":80,\"protocol\":\"tcp\"}], \
  \"logConfiguration\":{\"logDriver\":\"awslogs\",\"options\": \
  {\"awslogs-group\":\"/ecs/foxutech-nginx\",\"awslogs-region\":\"ap-southeast-1\", \
  \"awslogs-stream-prefix\":\"nginx\"}}, \
  \"mountPoints\":[{\"sourceVolume\":\"test-nginx\", \
  \"containerPath\":\"/tmp\",\"readOnly\":false}]}]"
Where;
- network-mode host : use host mode so that each instance runs only one Nginx container, because a Classic ELB listener cannot point at the random host ports ECS assigns, the way an ALB target group can
- volumes name : the volume name; host={sourcePath=$YOUR_INSTANCE_PATH} is the path on the instance
- container-definitions [{"name":"$TASK_NAME"}] : the container name must be the same as the --family name
- logConfiguration : the awslogs driver sends the container's stdout to CloudWatch Logs
- mountPoints : sourceVolume must be the same as the volume name
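To double-check that the JSON registered the way you intended (name, image, port mappings), describe-task-definition prints the stored revision; the --query filter is optional:
# aws ecs describe-task-definition --task-definition $TASK_NAME \
  --query 'taskDefinition.containerDefinitions[0].{name:name,image:image,portMappings:portMappings}'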
Step14. Create an Nginx Service for the Nginx cluster, just like Step7
# aws ecs create-service --service-name service-nginx --cluster $CLUSTER_NGINX \
  --task-definition $NGINX_TASK_DEFINITION --desired-count 1 \
  --load-balancers "[{\"loadBalancerName\":\"$ELB_NAME\",\"containerName\":\"$CONTAINER_NAME\", \
  \"containerPort\":80}]" \
  --role ecsServiceRole
Where;
- load-balancers : use the ELB created in Step10; containerName must be the same as the container name in the task definition you created in Step13
- role ecsServiceRole : use ecsServiceRole to let the cluster access EC2, S3 and CloudWatch (this can be modified in IAM settings)
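As a final check, make sure the service has a running task before testing through the ELB:
# aws ecs list-tasks --cluster $CLUSTER_NGINX --service-name service-nginx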
Now we have created the Nginx cluster and the Flask cluster successfully; they are connected through Nginx's proxy_pass setting.
We can verify via a GET request:
# wget http://ELB-DNS-NAME
We will see the Flask application's output served through Nginx's proxy.