In the previous post, we saw what HAProxy is and how to install and configure it. In this post, we will see how to run HAProxy in a Docker container. Docker is a simple deployment solution for almost any kind of application; we can easily deploy or redeploy at any time, and it is hard to avoid Docker in today's fast-moving infrastructure. Find a detailed explanation of HAProxy in the previous post.
There are multiple ways to run HAProxy on Docker: docker-engine, docker-compose, or docker-swarm. We will discuss them one by one.
Run using Docker-Engine
To run HAProxy in a Docker container, we can use the existing official Docker image, or we can write our own based on our environment and requirements. Here we will use the official image directly, and also use it as a base image to build our own.
To run HAProxy with the official image:
# docker run -d --name haproxy haproxy:latest ##--> latest version is 1.9.0
Create your own Dockerfile based on the haproxy image:
# vim Dockerfile
FROM haproxy:latest
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
Build the Docker image:
# docker build -t haproxy-docker .
Validate/test the HAProxy configuration:
# docker run -it --rm --name haproxy-syntax-check haproxy-docker haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg
Run HAProxy with the new image:
# docker run -d --name haproxy haproxy-docker
You may need to publish the ports HAProxy is listening on to the host by specifying the -p option; for example, -p 8080:80 maps port 8080 on the host to port 80 in the container.
Instead of building a Docker image just to add configuration, you can use a bind mount:
# docker run -d --name haproxy -v /path/to/etc/haproxy:/usr/local/etc/haproxy:ro haproxy:latest
Make sure you have placed all the configuration files and required template files in /path/to/etc/haproxy, for example haproxy.cfg, 404.http, 500.http, etc.
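As an illustration, a minimal haproxy.cfg for this bind mount might look like the sketch below. The frontend/backend names and server addresses are assumptions for illustration, not taken from the previous post; on HAProxy versions that support it, 404.http can be wired up with an additional errorfile line.

```
global
    maxconn 256

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s
    # custom error page mounted alongside haproxy.cfg
    errorfile 500 /usr/local/etc/haproxy/500.http

frontend http-in
    bind *:80
    default_backend app-servers

backend app-servers
    balance roundrobin
    # hypothetical backend addresses; replace with your application servers
    server app1 172.17.0.2:80 check
    server app2 172.17.0.3:80 check
```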
If you used a bind mount for the config and have edited your haproxy.cfg file, you can use HAProxy’s graceful reload feature by sending a SIGHUP to the container:
# docker kill -s HUP haproxy
Run using Docker-Compose
In this section we will see how to run HAProxy using docker-compose. Along with HAProxy, we will also run our application. Refer to the docker-compose file below.
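A docker-compose file matching the setup described below might look like this sketch; the application image name and tag and the overlay network driver are assumptions for illustration.

```yaml
version: "3.3"

services:
  my-app:
    image: my-app:v1            # hypothetical application image; listens on port 80
    networks:
      - test
    deploy:
      replicas: 6               # 6 application containers
      update_config:
        parallelism: 3          # update three containers at a time

  haproxy:
    image: haproxy-docker       # the image built above, configured with "leastconn"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    depends_on:
      - my-app
    ports:
      - "80:80"
    networks:
      - test
    deploy:
      placement:
        constraints:
          - node.role == manager   # always place HAProxy on the manager node

networks:
  test:
    driver: overlay             # assumption: overlay network for swarm mode
```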
Here our application runs as a service named “my-app” on port 80, with 6 replicas (6 application containers). HAProxy starts with the “leastconn” algorithm instead of the default “roundrobin” (refer to the algorithms in the previous post), and it runs only after the application containers start, since it “depends_on” “my-app”. It also shares the docker.sock file (the volumes field); this is the file the HAProxy container inspects to learn about the new and existing containers in its network. We expose port 80 and put the container in the “test” network, and in the deploy settings we tell swarm to always place this container on the manager node. We also create a separate network for this setup.
Run using Docker-Swarm
To create the Docker swarm, you can run:
# docker swarm init
It adds the machine to the Docker swarm. Here we have only one node; if you have many, you can follow this article to join them.
Since we already have the docker-compose file, we can use it to deploy the application and HAProxy. In Docker swarm, the networks, services, and all the containers together are called a “Stack”. To create a stack, we use the “docker stack” command, like below:
# docker stack deploy --compose-file=docker-compose.yml test
Here, “test” is the stack name; you can use your own. Once done, run the following command to check the status:
# docker service ls
To test, you can use the following command:
# curl http://localhost
You will see the output keep changing, which means the requests are being distributed across the containers.
Another advantage of Docker swarm is zero-downtime updates: for the next release you do not need a downtime or maintenance window. For example, suppose we update our application from v1 to v2 and now want to deploy it. To update, just run the following command:
# docker service update --image my-app:v2 test_my-app
Here, “test” is our stack and “my-app” is our service name. Swarm updates the containers three at a time to the second version of our app (three, because we wrote parallelism: 3 in our docker-compose.yml file). To scale from 6 to 12 replicas, run the following command:
# docker service scale test_my-app=12