How to configure nginx to serve as a load balancer for gRPC

According to "Announcing gRPC Support in NGINX", native gRPC support arrived with nginx 1.13.10, and nginx now seems able to handle gRPC streams much as it handles HTTP.

NGINX can already proxy gRPC TCP connections. With this new capability, you can terminate, inspect, and route gRPC method calls. You can use it to:

  • Publish a gRPC service, and then use NGINX to apply HTTP/2 TLS encryption, rate limits, IP‑based access control lists, and logging. You can operate the service using unencrypted HTTP/2 (h2c cleartext) or wrap TLS encryption and authentication around the service.
  • Publish multiple gRPC services through a single endpoint, using NGINX to inspect and route calls to each internal service. You can even use the same endpoint for other HTTPS and HTTP/2 services, such as websites and REST‑based APIs.
  • Load balance a cluster of gRPC services, using Round Robin, Least Connections, or other methods to distribute calls across the cluster. You can then scale your gRPC‑based service when you need additional capacity.

The newly implemented grpc_pass directive performs reverse proxying to grpc:// and grpcs:// backends. Using this, you can:

  • Have nginx do the TLS termination
  • Load balance flexibly across multiple backends
  • Host multiple gRPC services on the same endpoint and have nginx route calls to each
What is gRPC?

gRPC is a remote procedure call protocol, used for communication between client and server applications. It is designed to be compact (space‑efficient) and portable across multiple languages, and it supports both request‑response and streaming interactions. The protocol is gaining popularity, including in service mesh implementations, because of its widespread language support and simple user‑facing design.

gRPC is transported over HTTP/2, either in cleartext or TLS‑encrypted. A gRPC call is implemented as an HTTP POST request with an efficiently encoded body (protocol buffers are the standard encoding). gRPC responses use a similarly encoded body and use HTTP trailers to send the status code at the end of the response.
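To make this concrete, here is roughly what a unary call looks like from Python, using the stock helloworld example in the grpc/grpc repository (the helloworld_pb2 modules are generated from helloworld.proto by grpcio-tools):

import grpc
# Modules generated from helloworld.proto by grpcio-tools
import helloworld_pb2
import helloworld_pb2_grpc

# One HTTP/2 connection; each call becomes a POST to /helloworld.Greeter/SayHello
channel = grpc.insecure_channel('localhost:50051')
stub = helloworld_pb2_grpc.GreeterStub(channel)
response = stub.SayHello(helloworld_pb2.HelloRequest(name='world'))
print(response.message)  # -> Hello, world!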


By design, the gRPC protocol cannot be transported over HTTP/1.x. The gRPC protocol mandates HTTP/2 in order to take advantage of the multiplexing and streaming features of an HTTP/2 connection.

Preparation

This time I launched an Ubuntu 16.04 instance on EC2. Install the necessary packages, fetch a HEAD tar.gz from https://hg.nginx.org/ , and build.

In particular, if you pass no options to configure, everything is installed under /usr/local/nginx.

# apt install -y build-essential libssl-dev libpcre3-dev
# curl -O https://hg.nginx.org/nginx/archive/tip.tar.gz
# tar xvf tip.tar.gz
# cd nginx-c2a0a838c40f
# ./auto/configure --with-http_ssl_module --with-http_v2_module
# make
# make install
# Start nginx in the foreground
sudo /usr/local/nginx/sbin/nginx -g 'daemon off;'

Also prepare an experimental gRPC server and client. This time I will use the Python version from the examples directory of grpc/grpc. It is the usual Greeter example.

# apt install -y python3-pip
# pip3 install grpcio-tools
# git clone https://github.com/grpc/grpc.git
----- Try running it
# cd ~/grpc/examples/python/helloworld
# python3 greeter_server.py &
# python3 greeter_client.py
2018/04/11 14:46:12 Greeting: Hello world
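For reference, the heart of the stock greeter_server.py looks roughly like this (paraphrased from the grpc/grpc example; it listens in plaintext on port 50051, which is where the nginx configs below forward to):

from concurrent import futures
import grpc
import helloworld_pb2
import helloworld_pb2_grpc

class Greeter(helloworld_pb2_grpc.GreeterServicer):
    def SayHello(self, request, context):
        return helloworld_pb2.HelloReply(message='Hello, %s!' % request.name)

server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
helloworld_pb2_grpc.add_GreeterServicer_to_server(Greeter(), server)
server.add_insecure_port('[::]:50051')  # plaintext (h2c) HTTP/2
server.start()
server.wait_for_termination()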
A plain reverse proxy

http {
    server {
        listen 80 http2;  # http2 is required
        location / {
            grpc_pass grpc://localhost:50051;
        }
    }
}

Rewrite greeter_client.py to point at localhost:80, and run it.
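In the stock example the channel target is hard-coded, so the change is a single line; a minimal sketch:

# In greeter_client.py: point the channel at nginx instead of the backend
channel = grpc.insecure_channel('localhost:80')  # was 'localhost:50051'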

# python3 greeter_client.py
Hello, you!

# tail /usr/local/nginx/logs/access.log

Reverse proxying really is that easy.

TLS termination

server {
    listen 443 ssl http2;

    ssl_certificate ssl/cert.pem;
    ssl_certificate_key ssl/key.pem;
}

Preparing a certificate was too much bother this time, so I did not try it, but this alone should let nginx accept grpcs:// connections. It seems convenient when you want to speak grpcs:// with external clients while the internal microservices stay in plaintext.
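For reference, the client side of such a setup would just open a secure channel; a minimal sketch, assuming nginx serves a certificate valid for the hypothetical hostname api.example.com:

import grpc
import helloworld_pb2
import helloworld_pb2_grpc

# TLS to nginx; nginx forwards to the plaintext gRPC backends
creds = grpc.ssl_channel_credentials()  # default root certificates
channel = grpc.secure_channel('api.example.com:443', creds)
stub = helloworld_pb2_grpc.GreeterStub(channel)
print(stub.SayHello(helloworld_pb2.HelloRequest(name='you')).message)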

Routing to multiple gRPC services + load balancing + coexistence with a REST API

Set up gRPC servers that provide another service ("DaininkiService") and try load balancing across them.

upstream daininki_service_servers {
    # Servers providing helloworld.Daininki
    server localhost:50052;
    server localhost:50053;
}
server {
    listen 80 http2;
    location /helloworld.Greeter {
        grpc_pass grpc://localhost:50051;
    }
    location /helloworld.Daininki {
        # Load balance just like HTTP
        grpc_pass grpc://daininki_service_servers;
        error_page 502 = /error502grpc;
    }
    # If no backend is available, return the error in gRPC format
    location = /error502grpc {
        internal;
        default_type application/grpc;
        add_header grpc-status 14;
        add_header grpc-message "unavailable";
        return 204;
    }
    location / {
        proxy_pass http://rest_api_server;
    }
}

Because the Daininki service is very popular ("daininki" means exactly that in Japanese), it runs on two servers, and you can see nginx load balancing between them. Also, if none of the backends can respond, nginx itself returns a response in application/grpc format.
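On the client side, that synthesized response surfaces as an ordinary gRPC error (status 14 is UNAVAILABLE in the gRPC status-code table). A sketch of catching it, reusing the Greeter stub since the Daininki stubs are not shown here:

import grpc
import helloworld_pb2
import helloworld_pb2_grpc

channel = grpc.insecure_channel('localhost:80')
stub = helloworld_pb2_grpc.GreeterStub(channel)
try:
    print(stub.SayHello(helloworld_pb2.HelloRequest(name='you')).message)
except grpc.RpcError as e:
    # With all backends down, nginx's /error502grpc location yields this
    print(e.code(), e.details())  # StatusCode.UNAVAILABLE unavailable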

The announcement page says that non-gRPC services such as a REST API can share the same endpoint, but when I wrote it like the location / block in this example it did not work well for me… If it does work, it should be useful when extending an application that already provides a REST API so that it can also speak gRPC.
