Setting up Distributed Tracing in Kubernetes with OpenTracing, Jaeger, and Ingress-NGINX

Fernando Diaz
8 min read · Apr 27, 2021
Evidence Board helping detectives solve mysteries. Bigfoot is missing.

In an age where companies like Netflix are running over 500 microservices at once, it is important to be able to quickly find out exactly where a failure or a drop in performance is coming from. It can be like finding a slightly discolored piece of hay in a haystack, unless something like Distributed Tracing is in place.

Distributed Tracing is a way of profiling and monitoring applications. It can pinpoint the location of failures and slowdowns, helping you debug and optimize your code.

This post is a tutorial on getting Distributed Tracing working all the way from ingress-nginx down to your microservice functions. We will be using Meow-Micro 🐈 as the tutorial application, which was created specifically for this blog post.

Prerequisites

This guide assumes that you can read code written in Go, know how to use ingress-nginx, and understand how basic Kubernetes objects such as Services and Deployments work.

If you want a refresher, you can check out the following resources:

Docker Desktop and Installing Ingress-Nginx

Now let’s install Docker Desktop, which is used for building and sharing containerized applications and microservices, as well as for running Kubernetes locally. Once you have installed Docker Desktop, go through the following steps to enable Kubernetes.

1. Click on the Preferences icon

2. Select the Kubernetes tab, check the Enable Kubernetes box, and click the Apply & Restart button

3. Press the Install button and wait for the installation to complete

4. In the Docker menu bar, select Kubernetes

5. In the context list, select docker-desktop

6. Open the terminal and confirm you are using the correct cluster

$ kubectl cluster-info
Kubernetes master is running at https://kubernetes.docker.internal:6443
KubeDNS is running at https://kubernetes.docker.internal:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

Installing Ingress-Nginx

1. Now we can install Ingress-Nginx simply by running the following command, as seen in the Ingress-Nginx Getting Started Guide

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.45.0/deploy/static/provider/cloud/deploy.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created

2. We can confirm the installation by checking if the ingress-nginx-controller pod is ready and running successfully

$ kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS
ingress-nginx-admission-create-52jsl        0/1     Completed   0
ingress-nginx-admission-patch-78fkc         0/1     Completed   0
ingress-nginx-controller-6f5454cbfb-qsfnn   1/1     Running     0

Note: If you run into memory or CPU issues, you may need to increase the resources allocated to Docker Desktop

Installing Jaeger and configuring the Ingress Controller

Jaeger is a distributed tracing platform, which we will use to monitor our microservices. Now let’s install Jaeger and enable tracing at the ingress-controller level.

1. Start by cloning meow-micro, the project which we will be using for this tutorial

$ git clone https://github.com/diazjf/meow-micro.git
Cloning into 'meow-micro'...
remote: Enumerating objects: 105, done.
...
$ cd meow-micro

2. We then need to install Jaeger. I created an updated YAML for jaeger-all-in-one, which can be applied as follows

$ kubectl apply -f jaeger/jaeger-all-in-one.yaml
deployment.apps/jaeger created
service/jaeger-query created
service/jaeger-collector created
service/jaeger-agent created
service/zipkin created

3. Confirm that Jaeger is running and ready

$ kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
jaeger-6f6b5d8689-8gccp   1/1     Running   0          17s

4. Now, to enable communication between ingress-nginx and Jaeger, we need to add the enable-opentracing and jaeger-collector-host keys to the ingress-nginx-controller ConfigMap.

This tells the ingress controller to send traces to Jaeger at the given endpoint, which is exposed by the jaeger-agent service.

$ echo '
apiVersion: v1
kind: ConfigMap
data:
  enable-opentracing: "true"
  jaeger-collector-host: jaeger-agent.default.svc.cluster.local
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
' | kubectl replace -f -
configmap/ingress-nginx-controller replaced
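
If you would rather not pipe a whole ConfigMap through kubectl replace, the same change can be made with a merge patch. This is standard kubectl behavior, nothing specific to this tutorial:

$ kubectl patch configmap ingress-nginx-controller -n ingress-nginx --type merge \
    -p '{"data":{"enable-opentracing":"true","jaeger-collector-host":"jaeger-agent.default.svc.cluster.local"}}'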

5. Confirm that the Ingress Controller has OpenTracing enabled and is correctly set up

$ kubectl get pods -n ingress-nginx | grep controller
ingress-nginx-controller-6f5454cbfb-qptxt   1/1     Running     0          8m56s

$ kubectl exec -it ingress-nginx-controller-6f5454cbfb-qptxt -n ingress-nginx -- bash -c "cat nginx.conf | grep ngx_http_opentracing_module.so"
load_module /etc/nginx/modules/ngx_http_opentracing_module.so;

$ kubectl exec -it ingress-nginx-controller-6f5454cbfb-qptxt -n ingress-nginx -- bash -c "cat nginx.conf | grep jaeger"
opentracing_load_tracer /usr/local/lib/libjaegertracing_plugin.so /etc/nginx/opentracing.json;

$ kubectl exec -it ingress-nginx-controller-6f5454cbfb-qptxt -n ingress-nginx -- bash -c "cat /etc/nginx/opentracing.json"
{
  "service_name": "nginx",
  "propagation_format": "jaeger",
  "sampler": {
    "type": "const",
    "param": 1,
    "samplingServerURL": "http://127.0.0.1:5778/sampling"
  },
  "reporter": {
    "endpoint": "",
    "localAgentHostPort": "jaeger-agent.default.svc.cluster.local:6831"
  },
  "headers": {
    "TraceContextHeaderName": "",
    "jaegerDebugHeader": "",
    "jaegerBaggageHeader": "",
    "traceBaggageHeaderPrefix": ""
  }
}

Now we have both Jaeger and ingress-nginx running on our cluster. We can now deploy our microservices!

Deploy Microservices with Instrumentation

Instrumentation is added to our microservices to extract important information about them. It is required for us to be able to perform traces and map out specific functions.

In this section we will deploy two different microservices from our sample project meow-micro 🐈, which contain the OpenTracing instrumentation. The meow-client microservice accepts a REST call and sends the information to the meow-server microservice via gRPC, as sketched below.
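
The exact wiring lives in the repository, but to give an idea of how the trace context can cross that REST-to-gRPC boundary, here is a minimal, hypothetical sketch using the otgrpc interceptor from grpc-ecosystem (the service address and port are assumptions, and meow-micro may propagate the span differently; check the repo):

import (
	"google.golang.org/grpc"

	"github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc"
	"github.com/opentracing/opentracing-go"
)

// dialServer opens a gRPC connection that injects the active span context
// into every outgoing request, so meow-server can continue the trace that
// meow-client started.
func dialServer(tracer opentracing.Tracer) (*grpc.ClientConn, error) {
	return grpc.Dial(
		"meow-server:50051", // hypothetical address, not the chart's real value
		grpc.WithInsecure(),
		grpc.WithUnaryInterceptor(otgrpc.OpenTracingClientInterceptor(tracer)),
	)
}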

For a refresher on REST and gRPC with Go, see:

Photo by Adam Solomon on Unsplash

Now let’s go over some of the files to which instrumentation was added.

tracing.go

Contains the information for configuring the Jaeger tracer. It grabs the configuration values from the environment, which are set in the helm templates for meow-client and meow-server.

cfg, err := config.FromEnv()
if err != nil {
	panic(fmt.Sprintf("Could not parse Jaeger env vars: %s", err.Error()))
}

tracer, closer, err := cfg.NewTracer()
if err != nil {
	panic(fmt.Sprintf("Could not initialize jaeger tracer: %s", err.Error()))
}
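
For reference, config.FromEnv comes from the jaeger-client-go library and reads JAEGER_* environment variables such as JAEGER_SERVICE_NAME, JAEGER_AGENT_HOST, JAEGER_AGENT_PORT, JAEGER_SAMPLER_TYPE, and JAEGER_SAMPLER_PARAM, which is why the helm templates only need to set environment variables. A hedged sketch of the imports and function signature wrapping the snippet above (the actual tracing.go may differ slightly):

package tracing

import (
	"fmt"
	"io"

	"github.com/opentracing/opentracing-go"
	"github.com/uber/jaeger-client-go/config"
)

// Init builds a Jaeger tracer from the JAEGER_* environment variables and
// returns it together with a closer that flushes buffered spans on shutdown.
func Init() (opentracing.Tracer, io.Closer) {
	cfg, err := config.FromEnv()
	if err != nil {
		panic(fmt.Sprintf("Could not parse Jaeger env vars: %s", err.Error()))
	}
	tracer, closer, err := cfg.NewTracer()
	if err != nil {
		panic(fmt.Sprintf("Could not initialize jaeger tracer: %s", err.Error()))
	}
	return tracer, closer
}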

client.go

In the client, we set up the service name for the trace (a data/execution path through the system, which can be thought of as a directed acyclic graph of spans) to be meow-client.

We also set up a span (a logical unit of work in Jaeger that has an operation name, a start time, and a duration) to start when the main function is called, as well as when the sleep function is called. This will let us know how long sleep took within main, since there is a span for each one.

// main function span
os.Setenv("JAEGER_SERVICE_NAME", "meow-client")
tracer, closer := tracing.Init()
defer closer.Close()

http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
	spanCtx, _ := tracer.Extract(opentracing.HTTPHeaders, opentracing.HTTPHeadersCarrier(r.Header))
	span := tracer.StartSpan("send-meow-communication", ext.RPCServerOption(spanCtx))
	defer span.Finish()
	...

// sleep function span
os.Setenv("JAEGER_SERVICE_NAME", "meow-client")
tracer, closer := tracing.Init()
defer closer.Close()

span := tracer.StartSpan("sleep")
defer span.Finish()
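
Timing is only part of what a span can carry. Whether or not meow-micro uses them, the standard opentracing-go API also lets you attach tags and structured logs to a span, which then show up in the Jaeger UI next to the timings. A small hedged sketch (the tag and log keys here are made up for illustration):

import "github.com/opentracing/opentracing-go"

// annotate attaches searchable metadata and a structured log entry to a span.
func annotate(span opentracing.Span, name string) {
	span.SetTag("meow.name", name)
	span.LogKV("event", "meow_sent", "payload_length", len(name))
}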

Installing the microservices

Now let’s go ahead and deploy the microservices into our cluster. Make sure you have Helm v3 installed; I installed mine using brew.

$ brew install helm
...
==> Downloading https://ghcr.io/v2/homebrew/core/helm/manifests/3.5.4
######################################################################## 100.0%
==> Downloading https://ghcr.io/v2/homebrew/core/helm/blobs/sha256:5dac5803c1ad2db3a91b0928fc472aaf80a4
==> Downloading from https://pkg-containers-az.githubusercontent.com/ghcr1/blobs/sha256:5dac5803c1ad2db
######################################################################## 100.0%
==> Pouring helm--3.5.4.big_sur.bottle.tar.gz
...
$ helm version
version.BuildInfo{Version:"v3.3.4", GitCommit:"a61ce5633af99708171414353ed49547cf05013d", GitTreeState:"clean", GoVersion:"go1.14.9"}

Now let’s deploy these microservices onto our Kubernetes Cluster using the Makefile.

# Build client and server from Dockerfile
$ make build
docker build -t meow-client:1.0 -f client/Dockerfile .
[+] Building 17.1s (10/10) FINISHED
...
docker build -t meow-server:1.0 -f server/Dockerfile .
[+] Building 0.5s (10/10) FINISHED
...
# Install Microservices into Kubernetes via Helm
$ make install
helm install -f helm/Values.yaml meow-micro ./helm
NAME: meow-micro
LAST DEPLOYED: Mon Apr 26 13:42:38 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

We can verify that everything is working by making sure the pods are running and ready.

$ kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
jaeger-6f6b5d8689-s7cln        1/1     Running   0          26m
meow-client-8b974778c-85896    1/1     Running   0          15m
meow-server-56f559db44-5mvgp   1/1     Running   0          15m
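
One thing worth noting before we send traffic: the helm chart needs an Ingress resource so that requests hitting ingress-nginx on the /meow path are routed to the meow-client service. Purely as an illustration, such an Ingress looks roughly like the following; the names, path, and port here are assumptions rather than the chart's exact values:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: meow-micro
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - http:
        paths:
          - path: /meow
            pathType: Prefix
            backend:
              service:
                name: meow-client
                port:
                  number: 8080   # assumed port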

Viewing the Trace

Now for the exciting part! Taking a look at a trace.

1. Let’s open up the Jaeger console by pointing our browser to http://localhost:8081. This is where we have the Jaeger UI running. You should see a cool interface as follows

2. Now let’s send a request to our application, so that the trace can be logged

$ curl http://localhost/meow -X POST -d '{"name": "Meow-Mixer"}'
200 - Meow sent: Meow-Mixer

3. Now refresh your browser and you should see the search options populated. We can select nginx as the service

4. Then press the Find Traces button

5. Now you should see all the traces for nginx. We can expand one by clicking anywhere in the trace item

6. You can see how long each function took, from the call to nginx (ingress) ➡️ main (client) ➡️ sleep (client). This tells us how long we spent in each span. We can see that a full 2 of the 3 seconds were spent in sleep, which tells us it is something worth examining.

This is a cool little example to get you started, but you see the real power of tracing when you add this to several functions across your microservices.

You can further expand an operation in the trace to obtain more information

7. Here we can see more info on the sleep function

Other Tracers and Configurations

Currently Zipkin, Jaeger, and Datadog are supported for Distributed Tracing with Ingress-Nginx.

There are several Distributed Tracing settings in Ingress-Nginx that can be further configured for the supported tracers. These can be found in the Ingress-Nginx documentation.
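
For example, the ingress-nginx ConfigMap we edited earlier can carry additional tracing settings. The key names below are taken from the ingress-nginx OpenTracing documentation, but double-check them against the controller version you are running:

data:
  enable-opentracing: "true"
  # Jaeger sampling: a const sampler with param 1 samples every request
  jaeger-sampler-type: "const"
  jaeger-sampler-param: "1"
  # Or point the controller at a different tracer entirely
  zipkin-collector-host: zipkin.default.svc.cluster.local
  datadog-collector-host: datadog-agent.default.svc.cluster.local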

Hope you enjoyed this blog post! For more information on Distributed Tracing, be sure to check out the Jaeger Documentation, which contains lots of cool resources.

Also be sure to check out my Medium profile for more content like this, and feel free to follow @awkwardferny on Twitter.


Fernando Diaz

Senior Technical Marketing @ GitLab 🦊, Developer 👨🏻‍💻, Alien 👽