Auto Instrumentation with OpenTelemetry Operator
The OpenTelemetry Operator makes it easy to capture telemetry signals, including logs, metrics, and spans, from a Kubernetes deployment. This guide provides an opinionated recommendation based on our experiences using this tool at Cardinal, and demonstrates how to add automatic instrumentation to Go and Java applications deployed in a Kubernetes environment.
What is OpenTelemetry?
OpenTelemetry is an open-source observability framework for cloud-native software. It provides a set of APIs, libraries, agents, and instrumentation to enable the collection of distributed traces and metrics from applications. OpenTelemetry aims to standardize the way telemetry data is collected and transmitted, making it easier to monitor and troubleshoot applications.
By using OpenTelemetry, developers can gain insights into the performance and behavior of their applications, identify bottlenecks, and improve overall reliability. The framework is vendor agnostic, supporting multiple programming languages and integrating with various backends, making it a versatile choice for observability in modern software systems.
What is a Kubernetes Operator?
Kubernetes Operators extend the Kubernetes API with custom resources and controllers, allowing users to manage complex applications and their components using Kubernetes-native tools and practices. In the context of OpenTelemetry instrumentation, the OpenTelemetry Operator allows pods to be modified with all the necessary components to enable telemetry without any code changes.
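Once the operator is installed (covered below), you can list the custom resources it adds to the Kubernetes API:
kubectl api-resources --api-group=opentelemetry.io
This shows resource kinds such as OpenTelemetryCollector and Instrumentation; the latter is what this guide configures.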
What does the OpenTelemetry Operator do?
The OpenTelemetry Operator provides two primary functions:
- Install OpenTelemetry Collectors with manually defined data-processing configurations.
- Install auto-instrumentation profiles and inject them into deployments that opt in.
This guide will focus solely on auto-instrumentation. Cardinal recommends using our operator to install and manage OpenTelemetry collectors within any Kubernetes cluster. Our operator simplifies the deployment process, ensures compatibility, and provides a robust telemetry infrastructure with minimal effort. By using Cardinal's operator, you benefit from:
- Seamless integration with Kubernetes environments.
- Pre-configured settings optimized for performance and reliability.
- Regular updates and maintenance from Cardinal's dedicated team.
- Enhanced support and documentation tailored to your needs.
For auto-instrumentation, we leverage the OpenTelemetry Operator to inject the necessary components into your deployments without requiring code changes.
Prerequisites
- A working OpenTelemetry collector to receive telemetry. To install a managed collector with minimal effort, use the CardinalHQ Operator.
- A working Kubernetes cluster. This can be a cloud-based instance or a local Kubernetes instance such as Talos or Kind.
- The minimum supported Kubernetes version is 1.25, although a more recent version is preferred.
- Permissions to install Custom Resource Definitions and other RBAC controls in that cluster; a quick check is shown below.
- Some telemetry configurations require additional permissions.
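These kubectl commands can confirm the cluster version and your permissions before you begin. This is a minimal sketch; the exact resources you need rights for depend on which features you enable.
# Confirm the server is running Kubernetes 1.25 or later
kubectl version
# Check whether you are allowed to install CRDs and cluster-wide RBAC
kubectl auth can-i create customresourcedefinitions
kubectl auth can-i create clusterroles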
Installing the OpenTelemetry Operator
This is a quick start guide. For more detailed information, see the opentelemetry-operator documentation.
With cert-manager
This installation method is recommended when cert-manager is present, which is a very common tool in Kubernetes clusters.
kubectl create ns opentelemetry-operator
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm install my-opentelemetry-operator open-telemetry/opentelemetry-operator \
  --namespace opentelemetry-operator \
  --set "manager.collectorImage.repository=otel/opentelemetry-collector-k8s"
Without cert-manager
If you do not have cert-manager, or do not wish to use it, you can install the operator using self-signed certificates. This method is generally less secure, and the certificates may not automatically refresh before expiration. This may be suitable for a test cluster, but not for production.
kubectl create ns opentelemetry-operator
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm install my-opentelemetry-operator open-telemetry/opentelemetry-operator \
  --namespace opentelemetry-operator \
  --set "manager.collectorImage.repository=otel/opentelemetry-collector-k8s" \
  --set admissionWebhooks.certManager.enabled=false \
  --set admissionWebhooks.autoGenerateCert.enabled=true
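With either installation method, verify that the operator is running before continuing:
kubectl -n opentelemetry-operator get pods
The operator pod should reach the Running state within a minute or so.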
Installing the Instrumentation configuration
Instrumentation settings can live in any namespace and be used by the applications in that namespace. It is also possible to install a "paved path" instrumentation configuration that all applications can use. This global configuration provides a single place to update the instrumentation settings for the entire cluster, helping keep instrumentation up to date.
We will install a global configuration that can be used as a starting point. This requires that you know the URL of the collector that you wish to send data to. If you do not yet have a collector, see Installing Managed Collectors with CardinalHQ's Operator.
Details:
- Send all telemetry to your collector.
- Use common propagators for tracing.
- Sample (at the source) 10% of traces. This configuration lets the service that starts a trace decide whether to record it, and all other services using this configuration will respect that decision. This helps ensure complete traces are captured without sending every trace.
- Pin the current version of each auto-instrumentation container image. The OpenTelemetry Operator's defaults for many of these are out of date.
Change the ENDPOINT variable to point to your agent collector. This is the "daemonset" collector in OpenTelemetry terms, and is called the "agent" in CardinalHQ's Managed Collector system.
ENDPOINT=http://collector-name-agent.collector.svc.cluster.local:4318
kubectl -n opentelemetry-operator apply -f - <<EOF
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: auto-instrumentation
spec:
  exporter:
    endpoint: "${ENDPOINT}"
  propagators:
    - tracecontext
    - baggage
  sampler:
    argument: "0.10"
    type: parentbased_traceidratio
  apacheHttpd:
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-apache-httpd:1.0.4
  dotnet:
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-dotnet:1.10.0
  go:
    image: ghcr.io/open-telemetry/opentelemetry-go-instrumentation/autoinstrumentation-go:v0.21.0
    resourceRequirements:
      requests:
        memory: "100Mi"
        cpu: "50m"
      limits:
        memory: "200Mi"
        cpu: "500m"
  java:
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-java:2.13.3
  nginx:
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-apache-httpd:1.0.4
  nodejs:
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-nodejs:0.56.0
  python:
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-python:0.51b0
EOF
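After applying the manifest, confirm that the Instrumentation resource was created:
kubectl -n opentelemetry-operator get instrumentation
You should see the auto-instrumentation resource listed.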
Adding instrumentation to Services
Generally, the only change required is to add an annotation to the pod template of a Deployment, DaemonSet, or StatefulSet.
Note: It is important that the annotation is applied to the pod template, and not to the Deployment, StatefulSet, or DaemonSet itself.
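If you prefer to add the annotation without editing manifests, a patch applied at the pod-template level works as well. This is a sketch; your-app-name is a placeholder, and the annotation shown is the Java one described below:
# Note the nesting: spec.template.metadata.annotations, not the top-level metadata
kubectl patch deployment your-app-name --type merge \
  -p '{"spec":{"template":{"metadata":{"annotations":{"instrumentation.opentelemetry.io/inject-java":"opentelemetry-operator/auto-instrumentation"}}}}}'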
Instrumentation of Java
A single annotation enables instrumentation of Java. It causes a container to be injected into the pod as an init container that runs before your container, pre-configuring your container to use OpenTelemetry instrumentation based on the components your Java application uses.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-java-app-name
  labels:
    app.kubernetes.io/name: your-java-app-name
    app.kubernetes.io/component: app-component-name
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: your-java-app-name
      app.kubernetes.io/component: app-component-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: your-java-app-name
        app.kubernetes.io/component: app-component-name
      annotations:
        instrumentation.opentelemetry.io/inject-java: opentelemetry-operator/auto-instrumentation
    ...
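Existing pods are not modified in place; the operator's webhook injects the init container when pods are created. After a rollout, you can confirm the injection. The init container name mentioned below is what current operator versions use, but treat it as an assumption:
kubectl rollout restart deployment your-java-app-name
# After the new pods start, list their init containers
kubectl get pod -l app.kubernetes.io/name=your-java-app-name \
  -o jsonpath='{.items[0].spec.initContainers[*].name}'
You should see an init container named something like opentelemetry-auto-instrumentation alongside any init containers of your own.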
For more information, see the opentelemetry-java-instrumentation documentation.
Instrumentation of Go
Go uses a different approach and requires a privileged container. This may be restricted in some Kubernetes environments; if so, manual instrumentation is the best practice. The instrumentation container must run in privileged mode to monitor your Go application, but your application containers can still run in a more restricted manner.
Additionally, the shareProcessNamespace: true setting must be included so that all containers in the pod share a single process namespace, allowing one container to monitor the other.
The two required annotations are shown below, along with the additional permission settings:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-go-app-name
  labels:
    app.kubernetes.io/name: your-go-app-name
    app.kubernetes.io/component: app-component-name
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: your-go-app-name
      app.kubernetes.io/component: app-component-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: your-go-app-name
        app.kubernetes.io/component: app-component-name
      annotations:
        instrumentation.opentelemetry.io/inject-go: opentelemetry-operator/auto-instrumentation
        instrumentation.opentelemetry.io/otel-go-auto-target-exe: /app/bin/your-go-app-binary
    spec:
      shareProcessNamespace: true
      securityContext:
        runAsNonRoot: false
      ...
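As with Java, pods must be re-created before the webhook can act. The Go instrumentation is injected as an additional container rather than an init container, so list the pod's containers to confirm:
kubectl rollout restart deployment your-go-app-name
# After the new pods start, list their containers
kubectl get pod -l app.kubernetes.io/name=your-go-app-name \
  -o jsonpath='{.items[0].spec.containers[*].name}'
Your application container should appear alongside the injected instrumentation container.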
For more information, see the opentelemetry-go-instrumentation documentation.