PostgreSQL Metrics with postgres_exporter

This example adds prometheuscommunity/postgres-exporter as a sidecar to a PostgreSQL StatefulSet. The exporter connects to PostgreSQL over localhost and exposes Prometheus metrics on port 9187; the collector scrapes that port using Kubernetes pod discovery.

Use a separate exporter Deployment for managed databases such as RDS or Cloud SQL.
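
A minimal sketch of that pattern, assuming a placeholder <managed-endpoint> hostname and reusing the postgres-exporter Secret created in step 1 below:

# Sketch: standalone exporter for a managed instance.
# <managed-endpoint> is a placeholder for your RDS/Cloud SQL hostname.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-exporter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-exporter
  template:
    metadata:
      labels:
        app: postgres-exporter
    spec:
      containers:
        - name: postgres-exporter
          image: quay.io/prometheuscommunity/postgres-exporter:v0.19.1
          env:
            - name: DATA_SOURCE_URI
              value: "<managed-endpoint>:5432/postgres?sslmode=require"
            - name: DATA_SOURCE_USER
              value: "postgres_exporter"
            - name: DATA_SOURCE_PASS
              valueFrom:
                secretKeyRef:
                  name: postgres-exporter
                  key: password
          ports:
            - name: metrics
              containerPort: 9187

With this pattern the pods carry app: postgres-exporter rather than app: postgres, so adjust the keep regex in the step 6 scrape job to match.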

Prerequisites

Kubernetes access: permission to edit the PostgreSQL StatefulSet and Service, create Secrets, and grant pod discovery RBAC.
OpenTelemetry Collector: an otelcol-contrib build or vendor distribution with the prometheus receiver.
PostgreSQL access: permission to create a role with pg_monitor membership.

Installation

1. Create the exporter role

Run as a PostgreSQL admin:

CREATE USER postgres_exporter WITH PASSWORD '<strong-password>';
GRANT pg_monitor TO postgres_exporter;
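
To confirm the membership before moving on, one option is to query it from the same admin session; pg_has_role should return t:

-- Returns t when postgres_exporter is a member of pg_monitor
SELECT pg_has_role('postgres_exporter', 'pg_monitor', 'member');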

Store the password in the PostgreSQL namespace:

kubectl -n <postgres-namespace> create secret generic postgres-exporter \
  --from-literal=password='<strong-password>'
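
If you want to confirm what was stored, kubectl can print the decoded value (assumes a shell with base64 on the PATH):

# Prints the stored password; handle with care
kubectl -n <postgres-namespace> get secret postgres-exporter \
  -o jsonpath='{.data.password}' | base64 -d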

2. Add the sidecar

Add postgres-exporter to the PostgreSQL pod template. The port name, metrics, is used by the collector scrape config.

# Excerpt from the PostgreSQL StatefulSet pod spec
- name: postgres-exporter
  image: quay.io/prometheuscommunity/postgres-exporter:v0.19.1
  securityContext:
    runAsNonRoot: true
    runAsUser: 65534
    runAsGroup: 65534
    allowPrivilegeEscalation: false
    readOnlyRootFilesystem: true
    capabilities:
      drop: [ALL]
    seccompProfile:
      type: RuntimeDefault
  env:
    - name: DATA_SOURCE_URI
      value: "localhost:5432/postgres?sslmode=disable"
    - name: DATA_SOURCE_USER
      value: "postgres_exporter"
    - name: DATA_SOURCE_PASS
      valueFrom:
        secretKeyRef:
          name: postgres-exporter
          key: password
  ports:
    - name: metrics
      containerPort: 9187
      protocol: TCP

Keep sslmode=disable for this loopback connection inside the pod. Use TLS (sslmode=require or stricter) for any connection that crosses the network.
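
As a sketch of what a networked connection might look like instead, where the verify-full mode, hostname, and CA path are assumptions to adapt to your setup (postgres_exporter passes these libpq-style parameters through to the driver):

# Assumed host <db-host> and CA path; adjust for your environment
- name: DATA_SOURCE_URI
  value: "<db-host>:5432/postgres?sslmode=verify-full&sslrootcert=/etc/ssl/certs/db-ca.crt"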

The securityContext above lets the sidecar run under the restricted PodSecurity standard (OpenShift, GKE Autopilot, or any cluster with a baseline/restricted PodSecurity admission policy). postgres_exporter does not need to write to its filesystem, so readOnlyRootFilesystem: true is safe.

3. Expose the metrics port

Add the metrics port to the PostgreSQL Service so you can test the endpoint by service name. The collector config below still scrapes pod IPs.

apiVersion: v1
kind: Service
metadata:
  name: postgres-headless
spec:
  type: ClusterIP
  clusterIP: None
  ports:
    - name: postgresql
      port: 5432
      targetPort: postgresql
      protocol: TCP
    - name: metrics
      port: 9187
      targetPort: metrics
      protocol: TCP
  selector:
    app: postgres

4. Verify the exporter

Apply the StatefulSet and Service changes:

kubectl -n <postgres-namespace> apply -f statefulset.yaml -f headless-service.yaml
kubectl -n <postgres-namespace> rollout status statefulset/postgres

Check /metrics from inside the cluster:

kubectl -n <postgres-namespace> run curl --rm -it --restart=Never \
  --image=curlimages/curl -- \
  curl -s http://postgres-headless:9187/metrics | head

pg_up 1 means the exporter can query PostgreSQL. pg_up 0 means the exporter is running but cannot authenticate or connect.
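
To check the flag directly instead of eyeballing the head of the output, the same one-off pod works with a grep:

kubectl -n <postgres-namespace> run curl --rm -it --restart=Never \
  --image=curlimages/curl -- \
  curl -s http://postgres-headless:9187/metrics | grep '^pg_up'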

5. Grant pod discovery access

The collector ServiceAccount needs pod discovery access in the PostgreSQL namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: collector-pod-discovery
  namespace: <postgres-namespace>
rules:
  - apiGroups: [""]
    resources: [pods]
    verbs: [get, list, watch]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: collector-pod-discovery
  namespace: <postgres-namespace>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: collector-pod-discovery
subjects:
  - kind: ServiceAccount
    name: <collector-serviceaccount>
    namespace: <collector-namespace>

The Role lives in the PostgreSQL namespace, but its subjects can reference a ServiceAccount in any namespace. The collector does not need to run in the PostgreSQL namespace.

Skip this step entirely if the collector already discovers pods cluster-wide via a ClusterRole.
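
Either way, you can verify the collector's access without restarting anything by impersonating its ServiceAccount; the command should print yes:

# Should print: yes
kubectl -n <postgres-namespace> auth can-i list pods \
  --as=system:serviceaccount:<collector-namespace>:<collector-serviceaccount>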

6. Add the collector scrape job

Add a prometheus receiver scrape job. Change app=postgres if your pods use a different label.

receivers:
  prometheus/postgresql:
    config:
      scrape_configs:
        - job_name: postgresql
          scrape_interval: 30s
          kubernetes_sd_configs:
            - role: pod
              namespaces:
                names: [<postgres-namespace>]
          relabel_configs:
            - source_labels:
                - __meta_kubernetes_pod_label_app
                - __meta_kubernetes_pod_container_port_name
              action: keep
              regex: postgres;metrics
            - source_labels: [__meta_kubernetes_namespace]
              target_label: k8s_namespace_name
            - source_labels: [__meta_kubernetes_pod_name]
              target_label: k8s_pod_name

Add the receiver to your metrics pipeline:

service:
  pipelines:
    metrics:
      receivers: [prometheus/postgresql]
      processors: [cumulativetodelta, batch]
      exporters: [otlphttp/upstream]

Keep cumulativetodelta for OTLP destinations that expect delta counters. Remove it for prometheusremotewrite.
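
For the latter case, a sketch of the remote-write variant, assuming a prometheusremotewrite exporter whose endpoint placeholder you replace with your backend's push URL:

# Sketch: remote-write variant; <metrics-backend> is a placeholder
exporters:
  prometheusremotewrite:
    endpoint: https://<metrics-backend>/api/v1/push

service:
  pipelines:
    metrics:
      receivers: [prometheus/postgresql]
      processors: [batch]
      exporters: [prometheusremotewrite]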

7. Roll out the collector

Apply the collector config and restart the collector:

kubectl -n <collector-namespace> apply -f collector-configmap.yaml
kubectl -n <collector-namespace> rollout restart deployment <collector-deployment>

Check for scrape errors:

kubectl -n <collector-namespace> logs deploy/<collector-deployment> | \
  grep -iE 'prometheus|postgresql|scrape|error|fail'

Then query pg_up in your metrics destination. It should be 1 for each PostgreSQL pod.

Troubleshooting

Symptom | Check
--- | ---
pg_up 0 | Verify DATA_SOURCE_USER, DATA_SOURCE_PASS, database name, and sslmode. Check exporter logs for the PostgreSQL error.
No scrape targets | Verify the pod label matches app=postgres and the sidecar port is named metrics.
pods is forbidden in collector logs | Grant the collector ServiceAccount pod get, list, and watch access in the PostgreSQL namespace.
401 Unauthorized while scraping | The target is probably not postgres_exporter; /metrics on port 9187 has no built-in auth.
Duplicate samples | Only one collector replica should scrape a given pod unless you use the OpenTelemetry target allocator.
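
For the pg_up 0 row, the underlying PostgreSQL error is usually visible in the sidecar's own logs; assuming the StatefulSet is named postgres as in the step 4 rollout command:

kubectl -n <postgres-namespace> logs statefulset/postgres -c postgres-exporter --tail=50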

Reach out to support@cardinalhq.io for support or to ask questions not answered in our documentation.
