
Exporters

By Atif Alam

Exporters bridge the gap between systems that don’t natively expose Prometheus metrics and Prometheus itself. An exporter runs alongside (or inside) a target, collects metrics, and exposes them on a /metrics endpoint.

Node Exporter

Node exporter is the most commonly used exporter. It provides CPU, memory, disk, network, and filesystem metrics for Linux hosts.

# Download, extract, and run
wget https://github.com/prometheus/node_exporter/releases/download/v1.7.0/node_exporter-1.7.0.linux-amd64.tar.gz
tar xzf node_exporter-1.7.0.linux-amd64.tar.gz
cd node_exporter-1.7.0.linux-amd64
./node_exporter

Or as a systemd service:

/etc/systemd/system/node_exporter.service

[Unit]
Description=Node Exporter
After=network.target

[Service]
ExecStart=/usr/local/bin/node_exporter
Restart=always

[Install]
WantedBy=multi-user.target
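After writing the unit file, reload systemd and start the service. The commands below assume the binary was copied to /usr/local/bin as in the unit file, and that Node exporter is listening on its default port 9100:

```shell
sudo systemctl daemon-reload
sudo systemctl enable --now node_exporter

# Quick sanity check: the exporter should serve plain-text metrics
curl -s http://localhost:9100/metrics | head
```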
| Metric | Type | What It Measures |
| --- | --- | --- |
| node_cpu_seconds_total | Counter | CPU time per mode (user, system, idle, iowait) |
| node_memory_MemAvailable_bytes | Gauge | Available memory |
| node_memory_MemTotal_bytes | Gauge | Total memory |
| node_filesystem_avail_bytes | Gauge | Available disk space |
| node_filesystem_size_bytes | Gauge | Total disk space |
| node_network_receive_bytes_total | Counter | Network bytes received |
| node_network_transmit_bytes_total | Counter | Network bytes sent |
| node_load1 / node_load5 / node_load15 | Gauge | System load averages |
| node_disk_io_time_seconds_total | Counter | Disk I/O time |

These are PromQL queries you’d run in the Prometheus UI (http://prometheus:9090/graph) or use in Grafana dashboard panels to visualize Node exporter data:

# CPU usage percentage
100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)
# Memory usage percentage
(1 - node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) * 100
# Disk usage percentage
(1 - node_filesystem_avail_bytes{mountpoint="/"} / node_filesystem_size_bytes{mountpoint="/"}) * 100
# Network throughput (bytes/sec)
rate(node_network_receive_bytes_total{device="eth0"}[5m])
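The memory-usage expression above is ordinary arithmetic over two gauges. A quick sanity check of the formula with made-up sample values (the numbers are illustrative, not real node_exporter output):

```python
# Illustrative stand-ins for the two gauges used in the PromQL above
mem_available_bytes = 4 * 1024**3   # node_memory_MemAvailable_bytes
mem_total_bytes = 16 * 1024**3      # node_memory_MemTotal_bytes

# Same shape as the PromQL: (1 - available / total) * 100
mem_used_pct = (1 - mem_available_bytes / mem_total_bytes) * 100
print(mem_used_pct)  # 75.0
```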

To tell Prometheus to scrape your Node exporters, add this to the Prometheus server’s prometheus.yml under scrape_configs. Each target is a host:port where Node exporter is running:

# prometheus.yml — on the Prometheus server
scrape_configs:
  - job_name: "node"
    static_configs:
      - targets: ["node1:9100", "node2:9100", "node3:9100"]

Prometheus will scrape http://node1:9100/metrics, http://node2:9100/metrics, and http://node3:9100/metrics at the global scrape_interval.
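Hard-coded target lists get unwieldy as hosts come and go. Prometheus also supports file-based discovery, where the target list lives in files it re-reads automatically; the paths and labels here are illustrative:

```yaml
# prometheus.yml - file-based discovery instead of static_configs
scrape_configs:
  - job_name: "node"
    file_sd_configs:
      - files: ["/etc/prometheus/targets/node-*.yml"]

# /etc/prometheus/targets/node-prod.yml (example target file)
# - targets: ["node1:9100", "node2:9100"]
#   labels:
#     env: "prod"
```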

Blackbox Exporter

The Blackbox exporter probes endpoints from the outside — HTTP, TCP, DNS, ICMP. It tests what users actually experience:

  • Is my website returning 200?
  • Is my API responding within 500ms?
  • Is my DNS resolving correctly?
  • Is port 5432 (PostgreSQL) open?
blackbox.yml

modules:
  http_2xx:
    prober: http
    timeout: 5s
    http:
      valid_status_codes: [200, 201, 202]
      method: GET
      follow_redirects: true
  http_post:
    prober: http
    http:
      method: POST
      headers:
        Content-Type: application/json
      body: '{"check": true}'
  tcp_connect:
    prober: tcp
    timeout: 5s
  dns_lookup:
    prober: dns
    dns:
      query_name: example.com
      query_type: A
  icmp_ping:
    prober: icmp
    timeout: 5s
On the Prometheus server, the scrape job points at the Blackbox exporter itself and passes each real target as a URL parameter via relabeling:

scrape_configs:
  - job_name: "blackbox-http"
    metrics_path: /probe
    params:
      module: [http_2xx]
    static_configs:
      - targets:
          - https://example.com
          - https://api.example.com/health
          - https://staging.example.com
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: blackbox-exporter:9115
# Is the endpoint up? (1 = success, 0 = failure)
probe_success{instance="https://example.com"}
# Response time
probe_duration_seconds{instance="https://example.com"}
# SSL certificate expiry (days until expiration)
(probe_ssl_earliest_cert_expiry - time()) / 86400
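These probe metrics are most useful as alerts. A sketch of Prometheus alerting rules built on them; the thresholds and severity labels are illustrative choices, not defaults:

```yaml
groups:
  - name: blackbox
    rules:
      - alert: EndpointDown
        expr: probe_success == 0
        for: 2m
        labels:
          severity: critical
        annotations:
          summary: "{{ $labels.instance }} is failing its blackbox probe"
      - alert: SSLCertExpiringSoon
        expr: (probe_ssl_earliest_cert_expiry - time()) / 86400 < 14
        for: 1h
        labels:
          severity: warning
        annotations:
          summary: "{{ $labels.instance }} TLS cert expires in under 14 days"
```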

Application Instrumentation (Client Libraries)


Instead of using an exporter, instrument your own application to expose metrics directly.

Python:

from flask import Flask
from prometheus_client import Counter, Histogram, start_http_server

app = Flask(__name__)

REQUEST_COUNT = Counter(
    'http_requests_total',
    'Total HTTP requests',
    ['method', 'status']
)
REQUEST_LATENCY = Histogram(
    'http_request_duration_seconds',
    'Request latency in seconds',
    ['method']
)

@app.route('/api/data')
def get_data():
    with REQUEST_LATENCY.labels(method='GET').time():
        # ... handle request ...
        REQUEST_COUNT.labels(method='GET', status='200').inc()
        return data

# Expose /metrics on port 8000, separate from the app's own port
start_http_server(8000)
Go:

package main

import (
    "net/http"

    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

var requestCount = prometheus.NewCounterVec(
    prometheus.CounterOpts{
        Name: "http_requests_total",
        Help: "Total HTTP requests",
    },
    []string{"method", "status"},
)

func init() {
    prometheus.MustRegister(requestCount)
}

func main() {
    http.Handle("/metrics", promhttp.Handler())
    http.ListenAndServe(":8080", nil)
}
Node.js:

const client = require('prom-client');

const requestCount = new client.Counter({
  name: 'http_requests_total',
  help: 'Total HTTP requests',
  labelNames: ['method', 'status'],
});

// Assumes an existing Express app
app.get('/metrics', async (req, res) => {
  res.set('Content-Type', client.register.contentType);
  res.end(await client.register.metrics());
});
Metric naming conventions:

| Pattern | Example | Rule |
| --- | --- | --- |
| `<namespace>_<name>_<unit>` | http_request_duration_seconds | Always include the unit |
| Counter suffix | _total | Counters should end in _total |
| Histogram suffixes | _bucket, _sum, _count | Auto-generated |
| Unit suffixes | _bytes, _seconds, _total | Use base units (bytes not KB, seconds not ms) |
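The conventions in the table can be encoded as a quick lint. This helper is purely illustrative (it is not part of any official Prometheus tooling, and the unit list is a subset):

```python
import re

# A few common base-unit suffixes; not an exhaustive list
BASE_UNITS = ("seconds", "bytes", "total", "ratio", "celsius", "meters")

def check_metric_name(name: str, metric_type: str) -> list[str]:
    """Return a list of naming-convention problems (empty list = looks fine)."""
    problems = []
    if not re.fullmatch(r"[a-zA-Z_:][a-zA-Z0-9_:]*", name):
        problems.append("invalid characters for a Prometheus metric name")
    if metric_type == "counter" and not name.endswith("_total"):
        problems.append("counters should end in _total")    # table row 2
    if not name.endswith(BASE_UNITS):
        problems.append("name should end in a base-unit suffix")  # table row 4
    return problems

print(check_metric_name("http_request_duration_seconds", "histogram"))  # []
print(check_metric_name("http_requests", "counter"))  # two problems
```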
Common off-the-shelf exporters:

| Exporter | Metrics From |
| --- | --- |
| mysqld_exporter | MySQL queries, connections, buffer pool |
| postgres_exporter | PostgreSQL queries, connections, replication |
| redis_exporter | Redis memory, keys, commands/sec |
| mongodb_exporter | MongoDB connections, operations, replication |
| nginx_exporter | Nginx connections, requests |
| cadvisor | Container CPU, memory, network (Docker/K8s) |
| kube-state-metrics | Kubernetes object states (pods, deployments, nodes) |

Custom Exporters

When no existing exporter fits, write your own. The pattern:

  1. Collect data from your system (API, file, database).
  2. Map it to Prometheus metric types (Counter, Gauge, Histogram).
  3. Expose on /metrics.
from prometheus_client import Gauge, start_http_server
import requests, time

queue_depth = Gauge('queue_depth', 'Number of items in the job queue', ['queue_name'])

def collect():
    while True:
        resp = requests.get('http://internal-api/queues')
        for queue in resp.json():
            queue_depth.labels(queue_name=queue['name']).set(queue['depth'])
        time.sleep(15)

start_http_server(9200)
collect()
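Whatever language the exporter is written in, what Prometheus actually scrapes from /metrics is the plain-text exposition format. A sketch of what the client library would generate for the gauge above, built by hand with illustrative queue names and depths:

```python
# Illustrative queue data; a real exporter would fetch this from its system
queues = {"email": 42, "reports": 7}

# The text exposition format: HELP and TYPE comments, then one sample per
# label combination, with label values in double quotes
lines = [
    "# HELP queue_depth Number of items in the job queue",
    "# TYPE queue_depth gauge",
]
for name, depth in queues.items():
    lines.append(f'queue_depth{{queue_name="{name}"}} {depth}')

exposition = "\n".join(lines) + "\n"
print(exposition)
```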
  • Node exporter for host metrics (CPU, memory, disk, network) — install on every machine.
  • Blackbox exporter for probing endpoints from the outside (HTTP, TCP, DNS, ICMP).
  • Instrument your apps with client libraries to expose request counts, latency histograms, and business metrics.
  • Follow naming conventions: _total for counters, _seconds/_bytes for units.
  • Use existing exporters for databases, caches, and web servers before writing custom ones.