Getting Started with Grafana + Prometheus (Exporter)

Grafana: data visualization; it periodically pulls from its data sources and turns the data into charts and dashboards
Prometheus: a time-series database; it periodically scrapes data from upstream exporters, stores it, and serves the time-series data back over an HTTP API
Prometheus exporter: a data source for Prometheus that exposes metrics over the /metrics HTTP endpoint; developers can write their own with one of the Prometheus client libraries, such as prometheus/client_golang

Below we write a Prometheus exporter in Go that exposes the process's connection count and HTTP QPS as metrics. The core of the work is implementing the Collector interface from the prometheus client package and then registering the collector.

For example:

package metric

import "github.com/prometheus/client_golang/prometheus"

// QpsMetric implements prometheus.Collector and exposes the current HTTP QPS.
type QpsMetric struct {
	metric *prometheus.Desc
}

// NewQpsMetric starts the background QPS calculation and returns the collector.
func NewQpsMetric() *QpsMetric {
	go cal() // cal() keeps the package-level qps counter up to date
	return &QpsMetric{
		metric: prometheus.NewDesc("http_request_count_second",
			"http qps",
			nil, nil,
		),
	}
}

// Describe sends the metric descriptor to the registry.
func (m *QpsMetric) Describe(ch chan<- *prometheus.Desc) {
	ch <- m.metric
}

// Collect reports the most recently calculated QPS value as a gauge.
func (m *QpsMetric) Collect(ch chan<- prometheus.Metric) {
	metricValue := float64(qps)
	ch <- prometheus.MustNewConstMetric(m.metric, prometheus.GaugeValue, metricValue)
}
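
The code above relies on a package-level qps value and a cal() goroutine that are not shown in this post. A minimal sketch of what they might look like, assuming a per-second window counter (the CountRequest helper is hypothetical and would be called from the HTTP handlers):

package metric

import (
	"sync/atomic"
	"time"
)

// requestCount counts requests in the current one-second window; qps holds
// the total from the previous window, which is what Collect reports.
var (
	requestCount int64
	qps          int64
)

// CountRequest is a hypothetical helper the HTTP handlers call once per request.
func CountRequest() {
	atomic.AddInt64(&requestCount, 1)
}

// cal snapshots the current window into qps every second and resets the counter.
func cal() {
	for range time.Tick(time.Second) {
		atomic.StoreInt64(&qps, atomic.SwapInt64(&requestCount, 0))
	}
}

With this version, Collect would ideally read the value with atomic.LoadInt64(&qps) to avoid a data race; the plain read above just keeps the example close to the original.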

Then register the Collector:

	prometheus.MustRegister(metric.NewQpsMetric())
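
To actually serve the /metrics endpoint that Prometheus scrapes, the registered collector has to be wired into an HTTP server. A minimal sketch of the main program, assuming the exporter listens on 127.0.0.1:8019 as in the config below and that the collector lives in a local metric package (the import path is hypothetical):

package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"

	"example.com/exporter/metric" // hypothetical module path for the metric package above
)

func main() {
	// Register the custom collector with the default registry.
	prometheus.MustRegister(metric.NewQpsMetric())

	// Expose every registered metric at /metrics for Prometheus to scrape.
	http.Handle("/metrics", promhttp.Handler())

	// Business handlers would also count requests here, e.g. by calling
	// metric.CountRequest() from each handler.

	log.Fatal(http.ListenAndServe("127.0.0.1:8019", nil))
}

Running this and opening http://127.0.0.1:8019/metrics should show http_request_count_second among the exposed metrics.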

After downloading the Prometheus server, add the new exporter to static_configs -> targets in the prometheus.yml configuration file. For example, for an exporter listening on 127.0.0.1:8019:

scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'demo'

    # metrics_path defaults to '/metrics'.
    # scheme defaults to 'http'.

    scrape_interval: 3s # Scrape every 3 seconds. The default is every 1 minute.
    static_configs:
      - targets: ['127.0.0.1:8019','127.0.0.1:8018']
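
After restarting Prometheus with this configuration, the scraped series can be checked through Prometheus's HTTP query API. A sketch, assuming Prometheus itself runs on its default address 127.0.0.1:9090:

package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	// Ask Prometheus for the latest value of the exporter's QPS metric
	// via the /api/v1/query endpoint.
	query := url.QueryEscape("http_request_count_second")
	resp, err := http.Get("http://127.0.0.1:9090/api/v1/query?query=" + query)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // JSON response with one sample per scraped target
}

The same series can then be charted in Grafana by pointing a Grafana data source at the Prometheus server.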

2018-05-19