Exporting metrics in Prometheus format
To export metrics in Prometheus format, use the prometheusMetrics method. Before uploading metrics to Prometheus, set up metrics collection in Prometheus.
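As a quick way to verify access, you can also query the endpoint directly. The request below is only a sketch, not part of the official procedure: the host, path, and query parameters mirror the Prometheus job configuration shown later in this section, and <api_key>, <folderId>, and the service value are placeholders for the credentials and identifiers you create in the steps below.

```bash
# Sketch only: query the export endpoint directly.
# <folderId>, <api_key>, and the service value are placeholders.
curl -H "Authorization: Bearer <api_key>" \
  "https://monitoring.api.cloud.yandex.net/monitoring/v2/prometheusMetrics?folderId=<folderId>&service=managed-mongodb"
```

The response is plain text in the Prometheus exposition format, which is what Prometheus scrapes with the job configured below.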
Example of setting up metrics collection from Monitoring in Prometheus:
- Select the folder you want to collect data from.
- Select a service from the following list:
  - compute – Compute Cloud.
  - storage – Object Storage.
  - managed-postgresql – Managed Service for PostgreSQL.
  - managed-clickhouse – Managed Service for ClickHouse.
  - managed-mongodb – Managed Service for MongoDB.
  - managed-mysql – Managed Service for MySQL.
  - managed-redis – Managed Service for Redis.
  - managed-kafka – Managed Service for Apache Kafka®.
  - managed-elasticsearch – Managed Service for Elasticsearch.
  - managed-sqlserver – Managed Service for SQL Server.
  - managed-kubernetes – Managed Service for Kubernetes.
  - serverless-functions – Cloud Functions.
  - serverless_triggers_client_metrics – Cloud Functions triggers.
  - ydb – Yandex Database.
  - interconnect – Cloud Interconnect.
  - certificate-manager – Certificate Manager.
  - data-transfer – Data Transfer.
  - serverless-apigateway – API Gateway.
- Create a static API key for your service account (a CLI sketch follows this list).
- Assign the viewer role for the selected folder to the service account.
- Add a new job to the data collection section of the Prometheus configuration:

  ```yaml
  ...
  scrape_configs:
    ...
    - job_name: 'yc-monitoring-export'
      metrics_path: '/monitoring/v2/prometheusMetrics'
      params:
        folderId:
          - '<folderId>'  # for example, aoeng2krmasimogorn5m
        service:
          - '<service>'   # for example, managed-mongodb
      bearer_token: '<api_key>'
      # Or use a file (recommended):
      # bearer_token_file: '<name of file with api_key>'
      static_configs:
        - targets: ['monitoring.api.cloud.yandex.net']
          labels:
            folderId: '<folderId>'
            service: '<serviceId>'
  ```
- Restart Prometheus.
- Check the data collection in the Prometheus user interface: http://localhost:9090/targets (replace localhost with the name of the host that runs Prometheus).
- If you need to change the label names, use relabeling.
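A minimal relabeling sketch, assuming you want to expose the folderId target label as folder_id instead (the new label name is only an illustration):

```yaml
# Sketch: rename the folderId target label to folder_id for this job.
scrape_configs:
  - job_name: 'yc-monitoring-export'
    # ... the settings shown above ...
    relabel_configs:
      # Copy the value of folderId into folder_id...
      - source_labels: [folderId]
        target_label: folder_id
      # ...and drop the original folderId label.
      - regex: folderId
        action: labeldrop
```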
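For the API key and role assignment steps above, here is a hedged yc CLI sketch; it assumes the CLI is installed and configured, and the service account and folder names are placeholders (you can also perform both steps in the management console or via the API):

```bash
# Sketch: create an API key for the service account (save the secret from the output).
yc iam api-key create --service-account-name <service_account_name>

# Sketch: grant the service account the viewer role on the folder.
yc resource-manager folder add-access-binding <folder_name> \
  --role viewer \
  --subject serviceAccount:<service_account_id>
```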
Tip

If you have a lot of metrics, increase the data collection timeout (scrape_timeout) to 60s.
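A configuration sketch for this (scrape_timeout must not exceed scrape_interval, so the interval is raised to 60s here as well; both values are only an example):

```yaml
scrape_configs:
  - job_name: 'yc-monitoring-export'
    scrape_interval: 60s  # must be >= scrape_timeout
    scrape_timeout: 60s
    # ... the rest of the job settings stay as shown above ...
```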