Viewing cluster logs
Managed Service for Apache Kafka® lets you get cluster logs for viewing and analysis.
You can get:
- A simple log snippet.
- A log entry stream in the cluster (`tail -f` command semantics are supported).
Note
In this context, the log is the system log of the cluster and its hosts. It is not related to the partition log of an Apache Kafka® topic, where the broker stores messages received from producers.
Getting a cluster log
- Go to the folder page and select Managed Service for Apache Kafka®.
- Click the cluster name and select the Logs tab.
- Specify the time period for which you want to display the log.
Use the listLogs API method and pass the cluster ID in the `clusterId` request parameter.
You'll get the full cluster log. The cluster can return at most 100,000 log entries (100 pages of 1,000 entries each).
If the log is larger than this limit or you only need entries for a specific period, pass the period boundaries in RFC 3339 format in the `fromTime` and `toTime` request parameters.
You can get the cluster ID from the list of clusters in the folder.
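As an illustration, here is a minimal Python sketch of paging through listLogs over the REST API. The endpoint path, the IAM-token authorization header, and the response field names (`logs`, `nextPageToken`, `timestamp`, `message`) are assumptions based on common Yandex Cloud API conventions rather than something stated on this page; check the API reference for the exact schema.

```python
import requests

# Assumed endpoint path and auth scheme -- verify against the API reference.
API_URL = "https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters"

IAM_TOKEN = "<your IAM token>"      # for example, from `yc iam create-token`
CLUSTER_ID = "<your cluster ID>"    # from the list of clusters in the folder


def list_logs(from_time: str, to_time: str, page_size: int = 1000) -> None:
    """Fetch the cluster log page by page via the listLogs method."""
    headers = {"Authorization": f"Bearer {IAM_TOKEN}"}
    params = {
        "fromTime": from_time,  # RFC 3339, e.g. "2024-05-01T00:00:00Z"
        "toTime": to_time,
        "pageSize": page_size,  # assumed cap of 1,000 entries per page
    }
    while True:
        resp = requests.get(
            f"{API_URL}/{CLUSTER_ID}:logs",
            params=params,
            headers=headers,
            timeout=30,
        )
        resp.raise_for_status()
        data = resp.json()
        for entry in data.get("logs", []):
            # Field names are assumptions; adjust to the actual schema.
            print(entry.get("timestamp"), entry.get("message"))
        next_page = data.get("nextPageToken")
        if not next_page:
            break
        params["pageToken"] = next_page


list_logs("2024-05-01T00:00:00Z", "2024-05-02T00:00:00Z")
```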
Getting a log entry stream
Unlike a simple log snippet, a log entry stream keeps the connection open so the server can send new log entries as they appear. This matches the semantics of the `tail -f` command.
Use the streamLogs API method and pass the cluster ID in the `clusterId` request parameter.
You'll get the full cluster log. The cluster can return at most 100,000 log entries (100 pages of 1,000 entries each).
If the log is larger than this limit or you only need entries for a specific period, pass the period boundaries in RFC 3339 format in the `fromTime` and `toTime` request parameters.
If you don't set the `toTime` parameter, the stream will keep receiving new log entries as they appear.
You can get the cluster ID from the list of clusters in the folder.
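Below is a minimal sketch of the streaming variant, assuming streamLogs is exposed as a `:stream_logs` REST endpoint that returns newline-delimited JSON entries; the exact path and response framing are assumptions to verify against the API reference. Leaving out `toTime` makes the request behave like `tail -f`.

```python
import json

import requests

# Assumed endpoint path and auth scheme -- verify against the API reference.
API_URL = "https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters"

IAM_TOKEN = "<your IAM token>"
CLUSTER_ID = "<your cluster ID>"


def stream_logs(from_time: str) -> None:
    """Follow the cluster log like `tail -f`: toTime is omitted on purpose,
    so the server keeps sending new entries as they appear."""
    headers = {"Authorization": f"Bearer {IAM_TOKEN}"}
    params = {"fromTime": from_time}  # RFC 3339
    with requests.get(
        f"{API_URL}/{CLUSTER_ID}:stream_logs",
        params=params,
        headers=headers,
        stream=True,  # keep the connection open and read as data arrives
    ) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            if not line:
                continue  # skip keep-alive chunks
            record = json.loads(line)  # assumes newline-delimited JSON entries
            print(record)


stream_logs("2024-05-01T00:00:00Z")
```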