Yandex Managed Service for Apache Kafka®

Viewing cluster logs

Written by Yandex Cloud
  • Getting a cluster log
  • Getting a log entry stream

Managed Service for Apache Kafka® lets you get cluster logs for viewing and analysis.

You can get:

  • A simple log snippet.
  • A log entry stream in the cluster (tail -f command semantics are supported).

Note

Here, the log means the system log of the cluster and its hosts. It is not related to the partition log of an Apache Kafka® topic, where the broker stores messages received from producers.

Getting a cluster log

Management console

  1. Go to the folder page and select Managed Service for Apache Kafka®.
  2. Click the cluster name and select the Logs tab.
  3. Specify the time period for which you want to display the log.

API

Use the listLogs API method and pass the cluster ID in the clusterId request parameter.

You'll get the full cluster log. A single request returns no more than 100,000 log entries (100 pages of 1,000 entries each).

If the log is larger than this or you only need entries for a specific period, pass the time range boundaries in RFC 3339 format in the fromTime and toTime request parameters.

You can get the cluster ID with a list of clusters in the folder.
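As an illustration, here is a minimal Python sketch of building a listLogs request URL with an RFC 3339 time range. The endpoint URL follows Yandex Cloud REST conventions but should be checked against the API reference, and the cluster ID is a placeholder:

```python
from datetime import datetime, timedelta, timezone
from urllib.parse import urlencode

# Assumed REST endpoint for Managed Service for Apache Kafka®;
# verify against the API reference before use.
BASE_URL = "https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters"

def build_list_logs_url(cluster_id: str, from_time: datetime, to_time: datetime) -> str:
    """Build a listLogs request URL with fromTime/toTime in RFC 3339 format."""
    params = {
        "fromTime": from_time.isoformat(),  # e.g. 2022-01-01T23:00:00+00:00
        "toTime": to_time.isoformat(),
    }
    return f"{BASE_URL}/{cluster_id}:listLogs?{urlencode(params)}"

# Request the last hour of logs for a placeholder cluster ID.
now = datetime(2022, 1, 2, tzinfo=timezone.utc)
url = build_list_logs_url("<cluster_id>", now - timedelta(hours=1), now)
print(url)
```

A real request would also need an Authorization header with an IAM token; datetime.isoformat() on a timezone-aware value already yields a valid RFC 3339 timestamp, so no manual formatting is needed.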

Getting a log entry stream

Unlike a simple log snippet, a log entry stream stays open: the server keeps sending new log entries as they appear. This matches the semantics of the tail -f command.

API

Use the streamLogs API method and pass the cluster ID in the clusterId request parameter.

You'll get the full cluster log. A single request returns no more than 100,000 log entries (100 pages of 1,000 entries each).

If the log is larger than this or you only need entries for a specific period, pass the time range boundaries in RFC 3339 format in the fromTime and toTime request parameters.

If you don't set the toTime parameter value, the stream will receive new log entries as they appear.

You can get the cluster ID with a list of clusters in the folder.
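The tail -f behavior of streamLogs can be sketched as follows. The response is assumed here to be a stream of JSON-encoded records, one per line, which is a common shape for streaming REST methods — check the streamLogs reference for the exact format. The network stream is replaced by an in-memory iterator so the parsing logic stands on its own:

```python
import json
from typing import Iterable, Iterator

def follow_logs(lines: Iterable[str]) -> Iterator[dict]:
    """Yield parsed log records from a line-delimited JSON stream.

    With no toTime set, a real streamLogs response stays open, and this
    loop keeps yielding new records as the server sends them.
    """
    for line in lines:
        line = line.strip()
        if line:  # skip keep-alive blank lines
            yield json.loads(line)

# Simulated stream standing in for the HTTP response body.
sample = [
    '{"timestamp": "2022-01-01T00:00:00Z", "message": "broker started"}',
    '',
    '{"timestamp": "2022-01-01T00:00:05Z", "message": "topic created"}',
]
records = list(follow_logs(sample))
print(records[0]["message"])  # broker started
```

With a real streaming response, the same generator would be driven by iterating over the open connection instead of a list, and it would not terminate until the server closed the stream.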

© 2022 Yandex.Cloud LLC