Yandex Managed Service for Apache Kafka®

Method list

  • HTTP request
  • Query parameters
  • Response

Retrieves the list of Apache Kafka® clusters that belong to the specified folder.

HTTP request

GET https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters

Query parameters

Parameter Description
folderId Required. ID of the folder to list Apache Kafka® clusters in. To get the folder ID, make a list request. The maximum string length in characters is 50.
pageSize The maximum number of results per page to return. If the number of available results is larger than pageSize, the service returns a nextPageToken that can be used to get the next page of results in subsequent list requests. The maximum value is 1000.
pageToken Page token. To get the next page of results, set pageToken to the nextPageToken returned by a previous list request. The maximum string length in characters is 100.
filter Filter support is not currently implemented. Any filters are ignored. The maximum string length in characters is 1000.
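As an illustrative sketch (not an official client), the request above can be assembled with Python's standard library. The endpoint and parameter names come from this page; the IAM token value and the folder ID are placeholders you must supply.

```python
import urllib.parse
import urllib.request

def build_list_request(folder_id, page_size=100, page_token=None,
                       iam_token="<IAM_TOKEN>"):
    """Build (but do not send) a GET request for the cluster list endpoint."""
    params = {"folderId": folder_id, "pageSize": page_size}
    if page_token:
        params["pageToken"] = page_token
    url = ("https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters?"
           + urllib.parse.urlencode(params))
    # Authentication is a bearer IAM token, as described in
    # "Authentication in the API".
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {iam_token}"})

req = build_list_request("b1g0example", page_size=50)  # folder ID is made up
print(req.full_url)
```

Sending the request (for example with `urllib.request.urlopen(req)`) returns the JSON response described below.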

Response

HTTP Code: 200 - OK

{
  "clusters": [
    {
      "id": "string",
      "folderId": "string",
      "createdAt": "string",
      "name": "string",
      "description": "string",
      "labels": "object",
      "environment": "string",
      "monitoring": [
        {
          "name": "string",
          "description": "string",
          "link": "string"
        }
      ],
      "config": {
        "version": "string",
        "kafka": {
          "resources": {
            "resourcePresetId": "string",
            "diskSize": "string",
            "diskTypeId": "string"
          },

          // `clusters[].config.kafka` includes only one of the fields `kafkaConfig_2_1`, `kafkaConfig_2_6`
          "kafkaConfig_2_1": {
            "compressionType": "string",
            "logFlushIntervalMessages": "integer",
            "logFlushIntervalMs": "integer",
            "logFlushSchedulerIntervalMs": "integer",
            "logRetentionBytes": "integer",
            "logRetentionHours": "integer",
            "logRetentionMinutes": "integer",
            "logRetentionMs": "integer",
            "logSegmentBytes": "integer",
            "logPreallocate": true
          },
          "kafkaConfig_2_6": {
            "compressionType": "string",
            "logFlushIntervalMessages": "integer",
            "logFlushIntervalMs": "integer",
            "logFlushSchedulerIntervalMs": "integer",
            "logRetentionBytes": "integer",
            "logRetentionHours": "integer",
            "logRetentionMinutes": "integer",
            "logRetentionMs": "integer",
            "logSegmentBytes": "integer",
            "logPreallocate": true
          },
          // end of the list of possible fields `clusters[].config.kafka`

        },
        "zookeeper": {
          "resources": {
            "resourcePresetId": "string",
            "diskSize": "string",
            "diskTypeId": "string"
          }
        },
        "zoneId": [
          "string"
        ],
        "brokersCount": "integer",
        "assignPublicIp": true
      },
      "networkId": "string",
      "health": "string",
      "status": "string",
      "securityGroupIds": [
        "string"
      ]
    }
  ],
  "nextPageToken": "string"
}
Field Description
clusters[] object

An Apache Kafka® cluster resource. For more information, see the Concepts section of the documentation.

clusters[].
id
string

ID of the Apache Kafka® cluster. This ID is assigned at creation time.

clusters[].
folderId
string

ID of the folder that the Apache Kafka® cluster belongs to.

clusters[].
createdAt
string (date-time)

Creation timestamp.

String in RFC3339 text format.

clusters[].
name
string

Name of the Apache Kafka® cluster. The name must be unique within the folder. 1-63 characters long. Value must match the regular expression [a-zA-Z0-9_-]*.

clusters[].
description
string

Description of the Apache Kafka® cluster. 0-256 characters long.

clusters[].
labels
object

Custom labels for the Apache Kafka® cluster as key:value pairs. A maximum of 64 labels per resource is allowed.

clusters[].
environment
string

Deployment environment of the Apache Kafka® cluster.

  • PRODUCTION: stable environment with a conservative update policy, where only hotfixes are applied during regular maintenance.
  • PRESTABLE: environment with a more aggressive update policy, where new versions are rolled out irrespective of backward compatibility.
clusters[].
monitoring[]
object

Metadata of the monitoring system.

clusters[].
monitoring[].
name
string

Name of the monitoring system.

clusters[].
monitoring[].
description
string

Description of the monitoring system.

clusters[].
monitoring[].
link
string

Link to the monitoring system charts for the Apache Kafka® cluster.

clusters[].
config
object

Configuration of the Apache Kafka® cluster.

clusters[].
config.
version
string

Version of Apache Kafka® used in the cluster. Possible values: 2.1, 2.6.

clusters[].
config.
kafka
object

Configuration and resource allocation for Kafka brokers.

clusters[].
config.
kafka.
resources
object

Resources allocated to Kafka brokers.

clusters[].
config.
kafka.
resources.
resourcePresetId
string

ID of the preset for computational resources available to a host (CPU, memory, etc.). All available presets are listed in the documentation.

clusters[].
config.
kafka.
resources.
diskSize
string (int64)

Volume of the storage available to a host, in bytes.

clusters[].
config.
kafka.
resources.
diskTypeId
string

Type of the storage environment for the host.

clusters[].
config.
kafka.
kafkaConfig_2_1
object
clusters[].config.kafka includes only one of the fields kafkaConfig_2_1, kafkaConfig_2_6

Kafka version 2.1 broker configuration.

clusters[].
config.
kafka.
kafkaConfig_2_1.
compressionType
string

Compression type for cluster topics.

  • COMPRESSION_TYPE_UNCOMPRESSED: no codec (uncompressed).
  • COMPRESSION_TYPE_ZSTD: Zstandard codec.
  • COMPRESSION_TYPE_LZ4: LZ4 codec.
  • COMPRESSION_TYPE_SNAPPY: Snappy codec.
  • COMPRESSION_TYPE_GZIP: GZip codec.
  • COMPRESSION_TYPE_PRODUCER: the codec to use is set by a producer (can be any of ZSTD, LZ4, GZIP or SNAPPY codecs).
clusters[].
config.
kafka.
kafkaConfig_2_1.
logFlushIntervalMessages
integer (int64)

The number of messages accumulated on a log partition before messages are flushed to disk.

This is the global cluster-level setting that can be overridden on a topic level by using the flushMessages setting.

clusters[].
config.
kafka.
kafkaConfig_2_1.
logFlushIntervalMs
integer (int64)

The maximum time (in milliseconds) that a message in any topic is kept in memory before it is flushed to disk. If not set, the value of logFlushSchedulerIntervalMs is used.

This is the global cluster-level setting that can be overridden on a topic level by using the flushMs setting.

clusters[].
config.
kafka.
kafkaConfig_2_1.
logFlushSchedulerIntervalMs
integer (int64)

The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. This check is done by the log flusher.

clusters[].
config.
kafka.
kafkaConfig_2_1.
logRetentionBytes
integer (int64)

Partition size limit; Kafka will discard old log segments to free up space if the delete cleanupPolicy is in effect. This setting is helpful if you need to control the size of a log due to limited disk space.

This is the global cluster-level setting that can be overridden on a topic level by using the retentionBytes setting.

clusters[].
config.
kafka.
kafkaConfig_2_1.
logRetentionHours
integer (int64)

The number of hours to keep a log segment file before deleting it.

clusters[].
config.
kafka.
kafkaConfig_2_1.
logRetentionMinutes
integer (int64)

The number of minutes to keep a log segment file before deleting it.

If not set, the value of logRetentionHours is used.

clusters[].
config.
kafka.
kafkaConfig_2_1.
logRetentionMs
integer (int64)

The number of milliseconds to keep a log segment file before deleting it.

If not set, the value of logRetentionMinutes is used.

This is the global cluster-level setting that can be overridden on a topic level by using the retentionMs setting.

clusters[].
config.
kafka.
kafkaConfig_2_1.
logSegmentBytes
integer (int64)

The maximum size of a single log file.

This is the global cluster-level setting that can be overridden on a topic level by using the segmentBytes setting.

clusters[].
config.
kafka.
kafkaConfig_2_1.
logPreallocate
boolean (boolean)

Whether to preallocate the file when creating a new log segment.

This is the global cluster-level setting that can be overridden on a topic level by using the preallocate setting.

clusters[].
config.
kafka.
kafkaConfig_2_6
object
clusters[].config.kafka includes only one of the fields kafkaConfig_2_1, kafkaConfig_2_6

Kafka version 2.6 broker configuration.

clusters[].
config.
kafka.
kafkaConfig_2_6.
compressionType
string

Compression type for cluster topics.

  • COMPRESSION_TYPE_UNCOMPRESSED: no codec (uncompressed).
  • COMPRESSION_TYPE_ZSTD: Zstandard codec.
  • COMPRESSION_TYPE_LZ4: LZ4 codec.
  • COMPRESSION_TYPE_SNAPPY: Snappy codec.
  • COMPRESSION_TYPE_GZIP: GZip codec.
  • COMPRESSION_TYPE_PRODUCER: the codec to use is set by a producer (can be any of ZSTD, LZ4, GZIP or SNAPPY codecs).
clusters[].
config.
kafka.
kafkaConfig_2_6.
logFlushIntervalMessages
integer (int64)

The number of messages accumulated on a log partition before messages are flushed to disk.

This is the global cluster-level setting that can be overridden on a topic level by using the flushMessages setting.

clusters[].
config.
kafka.
kafkaConfig_2_6.
logFlushIntervalMs
integer (int64)

The maximum time (in milliseconds) that a message in any topic is kept in memory before it is flushed to disk. If not set, the value of logFlushSchedulerIntervalMs is used.

This is the global cluster-level setting that can be overridden on a topic level by using the flushMs setting.

clusters[].
config.
kafka.
kafkaConfig_2_6.
logFlushSchedulerIntervalMs
integer (int64)

The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. This check is done by the log flusher.

clusters[].
config.
kafka.
kafkaConfig_2_6.
logRetentionBytes
integer (int64)

Partition size limit; Kafka will discard old log segments to free up space if the delete cleanupPolicy is in effect. This setting is helpful if you need to control the size of a log due to limited disk space.

This is the global cluster-level setting that can be overridden on a topic level by using the retentionBytes setting.

clusters[].
config.
kafka.
kafkaConfig_2_6.
logRetentionHours
integer (int64)

The number of hours to keep a log segment file before deleting it.

clusters[].
config.
kafka.
kafkaConfig_2_6.
logRetentionMinutes
integer (int64)

The number of minutes to keep a log segment file before deleting it.

If not set, the value of logRetentionHours is used.

clusters[].
config.
kafka.
kafkaConfig_2_6.
logRetentionMs
integer (int64)

The number of milliseconds to keep a log segment file before deleting it.

If not set, the value of logRetentionMinutes is used.

This is the global cluster-level setting that can be overridden on a topic level by using the retentionMs setting.

clusters[].
config.
kafka.
kafkaConfig_2_6.
logSegmentBytes
integer (int64)

The maximum size of a single log file.

This is the global cluster-level setting that can be overridden on a topic level by using the segmentBytes setting.

clusters[].
config.
kafka.
kafkaConfig_2_6.
logPreallocate
boolean (boolean)

Whether to preallocate the file when creating a new log segment.

This is the global cluster-level setting that can be overridden on a topic level by using the preallocate setting.

clusters[].
config.
zookeeper
object

Configuration and resource allocation for ZooKeeper hosts.

clusters[].
config.
zookeeper.
resources
object

Resources allocated to ZooKeeper hosts.

clusters[].
config.
zookeeper.
resources.
resourcePresetId
string

ID of the preset for computational resources available to a host (CPU, memory, etc.). All available presets are listed in the documentation.

clusters[].
config.
zookeeper.
resources.
diskSize
string (int64)

Volume of the storage available to a host, in bytes.

clusters[].
config.
zookeeper.
resources.
diskTypeId
string

Type of the storage environment for the host.

clusters[].
config.
zoneId[]
string

IDs of availability zones where Kafka brokers reside.

clusters[].
config.
brokersCount
integer (int64)

The number of Kafka brokers deployed in each availability zone.

clusters[].
config.
assignPublicIp
boolean (boolean)

The flag that defines whether a public IP address is assigned to the cluster. If the value is true, the Apache Kafka® cluster is available on the Internet via its public IP address.

clusters[].
networkId
string

ID of the network that the cluster belongs to.

clusters[].
health
string

Aggregated cluster health.

  • HEALTH_UNKNOWN: state of the cluster is unknown (health of all hosts in the cluster is UNKNOWN).
  • ALIVE: cluster is alive and well (health of all hosts in the cluster is ALIVE).
  • DEAD: cluster is inoperable (health of all hosts in the cluster is DEAD).
  • DEGRADED: cluster is in degraded state (health of at least one of the hosts in the cluster is not ALIVE).
clusters[].
status
string

Current state of the cluster.

  • STATUS_UNKNOWN: cluster state is unknown.
  • CREATING: cluster is being created.
  • RUNNING: cluster is running normally.
  • ERROR: cluster encountered a problem and cannot operate.
  • UPDATING: cluster is being updated.
  • STOPPING: cluster is stopping.
  • STOPPED: cluster stopped.
  • STARTING: cluster is starting.
clusters[].
securityGroupIds[]
string

IDs of user security groups assigned to the cluster.

nextPageToken string

Token that allows you to get the next page of results for list requests.

If the number of results is larger than pageSize, use nextPageToken as the value for the pageToken parameter in the next list request. Each subsequent list request will have its own nextPageToken to continue paging through the results.
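The pageToken/nextPageToken handshake can be sketched as a loop. The `fetch_page` callable below is a stand-in for the actual HTTP call described under "HTTP request"; it must return the parsed JSON response shown above.

```python
def list_all_clusters(fetch_page):
    """Collect clusters from every page of a paginated list response.

    `fetch_page(page_token)` is a placeholder for the real GET request and
    must return a dict of the form {"clusters": [...], "nextPageToken": ...}.
    """
    clusters, page_token = [], None
    while True:
        page = fetch_page(page_token)
        clusters.extend(page.get("clusters", []))
        page_token = page.get("nextPageToken")
        if not page_token:  # empty or missing token: no more pages
            return clusters

# Demo with two stubbed pages instead of real API responses.
pages = {
    None: {"clusters": [{"id": "c1"}], "nextPageToken": "t1"},
    "t1": {"clusters": [{"id": "c2"}], "nextPageToken": ""},
}
print([c["id"] for c in list_all_clusters(pages.get)])  # → ['c1', 'c2']
```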

© 2021 Yandex.Cloud LLC