Yandex Managed Service for Apache Kafka®

Method get

  • HTTP request
  • Path parameters
  • Response

Returns the specified Apache Kafka® cluster.

To get the list of available Apache Kafka® clusters, make a list request.

HTTP request

GET https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters/{clusterId}

Path parameters

Parameter Description
clusterId Required. ID of the Apache Kafka® Cluster resource to return. To get the cluster ID, make a list request. The maximum string length in characters is 50.
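
The path parameter above can be sketched as a small helper that builds the request URL and enforces the documented clusterId constraint (required, at most 50 characters). The function name is illustrative, not from an official SDK; actually sending the request additionally needs an IAM token in the `Authorization: Bearer` header, which is out of scope here.

```python
# Illustrative helper: build the Cluster.get URL and validate clusterId
# against the documented constraints (required, max 50 characters).

BASE_URL = "https://mdb.api.cloud.yandex.net/managed-kafka/v1"

def cluster_get_url(cluster_id: str) -> str:
    if not cluster_id:
        raise ValueError("clusterId is required")
    if len(cluster_id) > 50:
        raise ValueError("clusterId must be at most 50 characters long")
    return f"{BASE_URL}/clusters/{cluster_id}"
```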

Response

HTTP Code: 200 - OK

{
  "id": "string",
  "folderId": "string",
  "createdAt": "string",
  "name": "string",
  "description": "string",
  "labels": "object",
  "environment": "string",
  "monitoring": [
    {
      "name": "string",
      "description": "string",
      "link": "string"
    }
  ],
  "config": {
    "version": "string",
    "kafka": {
      "resources": {
        "resourcePresetId": "string",
        "diskSize": "string",
        "diskTypeId": "string"
      },

      // `config.kafka` includes only one of the fields `kafkaConfig_2_1`, `kafkaConfig_2_6`
      "kafkaConfig_2_1": {
        "compressionType": "string",
        "logFlushIntervalMessages": "integer",
        "logFlushIntervalMs": "integer",
        "logFlushSchedulerIntervalMs": "integer",
        "logRetentionBytes": "integer",
        "logRetentionHours": "integer",
        "logRetentionMinutes": "integer",
        "logRetentionMs": "integer",
        "logSegmentBytes": "integer",
        "logPreallocate": true
      },
      "kafkaConfig_2_6": {
        "compressionType": "string",
        "logFlushIntervalMessages": "integer",
        "logFlushIntervalMs": "integer",
        "logFlushSchedulerIntervalMs": "integer",
        "logRetentionBytes": "integer",
        "logRetentionHours": "integer",
        "logRetentionMinutes": "integer",
        "logRetentionMs": "integer",
        "logSegmentBytes": "integer",
        "logPreallocate": true
      },
      // end of the list of possible fields `config.kafka`

    },
    "zookeeper": {
      "resources": {
        "resourcePresetId": "string",
        "diskSize": "string",
        "diskTypeId": "string"
      }
    },
    "zoneId": [
      "string"
    ],
    "brokersCount": "integer",
    "assignPublicIp": true
  },
  "networkId": "string",
  "health": "string",
  "status": "string",
  "securityGroupIds": [
    "string"
  ]
}

An Apache Kafka® cluster resource.
For more information, see the Concepts section of the documentation.
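Two schema details above are easy to trip over when consuming the response: `config.kafka` carries exactly one of `kafkaConfig_2_1` / `kafkaConfig_2_6` (keyed by the cluster's Kafka version), and `string (int64)` fields such as `diskSize` arrive as JSON strings. A minimal parsing sketch, using a trimmed sample payload rather than real API output:

```python
import json

# Trimmed sample response matching the schema above (not real API output).
payload = json.loads("""
{
  "id": "abc",
  "name": "my-cluster",
  "config": {
    "version": "2.6",
    "kafka": {
      "resources": {"resourcePresetId": "s2.micro",
                    "diskSize": "34359738368",
                    "diskTypeId": "network-ssd"},
      "kafkaConfig_2_6": {"compressionType": "COMPRESSION_TYPE_GZIP"}
    },
    "brokersCount": "1"
  },
  "health": "ALIVE",
  "status": "RUNNING"
}
""")

kafka = payload["config"]["kafka"]
# config.kafka includes only one of kafkaConfig_2_1 / kafkaConfig_2_6.
broker_config = kafka.get("kafkaConfig_2_6") or kafka.get("kafkaConfig_2_1")
compression = broker_config["compressionType"]  # "COMPRESSION_TYPE_GZIP"

# int64 fields are serialized as JSON strings; convert before doing math.
disk_bytes = int(kafka["resources"]["diskSize"])
```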

Field Description
id string

ID of the Apache Kafka® cluster. This ID is assigned at creation time.

folderId string

ID of the folder that the Apache Kafka® cluster belongs to.

createdAt string (date-time)

Creation timestamp.

String in RFC3339 text format.

name string

Name of the Apache Kafka® cluster. The name must be unique within the folder. 1-63 characters long. Value must match the regular expression [a-zA-Z0-9_-]*.

description string

Description of the Apache Kafka® cluster. 0-256 characters long.

labels object

Custom labels for the Apache Kafka® cluster as key:value pairs. A maximum of 64 labels per resource is allowed.

environment string

Deployment environment of the Apache Kafka® cluster.

  • PRODUCTION: stable environment with a conservative update policy, where only hotfixes are applied during regular maintenance.
  • PRESTABLE: environment with a more aggressive update policy, where new versions are rolled out regardless of backward compatibility.
monitoring[] object

Metadata of the monitoring system.

monitoring[].name string

Name of the monitoring system.

monitoring[].description string

Description of the monitoring system.

monitoring[].link string

Link to the monitoring system charts for the Apache Kafka® cluster.

config object

Configuration of the Apache Kafka® cluster.

config.version string

Version of Apache Kafka® used in the cluster. Possible values: 2.1, 2.6.

config.kafka object

Configuration and resource allocation for Kafka brokers.

config.kafka.resources object

Resources allocated to Kafka brokers.

config.kafka.resources.resourcePresetId string

ID of the preset for computational resources available to a host (CPU, memory, etc.). All available presets are listed in the documentation.

config.kafka.resources.diskSize string (int64)

Volume of the storage available to a host, in bytes.

config.kafka.resources.diskTypeId string

Type of the storage environment for the host.

config.kafka.kafkaConfig_2_1 object
config.kafka includes only one of the fields kafkaConfig_2_1, kafkaConfig_2_6

Kafka version 2.1 broker configuration.

config.kafka.kafkaConfig_2_1.compressionType string

Cluster topics compression type.

  • COMPRESSION_TYPE_UNCOMPRESSED: no codec (uncompressed).
  • COMPRESSION_TYPE_ZSTD: Zstandard codec.
  • COMPRESSION_TYPE_LZ4: LZ4 codec.
  • COMPRESSION_TYPE_SNAPPY: Snappy codec.
  • COMPRESSION_TYPE_GZIP: GZip codec.
  • COMPRESSION_TYPE_PRODUCER: the codec to use is set by a producer (can be any of ZSTD, LZ4, GZIP or SNAPPY codecs).
config.kafka.kafkaConfig_2_1.logFlushIntervalMessages integer (int64)

The number of messages accumulated on a log partition before messages are flushed to disk.

This is the global cluster-level setting that can be overridden on a topic level by using the flushMessages setting.

config.kafka.kafkaConfig_2_1.logFlushIntervalMs integer (int64)

The maximum time (in milliseconds) that a message in any topic is kept in memory before being flushed to disk. If not set, the value of logFlushSchedulerIntervalMs is used.

This is the global cluster-level setting that can be overridden on a topic level by using the flushMs setting.

config.kafka.kafkaConfig_2_1.logFlushSchedulerIntervalMs integer (int64)

The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. This check is done by the log flusher.

config.kafka.kafkaConfig_2_1.logRetentionBytes integer (int64)

Partition size limit; Kafka will discard old log segments to free up space if delete cleanupPolicy is in effect. This setting is helpful if you need to control the size of a log due to limited disk space.

This is the global cluster-level setting that can be overridden on a topic level by using the retentionBytes setting.

config.kafka.kafkaConfig_2_1.logRetentionHours integer (int64)

The number of hours to keep a log segment file before deleting it.

config.kafka.kafkaConfig_2_1.logRetentionMinutes integer (int64)

The number of minutes to keep a log segment file before deleting it.

If not set, the value of logRetentionHours is used.

config.kafka.kafkaConfig_2_1.logRetentionMs integer (int64)

The number of milliseconds to keep a log segment file before deleting it.

If not set, the value of logRetentionMinutes is used.

This is the global cluster-level setting that can be overridden on a topic level by using the retentionMs setting.

config.kafka.kafkaConfig_2_1.logSegmentBytes integer (int64)

The maximum size of a single log file.

This is the global cluster-level setting that can be overridden on a topic level by using the segmentBytes setting.

config.kafka.kafkaConfig_2_1.logPreallocate boolean (boolean)

Whether to preallocate the log file when creating a new segment.

This is the global cluster-level setting that can be overridden on a topic level by using the preallocate setting.

config.kafka.kafkaConfig_2_6 object
config.kafka includes only one of the fields kafkaConfig_2_1, kafkaConfig_2_6

Kafka version 2.6 broker configuration.

config.kafka.kafkaConfig_2_6.compressionType string

Cluster topics compression type.

  • COMPRESSION_TYPE_UNCOMPRESSED: no codec (uncompressed).
  • COMPRESSION_TYPE_ZSTD: Zstandard codec.
  • COMPRESSION_TYPE_LZ4: LZ4 codec.
  • COMPRESSION_TYPE_SNAPPY: Snappy codec.
  • COMPRESSION_TYPE_GZIP: GZip codec.
  • COMPRESSION_TYPE_PRODUCER: the codec to use is set by a producer (can be any of ZSTD, LZ4, GZIP or SNAPPY codecs).
config.kafka.kafkaConfig_2_6.logFlushIntervalMessages integer (int64)

The number of messages accumulated on a log partition before messages are flushed to disk.

This is the global cluster-level setting that can be overridden on a topic level by using the flushMessages setting.

config.kafka.kafkaConfig_2_6.logFlushIntervalMs integer (int64)

The maximum time (in milliseconds) that a message in any topic is kept in memory before being flushed to disk. If not set, the value of logFlushSchedulerIntervalMs is used.

This is the global cluster-level setting that can be overridden on a topic level by using the flushMs setting.

config.kafka.kafkaConfig_2_6.logFlushSchedulerIntervalMs integer (int64)

The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. This check is done by the log flusher.

config.kafka.kafkaConfig_2_6.logRetentionBytes integer (int64)

Partition size limit; Kafka will discard old log segments to free up space if delete cleanupPolicy is in effect. This setting is helpful if you need to control the size of a log due to limited disk space.

This is the global cluster-level setting that can be overridden on a topic level by using the retentionBytes setting.

config.kafka.kafkaConfig_2_6.logRetentionHours integer (int64)

The number of hours to keep a log segment file before deleting it.

config.kafka.kafkaConfig_2_6.logRetentionMinutes integer (int64)

The number of minutes to keep a log segment file before deleting it.

If not set, the value of logRetentionHours is used.

config.kafka.kafkaConfig_2_6.logRetentionMs integer (int64)

The number of milliseconds to keep a log segment file before deleting it.

If not set, the value of logRetentionMinutes is used.

This is the global cluster-level setting that can be overridden on a topic level by using the retentionMs setting.

config.kafka.kafkaConfig_2_6.logSegmentBytes integer (int64)

The maximum size of a single log file.

This is the global cluster-level setting that can be overridden on a topic level by using the segmentBytes setting.

config.kafka.kafkaConfig_2_6.logPreallocate boolean (boolean)

Whether to preallocate the log file when creating a new segment.

This is the global cluster-level setting that can be overridden on a topic level by using the preallocate setting.

config.zookeeper object

Configuration and resource allocation for ZooKeeper hosts.

config.zookeeper.resources object

Resources allocated to ZooKeeper hosts.

config.zookeeper.resources.resourcePresetId string

ID of the preset for computational resources available to a host (CPU, memory, etc.). All available presets are listed in the documentation.

config.zookeeper.resources.diskSize string (int64)

Volume of the storage available to a host, in bytes.

config.zookeeper.resources.diskTypeId string

Type of the storage environment for the host.

config.zoneId[] string

IDs of availability zones where Kafka brokers reside.

config.brokersCount integer (int64)

The number of Kafka brokers deployed in each availability zone.

config.assignPublicIp boolean (boolean)

The flag that defines whether a public IP address is assigned to the cluster. If true, the Apache Kafka® cluster is available on the Internet via its public IP address.

networkId string

ID of the network that the cluster belongs to.

health string

Aggregated cluster health.

  • HEALTH_UNKNOWN: state of the cluster is unknown (health of all hosts in the cluster is UNKNOWN).
  • ALIVE: cluster is alive and well (health of all hosts in the cluster is ALIVE).
  • DEAD: cluster is inoperable (health of all hosts in the cluster is DEAD).
  • DEGRADED: cluster is in degraded state (health of at least one of the hosts in the cluster is not ALIVE).
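The bullets above define aggregated health entirely in terms of per-host health, which can be expressed as a small function. This is a sketch of the documented semantics for interpreting the enum, not service code; host health values are assumed to use the same names.

```python
# Aggregate per-host health into cluster health, per the documented rule:
# ALIVE when every host is ALIVE, DEAD when every host is DEAD,
# HEALTH_UNKNOWN when every host is UNKNOWN, DEGRADED otherwise.

def aggregate_health(host_healths: list[str]) -> str:
    if host_healths and all(h == "ALIVE" for h in host_healths):
        return "ALIVE"
    if host_healths and all(h == "DEAD" for h in host_healths):
        return "DEAD"
    if host_healths and all(h == "UNKNOWN" for h in host_healths):
        return "HEALTH_UNKNOWN"
    return "DEGRADED"
```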
status string

Current state of the cluster.

  • STATUS_UNKNOWN: cluster state is unknown.
  • CREATING: cluster is being created.
  • RUNNING: cluster is running normally.
  • ERROR: cluster encountered a problem and cannot operate.
  • UPDATING: cluster is being updated.
  • STOPPING: cluster is stopping.
  • STOPPED: cluster stopped.
  • STARTING: cluster is starting.
securityGroupIds[] string

User security group IDs.

© 2021 Yandex.Cloud LLC