Yandex Managed Service for Apache Kafka®

Method update

  • HTTP request
  • Path parameters
  • Body parameters
  • Response

Updates the specified Apache Kafka® cluster.

HTTP request

PATCH https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters/{clusterId}

Path parameters

Parameter Description
clusterId Required. ID of the Apache Kafka® cluster to update. To get the Apache Kafka® cluster ID, make a list request. The maximum string length in characters is 50.
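As a sketch, the call above can be composed with Python's standard library. The cluster ID and IAM token below are placeholders, not real values, and the request is built but deliberately not sent.

```python
import json
import urllib.request

# Placeholder values -- substitute a real cluster ID (max 50 characters)
# and a valid IAM token before sending.
cluster_id = "your-cluster-id"
iam_token = "your-iam-token"

body = {
    "updateMask": "description",
    "description": "Updated production cluster",
}

request = urllib.request.Request(
    url=f"https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters/{cluster_id}",
    data=json.dumps(body).encode("utf-8"),
    method="PATCH",
    headers={
        "Authorization": f"Bearer {iam_token}",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(request) would send the call; it is omitted so the
# sketch stays runnable without network access or real credentials.
```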

Body parameters

{
  "updateMask": "string",
  "description": "string",
  "labels": "object",
  "configSpec": {
    "version": "string",
    "kafka": {
      "resources": {
        "resourcePresetId": "string",
        "diskSize": "string",
        "diskTypeId": "string"
      },

      // `configSpec.kafka` includes only one of the fields `kafkaConfig_2_1`, `kafkaConfig_2_6`
      "kafkaConfig_2_1": {
        "compressionType": "string",
        "logFlushIntervalMessages": "integer",
        "logFlushIntervalMs": "integer",
        "logFlushSchedulerIntervalMs": "integer",
        "logRetentionBytes": "integer",
        "logRetentionHours": "integer",
        "logRetentionMinutes": "integer",
        "logRetentionMs": "integer",
        "logSegmentBytes": "integer",
        "logPreallocate": true
      },
      "kafkaConfig_2_6": {
        "compressionType": "string",
        "logFlushIntervalMessages": "integer",
        "logFlushIntervalMs": "integer",
        "logFlushSchedulerIntervalMs": "integer",
        "logRetentionBytes": "integer",
        "logRetentionHours": "integer",
        "logRetentionMinutes": "integer",
        "logRetentionMs": "integer",
        "logSegmentBytes": "integer",
        "logPreallocate": true
      },
      // end of the list of possible fields `configSpec.kafka`

    },
    "zookeeper": {
      "resources": {
        "resourcePresetId": "string",
        "diskSize": "string",
        "diskTypeId": "string"
      }
    },
    "zoneId": [
      "string"
    ],
    "brokersCount": "integer",
    "assignPublicIp": true
  },
  "name": "string",
  "securityGroupIds": [
    "string"
  ]
}
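A minimal request body honoring the "only one of" constraint might look like the sketch below. The values are illustrative, and the dotted updateMask paths follow the usual gRPC FieldMask form; check the service's accepted paths before relying on them.

```python
import json

# Illustrative body: resize broker disks and tune retention on a version 2.6
# cluster. Field names come from the schema above; the values are placeholders.
body = {
    "updateMask": (
        "configSpec.kafka.resources.diskSize,"
        "configSpec.kafka.kafkaConfig_2_6.logRetentionBytes"
    ),
    "configSpec": {
        "kafka": {
            # diskSize is a string (int64): 32 GB expressed in bytes.
            "resources": {"diskSize": str(32 * 2**30)},
            # Only one of kafkaConfig_2_1 / kafkaConfig_2_6 may be present.
            "kafkaConfig_2_6": {"logRetentionBytes": str(10 * 2**30)},
        }
    },
}
payload = json.dumps(body)
```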
Field Description
updateMask string

A comma-separated list of all fields to be updated. Only the specified fields will be changed; the others are left untouched. If a field is listed in updateMask but no value for it is sent in the request, the field is reset to its default value (null or 0 for most fields).

If updateMask is not sent in the request, all fields are updated: fields specified in the request take the provided values, and all remaining fields are reset to their defaults.
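The masking rule can be illustrated with a small, purely local sketch (not part of the API): only fields named in updateMask take their submitted values, and without a mask every field is updated or reset.

```python
def apply_update(current, request, update_mask, defaults):
    """Illustrative merge following the documented updateMask semantics."""
    result = dict(current)
    if update_mask is None:
        # No mask: every field is updated, and fields missing from the
        # request are reset to their defaults.
        fields = list(defaults)
    else:
        fields = [f.strip() for f in update_mask.split(",")]
    for field in fields:
        result[field] = request.get(field, defaults.get(field))
    return result

current = {"description": "old", "name": "kafka-prod"}
defaults = {"description": None, "name": None}

# `name` is not in the mask, so it stays untouched even though it is
# absent from the request body.
masked = apply_update(current, {"description": "new"}, "description", defaults)
# Without a mask, the unspecified `name` is reset to its default.
unmasked = apply_update(current, {"description": "new"}, None, defaults)
```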

description string

New description of the Apache Kafka® cluster.

The maximum string length in characters is 256.

labels object

Custom labels for the Apache Kafka® cluster as key:value pairs.

For example, "project": "mvp" or "source": "dictionary".

The new set of labels will completely replace the old ones. To add a label, request the current set with the get method, then send an update request with the new label added to the set.

No more than 64 per resource. The string length in characters for each key must be 1-63. Each key must match the regular expression [a-z][-_0-9a-z]*. The maximum string length in characters for each value is 63. Each value must match the regular expression [-_0-9a-z]*.
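The key and value constraints can be checked client-side before sending. This helper is an illustration, not part of the API.

```python
import re

# Constraints from the schema: at most 64 labels; keys are 1-63 characters
# matching [a-z][-_0-9a-z]*; values are up to 63 characters matching [-_0-9a-z]*.
KEY_RE = re.compile(r"[a-z][-_0-9a-z]*")
VALUE_RE = re.compile(r"[-_0-9a-z]*")

def validate_labels(labels):
    """Client-side check of the documented label constraints (illustrative)."""
    if len(labels) > 64:
        raise ValueError("no more than 64 labels per resource")
    for key, value in labels.items():
        if not 1 <= len(key) <= 63 or not KEY_RE.fullmatch(key):
            raise ValueError(f"invalid label key: {key!r}")
        if len(value) > 63 or not VALUE_RE.fullmatch(value):
            raise ValueError(f"invalid label value: {value!r}")

validate_labels({"project": "mvp", "source": "dictionary"})  # passes silently
```

Because an update completely replaces the label set, the usual flow is: fetch the current labels with the get method, modify the returned set, validate it, then send the full set in the update request.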

configSpec object

New configuration and resources for hosts in the Apache Kafka® cluster.

Use updateMask to prevent reverting all cluster settings that are not listed in configSpec to their default values.

configSpec.version string

Version of Apache Kafka® used in the cluster. Possible values: 2.1, 2.6.

configSpec.kafka object

Configuration and resource allocation for Kafka brokers.

configSpec.kafka.resources object

Resources allocated to Kafka brokers.
configSpec.kafka.resources.resourcePresetId string

ID of the preset for computational resources available to a host (CPU, memory, etc.). All available presets are listed in the documentation.

configSpec.kafka.resources.diskSize string (int64)

Volume of the storage available to a host, in bytes.

configSpec.kafka.resources.diskTypeId string

Type of the storage environment for the host.

configSpec.kafka.kafkaConfig_2_1 object

configSpec.kafka includes only one of the fields kafkaConfig_2_1, kafkaConfig_2_6.

Kafka version 2.1 broker configuration.

configSpec.kafka.kafkaConfig_2_1.compressionType string

Cluster topics compression type.

  • COMPRESSION_TYPE_UNCOMPRESSED: no codec (uncompressed).
  • COMPRESSION_TYPE_ZSTD: Zstandard codec.
  • COMPRESSION_TYPE_LZ4: LZ4 codec.
  • COMPRESSION_TYPE_SNAPPY: Snappy codec.
  • COMPRESSION_TYPE_GZIP: GZip codec.
  • COMPRESSION_TYPE_PRODUCER: the codec to use is set by a producer (can be any of ZSTD, LZ4, GZIP or SNAPPY codecs).
configSpec.kafka.kafkaConfig_2_1.logFlushIntervalMessages integer (int64)

The number of messages accumulated on a log partition before messages are flushed to disk.

This is the global cluster-level setting that can be overridden on a topic level by using the flushMessages setting.

configSpec.kafka.kafkaConfig_2_1.logFlushIntervalMs integer (int64)

The maximum time (in milliseconds) that a message in any topic is kept in memory before it is flushed to disk. If not set, the value of logFlushSchedulerIntervalMs is used.

This is the global cluster-level setting that can be overridden on a topic level by using the flushMs setting.

configSpec.kafka.kafkaConfig_2_1.logFlushSchedulerIntervalMs integer (int64)

The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. This check is done by the log flusher.

configSpec.kafka.kafkaConfig_2_1.logRetentionBytes integer (int64)

Partition size limit; Kafka will discard old log segments to free up space if delete cleanupPolicy is in effect. This setting is helpful if you need to control the size of a log due to limited disk space.

This is the global cluster-level setting that can be overridden on a topic level by using the retentionBytes setting.

configSpec.kafka.kafkaConfig_2_1.logRetentionHours integer (int64)

The number of hours to keep a log segment file before deleting it.

configSpec.kafka.kafkaConfig_2_1.logRetentionMinutes integer (int64)

The number of minutes to keep a log segment file before deleting it.

If not set, the value of logRetentionHours is used.

configSpec.kafka.kafkaConfig_2_1.logRetentionMs integer (int64)

The number of milliseconds to keep a log segment file before deleting it.

If not set, the value of logRetentionMinutes is used.

This is the global cluster-level setting that can be overridden on a topic level by using the retentionMs setting.
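The fallback chain among the three retention settings (logRetentionMs, then logRetentionMinutes, then logRetentionHours) can be sketched as a small helper; this is an illustration of the documented precedence, not part of the API.

```python
def effective_retention_ms(config):
    """Resolve the effective retention in milliseconds per the documented
    fallback: logRetentionMs, else logRetentionMinutes, else logRetentionHours."""
    if config.get("logRetentionMs") is not None:
        return config["logRetentionMs"]
    if config.get("logRetentionMinutes") is not None:
        return config["logRetentionMinutes"] * 60 * 1000
    if config.get("logRetentionHours") is not None:
        return config["logRetentionHours"] * 60 * 60 * 1000
    return None

# 168 hours (one week) expressed in milliseconds.
retention = effective_retention_ms({"logRetentionHours": 168})
```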

configSpec.kafka.kafkaConfig_2_1.logSegmentBytes integer (int64)

The maximum size of a single log file.

This is the global cluster-level setting that can be overridden on a topic level by using the segmentBytes setting.

configSpec.kafka.kafkaConfig_2_1.logPreallocate boolean (boolean)

Whether to preallocate the file when creating a new segment.

This is the global cluster-level setting that can be overridden on a topic level by using the preallocate setting.

configSpec.kafka.kafkaConfig_2_6 object

configSpec.kafka includes only one of the fields kafkaConfig_2_1, kafkaConfig_2_6.

Kafka version 2.6 broker configuration.

configSpec.kafka.kafkaConfig_2_6.compressionType string

Cluster topics compression type.

  • COMPRESSION_TYPE_UNCOMPRESSED: no codec (uncompressed).
  • COMPRESSION_TYPE_ZSTD: Zstandard codec.
  • COMPRESSION_TYPE_LZ4: LZ4 codec.
  • COMPRESSION_TYPE_SNAPPY: Snappy codec.
  • COMPRESSION_TYPE_GZIP: GZip codec.
  • COMPRESSION_TYPE_PRODUCER: the codec to use is set by a producer (can be any of ZSTD, LZ4, GZIP or SNAPPY codecs).
configSpec.kafka.kafkaConfig_2_6.logFlushIntervalMessages integer (int64)

The number of messages accumulated on a log partition before messages are flushed to disk.

This is the global cluster-level setting that can be overridden on a topic level by using the flushMessages setting.

configSpec.kafka.kafkaConfig_2_6.logFlushIntervalMs integer (int64)

The maximum time (in milliseconds) that a message in any topic is kept in memory before it is flushed to disk. If not set, the value of logFlushSchedulerIntervalMs is used.

This is the global cluster-level setting that can be overridden on a topic level by using the flushMs setting.

configSpec.kafka.kafkaConfig_2_6.logFlushSchedulerIntervalMs integer (int64)

The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. This check is done by the log flusher.

configSpec.kafka.kafkaConfig_2_6.logRetentionBytes integer (int64)

Partition size limit; Kafka will discard old log segments to free up space if delete cleanupPolicy is in effect. This setting is helpful if you need to control the size of a log due to limited disk space.

This is the global cluster-level setting that can be overridden on a topic level by using the retentionBytes setting.

configSpec.kafka.kafkaConfig_2_6.logRetentionHours integer (int64)

The number of hours to keep a log segment file before deleting it.

configSpec.kafka.kafkaConfig_2_6.logRetentionMinutes integer (int64)

The number of minutes to keep a log segment file before deleting it.

If not set, the value of logRetentionHours is used.

configSpec.kafka.kafkaConfig_2_6.logRetentionMs integer (int64)

The number of milliseconds to keep a log segment file before deleting it.

If not set, the value of logRetentionMinutes is used.

This is the global cluster-level setting that can be overridden on a topic level by using the retentionMs setting.

configSpec.kafka.kafkaConfig_2_6.logSegmentBytes integer (int64)

The maximum size of a single log file.

This is the global cluster-level setting that can be overridden on a topic level by using the segmentBytes setting.

configSpec.kafka.kafkaConfig_2_6.logPreallocate boolean (boolean)

Whether to preallocate the file when creating a new segment.

This is the global cluster-level setting that can be overridden on a topic level by using the preallocate setting.

configSpec.zookeeper object

Configuration and resource allocation for ZooKeeper hosts.

configSpec.zookeeper.resources object

Resources allocated to ZooKeeper hosts.

configSpec.zookeeper.resources.resourcePresetId string

ID of the preset for computational resources available to a host (CPU, memory, etc.). All available presets are listed in the documentation.

configSpec.zookeeper.resources.diskSize string (int64)

Volume of the storage available to a host, in bytes.

configSpec.zookeeper.resources.diskTypeId string

Type of the storage environment for the host.

configSpec.zoneId[] string

IDs of availability zones where Kafka brokers reside.

configSpec.brokersCount integer (int64)

The number of Kafka brokers deployed in each availability zone.

configSpec.assignPublicIp boolean (boolean)

The flag that defines whether a public IP address is assigned to the cluster. If true, the Apache Kafka® cluster is available on the Internet via its public IP address.

name string

New name for the Apache Kafka® cluster.

The maximum string length in characters is 63. Value must match the regular expression [a-zA-Z0-9_-]*.

securityGroupIds[] string

User security group IDs.

Response

HTTP Code: 200 - OK

{
  "id": "string",
  "description": "string",
  "createdAt": "string",
  "createdBy": "string",
  "modifiedAt": "string",
  "done": true,
  "metadata": "object",

  // includes only one of the fields `error`, `response`
  "error": {
    "code": "integer",
    "message": "string",
    "details": [
      "object"
    ]
  },
  "response": "object",
  // end of the list of possible fields

}

An Operation resource. For more information, see Operation.

Field Description
id string

ID of the operation.

description string

Description of the operation. 0-256 characters long.

createdAt string (date-time)

Creation timestamp.

String in RFC3339 text format.

createdBy string

ID of the user or service account who initiated the operation.

modifiedAt string (date-time)

The time when the Operation resource was last modified.

String in RFC3339 text format.

done boolean (boolean)

If the value is false, it means the operation is still in progress. If true, the operation is completed, and either error or response is available.

metadata object

Service-specific metadata associated with the operation. It typically contains the ID of the target resource that the operation is performed on. Any method that returns a long-running operation should document the metadata type, if any.

error object

The error result of the operation in case of failure or cancellation.

Includes only one of the fields error, response.

error.code integer (int32)

Error code. An enum value of google.rpc.Code.

error.message string

An error message.

error.details[] object

A list of messages that carry the error details.

response object

Includes only one of the fields error, response.

The normal response of the operation in case of success. If the original method returns no data on success, such as Delete, the response is google.protobuf.Empty. If the original method is the standard Create/Update, the response should be the target resource of the operation. Any method that returns a long-running operation should document the response type, if any.
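The done/error/response semantics above can be summarized in a small helper. This is an illustration of how a caller might classify the returned Operation, not a client library.

```python
def operation_status(operation):
    """Classify an Operation resource per the documented semantics:
    not done -> running; done with error -> failed; done otherwise -> succeeded."""
    if not operation.get("done"):
        return "running"
    if "error" in operation:
        # `error` and `response` are mutually exclusive, so a present
        # `error` on a finished operation means failure or cancellation.
        return f"failed: code {operation['error'].get('code')}"
    return "succeeded"

status = operation_status({"done": True, "response": {}})
```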

© 2021 Yandex.Cloud LLC