Yandex Managed Service for Apache Kafka®

Method create

  • HTTP request
  • Body parameters
  • Response

Creates a new Apache Kafka® cluster in the specified folder.

HTTP request

POST https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters
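
The request can be issued from any HTTP client. Below is a minimal sketch using Python's requests library; the IAM token value is a placeholder (one way to obtain a token is the yc iam create-token CLI command), and the body dict follows the schema described under Body parameters.

import requests

IAM_TOKEN = "<your-IAM-token>"  # placeholder: substitute a real IAM token

def create_cluster(body: dict) -> dict:
    """POST the create request; `body` follows the Body parameters schema below."""
    resp = requests.post(
        "https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters",
        headers={"Authorization": f"Bearer {IAM_TOKEN}"},
        json=body,
    )
    resp.raise_for_status()
    return resp.json()  # an Operation resource (see Response)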

Body parameters

{
  "folderId": "string",
  "name": "string",
  "description": "string",
  "labels": "object",
  "environment": "string",
  "configSpec": {
    "version": "string",
    "kafka": {
      "resources": {
        "resourcePresetId": "string",
        "diskSize": "string",
        "diskTypeId": "string"
      },

      // `configSpec.kafka` includes only one of the fields `kafkaConfig_2_1`, `kafkaConfig_2_6`
      "kafkaConfig_2_1": {
        "compressionType": "string",
        "logFlushIntervalMessages": "integer",
        "logFlushIntervalMs": "integer",
        "logFlushSchedulerIntervalMs": "integer",
        "logRetentionBytes": "integer",
        "logRetentionHours": "integer",
        "logRetentionMinutes": "integer",
        "logRetentionMs": "integer",
        "logSegmentBytes": "integer",
        "logPreallocate": true
      },
      "kafkaConfig_2_6": {
        "compressionType": "string",
        "logFlushIntervalMessages": "integer",
        "logFlushIntervalMs": "integer",
        "logFlushSchedulerIntervalMs": "integer",
        "logRetentionBytes": "integer",
        "logRetentionHours": "integer",
        "logRetentionMinutes": "integer",
        "logRetentionMs": "integer",
        "logSegmentBytes": "integer",
        "logPreallocate": true
      },
      // end of the list of possible fields `configSpec.kafka`

    },
    "zookeeper": {
      "resources": {
        "resourcePresetId": "string",
        "diskSize": "string",
        "diskTypeId": "string"
      }
    },
    "zoneId": [
      "string"
    ],
    "brokersCount": "integer",
    "assignPublicIp": true
  },
  "topicSpecs": [
    {
      "name": "string",
      "partitions": "integer",
      "replicationFactor": "integer",

      // `topicSpecs[]` includes only one of the fields `topicConfig_2_1`, `topicConfig_2_6`
      "topicConfig_2_1": {
        "cleanupPolicy": "string",
        "compressionType": "string",
        "deleteRetentionMs": "integer",
        "fileDeleteDelayMs": "integer",
        "flushMessages": "integer",
        "flushMs": "integer",
        "minCompactionLagMs": "integer",
        "retentionBytes": "integer",
        "retentionMs": "integer",
        "maxMessageBytes": "integer",
        "minInsyncReplicas": "integer",
        "segmentBytes": "integer",
        "preallocate": true
      },
      "topicConfig_2_6": {
        "cleanupPolicy": "string",
        "compressionType": "string",
        "deleteRetentionMs": "integer",
        "fileDeleteDelayMs": "integer",
        "flushMessages": "integer",
        "flushMs": "integer",
        "minCompactionLagMs": "integer",
        "retentionBytes": "integer",
        "retentionMs": "integer",
        "maxMessageBytes": "integer",
        "minInsyncReplicas": "integer",
        "segmentBytes": "integer",
        "preallocate": true
      },
      // end of the list of possible fields `topicSpecs[]`

    }
  ],
  "userSpecs": [
    {
      "name": "string",
      "password": "string",
      "permissions": [
        {
          "topicName": "string",
          "role": "string"
        }
      ]
    }
  ],
  "networkId": "string",
  "subnetId": [
    "string"
  ],
  "securityGroupIds": [
    "string"
  ]
}
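
As an illustration, a minimal request body can be built as a Python dict like the one below. All IDs, names, and preset values are hypothetical placeholders; note that diskSize is passed as a string because the schema types it as string (int64), and that configSpec.kafka may carry at most one of kafkaConfig_2_1 / kafkaConfig_2_6 (omitted here entirely).

# A minimal illustrative body; every ID below is a placeholder.
body = {
    "folderId": "b1g-example-folder-id",
    "name": "my-kafka",
    "environment": "PRODUCTION",
    "configSpec": {
        "version": "2.6",
        "kafka": {
            "resources": {
                "resourcePresetId": "s2.micro",  # hypothetical preset ID
                "diskSize": "10737418240",       # 10 GiB, as a string (int64)
                "diskTypeId": "network-ssd",
            },
        },
        "zoneId": ["ru-central1-a"],
        "brokersCount": 1,
        "assignPublicIp": False,
    },
    "networkId": "enp-example-network-id",
    "subnetId": ["e9b-example-subnet-id"],
}

operation = create_cluster(body)  # helper sketched under HTTP request above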
Field Description
folderId string

Required. ID of the folder to create the Apache Kafka® cluster in.

To get the folder ID, make a list request.

The maximum string length in characters is 50.

name string

Required. Name of the Apache Kafka® cluster. The name must be unique within the folder.

The string length in characters must be 1-63. Value must match the regular expression [a-z]([-a-z0-9]{0,61}[a-z0-9])?.

description string

Description of the Apache Kafka® cluster.

The maximum string length in characters is 256.

labels object

Custom labels for the Apache Kafka® cluster as key:value pairs.

For example, "project": "mvp" or "source": "dictionary".

No more than 64 per resource. The string length in characters for each key must be 1-63. Each key must match the regular expression [a-z][-_./@0-9a-z]*. The maximum string length in characters for each value is 63. Each value must match the regular expression [-_./@0-9a-z]*.

environment string

Deployment environment of the Apache Kafka® cluster.

  • PRODUCTION: stable environment with a conservative update policy when only hotfixes are applied during regular maintenance.
  • PRESTABLE: environment with a more aggressive update policy when new versions are rolled out irrespective of backward compatibility.
configSpec object

Kafka and hosts configuration for the Apache Kafka® cluster.

configSpec.version string

Version of Apache Kafka® used in the cluster. Possible values: 2.1, 2.6.

configSpec.kafka object

Configuration and resource allocation for Kafka brokers.

configSpec.kafka.resources object

Resources allocated to Kafka brokers.

configSpec.kafka.resources.resourcePresetId string

ID of the preset for computational resources available to a host (CPU, memory, etc.). All available presets are listed in the documentation.

configSpec.kafka.resources.diskSize string (int64)

Volume of the storage available to a host, in bytes.

configSpec.kafka.resources.diskTypeId string

Type of the storage environment for the host.

configSpec.kafka.kafkaConfig_2_1 object
configSpec.kafka includes only one of the fields kafkaConfig_2_1, kafkaConfig_2_6

Kafka version 2.1 broker configuration.

configSpec.kafka.kafkaConfig_2_1.compressionType string

Cluster topics compression type.

  • COMPRESSION_TYPE_UNCOMPRESSED: no codec (uncompressed).
  • COMPRESSION_TYPE_ZSTD: Zstandard codec.
  • COMPRESSION_TYPE_LZ4: LZ4 codec.
  • COMPRESSION_TYPE_SNAPPY: Snappy codec.
  • COMPRESSION_TYPE_GZIP: GZip codec.
  • COMPRESSION_TYPE_PRODUCER: the codec to use is set by a producer (can be any of ZSTD, LZ4, GZIP or SNAPPY codecs).
configSpec.kafka.kafkaConfig_2_1.logFlushIntervalMessages integer (int64)

The number of messages accumulated on a log partition before messages are flushed to disk.

This is the global cluster-level setting that can be overridden on a topic level by using the flushMessages setting.

configSpec.kafka.kafkaConfig_2_1.logFlushIntervalMs integer (int64)

The maximum time (in milliseconds) that a message in any topic is kept in memory before being flushed to disk. If not set, the value of logFlushSchedulerIntervalMs is used.

This is the global cluster-level setting that can be overridden on a topic level by using the flushMs setting.

configSpec.kafka.kafkaConfig_2_1.logFlushSchedulerIntervalMs integer (int64)

The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. This check is done by the log flusher.

configSpec.kafka.kafkaConfig_2_1.logRetentionBytes integer (int64)

Partition size limit; Kafka will discard old log segments to free up space if the delete cleanupPolicy is in effect. This setting is helpful if you need to control the size of a log due to limited disk space.

This is the global cluster-level setting that can be overridden on a topic level by using the retentionBytes setting.

configSpec.kafka.kafkaConfig_2_1.logRetentionHours integer (int64)

The number of hours to keep a log segment file before deleting it.

configSpec.kafka.kafkaConfig_2_1.logRetentionMinutes integer (int64)

The number of minutes to keep a log segment file before deleting it.

If not set, the value of logRetentionHours is used.

configSpec.kafka.kafkaConfig_2_1.logRetentionMs integer (int64)

The number of milliseconds to keep a log segment file before deleting it.

If not set, the value of logRetentionMinutes is used.

This is the global cluster-level setting that can be overridden on a topic level by using the retentionMs setting.
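
The three retention settings above form a fallback chain: logRetentionMs takes precedence, then logRetentionMinutes, then logRetentionHours. A small sketch of that resolution order (the 168-hour fallback mirrors stock Kafka's log.retention.hours default and is an assumption here, not a documented service default):

def effective_retention_ms(kafka_config: dict) -> int:
    # logRetentionMs wins; otherwise fall back to minutes, then hours.
    if kafka_config.get("logRetentionMs") is not None:
        return kafka_config["logRetentionMs"]
    if kafka_config.get("logRetentionMinutes") is not None:
        return kafka_config["logRetentionMinutes"] * 60_000
    return kafka_config.get("logRetentionHours", 168) * 3_600_000  # assumed default: 7 days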

configSpec.kafka.kafkaConfig_2_1.logSegmentBytes integer (int64)

The maximum size of a single log file.

This is the global cluster-level setting that can be overridden on a topic level by using the segmentBytes setting.

configSpec.kafka.kafkaConfig_2_1.logPreallocate boolean (boolean)

Whether to preallocate a file when creating a new log segment.

This is the global cluster-level setting that can be overridden on a topic level by using the preallocate setting.

configSpec.kafka.kafkaConfig_2_6 object
configSpec.kafka includes only one of the fields kafkaConfig_2_1, kafkaConfig_2_6

Kafka version 2.6 broker configuration.

configSpec.kafka.kafkaConfig_2_6.compressionType string

Cluster topics compression type.

  • COMPRESSION_TYPE_UNCOMPRESSED: no codec (uncompressed).
  • COMPRESSION_TYPE_ZSTD: Zstandard codec.
  • COMPRESSION_TYPE_LZ4: LZ4 codec.
  • COMPRESSION_TYPE_SNAPPY: Snappy codec.
  • COMPRESSION_TYPE_GZIP: GZip codec.
  • COMPRESSION_TYPE_PRODUCER: the codec to use is set by a producer (can be any of ZSTD, LZ4, GZIP or SNAPPY codecs).
configSpec.kafka.kafkaConfig_2_6.logFlushIntervalMessages integer (int64)

The number of messages accumulated on a log partition before messages are flushed to disk.

This is the global cluster-level setting that can be overridden on a topic level by using the flushMessages setting.

configSpec.kafka.kafkaConfig_2_6.logFlushIntervalMs integer (int64)

The maximum time (in milliseconds) that a message in any topic is kept in memory before being flushed to disk. If not set, the value of logFlushSchedulerIntervalMs is used.

This is the global cluster-level setting that can be overridden on a topic level by using the flushMs setting.

configSpec.kafka.kafkaConfig_2_6.logFlushSchedulerIntervalMs integer (int64)

The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. This check is done by the log flusher.

configSpec.kafka.kafkaConfig_2_6.logRetentionBytes integer (int64)

Partition size limit; Kafka will discard old log segments to free up space if the delete cleanupPolicy is in effect. This setting is helpful if you need to control the size of a log due to limited disk space.

This is the global cluster-level setting that can be overridden on a topic level by using the retentionBytes setting.

configSpec.kafka.kafkaConfig_2_6.logRetentionHours integer (int64)

The number of hours to keep a log segment file before deleting it.

configSpec.kafka.kafkaConfig_2_6.logRetentionMinutes integer (int64)

The number of minutes to keep a log segment file before deleting it.

If not set, the value of logRetentionHours is used.

configSpec.kafka.kafkaConfig_2_6.logRetentionMs integer (int64)

The number of milliseconds to keep a log segment file before deleting it.

If not set, the value of logRetentionMinutes is used.

This is the global cluster-level setting that can be overridden on a topic level by using the retentionMs setting.

configSpec.kafka.kafkaConfig_2_6.logSegmentBytes integer (int64)

The maximum size of a single log file.

This is the global cluster-level setting that can be overridden on a topic level by using the segmentBytes setting.

configSpec.kafka.kafkaConfig_2_6.logPreallocate boolean (boolean)

Whether to preallocate a file when creating a new log segment.

This is the global cluster-level setting that can be overridden on a topic level by using the preallocate setting.

configSpec.zookeeper object

Configuration and resource allocation for ZooKeeper hosts.

configSpec.zookeeper.resources object

Resources allocated to ZooKeeper hosts.

configSpec.zookeeper.resources.resourcePresetId string

ID of the preset for computational resources available to a host (CPU, memory, etc.). All available presets are listed in the documentation.

configSpec.zookeeper.resources.diskSize string (int64)

Volume of the storage available to a host, in bytes.

configSpec.zookeeper.resources.diskTypeId string

Type of the storage environment for the host.

configSpec.zoneId[] string

IDs of availability zones where Kafka brokers reside.

configSpec.brokersCount integer (int64)

The number of Kafka brokers deployed in each availability zone.

configSpec.assignPublicIp boolean (boolean)

The flag that defines whether a public IP address is assigned to the cluster. If the value is true, then the Apache Kafka® cluster is available on the Internet via its public IP address.

topicSpecs[] object

One or more configurations of topics to be created in the Apache Kafka® cluster.

topicSpecs[].name string

Name of the topic.

topicSpecs[].partitions integer (int64)

The number of the topic's partitions.

topicSpecs[].replicationFactor integer (int64)

The number of copies of topic data kept in the cluster.

topicSpecs[].topicConfig_2_1 object
topicSpecs[] includes only one of the fields topicConfig_2_1, topicConfig_2_6

Topic settings for version 2.1.

topicSpecs[].topicConfig_2_1.cleanupPolicy string

Retention policy to use on old log messages.

  • CLEANUP_POLICY_DELETE: this policy discards log segments when either their retention time or log size limit is reached. See also: logRetentionMs and other similar parameters.
  • CLEANUP_POLICY_COMPACT: this policy compacts messages in the log.
  • CLEANUP_POLICY_COMPACT_AND_DELETE: this policy uses both compaction and deletion for messages and log segments.
topicSpecs[].topicConfig_2_1.compressionType string

The compression type for a given topic.

  • COMPRESSION_TYPE_UNCOMPRESSED: no codec (uncompressed).
  • COMPRESSION_TYPE_ZSTD: Zstandard codec.
  • COMPRESSION_TYPE_LZ4: LZ4 codec.
  • COMPRESSION_TYPE_SNAPPY: Snappy codec.
  • COMPRESSION_TYPE_GZIP: GZip codec.
  • COMPRESSION_TYPE_PRODUCER: the codec to use is set by a producer (can be any of ZSTD, LZ4, GZIP or SNAPPY codecs).
topicSpecs[].topicConfig_2_1.deleteRetentionMs integer (int64)

The amount of time in milliseconds to retain delete tombstone markers for log compacted topics.

topicSpecs[].topicConfig_2_1.fileDeleteDelayMs integer (int64)

The time to wait before deleting a file from the filesystem.

topicSpecs[].topicConfig_2_1.flushMessages integer (int64)

The number of messages accumulated on a log partition before messages are flushed to disk.

This setting overrides the cluster-level logFlushIntervalMessages setting on the topic level.

topicSpecs[].topicConfig_2_1.flushMs integer (int64)

The maximum time in milliseconds that a message in the topic is kept in memory before being flushed to disk.

This setting overrides the cluster-level logFlushIntervalMs setting on the topic level.

topicSpecs[].topicConfig_2_1.minCompactionLagMs integer (int64)

The minimum time in milliseconds a message will remain uncompacted in the log.

topicSpecs[].topicConfig_2_1.retentionBytes integer (int64)

The maximum size a partition can grow to before Kafka will discard old log segments to free up space if the delete cleanupPolicy is in effect. It is helpful if you need to control the size of a log due to limited disk space.

This setting overrides the cluster-level logRetentionBytes setting on the topic level.

topicSpecs[].topicConfig_2_1.retentionMs integer (int64)

The number of milliseconds to keep a log segment's file before deleting it.

This setting overrides the cluster-level logRetentionMs setting on the topic level.

topicSpecs[].topicConfig_2_1.maxMessageBytes integer (int64)

The largest record batch size allowed in the topic.

topicSpecs[].topicConfig_2_1.minInsyncReplicas integer (int64)

This configuration specifies the minimum number of replicas that must acknowledge a write to the topic for the write to be considered successful (when a producer sets acks to "all").

topicSpecs[].topicConfig_2_1.segmentBytes integer (int64)

This configuration controls the segment file size for the log. Retention and cleaning are always done one file at a time, so a larger segment size means fewer files but less granular control over retention.

This setting overrides the cluster-level logSegmentBytes setting on the topic level.

topicSpecs[].topicConfig_2_1.preallocate boolean (boolean)

True if the file should be preallocated on disk when creating a new log segment.

This setting overrides the cluster-level logPreallocate setting on the topic level.

topicSpecs[].topicConfig_2_6 object
topicSpecs[] includes only one of the fields topicConfig_2_1, topicConfig_2_6

Topic settings for version 2.6.

topicSpecs[].topicConfig_2_6.cleanupPolicy string

Retention policy to use on old log messages.

  • CLEANUP_POLICY_DELETE: this policy discards log segments when either their retention time or log size limit is reached. See also: logRetentionMs and other similar parameters.
  • CLEANUP_POLICY_COMPACT: this policy compacts messages in the log.
  • CLEANUP_POLICY_COMPACT_AND_DELETE: this policy uses both compaction and deletion for messages and log segments.
topicSpecs[].topicConfig_2_6.compressionType string

The compression type for a given topic.

  • COMPRESSION_TYPE_UNCOMPRESSED: no codec (uncompressed).
  • COMPRESSION_TYPE_ZSTD: Zstandard codec.
  • COMPRESSION_TYPE_LZ4: LZ4 codec.
  • COMPRESSION_TYPE_SNAPPY: Snappy codec.
  • COMPRESSION_TYPE_GZIP: GZip codec.
  • COMPRESSION_TYPE_PRODUCER: the codec to use is set by a producer (can be any of ZSTD, LZ4, GZIP or SNAPPY codecs).
topicSpecs[].topicConfig_2_6.deleteRetentionMs integer (int64)

The amount of time in milliseconds to retain delete tombstone markers for log compacted topics.

topicSpecs[].topicConfig_2_6.fileDeleteDelayMs integer (int64)

The time to wait before deleting a file from the filesystem.

topicSpecs[].topicConfig_2_6.flushMessages integer (int64)

The number of messages accumulated on a log partition before messages are flushed to disk.

This setting overrides the cluster-level logFlushIntervalMessages setting on the topic level.

topicSpecs[].topicConfig_2_6.flushMs integer (int64)

The maximum time in milliseconds that a message in the topic is kept in memory before being flushed to disk.

This setting overrides the cluster-level logFlushIntervalMs setting on the topic level.

topicSpecs[].topicConfig_2_6.minCompactionLagMs integer (int64)

The minimum time in milliseconds a message will remain uncompacted in the log.

topicSpecs[].topicConfig_2_6.retentionBytes integer (int64)

The maximum size a partition can grow to before Kafka will discard old log segments to free up space if the delete cleanupPolicy is in effect. It is helpful if you need to control the size of a log due to limited disk space.

This setting overrides the cluster-level logRetentionBytes setting on the topic level.

topicSpecs[].topicConfig_2_6.retentionMs integer (int64)

The number of milliseconds to keep a log segment's file before deleting it.

This setting overrides the cluster-level logRetentionMs setting on the topic level.

topicSpecs[].topicConfig_2_6.maxMessageBytes integer (int64)

The largest record batch size allowed in the topic.

topicSpecs[].topicConfig_2_6.minInsyncReplicas integer (int64)

This configuration specifies the minimum number of replicas that must acknowledge a write to the topic for the write to be considered successful (when a producer sets acks to "all").

topicSpecs[].topicConfig_2_6.segmentBytes integer (int64)

This configuration controls the segment file size for the log. Retention and cleaning are always done one file at a time, so a larger segment size means fewer files but less granular control over retention.

This setting overrides the cluster-level logSegmentBytes setting on the topic level.

topicSpecs[].topicConfig_2_6.preallocate boolean (boolean)

True if the file should be preallocated on disk when creating a new log segment.

This setting overrides the cluster-level logPreallocate setting on the topic level.
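
Putting the topic fields together, an entry of topicSpecs[] for a three-partition topic with a version 2.6 config might look like the following sketch; the topic name and values are illustrative only.

# Illustrative topicSpecs[] entry; only topicConfig_2_6 is set, since
# at most one of topicConfig_2_1 / topicConfig_2_6 may be present.
topic_spec = {
    "name": "events",
    "partitions": 3,
    "replicationFactor": 2,
    "topicConfig_2_6": {
        "cleanupPolicy": "CLEANUP_POLICY_DELETE",
        "retentionMs": 86_400_000,  # keep log segments for 24 hours
        "minInsyncReplicas": 2,
    },
}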

userSpecs[] object

Configurations of accounts to be created in the Apache Kafka® cluster.

userSpecs[].name string

Required. Name of the Kafka user.

The string length in characters must be 1-63. Value must match the regular expression [a-zA-Z0-9_]*.

userSpecs[].password string

Required. Password of the Kafka user.

The string length in characters must be 8-128.

userSpecs[].permissions[] object

Set of permissions granted to the user.

userSpecs[].permissions[].topicName string

Name of the topic that the permission grants access to.

To get the topic name, make a list request.

userSpecs[].permissions[].role string

Access role type to grant to the user.

  • ACCESS_ROLE_PRODUCER: producer role for the user.
  • ACCESS_ROLE_CONSUMER: consumer role for the user.
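
For example, an account that writes to one topic and reads from another could be specified as below; the user name, password, and topic names are placeholders.

# Illustrative userSpecs[] entry; credentials are placeholders.
user_spec = {
    "name": "app_user",
    "password": "<strong-password-8-to-128-chars>",
    "permissions": [
        {"topicName": "events", "role": "ACCESS_ROLE_PRODUCER"},
        {"topicName": "metrics", "role": "ACCESS_ROLE_CONSUMER"},
    ],
}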
networkId string

ID of the network to create the Apache Kafka® cluster in.

The maximum string length in characters is 50.

subnetId[] string

IDs of subnets to create brokers in.

securityGroupIds[] string

IDs of user security groups for the cluster.

Response

HTTP Code: 200 - OK

{
  "id": "string",
  "description": "string",
  "createdAt": "string",
  "createdBy": "string",
  "modifiedAt": "string",
  "done": true,
  "metadata": "object",

  //  includes only one of the fields `error`, `response`
  "error": {
    "code": "integer",
    "message": "string",
    "details": [
      "object"
    ]
  },
  "response": "object",
  // end of the list of possible fields

}

An Operation resource. For more information, see Operation.

Field Description
id string

ID of the operation.

description string

Description of the operation. 0-256 characters long.

createdAt string (date-time)

Creation timestamp.

String in RFC3339 text format.

createdBy string

ID of the user or service account who initiated the operation.

modifiedAt string (date-time)

The time when the Operation resource was last modified.

String in RFC3339 text format.

done boolean (boolean)

If the value is false, it means the operation is still in progress. If true, the operation is completed, and either error or response is available.

metadata object

Service-specific metadata associated with the operation. It typically contains the ID of the target resource that the operation is performed on. Any method that returns a long-running operation should document the metadata type, if any.

error object
includes only one of the fields error, response

The error result of the operation in case of failure or cancellation.

error.code integer (int32)

Error code. An enum value of google.rpc.Code.

error.message string

An error message.

error.details[] object

A list of messages that carry the error details.

response object
includes only one of the fields error, response

The normal response of the operation in case of success. If the original method returns no data on success, such as Delete, the response is google.protobuf.Empty. If the original method is the standard Create/Update, the response should be the target resource of the operation. Any method that returns a long-running operation should document the response type, if any.
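
Because cluster creation is long-running, a client typically polls the operation returned by this method until done becomes true. A minimal sketch, assuming the standard Yandex.Cloud Operation endpoint at operation.api.cloud.yandex.net:

import time
import requests

def wait_for_operation(operation_id: str, iam_token: str, interval_s: float = 5.0) -> dict:
    # Poll the Operation resource until `done` is true, then return its
    # `response` or raise if it finished with `error`.
    url = f"https://operation.api.cloud.yandex.net/operations/{operation_id}"
    headers = {"Authorization": f"Bearer {iam_token}"}
    while True:
        op = requests.get(url, headers=headers).json()
        if op.get("done"):
            if "error" in op:
                raise RuntimeError(f"operation failed: {op['error']}")
            return op.get("response", {})
        time.sleep(interval_s)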
