© 2023 Yandex.Cloud LLC

Managed Service for Apache Kafka® API, REST: Topic.create

Written by Yandex Cloud
  • HTTP request
  • Path parameters
  • Body parameters
  • Response

Creates a new Kafka topic in the specified cluster.

HTTP request

POST https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters/{clusterId}/topics

Path parameters

Parameter Description
clusterId

Required. ID of the Apache Kafka® cluster to create a topic in.

To get the cluster ID, make a list request.

The maximum string length in characters is 50.
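As a sketch, the endpoint URL can be assembled and the cluster ID validated client-side before sending the request. The cluster ID below is a made-up placeholder, not a real cluster.

```python
# Sketch: building the Topic.create endpoint URL for a given cluster.
BASE_URL = "https://mdb.api.cloud.yandex.net/managed-kafka/v1"

def topic_create_url(cluster_id: str) -> str:
    """Return the Topic.create endpoint for the given cluster.

    The clusterId path parameter is required and limited to 50
    characters, so we validate it before issuing the request.
    """
    if not cluster_id:
        raise ValueError("clusterId is required")
    if len(cluster_id) > 50:
        raise ValueError("clusterId must be at most 50 characters long")
    return f"{BASE_URL}/clusters/{cluster_id}/topics"

# The actual POST must carry an IAM token: Authorization: Bearer <token>.
print(topic_create_url("c9qbkmoiimsl1234abcd"))  # placeholder cluster ID
```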

Body parameters

{
  "topicSpec": {
    "name": "string",
    "partitions": "integer",
    "replicationFactor": "integer",

    // `topicSpec` includes only one of the fields `topicConfig_2_1`, `topicConfig_2_6`, `topicConfig_2_8`, `topicConfig_3`
    "topicConfig_2_1": {
      "cleanupPolicy": "string",
      "compressionType": "string",
      "deleteRetentionMs": "integer",
      "fileDeleteDelayMs": "integer",
      "flushMessages": "integer",
      "flushMs": "integer",
      "minCompactionLagMs": "integer",
      "retentionBytes": "integer",
      "retentionMs": "integer",
      "maxMessageBytes": "integer",
      "minInsyncReplicas": "integer",
      "segmentBytes": "integer",
      "preallocate": true
    },
    "topicConfig_2_6": {
      "cleanupPolicy": "string",
      "compressionType": "string",
      "deleteRetentionMs": "integer",
      "fileDeleteDelayMs": "integer",
      "flushMessages": "integer",
      "flushMs": "integer",
      "minCompactionLagMs": "integer",
      "retentionBytes": "integer",
      "retentionMs": "integer",
      "maxMessageBytes": "integer",
      "minInsyncReplicas": "integer",
      "segmentBytes": "integer",
      "preallocate": true
    },
    "topicConfig_2_8": {
      "cleanupPolicy": "string",
      "compressionType": "string",
      "deleteRetentionMs": "integer",
      "fileDeleteDelayMs": "integer",
      "flushMessages": "integer",
      "flushMs": "integer",
      "minCompactionLagMs": "integer",
      "retentionBytes": "integer",
      "retentionMs": "integer",
      "maxMessageBytes": "integer",
      "minInsyncReplicas": "integer",
      "segmentBytes": "integer",
      "preallocate": true
    },
    "topicConfig_3": {
      "cleanupPolicy": "string",
      "compressionType": "string",
      "deleteRetentionMs": "integer",
      "fileDeleteDelayMs": "integer",
      "flushMessages": "integer",
      "flushMs": "integer",
      "minCompactionLagMs": "integer",
      "retentionBytes": "integer",
      "retentionMs": "integer",
      "maxMessageBytes": "integer",
      "minInsyncReplicas": "integer",
      "segmentBytes": "integer",
      "preallocate": true
    },
    // end of the list of possible fields of `topicSpec`

  }
}
Field Description
topicSpec object

Required. Configuration of the topic to create.

topicSpec.name string

Name of the topic.

topicSpec.partitions integer (int64)

The number of partitions in the topic.

topicSpec.replicationFactor integer (int64)

The number of copies of topic data kept in the cluster (replication factor).

topicSpec.topicConfig_2_1 object
topicSpec includes only one of the fields topicConfig_2_1, topicConfig_2_6, topicConfig_2_8, topicConfig_3

Deprecated. Apache Kafka® version 2.1 is not supported in Yandex Cloud.

topicSpec.topicConfig_2_1.cleanupPolicy string

Retention policy to use on old log messages.

  • CLEANUP_POLICY_DELETE: this policy discards log segments when either their retention time or log size limit is reached. See also: logRetentionMs and other similar parameters.
  • CLEANUP_POLICY_COMPACT: this policy compacts messages in the log.
  • CLEANUP_POLICY_COMPACT_AND_DELETE: this policy uses both compaction and deletion for messages and log segments.

topicSpec.topicConfig_2_1.compressionType string

The compression type for the topic.

  • COMPRESSION_TYPE_UNCOMPRESSED: no codec (uncompressed).
  • COMPRESSION_TYPE_ZSTD: Zstandard codec.
  • COMPRESSION_TYPE_LZ4: LZ4 codec.
  • COMPRESSION_TYPE_SNAPPY: Snappy codec.
  • COMPRESSION_TYPE_GZIP: GZip codec.
  • COMPRESSION_TYPE_PRODUCER: the codec is set by the producer (can be any of the ZSTD, LZ4, GZIP, or SNAPPY codecs).

topicSpec.topicConfig_2_1.deleteRetentionMs integer (int64)

The amount of time in milliseconds to retain delete tombstone markers for log compacted topics.

topicSpec.topicConfig_2_1.fileDeleteDelayMs integer (int64)

The time to wait before deleting a file from the filesystem.

topicSpec.topicConfig_2_1.flushMessages integer (int64)

The number of messages accumulated on a log partition before messages are flushed to disk.

This setting overrides the cluster-level logFlushIntervalMessages setting on the topic level.

topicSpec.topicConfig_2_1.flushMs integer (int64)

The maximum time in milliseconds that a message in the topic is kept in memory before being flushed to disk.

This setting overrides the cluster-level logFlushIntervalMs setting on the topic level.

topicSpec.topicConfig_2_1.minCompactionLagMs integer (int64)

The minimum time in milliseconds a message will remain uncompacted in the log.

topicSpec.topicConfig_2_1.retentionBytes integer (int64)

The maximum size a partition can grow to before Kafka discards old log segments to free up space, if the delete cleanupPolicy is in effect. This is helpful if you need to control the size of the log due to limited disk space.

This setting overrides the cluster-level logRetentionBytes setting on the topic level.

topicSpec.topicConfig_2_1.retentionMs integer (int64)

The number of milliseconds to keep a log segment's file before deleting it.

This setting overrides the cluster-level logRetentionMs setting on the topic level.

topicSpec.topicConfig_2_1.maxMessageBytes integer (int64)

The largest record batch size allowed in the topic.

topicSpec.topicConfig_2_1.minInsyncReplicas integer (int64)

The minimum number of replicas that must acknowledge a write to the topic for the write to be considered successful (when a producer sets acks to "all").

topicSpec.topicConfig_2_1.segmentBytes integer (int64)

The segment file size for the log. Retention and cleaning are always done one file at a time, so a larger segment size means fewer files but less granular control over retention.

This setting overrides the cluster-level logSegmentBytes setting on the topic level.

topicSpec.topicConfig_2_1.preallocate boolean

Whether to preallocate the file on disk when creating a new log segment.

This setting overrides the cluster-level logPreallocate setting on the topic level.

topicSpec.topicConfig_2_6 object
topicSpec includes only one of the fields topicConfig_2_1, topicConfig_2_6, topicConfig_2_8, topicConfig_3

Deprecated. Apache Kafka® version 2.6 is not supported in Yandex Cloud.

topicSpec.topicConfig_2_6.cleanupPolicy string

Retention policy to use on old log messages.

  • CLEANUP_POLICY_DELETE: this policy discards log segments when either their retention time or log size limit is reached. See also: logRetentionMs and other similar parameters.
  • CLEANUP_POLICY_COMPACT: this policy compacts messages in the log.
  • CLEANUP_POLICY_COMPACT_AND_DELETE: this policy uses both compaction and deletion for messages and log segments.

topicSpec.topicConfig_2_6.compressionType string

The compression type for the topic.

  • COMPRESSION_TYPE_UNCOMPRESSED: no codec (uncompressed).
  • COMPRESSION_TYPE_ZSTD: Zstandard codec.
  • COMPRESSION_TYPE_LZ4: LZ4 codec.
  • COMPRESSION_TYPE_SNAPPY: Snappy codec.
  • COMPRESSION_TYPE_GZIP: GZip codec.
  • COMPRESSION_TYPE_PRODUCER: the codec is set by the producer (can be any of the ZSTD, LZ4, GZIP, or SNAPPY codecs).

topicSpec.topicConfig_2_6.deleteRetentionMs integer (int64)

The amount of time in milliseconds to retain delete tombstone markers for log compacted topics.

topicSpec.topicConfig_2_6.fileDeleteDelayMs integer (int64)

The time to wait before deleting a file from the filesystem.

topicSpec.topicConfig_2_6.flushMessages integer (int64)

The number of messages accumulated on a log partition before messages are flushed to disk.

This setting overrides the cluster-level logFlushIntervalMessages setting on the topic level.

topicSpec.topicConfig_2_6.flushMs integer (int64)

The maximum time in milliseconds that a message in the topic is kept in memory before being flushed to disk.

This setting overrides the cluster-level logFlushIntervalMs setting on the topic level.

topicSpec.topicConfig_2_6.minCompactionLagMs integer (int64)

The minimum time in milliseconds a message will remain uncompacted in the log.

topicSpec.topicConfig_2_6.retentionBytes integer (int64)

The maximum size a partition can grow to before Kafka discards old log segments to free up space, if the delete cleanupPolicy is in effect. This is helpful if you need to control the size of the log due to limited disk space.

This setting overrides the cluster-level logRetentionBytes setting on the topic level.

topicSpec.topicConfig_2_6.retentionMs integer (int64)

The number of milliseconds to keep a log segment's file before deleting it.

This setting overrides the cluster-level logRetentionMs setting on the topic level.

topicSpec.topicConfig_2_6.maxMessageBytes integer (int64)

The largest record batch size allowed in the topic.

topicSpec.topicConfig_2_6.minInsyncReplicas integer (int64)

The minimum number of replicas that must acknowledge a write to the topic for the write to be considered successful (when a producer sets acks to "all").

topicSpec.topicConfig_2_6.segmentBytes integer (int64)

The segment file size for the log. Retention and cleaning are always done one file at a time, so a larger segment size means fewer files but less granular control over retention.

This setting overrides the cluster-level logSegmentBytes setting on the topic level.

topicSpec.topicConfig_2_6.preallocate boolean

Whether to preallocate the file on disk when creating a new log segment.

This setting overrides the cluster-level logPreallocate setting on the topic level.

topicSpec.topicConfig_2_8 object
topicSpec includes only one of the fields topicConfig_2_1, topicConfig_2_6, topicConfig_2_8, topicConfig_3

Topic settings for Apache Kafka® version 2.8.

topicSpec.topicConfig_2_8.cleanupPolicy string

Retention policy to use on old log messages.

  • CLEANUP_POLICY_DELETE: this policy discards log segments when either their retention time or log size limit is reached. See also: logRetentionMs and other similar parameters.
  • CLEANUP_POLICY_COMPACT: this policy compacts messages in the log.
  • CLEANUP_POLICY_COMPACT_AND_DELETE: this policy uses both compaction and deletion for messages and log segments.

topicSpec.topicConfig_2_8.compressionType string

The compression type for the topic.

  • COMPRESSION_TYPE_UNCOMPRESSED: no codec (uncompressed).
  • COMPRESSION_TYPE_ZSTD: Zstandard codec.
  • COMPRESSION_TYPE_LZ4: LZ4 codec.
  • COMPRESSION_TYPE_SNAPPY: Snappy codec.
  • COMPRESSION_TYPE_GZIP: GZip codec.
  • COMPRESSION_TYPE_PRODUCER: the codec is set by the producer (can be any of the ZSTD, LZ4, GZIP, or SNAPPY codecs).

topicSpec.topicConfig_2_8.deleteRetentionMs integer (int64)

The amount of time in milliseconds to retain delete tombstone markers for log compacted topics.

topicSpec.topicConfig_2_8.fileDeleteDelayMs integer (int64)

The time to wait before deleting a file from the filesystem.

topicSpec.topicConfig_2_8.flushMessages integer (int64)

The number of messages accumulated on a log partition before messages are flushed to disk.

This setting overrides the cluster-level logFlushIntervalMessages setting on the topic level.

topicSpec.topicConfig_2_8.flushMs integer (int64)

The maximum time in milliseconds that a message in the topic is kept in memory before being flushed to disk.

This setting overrides the cluster-level logFlushIntervalMs setting on the topic level.

topicSpec.topicConfig_2_8.minCompactionLagMs integer (int64)

The minimum time in milliseconds a message will remain uncompacted in the log.

topicSpec.topicConfig_2_8.retentionBytes integer (int64)

The maximum size a partition can grow to before Kafka discards old log segments to free up space, if the delete cleanupPolicy is in effect. This is helpful if you need to control the size of the log due to limited disk space.

This setting overrides the cluster-level logRetentionBytes setting on the topic level.

topicSpec.topicConfig_2_8.retentionMs integer (int64)

The number of milliseconds to keep a log segment's file before deleting it.

This setting overrides the cluster-level logRetentionMs setting on the topic level.

topicSpec.topicConfig_2_8.maxMessageBytes integer (int64)

The largest record batch size allowed in the topic.

topicSpec.topicConfig_2_8.minInsyncReplicas integer (int64)

The minimum number of replicas that must acknowledge a write to the topic for the write to be considered successful (when a producer sets acks to "all").

topicSpec.topicConfig_2_8.segmentBytes integer (int64)

The segment file size for the log. Retention and cleaning are always done one file at a time, so a larger segment size means fewer files but less granular control over retention.

This setting overrides the cluster-level logSegmentBytes setting on the topic level.

topicSpec.topicConfig_2_8.preallocate boolean

Whether to preallocate the file on disk when creating a new log segment.

This setting overrides the cluster-level logPreallocate setting on the topic level.

topicSpec.topicConfig_3 object
topicSpec includes only one of the fields topicConfig_2_1, topicConfig_2_6, topicConfig_2_8, topicConfig_3

Topic settings for Apache Kafka® version 3.x.

topicSpec.topicConfig_3.cleanupPolicy string

Retention policy to use on old log messages.

  • CLEANUP_POLICY_DELETE: this policy discards log segments when either their retention time or log size limit is reached. See also: logRetentionMs and other similar parameters.
  • CLEANUP_POLICY_COMPACT: this policy compacts messages in the log.
  • CLEANUP_POLICY_COMPACT_AND_DELETE: this policy uses both compaction and deletion for messages and log segments.

topicSpec.topicConfig_3.compressionType string

The compression type for the topic.

  • COMPRESSION_TYPE_UNCOMPRESSED: no codec (uncompressed).
  • COMPRESSION_TYPE_ZSTD: Zstandard codec.
  • COMPRESSION_TYPE_LZ4: LZ4 codec.
  • COMPRESSION_TYPE_SNAPPY: Snappy codec.
  • COMPRESSION_TYPE_GZIP: GZip codec.
  • COMPRESSION_TYPE_PRODUCER: the codec is set by the producer (can be any of the ZSTD, LZ4, GZIP, or SNAPPY codecs).

topicSpec.topicConfig_3.deleteRetentionMs integer (int64)

The amount of time in milliseconds to retain delete tombstone markers for log compacted topics.

topicSpec.topicConfig_3.fileDeleteDelayMs integer (int64)

The time to wait before deleting a file from the filesystem.

topicSpec.topicConfig_3.flushMessages integer (int64)

The number of messages accumulated on a log partition before messages are flushed to disk.

This setting overrides the cluster-level logFlushIntervalMessages setting on the topic level.

topicSpec.topicConfig_3.flushMs integer (int64)

The maximum time in milliseconds that a message in the topic is kept in memory before being flushed to disk.

This setting overrides the cluster-level logFlushIntervalMs setting on the topic level.

topicSpec.topicConfig_3.minCompactionLagMs integer (int64)

The minimum time in milliseconds a message will remain uncompacted in the log.

topicSpec.topicConfig_3.retentionBytes integer (int64)

The maximum size a partition can grow to before Kafka discards old log segments to free up space, if the delete cleanupPolicy is in effect. This is helpful if you need to control the size of the log due to limited disk space.

This setting overrides the cluster-level logRetentionBytes setting on the topic level.

topicSpec.topicConfig_3.retentionMs integer (int64)

The number of milliseconds to keep a log segment's file before deleting it.

This setting overrides the cluster-level logRetentionMs setting on the topic level.

topicSpec.topicConfig_3.maxMessageBytes integer (int64)

The largest record batch size allowed in the topic.

topicSpec.topicConfig_3.minInsyncReplicas integer (int64)

The minimum number of replicas that must acknowledge a write to the topic for the write to be considered successful (when a producer sets acks to "all").

topicSpec.topicConfig_3.segmentBytes integer (int64)

The segment file size for the log. Retention and cleaning are always done one file at a time, so a larger segment size means fewer files but less granular control over retention.

This setting overrides the cluster-level logSegmentBytes setting on the topic level.

topicSpec.topicConfig_3.preallocate boolean

Whether to preallocate the file on disk when creating a new log segment.

This setting overrides the cluster-level logPreallocate setting on the topic level.
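The request body above can be assembled programmatically. The sketch below builds a minimal body using topicConfig_3; the topic name and settings are illustrative, and int64 values are serialized as strings following the common proto3 JSON mapping.

```python
import json

def make_topic_spec(name, partitions, replication_factor, retention_ms=None):
    """Assemble a minimal Topic.create request body.

    topicSpec may include only one of the topicConfig_* fields; this
    sketch uses topicConfig_3 (Apache Kafka 3.x) and omits optional
    settings instead of sending nulls.
    """
    spec = {
        "name": name,
        "partitions": str(partitions),
        "replicationFactor": str(replication_factor),
    }
    if retention_ms is not None:
        spec["topicConfig_3"] = {"retentionMs": str(retention_ms)}
    return {"topicSpec": spec}

# Example: a 6-partition topic replicated 3 times, retaining data for 7 days.
body = make_topic_spec("events", partitions=6, replication_factor=3,
                       retention_ms=7 * 24 * 3600 * 1000)
print(json.dumps(body, indent=2))
```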

Response

HTTP Code: 200 - OK

{
  "id": "string",
  "description": "string",
  "createdAt": "string",
  "createdBy": "string",
  "modifiedAt": "string",
  "done": true,
  "metadata": "object",

  // includes only one of the fields `error`, `response`
  "error": {
    "code": "integer",
    "message": "string",
    "details": [
      "object"
    ]
  },
  "response": "object",
  // end of the list of possible fields

}

An Operation resource. For more information, see Operation.

Field Description
id string

ID of the operation.

description string

Description of the operation. 0-256 characters long.

createdAt string (date-time)

Creation timestamp.

String in RFC3339 text format. The range of possible values is from 0001-01-01T00:00:00Z to 9999-12-31T23:59:59.999999999Z, i.e. from 0 to 9 digits for fractions of a second.

To work with values in this field, use the APIs described in the Protocol Buffers reference. In some languages, built-in datetime utilities do not support nanosecond precision (9 digits).

createdBy string

ID of the user or service account that initiated the operation.

modifiedAt string (date-time)

The time when the Operation resource was last modified.

String in RFC3339 text format. The range of possible values is from 0001-01-01T00:00:00Z to 9999-12-31T23:59:59.999999999Z, i.e. from 0 to 9 digits for fractions of a second.

To work with values in this field, use the APIs described in the Protocol Buffers reference. In some languages, built-in datetime utilities do not support nanosecond precision (9 digits).

done boolean

If the value is false, the operation is still in progress. If true, the operation is completed, and either error or response is available.

error object
The error result of the operation in case of failure or cancellation.
The Operation resource includes only one of the fields error, response.

error.code integer (int32)

Error code. An enum value of google.rpc.Code.

error.message string

An error message.

error.details[] object

A list of messages that carry the error details.

response object
The Operation resource includes only one of the fields error, response.

The normal response of the operation in case of success. If the original method returns no data on success, such as Delete, the response is google.protobuf.Empty. If the original method is the standard Create/Update, the response is the target resource of the operation. Any method that returns a long-running operation should document the response type, if any.
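Since Topic.create returns a long-running Operation, a client typically fetches the operation until done becomes true and then reads either response or error. A hypothetical helper for interpreting a fetched Operation (field names follow the schema above) might look like this:

```python
class OperationError(Exception):
    """A completed operation that finished with an error result."""

def operation_result(operation):
    """Interpret an Operation resource returned by Topic.create.

    Returns None while the operation is still in progress, returns the
    `response` object once it succeeds, and raises OperationError if it
    finished with an `error` (only one of the two fields is present).
    """
    if not operation.get("done"):
        return None  # still in progress; fetch the operation again later
    if "error" in operation:
        err = operation["error"]
        raise OperationError(f"{err.get('code')}: {err.get('message')}")
    return operation.get("response", {})
```

Callers would pair this with the Operation get method, re-fetching the operation until this helper returns a non-None result or raises.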
