© 2023 Yandex.Cloud LLC

Configuring a MySQL source endpoint

Written by
Yandex Cloud
  • Managed Service for MySQL cluster
  • Custom installation
  • Additional settings
    • Settings for transferring a DB schema when enabling and disabling a transfer
  • Known limitations

When creating or editing an endpoint, you can define:

  • Yandex Managed Service for MySQL cluster connection or custom installation settings, including those based on Yandex Compute Cloud VMs. These are required parameters.
  • Additional parameters.

Managed Service for MySQL cluster

Connection to the database using the cluster ID specified in Yandex Cloud. Available only for clusters deployed in Yandex Managed Service for MySQL.

Management console
CLI
Terraform
API
  • MDB cluster ID: Select the cluster to connect to.

  • Security groups: Select the cloud network to host the endpoint and security groups for network traffic.

    This will let you apply the specified security group rules to the VMs and clusters in the selected network without changing the settings of these VMs and clusters. For more information, see Network in Yandex Data Transfer.

  • Database name: Specify the name of the database in the selected cluster. Leave the field empty if you want to transfer tables from multiple databases at the same time.

  • Username: Specify the username that Data Transfer will use to connect to the database.

  • Password: Enter the database user's password.

  • Endpoint type: mysql-source.
  • --cluster-id: ID of the cluster you need to connect to.

  • --database: Database name. Leave the field empty if you want to transfer tables from multiple databases at the same time.

  • --user: Username that Data Transfer will use to connect to the database.

  • To set a user password to access the DB, use one of the following parameters:

    • --raw-password: Password as text.

    • --password-file: The path to the password file.
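For example, the flags above can be combined into a single command. This is a sketch with placeholder values; the endpoint name `my-mysql-source` is illustrative, and you should check `yc datatransfer endpoint create mysql-source --help` for the exact flag set in your CLI version:

```shell
yc datatransfer endpoint create mysql-source \
  --name my-mysql-source \
  --cluster-id <cluster ID> \
  --database <database name> \
  --user <username> \
  --password-file ./password.txt
```

Using `--password-file` instead of `--raw-password` keeps the password out of your shell history.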

  • Endpoint type: mysql_source.
  • connection.mdb_cluster_id: ID of the cluster to connect to.

  • database: Database name. Leave the field empty if you want to transfer tables from multiple databases at the same time.

  • user: Username that Data Transfer will use to connect to the database.

  • password.raw: Password in text form.

Example of the configuration file structure:

resource "yandex_datatransfer_endpoint" "<endpoint name in Terraform>" {
  name = "<endpoint name>"
  settings {
    mysql_source {
      security_groups = [ "list of security group IDs" ]
      connection {
        mdb_cluster_id = "<Managed Service for MySQL cluster ID>"
      }
      database = "<name of database being transferred>"
      user     = "<username for connection>"
      password {
        raw = "<user password>"
      }
      <advanced endpoint settings>
    }
  }
}

For more information, see the Terraform provider documentation.

  • mdbClusterId: ID of the cluster you need to connect to.

  • database: Database name. Leave the field empty if you want to transfer tables from multiple databases at the same time.

  • user: Username that Data Transfer will use to connect to the database.

  • password.raw: Database user password (in text form).
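As a sketch, these fields could be sent in an endpoint creation request. The API host, URL path, and the mysqlSource body key below are assumptions based on common Yandex Cloud API conventions; only the field names come from the list above. Check the API reference for the exact request schema:

```shell
curl -X POST \
  -H "Authorization: Bearer <IAM token>" \
  -H "Content-Type: application/json" \
  -d '{
        "folderId": "<folder ID>",
        "name": "my-mysql-source",
        "settings": {
          "mysqlSource": {
            "connection": { "mdbClusterId": "<cluster ID>" },
            "database": "<database name>",
            "user": "<username>",
            "password": { "raw": "<user password>" }
          }
        }
      }' \
  https://datatransfer.api.cloud.yandex.net/v1/endpoints
```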

Custom installation

For OnPremise, all fields are filled in manually.

Management console
CLI
Terraform
API
  • Database port: Set the number of the port that Data Transfer will use for the connection.

  • CA Certificate: Upload the certificate file or add its contents as text if the transmitted data must be encrypted, for example, to meet PCI DSS requirements.

  • Host IP or FQDN: Enter the IP address or FQDN of the master host to connect to.

  • Subnet ID: Select or create a subnet in the desired availability zone.

    If the source and target are geographically close, connecting over the selected subnet speeds up the transfer.

  • Security groups: Select the cloud network to host the endpoint and security groups for network traffic.

    This will let you apply the specified security group rules to the VMs and clusters in the selected network without changing the settings of these VMs and clusters. For more information, see Network in Yandex Data Transfer.

  • Database name: Specify the name of the database in the selected cluster. Leave the field empty if you want to transfer tables from multiple databases at the same time.

  • Username: Specify the username that Data Transfer will use to connect to the database.

  • Password: Enter the database user's password.

  • Endpoint type: mysql-source.
  • --host: IP address or FQDN of the master host you want to connect to.

  • --port: Number of the port that Data Transfer will use for the connection.

  • --ca-certificate: CA certificate, used if the transmitted data must be encrypted, for example, to meet PCI DSS requirements.

  • --subnet-id: ID of the subnet the host resides in.

  • --database: Database name. Leave the field empty if you want to transfer tables from multiple databases at the same time.

  • --user: Username that Data Transfer will use to connect to the database.

  • To set a user password to access the DB, use one of the following parameters:

    • --raw-password: Password as text.

    • --password-file: The path to the password file.
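For a custom installation, the same command takes the connection flags directly. Again, this is a sketch with placeholder values (the name `my-onpremise-source` and port 3306 are illustrative); verify the flags with `yc datatransfer endpoint create mysql-source --help`:

```shell
yc datatransfer endpoint create mysql-source \
  --name my-onpremise-source \
  --host <master host IP or FQDN> \
  --port 3306 \
  --ca-certificate ./ca.pem \
  --subnet-id <subnet ID> \
  --database <database name> \
  --user <username> \
  --password-file ./password.txt
```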

  • Endpoint type: mysql_source.
  • on_premise.hosts: List of IPs or FQDNs of hosts to connect to. Since only single-item lists are supported, specify the master host address.

  • on_premise.port: Port number that Data Transfer will use for connections.

  • on_premise.tls_mode.enabled.ca_certificate: CA certificate if the data to transfer must be encrypted to comply with PCI DSS requirements.

  • on_premise.subnet_id: ID of the subnet the host resides in.

  • database: Database name. Leave the field empty if you want to transfer tables from multiple databases at the same time.

  • user: Username that Data Transfer will use to connect to the database.

  • password.raw: Password in text form.

Example of the configuration file structure:

resource "yandex_datatransfer_endpoint" "<endpoint name in Terraform>" {
  name = "<endpoint name>"
  settings {
    mysql_source {
      security_groups = [ "list of security group IDs" ]
      connection {
        on_premise {
          hosts = ["<host list>"]
          port  = <connection port>
        }
      }
      database = "<name of database being transferred>"
      user     = "<username for connection>"
      password {
        raw = "<user password>"
      }
      <advanced endpoint settings>
    }
  }
}

For more information, see the Terraform provider documentation.

  • onPremise: Database connection parameters:
    • hosts: IP address or FQDN of the master host to connect to.

    • port: The number of the port that Data Transfer will use for the connection.

    • tlsMode: Parameters for encrypting transmitted data if it is required, for example, to meet PCI DSS requirements.
      • disabled: Disabled.
      • enabled: Enabled.
        • caCertificate: CA certificate.
    • subnetId: ID of the subnet the host resides in.

  • database: Database name. Leave the field empty if you want to transfer tables from multiple databases at the same time.

  • user: Username that Data Transfer will use to connect to the database.

  • password.raw: Database user password (in text form).

Additional settings

Management console
CLI
Terraform
API
  • List of included tables: Data is only transferred from the listed tables. It is set using regular expressions.

    If you add new tables while editing an endpoint used in Snapshot and increment or Increment transfers with the Replicating status, the data history for these tables will not be uploaded. To add a table with its historical data, use the List of objects to be transferred field in the transfer settings.

  • List of excluded tables: Data from these listed tables is not transferred. It is set using regular expressions.

  • Schema migration: Lets you select the DB schema elements to transfer when activating or deactivating a transfer.

  • DB time zone: Specified as an IANA Time Zone Database identifier. The server's local time zone is used by default.

  • DB for service tables: Database for service tables (__tm_keeper and __tm_gtid_keeper). By default, this is the source database the data is transferred from.

  • --include-table-regex: List of included tables. If set, data is only transferred from the tables in this list. It is set using regular expressions.

    If you add new tables while editing an endpoint used in Snapshot and increment or Increment transfers with the Replicating status, the data history for these tables will not be uploaded. To add a table with its historical data, use the List of objects to be transferred field in the transfer settings.

  • --exclude-table-regex: List of excluded tables. Data from tables on this list will not be transferred. It is set using regular expressions.

  • --timezone: DB time zone, specified as an IANA Time Zone Database identifier. Defaults to UTC+0.

  • Schema transfer settings:

    • --transfer-before-data: Objects to transfer at transfer activation.
    • --transfer-after-data: Objects to transfer at transfer deactivation.
  • include_table_regex: List of included tables. If set, data is only transferred from the tables in this list. It is set using regular expressions.

    If you add new tables while editing an endpoint used in Snapshot and increment or Increment transfers with the Replicating status, the data history for these tables will not be uploaded. To add a table with its historical data, use the List of objects to be transferred field in the transfer settings.

  • exclude_table_regex: List of excluded tables. Data from tables on this list will not be transferred. It is set using regular expressions.

  • timezone: DB time zone, specified as an IANA Time Zone Database identifier. Defaults to UTC+0.

  • object_transfer_settings: Schema transfer settings:

    • view: Views.
    • routine: Procedures and functions.
    • trigger: Triggers.

    You can specify one of the following values for each entity:

    • BEFORE_DATA: Move at transfer activation.
    • AFTER_DATA: Move at transfer deactivation.
    • NEVER: Do not move.
  • includeTablesRegex: List of included tables. If set, data is only transferred from the tables in this list. It is set using regular expressions.

    If you add new tables while editing an endpoint used in Snapshot and increment or Increment transfers with the Replicating status, the data history for these tables will not be uploaded. To add a table with its historical data, use the List of objects to be transferred field in the transfer settings.

  • excludeTablesRegex: List of excluded tables. Data from tables on this list will not be transferred. It is set using regular expressions.

  • timezone: DB time zone, specified as an IANA Time Zone Database identifier. Defaults to UTC+0.

  • objectTransferSettings: Settings for transferring a DB schema when activating and deactivating a transfer (BEFORE_DATA and AFTER_DATA values, respectively).
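As a sketch of how such include and exclude regular expressions select tables, assuming patterns are matched against fully qualified names such as db1.orders (the table names below are made up for illustration):

```shell
# Tables available on the source
tables='db1.orders
db1.customers
db1.tmp_import'

# Include pattern: keep only tables whose name starts with "db1.c"
echo "$tables" | grep -E '^db1\.c'
# -> db1.customers

# Exclude pattern: drop temporary tables
echo "$tables" | grep -vE '\.tmp_'
# -> db1.orders
# -> db1.customers
```

Note that a dot in a table name must be escaped (`\.`) to match literally, since an unescaped `.` matches any character.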

Settings for transferring a DB schema when enabling and disabling a transfer

During a transfer, the database schema is transferred from the source to the target. The transfer is performed in two stages:

  1. At the activation stage.

    This step is performed before copying or replicating data to create a schema on the target. At this stage, you can enable the migration of views and stored procedures, stored functions, and triggers.

  2. At the deactivation stage.

    This step is performed at the end of the transfer operation when it is deactivated. If the transfer keeps running in replication mode, the final stage of the transfer will be performed only when replication stops. At this stage, you can enable the migration of views and stored procedures, stored functions, and triggers.

    The final stage assumes that there is no write load on the source when the transfer is deactivated. You can ensure this by switching the source to read-only mode. At this stage, the database schema on the target is brought into line with the schema on the source.
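One way to remove the write load before deactivating the transfer is to switch the source to read-only mode. A sketch using the standard mysql client with placeholder connection values; super_read_only (available in MySQL 5.7+) additionally blocks writes from privileged users:

```shell
mysql --host=<master host> --user=<admin user> --password \
  -e "SET GLOBAL read_only = ON; SET GLOBAL super_read_only = ON;"
```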

Known limitations

If you are setting up a transfer from a MySQL cluster, use the cluster's master server. While running, the transfer creates service tables in the source database, so you cannot use a MySQL replica as a source: replicas are read-only.
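Before creating the endpoint, you can check that the host you plan to specify is a writable master rather than a read-only replica, for example (placeholder connection values):

```shell
mysql --host=<host> --user=<username> --password \
  -e "SELECT @@read_only;"
# A host suitable as a source reports read_only = 0
```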
