FAQ

1 Q: How many regions currently support cross-region migration?

UDTS cross-region migration runs over UDPN or dedicated lines; any pair of supported regions connected by UDPN or a dedicated line is supported. For the list of supported regions, please refer to the console.

2 Q: What are the requirements for MySQL full migration?

Please refer to the MySQL full migration documentation.

3 Q: What are the requirements for MySQL incremental migration?

Please refer to the MySQL incremental migration documentation.

4 Q: How to handle the “only support ROW format binlog” prompt during incremental migration?

Incremental migration requires the binlog_format of MySQL (including UDB MySQL) to be ROW. You can change binlog_format with either of the following methods; after the change, re-run the UDTS “full + incremental” task.

1. Modify the configuration file (default is my.cnf) and restart MySQL

[mysqld]
...
binlog_format = ROW
binlog_row_image = FULL
...

Note: For MySQL 5.5, which doesn’t have the binlog_row_image variable, this setting is not needed.

2. Set through MySQL commands

Pay special attention: if binlog_format is changed with MySQL commands, connections that were already writing to the database keep writing binlog in the original format. Those connections must be disconnected for the change to take effect.

FLUSH TABLES WITH READ LOCK;
FLUSH LOGS;
SET GLOBAL binlog_format = 'ROW';
-- As noted above, setting binlog_row_image is not required for MySQL 5.5
SET GLOBAL binlog_row_image = 'FULL';
FLUSH LOGS;
UNLOCK TABLES;

After changing, you can disconnect existing connections using the following command:

-- View all current connections
> show processlist;
+-----+------+-----------------+--------+---------+------+----------+------------------+
| Id  | User | Host            | db     | Command | Time | State    | Info             |
+-----+------+-----------------+--------+---------+------+----------+------------------+
| 495 | root | 10.20.5.1:56820 | <null> | Query   | 0    | starting | show processlist |
| 497 | root | 10.20.5.1:56828 | <null> | Sleep   | 3    |          | <null>           |
+-----+------+-----------------+--------+---------+------+----------+------------------+

-- Use kill to disconnect the sessions. If you know which connections are performing
-- write operations, kill only those connections.
> kill 497;
FLUSH LOGS;

If you are using master-slave mode, you need to execute the following commands:

On the slave node:

stop slave;

On the master:

FLUSH TABLES WITH READ LOCK;
FLUSH LOGS;
SET GLOBAL binlog_format = 'ROW';
FLUSH LOGS;
UNLOCK TABLES;

On the slave:

start slave;

Note: After SET GLOBAL binlog_format = 'ROW';, the value returned by show global variables like 'binlog_format'; still shows the original value; you need to disconnect and reconnect before the changed value is displayed.

3. Cloud database

If you are using a cloud database (such as UDB), you can copy the current configuration file, modify the binlog_format and binlog_row_image settings in the copy, and then apply the updated configuration to the database instance.

5 Q: ERROR 1292 (22007): Incorrect date value: ‘0000-00-00’ for column

The error occurs because the sql_mode of the source and target MySQL (UDB MySQL) instances are inconsistent: the target database does not allow invalid dates such as ‘0000-00-00’.

You can check sql_mode using the following commands:

select @@sql_mode;
-- Or
show variables like "sql_mode";

If the current sql_mode includes NO_ZERO_IN_DATE, NO_ZERO_DATE, remove them and add ALLOW_INVALID_DATES to allow invalid dates using the command below:

SET GLOBAL sql_mode='xxxx,ALLOW_INVALID_DATES';

Here, xxxx refers to the original sql_mode value queried (minus NO_ZERO_IN_DATE and NO_ZERO_DATE).

6 Q: How to determine if the source and target databases in a MySQL-MySQL incremental task are synchronized?

When the data in the target database reaches consistency with the source database during an incremental task, the task status changes from “synchronizing” to “synchronized.”

“Synchronized” status explanation:

It means that the data in the target database for the db and table specified in the migration task is consistent with the source database. For example:

  • When the migration task is *, “synchronized” means that all dbs in the target database (excluding built-in databases such as sys, test, INFORMATION_SCHEMA, PERFORMANCE_SCHEMA, etc.) are consistent with the source database.
  • When the migration task covers multiple dbs, e.g., db1 and db2, it means that db1 and db2 in the target database are consistent with the source database.
  • Any change to the source database’s data (even if the changed db is not part of the migration task) switches the status from “synchronized” back to “synchronizing” until consistency is reached again.

7 Q: error=“Error 1040: Too many connections” Error Handling

The database server has reached its connection limit.

show variables like '%max_connections%';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| max_connections | 151   |
+-----------------+-------+
1 row in set (0.01 sec)

Solution:

  1. Increase max_connections: set GLOBAL max_connections=1000;
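To keep the setting after a MySQL restart, you can also write it into the configuration file (a sketch; the value 1000 is only an example and should be sized to your instance's available memory and connection needs):

[mysqld]
# Persist the higher connection limit across restarts (example value)
max_connections = 1000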

8 Q: row size too large (> 8126). Changing some columns to TEXT or BLOB or using ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED may help. In current row format, BLOB prefix of 768 bytes is stored inline

Solution:

Execute in the target database:

set GLOBAL innodb_file_format_max = 'Barracuda';
set GLOBAL innodb_file_format = 'Barracuda';
set GLOBAL innodb_file_per_table = ON;
set GLOBAL innodb_strict_mode = OFF;

9 Q: Column count doesn’t match value count

During incremental tasks from Alibaba Cloud to SurferCloud, an error such as Column count doesn’t match value count: 2 (columns) vs 3 (values) may occur because the source database contains hidden primary keys, causing a mismatch between the number of columns and values.

Solution: Identify tables without primary keys and add primary keys

  • Identify tables without primary keys
SELECT table_schema, table_name
FROM information_schema.tables
WHERE (table_schema, table_name) NOT IN (
    SELECT DISTINCT table_schema, table_name
    FROM information_schema.columns
    WHERE COLUMN_KEY = 'PRI')
AND table_schema NOT IN ('sys', 'mysql', 'INFORMATION_SCHEMA', 'PERFORMANCE_SCHEMA');
  • Add primary keys to these tables (see the sketch below)
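As an illustration only, assuming one of the tables found above is named orders (a hypothetical name) and has no natural key, a surrogate auto-increment primary key can be added like this:

-- Hypothetical example: add an auto-increment surrogate primary key to a table without one
ALTER TABLE `orders` ADD COLUMN `id` BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;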

10 Q: sync: Type:ExecSQL msg:“exec jobs failed,err:Error 1205:Lock wait timeout exceeded;try restarting transaction”

This issue may occur when the target database instance has a low specification. Restart the task; it will automatically resume from the last synchronized position (breakpoint resume is supported).

11 Q: Are there locks on the database during MySQL full tasks in operation? When will they be released?

In default mode: during a full task, FTWRL (FLUSH TABLES WITH READ LOCK) is executed to guarantee data consistency. If the database being migrated contains non-transactional tables such as MyISAM, the lock is released only after those non-transactional tables have been backed up (the release time depends on how much data they hold); for InnoDB tables, the lock is released immediately and a consistent-snapshot transaction is started with START TRANSACTION WITH CONSISTENT SNAPSHOT. You can check for non-transactional tables with the query sketched at the end of this answer.

  • For MyISAM tables, tables being dumped allow reads but no writes until dumping is completed
  • For InnoDB tables, tables being dumped allow both reads and writes.

Innolock mode: No locks are placed on any database or table.
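As a quick pre-check, a query along these lines (a sketch; narrow the schema filter to the databases you are migrating) lists tables that use the non-transactional MyISAM engine:

SELECT table_schema, table_name, engine
FROM information_schema.tables
WHERE engine = 'MyISAM'
  AND table_schema NOT IN ('sys', 'mysql', 'information_schema', 'performance_schema');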

12 Q: Redis migration encountered ERR illegal address

The reason might be that a whitelist is enabled on the source Redis, but the UDTS server IP is not in it. To obtain the UDTS IP addresses that need to be whitelisted, please contact technical support.

13 Q: Error 3140: Invalid JSON text

During MySQL synchronization, if Error 3140: Invalid JSON text: “The document is empty.” at position 0 in value for column occurs, it is because validation in the source database was lenient: the field is declared NOT NULL, yet some rows contain NULL values.

There are two solutions, depending on the requirement:

  • Repair the data in the source database, correcting all NULL values to appropriate values (aligns with business logic requirements)
  • Or modify the table in the target database to allow the field to be NULL. For example, if the table is period_progress and the field is total:
ALTER TABLE `period_progress` CHANGE `total` `total` JSON NULL;

14 Q: MongoDB migration shows error reading collection: cursor id 5707195885304103447 not found

MongoDB’s default cursor timeout is 10 minutes. If data processing cannot be completed within 10 minutes, the cursor expires and the remaining data can no longer be retrieved. Solution:

  • Log into the MongoDB shell, switch to the admin database, and set the cursor timeout to 1 day:
db.runCommand( { setParameter:1 , "cursorTimeoutMillis":86400000} )

15 Q: Error 1264: Out of range value for column ‘xxx’ at row 100

During MySQL2TiDB data import, you may encounter Error 1264: Out of range value for column ‘xxx’ at row 100, indicating that the source database did not strictly validate its data, so out-of-range values could be stored, while the target database requires data to meet the declared constraints.

For example:

CREATE TABLE `aaa` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `lot` float(10,7) NOT NULL DEFAULT '0.0000000',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin;

If the source database contains a table with the structure shown above, where the field lot is a float with a total length of 10 digits, 7 of which are decimal places, importing a row with the value -1000 may fail with Error 1264: Out of range value for column ‘lot’.

There are two solutions, depending on the requirement:

  • Adjust table structure and increase field length

    select max(`lot`) from `aaa`;
    select min(`lot`) from `aaa`;

    Obtain the maximum and minimum values, then modify the table structure accordingly via alter to meet specific conditions. For example, change the table aaa’s lot field to float(11,7).

    ALTER TABLE `aaa` MODIFY `lot` float(11,7) NOT NULL DEFAULT '0.0000000';
  • Fix data to comply with table structure limitations

    select `id` from `aaa` where `lot` > 999 OR `lot` < -999;

    Correct the found data entries.

16 Q: When the target database is TiDB, Error 1071: Specified key was too long; max key length is 3072 bytes occurs

You need to increase the max-index-length setting in the target TiDB configuration file.

17 Q: When the target database is TiDB, Error: incorrect utf8 value f09f8c80 for column xxx occurs

You need to set tidb_skip_utf8_check on the target database:

set global tidb_skip_utf8_check=1;

18 Q: When the target database is TiDB, Error 1071: Specified key was too long; max key length is 3072 bytes occurs

This can be resolved by changing the max-index-length value in the TiDB configuration file to 12288.
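A minimal sketch of the corresponding entry in the TiDB configuration file (commonly tidb.toml; the exact path depends on your deployment), assuming 12288 bytes is enough for your indexes:

# tidb.toml (assumed file name) -- raise the maximum allowed index key length
max-index-length = 12288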

19 Q: Cannot add foreign key constraint

When migrating from MySQL to MySQL/TiDB, if you encounter “Cannot add foreign key constraint,” check the CHARSET of the table structure. The current table structure and its dependent foreign key table structures should have the same CHARSET.

Below is an example where an error “Cannot add foreign key constraint” may occur when the CHARSET of group and user tables differ:

CREATE TABLE `group` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

CREATE TABLE `user` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `username` varchar(255) NOT NULL,
  `password` varchar(255) DEFAULT NULL,
  `g_id` int(11) NOT NULL,
  PRIMARY KEY (`id`),
  FOREIGN KEY (`g_id`) REFERENCES `group`(`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

Solution when this issue occurs:

  1. Execute the following SQL in the source database to change the table’s CHARSET. It is recommended to convert the old encoding to a compatible one (e.g., utf8 to utf8mb4) so that user and group have the same CHARSET.
ALTER TABLE `user` CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_bin;
  2. Re-run the UDTS task

20 Q: When the source database is MySQL, incremental synchronization prompts ERROR 1236 (HY000): The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires

This is because the binlog synchronization point needed for incremental synchronization cannot be found in the source database. Possible causes are one of the following four:

  • 1 The user is running a full + incremental task, and the full migration takes longer than the binlog’s maximum retention time, so the binlog position recorded at the end of the full migration has already expired. In this case, increase the binlog expiration time of the source database.
  • 2 The binlog of the source database was manually purged.
  • 3 The task was paused or failed midway and was restarted after a period during which the binlog expired and was cleared.
  • 4 Incremental synchronization was not using GTID, and a master-slave switch occurred.

Solution: All of the above situations require re-executing the UDTS “full + incremental” task; simply restarting the task will not solve the problem. Steps:

  • For full + incremental tasks, click “Edit Task” on the task details page, fill in the relevant task information, and then select “Save Task”. The current task progress will be cleared, and upon task restart, it will begin from full migration.
  • For tasks created directly as incremental migrations, a full migration must be performed again, and then the correct BinlogName, BinlogPos, and GTID must be re-entered (they can be obtained as sketched below). When the task starts, it will synchronize from the specified binlog position.
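Right after the new full migration completes, the binlog position and GTID set can be read on the source MySQL, for example (a sketch; the available fields vary slightly across MySQL versions):

-- Run on the source database
SHOW MASTER STATUS\G
-- File              -> BinlogName
-- Position          -> BinlogPos
-- Executed_Gtid_Set -> GTID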

The binlog expiration time can be modified in either of the following two ways:

  • 1 Takes effect temporarily; the setting is lost after a database restart

    View the default expiration time setting

    show variables like "%expire_logs%";

    Set retention to 15 days

    set global expire_logs_days=15;

    Refresh logs

    flush logs;

    View the newly created binlog logs

    show master status\G

    Note: The above commands take effect immediately when executed in the database. Make sure the retention period you set is long enough that logs are not deleted unexpectedly.

  • 2 Modify the configuration file for a permanent effect

    Modify the MySQL configuration file location based on your actual situation

    vim /etc/my.cnf

    Modify binlog expiration-related configurations

    [mysqld]
    expire_logs_days=15

    Note: After modifying the configuration file, MySQL must be restarted for the change to take permanent effect. Setting the value to 0 means the binlog never expires; the unit is days.

21 Q: The table ‘xxx’ is full or No space left on device

Typically, this issue is caused by insufficient disk space on the target. Either upgrade the target disk space or clear data in the target database.

The table ‘xxx’ is full can also occur in the special case where the table uses the MEMORY storage engine. You can raise the two values below in the MySQL configuration file (or change the table to another storage engine) to resolve it. The default value for both is 16M.

tmp_table_size = 256M
max_heap_table_size = 256M

How large should these values be? It depends on the actual situation:

  • What is the maximum memory available to the current MySQL instance? The values should stay below this limit.
  • What is the expected storage size of the MEMORY-engine tables? (A query to estimate this is sketched below.)
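To estimate the current size of MEMORY-engine tables, a query along these lines can help (a sketch; information_schema sizes are approximate):

SELECT table_schema, table_name,
       ROUND((data_length + index_length) / 1024 / 1024, 2) AS size_mb
FROM information_schema.tables
WHERE engine = 'MEMORY'
ORDER BY size_mb DESC;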

22 Q: Error 1785: When @@GLOBAL.ENFORCE_GTID_CONSISTENCY = 1, updates to non-transactional tables can only be done in either autocommitted statements or single-statement transactions, and never in the same statement as updates to transactional tables

This error occurs when, with GTID enabled, a single MySQL transaction operates on both transactional tables (such as InnoDB) and non-transactional tables (such as MyISAM).

Solution (choose one of the following):

  • Convert the non-transactional tables (typically MyISAM) in the source database to transactional tables (typically InnoDB), as sketched below
  • Disable GTID in the target database
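A minimal sketch of the conversion, assuming the affected table is named logs (a hypothetical name):

-- Hypothetical example: convert a MyISAM table to InnoDB in the source database
ALTER TABLE `logs` ENGINE=InnoDB;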

23 Q: Migration reports “Invalid default value for ‘xxxx_time’”

During migration, if the target database’s sql_mode includes NO_ZERO_DATE or NO_ZERO_IN_DATE, and the source database contains values like 0000-00-00 or 00:00:00, an error will occur when loading data.

Solution:

  • Modify the target database’s sql_mode to remove NO_ZERO_DATE and NO_ZERO_IN_DATE and add ALLOW_INVALID_DATES; the steps are the same as in FAQ No. 5 (see the sketch after this list).
  • If the target database’s sql_mode includes STRICT_TRANS_TABLES, it can also be removed temporarily to avoid other issues and added back after the migration completes.
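For reference, the temporary change on the target database follows the same pattern as FAQ No. 5, where xxxx stands for the remaining original sql_mode values:

-- Remove NO_ZERO_DATE / NO_ZERO_IN_DATE (and, if needed, STRICT_TRANS_TABLES) and allow invalid dates
SET GLOBAL sql_mode = 'xxxx,ALLOW_INVALID_DATES';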

24 Q: error creating indexes for xx.xx: createIndex error: WiredTigerIndex::insert: key too large to index, failing 1089

MongoDB introduced the failIndexKeyTooLong parameter in version 2.6 (it was deprecated in version 4.2) to limit the length of index keys.

Solution: Execute in the target database:

db.runCommand({ "setParameter": 1, "failIndexKeyTooLong": false} );

25 Q: OOM command not allowed when used memory > ‘maxmemory’ or rss_memory not enough.

This indicates insufficient memory capacity on the shards of the target Redis sharded cluster.

Solution: Upgrade the shard capacity of the target cluster.

26 Q: ddtLog.v_HBUSer: error creating collection ddtLog.v_HBUSer: error running create command: Cannot create a view when the featureCompatibilityVersion is 3.X.

During MongoDB migration, this error indicates that the featureCompatibilityVersion values of the source and target databases are inconsistent, and the source database uses features (such as views or collations) that the target database does not support.

Solution: Check and adjust incompatible contents between the source and target databases.
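The featureCompatibilityVersion on each side can be checked, and raised on the target where appropriate, with commands like the following (a sketch; run against the admin database, and the "4.0" value below is only an example):

// Check the current featureCompatibilityVersion
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })
// Raise it on the target if needed (example value)
db.adminCommand({ setFeatureCompatibilityVersion: "4.0" })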

27 Q: The source database is Redis, and the error message is: [PANIC] read error, please check source redis log or network

During Redis migration, this error indicates that the volume of data being written on the source database exceeds the replication output buffer size.

Solution: Execute the following command to modify the client-output-buffer-limit parameter value in the source database

config set client-output-buffer-limit 'slave 536870912 0 0'

28 Q: The source database is MongoDB, and the error message is: Failed: error creating intents to dump: error creating intents for database config: error getting collections for database config: (Unauthorized) not authorized on config to execute command { listCollections: 1, filter: {}, cursor: {}, lsid: { id: UUID(“12d62805-acea-472f-9862-ca27253c107e”) }, $db: “config” }

During MongoDB full migration, this error indicates a lack of backup permission for the user in the source database.

Solution: Execute the following command to modify the user’s permissions in the source database

db.grantRolesToUser("root",[{role:"backup",db:"admin"}])

29 Q: The source database is MySQL, and the error message is: Deadlock found when trying to get lock; try restarting transaction

During MySQL full migration, if the load phase reports this error, it indicates a deadlock due to multi-threaded concurrent operation while loading full data into the target database.

Solution: Set the task’s concurrency limit for the target database to 1 so that data is loaded by a single thread.

30 Q: load: return result; END; This function has none of DETERMINISTIC, NO SQL, or READS SQL DATA in its declaration and binary logging is enabled (you might want to use the less safe log_bin_trust_function_creators variable)

Execute the following command in the target database and then retry:

set global log_bin_trust_function_creators="ON";