
Performance Advantages

The core value of read-write separation middleware lies in the significant boost to read performance that slave nodes provide: the slaves' capacity to serve read requests must be fully exploited. If a middleware cannot achieve this, then no matter how good its MySQL compatibility or how powerful its management features, it cannot meet the business's essential need.

The UDB read-write separation middleware has been carefully engineered, with every piece of performance-critical code refined and optimized, so that read performance grows linearly with the number of read replicas: adding replicas raises read capacity in proportion. In this sense, customers using the UDB read-write separation middleware get the full performance of every read replica they purchase, with nothing wasted, which is something the vast majority of middleware in the industry cannot achieve.
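To make the routing mechanism concrete, here is a minimal, illustrative Go sketch of statement-level read-write splitting: SELECTs are spread round-robin across the slave nodes while writes always go to the master. This is a sketch of the general technique only, not the UDB middleware's actual code, and every name and address in it is hypothetical.

package main

import (
	"fmt"
	"strings"
	"sync/atomic"
)

// Router is a minimal read-write splitter: writes go to the master,
// reads are spread round-robin across the slave nodes.
type Router struct {
	master string
	slaves []string
	next   atomic.Uint64 // round-robin cursor over slaves
}

// Route picks a backend address for one SQL statement.
func (r *Router) Route(sql string) string {
	stmt := strings.ToUpper(strings.TrimSpace(sql))
	// Only plain SELECTs may be served by a slave; writes, DDL, and
	// locking reads (SELECT ... FOR UPDATE) must go to the master.
	if strings.HasPrefix(stmt, "SELECT") &&
		!strings.Contains(stmt, "FOR UPDATE") && len(r.slaves) > 0 {
		i := r.next.Add(1)
		return r.slaves[i%uint64(len(r.slaves))]
	}
	return r.master
}

func main() {
	r := &Router{
		master: "10.0.0.1:3306", // hypothetical addresses
		slaves: []string{"10.0.0.2:3306", "10.0.0.3:3306"},
	}
	fmt.Println(r.Route("SELECT c FROM sbtest1 WHERE id = 42"))        // goes to a slave
	fmt.Println(r.Route("UPDATE sbtest1 SET k = k + 1 WHERE id = 42")) // goes to the master
}

Because every added slave simply joins the round-robin pool, each new replica contributes its full read capacity, which is why read QPS can scale linearly with the node count.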

[Figure: linear increase in read-request QPS with the number of nodes]

As the figure above shows, with the Sysbench test program and at least 128 test threads (to guarantee sufficient load), read QPS rises linearly with the number of slave nodes: peak QPS reaches 50,000 with a single master node, 100,000 with 1 master and 1 slave, and 150,000 with 1 master and 2 slaves.

To show more intuitively how read performance extends linearly with the number of slave nodes, we took the highest read QPS for each of the three configurations (1 master 0 slaves, 1 master 1 slave, 1 master 2 slaves) from the figure above, yielding the following performance curve:

[Figure: peak read QPS versus number of nodes]

As the curve shows, read QPS grows almost perfectly linearly with the number of nodes.
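Stated as a simple check against the peak figures quoted above (arithmetic only, no new measurements):

\mathrm{QPS}(n) \approx n \times \mathrm{QPS}(1) = n \times 50\,000,
\qquad \frac{\mathrm{QPS}(3)}{3 \times \mathrm{QPS}(1)} = \frac{150\,000}{150\,000} = 1,

where n is the total number of nodes serving reads; an efficiency ratio of 1 means no per-node capacity is lost as replicas are added.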

Comparison Test with ProxySQL

ProxySQL is a well-known database middleware offering read-write separation, database management, caching, and other features, and it is a favorite of many DBAs in China and abroad. To some extent, ProxySQL is the default choice among open-source read-write separation middleware.

Before the product launch, to see how the UDB read-write separation middleware's read performance compares with this industry benchmark, we ran a head-to-head read-performance test of the two products. To our surprise, the UDB read-write separation middleware outperformed ProxySQL across the board:

[Figure: read QPS comparison between the UDB read-write separation middleware and ProxySQL]

From the test results, with two backend database nodes of identical configuration, the UDB read-write separation middleware peaked at 100,000 read QPS while ProxySQL peaked at only 75,000, a 25% gap relative to the UDB figure. ProxySQL did consume less CPU than the UDB middleware; since the bottleneck was I/O on the backend database nodes, neither middleware fully used the test machine's CPU capacity.
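Spelling the gap out (arithmetic on the two peak figures):

\frac{100\,000 - 75\,000}{100\,000} = 25\%,
\qquad \frac{100\,000}{75\,000} \approx 1.33,

that is, ProxySQL's peak was 25% below the UDB middleware's, or equivalently the UDB middleware delivered roughly 33% more read QPS.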

Considering that ProxySQL is written in C++ while the UDB read-write separation middleware is written in Go, this result surprised us. We double-checked the test method and repeated the tests several times, always getting the same results.

Our conclusion from this test: generally speaking, C++ does hold a performance advantage over Go thanks to its tighter control over low-level details, but the advantage is not absolute. Performance often depends more on whether the module design is sound, whether the code has been carefully optimized, and whether the system fully exploits the computing power of multi-core CPUs.
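As a generic illustration of the multi-core point (ordinary Go networking code, not the UDB middleware's implementation): a proxy that handles each client connection in its own goroutine lets the Go runtime schedule the work across all CPU cores automatically. The addresses below are hypothetical.

package main

import (
	"io"
	"log"
	"net"
)

func main() {
	ln, err := net.Listen("tcp", ":13306") // hypothetical listen port
	if err != nil {
		log.Fatal(err)
	}
	for {
		client, err := ln.Accept()
		if err != nil {
			log.Print(err)
			continue
		}
		// One goroutine per connection: the runtime spreads these
		// across every core, so no explicit thread pool is needed.
		go func(c net.Conn) {
			defer c.Close()
			backend, err := net.Dial("tcp", "10.0.0.2:3306") // hypothetical backend
			if err != nil {
				return
			}
			defer backend.Close()
			go io.Copy(backend, c) // client -> backend
			io.Copy(c, backend)    // backend -> client
		}(client)
	}
}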

The test environment is as follows:

Test program: Sysbench 1.1.0
Test machine: (configuration not specified)
Read-write separation middleware: dual-active, two nodes; each node: 4 GB memory / CPU unlimited
ULB: standard ULB product, no special configuration
UDB: master and slave nodes each: 6 GB memory / 200 GB SSD / CPU unlimited / MySQL 5.6
Data volume: 5 tables, 50 million records each
ProxySQL: single node: 8 GB memory / CPU unlimited

Create table statement:

CREATE TABLE `sbtest1` (
  `id` int(10) unsigned NOT NULL,
  `k` int(10) unsigned NOT NULL DEFAULT '0',
  `c` char(120) NOT NULL DEFAULT '',
  `pad` char(60) NOT NULL DEFAULT '',
  PRIMARY KEY (`id`),
  KEY `k_1` (`k`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 MAX_ROWS=1000000;

Test command:

# SELECT-only workload (oltp_legacy/select.lua) against 5 tables of
# 50 million rows each; --threads was varied over 64/128/192/256 in
# the test steps below, and each run lasts --time=300 seconds.
./src/sysbench --db-driver=mysql \
--mysql-table-engine=innodb \
--mysql-host=10.9.99.169  --mysql-port=3306 --mysql-user=root --mysql-password="liuly624@cloud" --mysql-db=sbtest \
--oltp-tables-count=5 --oltp-table-size=50000000 --report-interval=2 --max-requests=0 --time=300 --threads=128 \
--rand-init=on --rand-type=special --rand-spec-pct=5 --percentile=99 --oltp_auto_inc=off \
--test=/data/sysbench/tests/include/oltp_legacy/select.lua run

The specific test steps are:

  1. Using 64, 128, 192, and 256 threads in turn, connect directly to the UDB master node, apply load, and record the QPS;

  2. Using the same thread counts, connect through the read-write separation middleware with 1 master and 0 slaves (1 UDB node in total) on the backend, and record the QPS;

  3. Using the same thread counts, connect through the read-write separation middleware with 1 master and 1 slave (2 UDB nodes in total) on the backend, and record the QPS (note: the master and slave nodes have identical configurations, likewise below);

  4. Using the same thread counts, connect through the read-write separation middleware with 1 master and 2 slaves (3 UDB nodes in total) on the backend, and record the QPS;

  5. Using the same thread counts, connect through ProxySQL with 1 master and 2 slaves (3 nodes in total) on the backend, and record the QPS. During this test we found that ProxySQL routes each individual SQL statement to a group of backend servers, and the master was not placed in the same group as the slaves; so although 3 nodes were configured on the backend, only the 2 slave nodes actually served this read-only load. Accordingly, the last figure in the performance results above shows ProxySQL's performance with 2 backend slave nodes.
