Testing Tools

Use the fio tool; testing with the libaio engine is recommended.

Installation Method

Linux: yum install fio.x86_64

fio Parameter Description

Parameter             Description
-direct=1             Bypass the page cache and write directly to the disk
-iodepth=128          Depth of the request I/O queue
-rw=write             Read/write pattern; options are randread (random read), randwrite (random write), read (sequential read), write (sequential write), randrw (mixed random read/write)
-ioengine=libaio      I/O engine; libaio is recommended
-bs=4k                Block size, e.g. 4k, 8k, 16k
-size=200G            Size of the file generated by the test
-numjobs=1            Number of test threads
-runtime=1000         Test duration, in seconds
-group_reporting      Summarize results across all threads
-name=test            Test job name
-filename=/data/test  Output path and file name for the test

Common test examples are as follows:

  • Latency performance test:
Read Latency: 
fio -direct=1 -iodepth=1 -rw=read -ioengine=libaio -bs=4k -size=200G -numjobs=1 -runtime=1000 -group_reporting -name=test -filename=/data/test  
Write Latency: 
fio -direct=1 -iodepth=1 -rw=write -ioengine=libaio -bs=4k -size=200G -numjobs=1 -runtime=1000 -group_reporting -name=test -filename=/data/test
  • Throughput performance test:
Read Bandwidth: 
fio -direct=1 -iodepth=32 -rw=read -ioengine=libaio -bs=256k -size=200G -numjobs=4 -runtime=1000 -group_reporting -name=test -filename=/data/test  
Write Bandwidth:  
fio -direct=1 -iodepth=32 -rw=write -ioengine=libaio -bs=256k -size=200G -numjobs=4 -runtime=1000 -group_reporting -name=test -filename=/data/test
  • IOPS performance test (4k block size, 4 jobs × 32 queue depth, random read/write):
Read IOPS: 
fio -direct=1 -iodepth=32 -rw=randread  -ioengine=libaio -bs=4k -size=200G -numjobs=4 -runtime=1000 -group_reporting -name=test -filename=/data/test 
Write IOPS:   
fio -direct=1 -iodepth=32 -rw=randwrite -ioengine=libaio -bs=4k -size=200G -numjobs=4 -runtime=1000 -group_reporting -name=test -filename=/data/test
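As a sanity check on results, the three metrics above are related: in the ideal case, bandwidth ≈ IOPS × block size, so a high-IOPS small-block test and a high-bandwidth large-block test measure different limits of the same disk. A minimal sketch of the arithmetic in shell (the 1,200,000 IOPS figure is just an example value):

```shell
# Rough conversion: bandwidth (MiB/s) = IOPS * block size (KiB) / 1024
iops=1200000   # example: 1.2 million IOPS
bs_kib=4       # 4k block size
bw_mib=$(( iops * bs_kib / 1024 ))
echo "${bw_mib} MiB/s"   # prints "4687 MiB/s"
```

If your measured bandwidth is far below IOPS × block size, the test is latency- or queue-depth-bound rather than throughput-bound.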

RSSD Performance Test

Because the pressure-test conditions play a key role in the results when testing a cloud disk, and to fully exploit multi-core, multi-threaded system performance and reach the RSSD cloud disk's performance indicator of 1.2 million IOPS, you can refer to the following rssd_test.sh script:

#!/bin/bash     
numjobs=16          # Test thread number, do not exceed the number of CPU cores, default 16
iodepth=32          # IO queue depth per thread, default 32
bs=4k               # Size per I/O, default 4k
rw=randread         # Read and write strategy, default random read
dev_name=vdb        # Test block device name, default vdb

if [[ $# == 0 ]]; then
  echo "Default test: `basename $0` $numjobs $iodepth $bs $rw $dev_name"
  echo "Or you can specify parameter:"
  echo "`basename $0` numjobs iodepth bs rw dev_name"
elif [[ $# == 5 ]]; then
  numjobs=$1
  iodepth=$2
  bs=$3
  rw=$4
  dev_name=$5
else
  echo "Parameter number error!"
  echo "`basename $0` numjobs iodepth bs rw dev_name"
  exit 1
fi

nr_cpus=`cat /proc/cpuinfo |grep "processor" |wc -l`
if [ $nr_cpus -lt $numjobs ];then
  echo "Numjobs is more than cpu cores, exit!"
  exit 1
fi
nu=$((numjobs+1))
cpulist=""
for ((i=1;i<10;i++))
do
  list=`cat /sys/block/${dev_name}/mq/*/cpu_list | awk '{if(i<=NF) print $i;}' i="$i" | tr -d ',' | tr '\n' ','`
  if [ -z "$list" ];then
    break
  fi
  cpulist=${cpulist}${list}
done
spincpu=`echo $cpulist | cut -d ',' -f 2-${nu}` # Do not use core 0
echo $spincpu
echo $numjobs
echo 2 > /sys/block/${dev_name}/queue/rq_affinity
sleep 5
# Execute fio command
fio --ioengine=libaio --runtime=30s --numjobs=${numjobs} --iodepth=${iodepth} --bs=${bs} --rw=${rw} --filename=/dev/${dev_name} --time_based=1 --direct=1 --name=test --group_reporting --cpus_allowed=$spincpu --cpus_allowed_policy=split
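The script accepts either zero arguments (run the defaults) or all five; any other count exits with an error. A standalone sketch of that argument contract (the function name check_args is illustrative, not part of the script):

```shell
# Mirrors the script's if/elif/else on $#: 0 args = defaults, 5 args = custom.
check_args() {
  if [[ $# -eq 0 ]]; then
    echo "default: 16 32 4k randread vdb"
  elif [[ $# -eq 5 ]]; then
    echo "custom: $*"
  else
    echo "Parameter number error!" >&2
    return 1
  fi
}
check_args                        # prints "default: 16 32 4k randread vdb"
check_args 8 64 4k randwrite vdb  # prints "custom: 8 64 4k randwrite vdb"
```

The real script additionally validates numjobs against the CPU core count before running fio.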

Testing Description

  1. The script's input parameters can be specified to match your test environment. If none are specified, the default test is executed.

  2. Testing the raw disk directly destroys the file system structure. If there is data on the cloud disk, set filename to a specific file path, such as /mnt/test.image; if the disk holds no data, you can set filename directly to the device name (e.g. /dev/vdb in this example).
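Item 2 can be backed by a small guard before a destructive run; a sketch that checks whether the raw device is currently mounted (device name /dev/vdb is the example from the script, and the /mnt/test.image fallback path is likewise illustrative):

```shell
# If the device appears in the mount table, fall back to a file-based target.
dev=/dev/vdb
if grep -q "^$dev " /proc/mounts; then
  echo "mounted: use -filename=/mnt/test.image to avoid destroying the filesystem"
else
  echo "not mounted: -filename=$dev is safe only if the disk holds no data"
fi
```

Note that an unmounted device can still hold data, so the check is necessary but not sufficient.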

Script Explanation

Block Device Parameters

  • In the test script, the command echo 2 > /sys/block/vdb/queue/rq_affinity sets the rq_affinity value of the block device inside the cloud host instance to 2.

  • When rq_affinity is 1, an IO completion event is sent back to the vCPU group of the process that originally issued the IO. Under multithreaded concurrency, completions can concentrate on a single vCPU of that group, creating a bottleneck that keeps performance from improving.

  • When rq_affinity is 2, the IO completion is executed on the very vCPU that issued the IO, so under multithreaded concurrency the performance of every vCPU can be fully used.

Binding Corresponding vCPU

  • In the conventional mode, a device (Device) has only one request queue (Request-Queue). With multiple threads issuing I/O concurrently, this single queue becomes the performance bottleneck.

  • In the newer Multi-Queue mode, a device can have multiple request queues processing I/O, which fully exploits the performance of the backend storage. If you have 4 I/O threads, bind them to the CPU cores corresponding to different request queues to take full advantage of Multi-Queue.

  • fio provides the parameters cpus_allowed and cpus_allowed_policy to bind vCPUs. Taking the vdb cloud disk as an example, run ls /sys/block/vdb/mq/ to view the device's QueueIds, and run cat /sys/block/vdb/mq/$QueueId/cpu_list to view the cpu_core_ids bound to each QueueId.
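The cpu_list parsing in rssd_test.sh can be illustrated offline; this sketch runs the same awk/tr pipeline against a mock two-queue sample instead of /sys/block/vdb/mq/*/cpu_list (the sample values are invented):

```shell
# Mock of `cat /sys/block/vdb/mq/*/cpu_list` for a 2-queue device:
sample='0, 1, 2, 3
4, 5, 6, 7'
# Take column 1 of every queue, strip commas, join with commas, i.e. collect
# the first vCPU bound to each Request-Queue:
col1=$(printf '%s\n' "$sample" | awk '{if(i<=NF) print $i;}' i=1 | tr -d ',' | tr '\n' ',')
echo "$col1"   # prints "0,4,"
```

The script loops this over columns 1..9, concatenates the results, then drops core 0 with cut before passing the list to fio's --cpus_allowed.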

Copyright © 2024 SurferCloud All Rights Reserved