---
slug: /en/development/building_and_benchmarking_deflate_qpl
sidebar_position: 73
sidebar_label: Building and Benchmarking DEFLATE_QPL
description: How to build ClickHouse and run benchmarks with the DEFLATE_QPL codec
---

# Build ClickHouse with DEFLATE_QPL
- Make sure your target machine meets the QPL required [prerequisites](https://intel.github.io/qpl/documentation/get_started_docs/installation.html#prerequisites)
- Pass the following flag to CMake when building ClickHouse:

``` bash
cmake -DENABLE_QPL=1 ..
```

- For generic requirements, please refer to the ClickHouse generic [build instructions](/docs/en/development/build.md)
# Run Benchmark with DEFLATE_QPL

## Files list

The folder `benchmark_sample` under [qpl-cmake](https://github.com/ClickHouse/ClickHouse/tree/master/contrib/qpl-cmake) gives an example of how to run the benchmark with python scripts:
`client_scripts` contains python scripts for running typical benchmarks, for example:

- `client_stressing_test.py`: The python script for query stress testing with [1~4] server instances.
- `queries_ssb.sql`: The file listing all queries for the [Star Schema Benchmark](https://clickhouse.com/docs/en/getting-started/example-datasets/star-schema/)
- `allin1_ssb.sh`: This shell script executes the whole benchmark workflow automatically.

`database_files` stores the database files for the lz4/deflate/zstd codecs.
## Run benchmark automatically for Star Schema:

``` bash
$ cd ./benchmark_sample/client_scripts
$ sh run_ssb.sh
```

After completion, please check all the results in the folder `./output/`.

If you run into failures, please run the benchmark manually as described in the sections below.
## Definition

[CLICKHOUSE_EXE] means the path to the clickhouse executable program.
## Environment

- CPU: Sapphire Rapids
- OS requirements: refer to [System Requirements for QPL](https://intel.github.io/qpl/documentation/get_started_docs/installation.html#system-requirements)
- IAA setup: refer to [Accelerator Configuration](https://intel.github.io/qpl/documentation/get_started_docs/installation.html#accelerator-configuration)
- Install python modules:

``` bash
pip3 install clickhouse_driver numpy
```
[Self-check for IAA]

``` bash
$ accel-config list | grep -P 'iax|state'
```

Expected output like this:

``` bash
"dev":"iax1",
"state":"enabled",
"state":"enabled",
```

If you see no output, it means IAA is not ready to work. Please check the IAA setup again.
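As a quick sanity check of this step, the snippet below counts the `"state":"enabled"` entries in `accel-config`-style output. The sample data stands in for the real `accel-config list` output on a configured machine (an assumption for illustration; your device names may differ):

``` bash
# Sample standing in for `accel-config list | grep -P 'iax|state'` output
sample='"dev":"iax1",
"state":"enabled",
"state":"enabled",'
# Count how many work queues/devices report an enabled state
enabled=$(printf '%s\n' "$sample" | grep -c '"state":"enabled"')
echo "enabled entries: $enabled"
```

A count of zero on the real output means IAA is not ready.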
## Generate raw data

``` bash
$ cd ./benchmark_sample
$ mkdir rawdata_dir && cd rawdata_dir
```

Use [`dbgen`](https://clickhouse.com/docs/en/getting-started/example-datasets/star-schema) to generate 100 million rows of data with the parameter:
-s 20

The files `*.tbl` are expected to be output under `./benchmark_sample/rawdata_dir/ssb-dbgen`.
## Database setup

Set up the database with the LZ4 codec:

``` bash
$ cd ./database_dir/lz4
$ [CLICKHOUSE_EXE] server -C config_lz4.xml >& /dev/null&
$ [CLICKHOUSE_EXE] client
```

Here you should see the message `Connected to ClickHouse server` on the console, which means the client successfully set up a connection with the server.

Complete the three steps below, as described in the [Star Schema Benchmark](https://clickhouse.com/docs/en/getting-started/example-datasets/star-schema):
- Creating tables in ClickHouse
- Inserting data. Use `./benchmark_sample/rawdata_dir/ssb-dbgen/*.tbl` as input data.
- Converting "star schema" to de-normalized "flat schema"
Set up the database with the IAA Deflate codec:

``` bash
$ cd ./database_dir/deflate
$ [CLICKHOUSE_EXE] server -C config_deflate.xml >& /dev/null&
$ [CLICKHOUSE_EXE] client
```

Complete the same three steps as for lz4 above.

Set up the database with the ZSTD codec:

``` bash
$ cd ./database_dir/zstd
$ [CLICKHOUSE_EXE] server -C config_zstd.xml >& /dev/null&
$ [CLICKHOUSE_EXE] client
```

Complete the same three steps as for lz4 above.
[Self-check]

For each codec (lz4/zstd/deflate), please execute the query below to make sure the databases were created successfully:

```sql
select count() from lineorder_flat
```

You are expected to see the following output:

```sql
┌───count()─┐
│ 119994608 │
└───────────┘
```
[Self-check for IAA Deflate codec]

The first time you execute an insertion or query from the client, the clickhouse server console is expected to print this log:

```text
Hardware-assisted DeflateQpl codec is ready!
```

If you never find this, but see another log as below:

```text
Initialization of hardware-assisted DeflateQpl codec failed
```

then the IAA devices are not ready and you need to check the IAA setup again.
## Benchmark with single instance

- Before starting the benchmark, please disable C6 and set the CPU frequency governor to `performance`:

``` bash
$ cpupower idle-set -d 3
$ cpupower frequency-set -g performance
```

- To eliminate the impact of memory bound across sockets, we use `numactl` to bind the server on one socket and the client on another socket.
- Single instance means a single server connected with a single client.
Now run the benchmark for LZ4/Deflate/ZSTD respectively:

LZ4:

``` bash
$ cd ./database_dir/lz4
$ numactl -m 0 -N 0 [CLICKHOUSE_EXE] server -C config_lz4.xml >& /dev/null&
$ cd ./client_scripts
$ numactl -m 1 -N 1 python3 client_stressing_test.py queries_ssb.sql 1 > lz4.log
```

IAA deflate:

``` bash
$ cd ./database_dir/deflate
$ numactl -m 0 -N 0 [CLICKHOUSE_EXE] server -C config_deflate.xml >& /dev/null&
$ cd ./client_scripts
$ numactl -m 1 -N 1 python3 client_stressing_test.py queries_ssb.sql 1 > deflate.log
```

ZSTD:

``` bash
$ cd ./database_dir/zstd
$ numactl -m 0 -N 0 [CLICKHOUSE_EXE] server -C config_zstd.xml >& /dev/null&
$ cd ./client_scripts
$ numactl -m 1 -N 1 python3 client_stressing_test.py queries_ssb.sql 1 > zstd.log
```

Now three logs should be output as expected:

```text
lz4.log
deflate.log
zstd.log
```

How to check performance metrics:

We focus on QPS. Please search for the keyword `QPS_Final` and collect the statistics.
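For example, the QPS values can be pulled out of the logs with a one-liner. The `QPS_Final: <value>` line format below is an assumption for illustration; check the actual format emitted by `client_stressing_test.py`:

``` bash
# Build a small sample log standing in for lz4.log (assumed line format)
cat > /tmp/lz4_sample.log <<'EOF'
Q2.2 elapsed: 1.8s
QPS_Final: 12.34
EOF
# Extract the value after "QPS_Final: "
qps=$(awk -F': ' '/QPS_Final/ {print $2}' /tmp/lz4_sample.log)
echo "lz4 QPS_Final = $qps"
```

Run the same extraction against `deflate.log` and `zstd.log` to compare the codecs.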
## Benchmark with multi-instances

- To reduce the impact of memory bound with too many threads, we recommend running the benchmark with multi-instances.
- Multi-instance means multiple (2 or 4) servers connected with their respective clients.
- The cores of one socket need to be divided equally and assigned to the servers respectively.
- For multi-instances, you must create a new folder for each codec and insert the dataset by following steps similar to those for the single instance.

There are 2 differences:
- On the client side, you need to launch clickhouse with the assigned port during table creation and data insertion.
- On the server side, you need to launch clickhouse with the specific xml config file in which the port has been assigned. All customized xml config files for multi-instances are provided under `./server_config`.

Here we assume there are 60 cores per socket and take 2 instances as an example.
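The even split of cores can be sketched as follows. Treating the hyperthread siblings of socket 0 as starting at core 120 is an assumption matching the `-C` ranges used in this section; adjust for your topology:

``` bash
# Split 60 physical cores of socket 0 evenly across 2 instances and
# print the -C argument each instance's numactl invocation would use
instances=2; cores=60; ht_base=120
per=$((cores / instances))
for i in $(seq 0 $((instances - 1))); do
  lo=$((i * per)); hi=$((lo + per - 1))
  echo "instance $i: numactl -C $lo-$hi,$((ht_base + lo))-$((ht_base + hi))"
done
```

This reproduces the `0-29,120-149` and `30-59,150-179` ranges used below.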
Launch server for first instance:
LZ4:
``` bash
$ cd ./database_dir/lz4
$ numactl -C 0-29,120-149 [CLICKHOUSE_EXE] server -C config_lz4.xml >& /dev/null&
```
ZSTD:
``` bash
$ cd ./database_dir/zstd
$ numactl -C 0-29,120-149 [CLICKHOUSE_EXE] server -C config_zstd.xml >& /dev/null&
```
IAA Deflate:
``` bash
$ cd ./database_dir/deflate
$ numactl -C 0-29,120-149 [CLICKHOUSE_EXE] server -C config_deflate.xml >& /dev/null&
```
Launch server for second instance:

LZ4:
``` bash
$ cd ./database_dir && mkdir lz4_s2 && cd lz4_s2
$ cp ../../server_config/config_lz4_s2.xml ./
$ numactl -C 30-59,150-179 [CLICKHOUSE_EXE] server -C config_lz4_s2.xml >& /dev/null&
```
ZSTD:
``` bash
$ cd ./database_dir && mkdir zstd_s2 && cd zstd_s2
$ cp ../../server_config/config_zstd_s2.xml ./
$ numactl -C 30-59,150-179 [CLICKHOUSE_EXE] server -C config_zstd_s2.xml >& /dev/null&
```
IAA Deflate:
``` bash
$ cd ./database_dir && mkdir deflate_s2 && cd deflate_s2
$ cp ../../server_config/config_deflate_s2.xml ./
$ numactl -C 30-59,150-179 [CLICKHOUSE_EXE] server -C config_deflate_s2.xml >& /dev/null&
```
Creating tables and inserting data for second instance:

Creating tables:
``` bash
$ [CLICKHOUSE_EXE] client -m --port=9001
```
Inserting data:

``` bash
$ [CLICKHOUSE_EXE] client --query "INSERT INTO [TBL_FILE_NAME] FORMAT CSV" < [TBL_FILE_NAME].tbl --port=9001
```

- [TBL_FILE_NAME] represents the name of a file matching the pattern `*.tbl` under `./benchmark_sample/rawdata_dir/ssb-dbgen`.
- `--port=9001` stands for the assigned port of the server instance, which is also defined in config_lz4_s2.xml/config_zstd_s2.xml/config_deflate_s2.xml. For even more instances, you need to replace it with the values 9002/9003, which stand for the s3/s4 instances respectively. If you don't assign it, the port defaults to 9000, which is already used by the first instance.
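Rather than typing one insert command per file, the per-file naming can be wrapped in a loop. This sketch only echoes the commands, since `[CLICKHOUSE_EXE]` is a placeholder for the real binary path, so you can review them before running:

``` bash
# Derive each table name from its .tbl filename and print the insert
# command that would be run against the second instance (port 9001)
for f in lineorder.tbl customer.tbl; do
  tbl="${f%.tbl}"   # strip the .tbl suffix to get the table name
  echo "[CLICKHOUSE_EXE] client --port=9001 --query \"INSERT INTO $tbl FORMAT CSV\" < $f"
done
```

Replace the hard-coded file list with `./benchmark_sample/rawdata_dir/ssb-dbgen/*.tbl` and the `echo` with the real client invocation to execute it.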
Benchmarking with 2 instances:

LZ4:
``` bash
$ cd ./database_dir/lz4
$ numactl -C 0-29,120-149 [CLICKHOUSE_EXE] server -C config_lz4.xml >& /dev/null&
$ cd ./database_dir/lz4_s2
$ numactl -C 30-59,150-179 [CLICKHOUSE_EXE] server -C config_lz4_s2.xml >& /dev/null&
$ cd ./client_scripts
$ numactl -m 1 -N 1 python3 client_stressing_test.py queries_ssb.sql 2 > lz4_2insts.log
```
ZSTD:
``` bash
$ cd ./database_dir/zstd
$ numactl -C 0-29,120-149 [CLICKHOUSE_EXE] server -C config_zstd.xml >& /dev/null&
$ cd ./database_dir/zstd_s2
$ numactl -C 30-59,150-179 [CLICKHOUSE_EXE] server -C config_zstd_s2.xml >& /dev/null&
$ cd ./client_scripts
$ numactl -m 1 -N 1 python3 client_stressing_test.py queries_ssb.sql 2 > zstd_2insts.log
```
IAA deflate:
``` bash
$ cd ./database_dir/deflate
$ numactl -C 0-29,120-149 [CLICKHOUSE_EXE] server -C config_deflate.xml >& /dev/null&
$ cd ./database_dir/deflate_s2
$ numactl -C 30-59,150-179 [CLICKHOUSE_EXE] server -C config_deflate_s2.xml >& /dev/null&
$ cd ./client_scripts
$ numactl -m 1 -N 1 python3 client_stressing_test.py queries_ssb.sql 2 > deflate_2insts.log
```
Here the last argument `2` of client_stressing_test.py stands for the number of instances. For more instances, replace it with the value 3 or 4. This script supports up to 4 instances.
Now three logs should be output as expected:
``` text
lz4_2insts.log
deflate_2insts.log
zstd_2insts.log
```
How to check performance metrics:

We focus on QPS. Please search for the keyword `QPS_Final` and collect the statistics.

The benchmark setup for 4 instances is similar to that for 2 instances above.

We recommend using the 2-instance benchmark data as the final report for review.
## Tips

Each time before launching a new clickhouse server, please make sure no background clickhouse process is running. Check for and kill any old one:
``` bash
$ ps aux | grep clickhouse
$ kill -9 [PID]
```
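As a gentler alternative sketch, `pgrep` can report whether any clickhouse process is still alive before you start a new server, letting you decide before resorting to `kill -9`:

``` bash
# Check for a live clickhouse process by exact name
if pgrep -x clickhouse >/dev/null 2>&1; then
  status="running"
else
  status="not running"
fi
echo "clickhouse: $status"
```

If it is still running, `pkill clickhouse` sends SIGTERM first, which lets the server shut down cleanly.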
By comparing the query list in `./client_scripts/queries_ssb.sql` with the official [Star Schema Benchmark](https://clickhouse.com/docs/en/getting-started/example-datasets/star-schema), you will find 3 queries are not included: Q1.2/Q1.3/Q3.4. This is because CPU utilization is very low (< 10%) for these queries, which means they cannot demonstrate performance differences.