Since there can be some leftovers:
2023.07.24 07:08:25.238066 [ 140 ] {} <Error> Application: Code: 219. DB::Exception: Cannot drop: filesystem error: in remove: Directory not empty ["/var/lib/clickhouse/data/system/"]. Probably database contain some detached tables or metadata leftovers from Ordinary engine. If you want to remove all data anyway, try to attach database back and drop it again with enabled force_remove_data_recursively_on_drop setting: Exception while trying to convert database system from Ordinary to Atomic. It may be in some intermediate state. You can finish conversion manually by moving the rest tables from system to .tmp_convert.system.9396432095832455195 (using RENAME TABLE) and executing DROP DATABASE system and RENAME DATABASE .tmp_convert.system.9396432095832455195 TO system. (DATABASE_NOT_EMPTY), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000e68af57 in /usr/bin/clickhouse
1. ? @ 0x000000000cab443c in /usr/bin/clickhouse
2. DB::DatabaseOnDisk::drop(std::shared_ptr<DB::Context const>) @ 0x000000001328d617 in /usr/bin/clickhouse
3. DB::DatabaseCatalog::detachDatabase(std::shared_ptr<DB::Context const>, String const&, bool, bool) @ 0x0000000013524a6c in /usr/bin/clickhouse
4. DB::InterpreterDropQuery::executeToDatabaseImpl(DB::ASTDropQuery const&, std::shared_ptr<DB::IDatabase>&, std::vector<StrongTypedef<wide::integer<128ul, unsigned int>, DB::UUIDTag>, std::allocator<StrongTypedef<wide::integer<128ul, unsigned int>, DB::UUIDTag>>>&) @ 0x0000000013bc05e4 in /usr/bin/clickhouse
5. DB::InterpreterDropQuery::executeToDatabase(DB::ASTDropQuery const&) @ 0x0000000013bbc6b8 in /usr/bin/clickhouse
6. DB::InterpreterDropQuery::execute() @ 0x0000000013bbba22 in /usr/bin/clickhouse
7. ? @ 0x00000000140b13a5 in /usr/bin/clickhouse
8. DB::executeQuery(String const&, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x00000000140ad20e in /usr/bin/clickhouse
9. ? @ 0x00000000140d2ef0 in /usr/bin/clickhouse
10. DB::maybeConvertSystemDatabase(std::shared_ptr<DB::Context>) @ 0x00000000140d0aaf in /usr/bin/clickhouse
11. DB::Server::main(std::vector<String, std::allocator<String>> const&) @ 0x000000000e724e55 in /usr/bin/clickhouse
12. Poco::Util::Application::run() @ 0x0000000017ead086 in /usr/bin/clickhouse
13. DB::Server::run() @ 0x000000000e714a5d in /usr/bin/clickhouse
14. Poco::Util::ServerApplication::run(int, char**) @ 0x0000000017ec07b9 in /usr/bin/clickhouse
15. mainEntryClickHouseServer(int, char**) @ 0x000000000e711a26 in /usr/bin/clickhouse
16. main @ 0x0000000008cf13cf in /usr/bin/clickhouse
17. __libc_start_main @ 0x0000000000021b97 in /lib/x86_64-linux-gnu/libc-2.27.so
18. _start @ 0x00000000080705ae in /usr/bin/clickhouse
(version 23.7.1.2012)
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
(highly not recommended) If you really want to use OS packages on modern Debian/Ubuntu instead of "pip": sudo apt install -y docker docker-compose python3-pytest python3-dicttoxml python3-docker python3-pymysql python3-protobuf python3-pymongo python3-tzlocal python3-kazoo python3-psycopg2 kafka-python python3-pytest-timeout python3-minio
If you want to run the tests under a non-privileged user, you must add this user to the docker group: sudo usermod -aG docker $USER and re-login.
(You must close all your sessions (for example, restart your computer))
To check that you have access to Docker, run docker ps.
Run the tests with the pytest command. To select which tests to run, use: pytest -k <test_name_pattern>
By default, tests are run with the system-wide client binary, server binary, and base configs. To change that,
set the following environment variables:
CLICKHOUSE_TESTS_SERVER_BIN_PATH to choose the server binary.
CLICKHOUSE_TESTS_CLIENT_BIN_PATH to choose the client binary.
CLICKHOUSE_TESTS_BASE_CONFIG_DIR to choose the directory from which base configs (config.xml and users.xml) are taken.
Please note that if you use a separate build (ENABLE_CLICKHOUSE_ALL=OFF), you need to build several components, including but not limited to ENABLE_CLICKHOUSE_LIBRARY_BRIDGE=ON ENABLE_CLICKHOUSE_ODBC_BRIDGE=ON ENABLE_CLICKHOUSE_KEEPER=ON, so it is easier to use ENABLE_CLICKHOUSE_ALL=ON.
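For example, a separate-build configuration might be prepared roughly like this (a sketch only; merge these options into your usual CMake invocation):
cmake -DENABLE_CLICKHOUSE_ALL=OFF -DENABLE_CLICKHOUSE_LIBRARY_BRIDGE=ON -DENABLE_CLICKHOUSE_ODBC_BRIDGE=ON -DENABLE_CLICKHOUSE_KEEPER=ON ..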
For tests that use common docker compose files, you may need to set their path with the environment variable: DOCKER_COMPOSE_DIR=$HOME/ClickHouse/docker/test/integration/runner/compose
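For example, running pytest against locally built binaries might look like this (the paths are illustrative; adjust them to your checkout and build directory):
export CLICKHOUSE_TESTS_SERVER_BIN_PATH=$HOME/ClickHouse/build/programs/clickhouse
export CLICKHOUSE_TESTS_CLIENT_BIN_PATH=$HOME/ClickHouse/build/programs/clickhouse
export CLICKHOUSE_TESTS_BASE_CONFIG_DIR=$HOME/ClickHouse/programs/server
export DOCKER_COMPOSE_DIR=$HOME/ClickHouse/docker/test/integration/runner/compose
pytest -k test_foo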
Running with runner script
The only requirements are a freshly configured Docker installation and the runner image:
docker pull clickhouse/integration-tests-runner
Notes:
If you want to run integration tests without sudo, you have to add your user to the docker group: sudo usermod -aG docker $USER. See the Docker documentation for more information about Docker configuration.
If you have already run these tests without the ./runner script, you may have problems with the pytest cache. It can be removed with rm -r __pycache__ .pytest_cache/.
Some tests may require a lot of resources (CPU, RAM, etc.). It is better not to run large tests such as test_cluster_copier or test_distributed_ddl* on your laptop.
You can run tests via the ./runner script and pass pytest arguments as the last argument:
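A sketch of such an invocation with locally built binaries (the option names and paths here are assumptions for illustration; run ./runner --help to see the exact flags supported by your checkout):
./runner --binary $HOME/ClickHouse/build/programs/clickhouse --base-configs-dir $HOME/ClickHouse/programs/server 'test_foo'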
You can also just open a shell inside the container by overriding the command:
./runner --command=bash
Rebuilding the docker containers
The main container used for integration tests lives in docker/test/integration/base/Dockerfile. Rebuild it with
cd docker/test/integration/base
docker build -t clickhouse/integration-test .
The helper container used by the runner script is in docker/test/integration/runner/Dockerfile.
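The runner image can be rebuilt in a similar way; the commands below are a sketch that reuses the image tag pulled earlier (clickhouse/integration-tests-runner), so double-check the tag your setup expects:
cd docker/test/integration/runner
docker build -t clickhouse/integration-tests-runner .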
Adding new tests
To add a new test named foo, create a directory test_foo with an empty __init__.py and a file
named test.py containing the tests. All functions with names starting with test will become test cases.
The helpers directory contains utilities for:
Launching a ClickHouse cluster with or without ZooKeeper in docker containers.
Sending queries to launched instances.
Introducing network failures such as severing the network link between two instances.
To assert that two TSV files are equal, wrap them in the TSV class and use the regular assert
statement, for example: assert TSV(result) == TSV(reference). If the assertion fails, pytest
automatically detects the types of the variables and prints only a small diff of the two files.
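Putting this together, a minimal sketch of test_foo/test.py might look like the following; the helpers.cluster and helpers.test_tools module paths and the add_instance arguments reflect the typical layout of the helpers directory and should be treated as illustrative:

import pytest
from helpers.cluster import ClickHouseCluster
from helpers.test_tools import TSV

# One cluster object per test module; instances must be declared before the cluster starts.
cluster = ClickHouseCluster(__file__)
node = cluster.add_instance("node")

@pytest.fixture(scope="module")
def started_cluster():
    try:
        cluster.start()
        yield cluster
    finally:
        cluster.shutdown()

def test_select_one(started_cluster):
    # Queries are sent to the launched instance as plain TSV; wrap both sides in TSV for the assert.
    assert TSV(node.query("SELECT 1")) == TSV("1")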
Troubleshooting
If tests are failing for mysterious reasons, this may help:
sudo service docker stop
sudo bash -c 'rm -rf /var/lib/docker/*'
sudo service docker start
iptables-nft
On Ubuntu 20.10 and later, in host network mode (the default), one may encounter a problem with nested containers not seeing each other. This happens because the legacy iptables rules and the nftables rules are out of sync. The problem can be solved by: