There have been cases when patches to the datetime code led to flaky
tests, because the tests themselves had been run with the regular
timezone (TZ).
But if you run these tests with something "specific" (that is not
strictly defined around the year 1970), they will fail.
So, to catch such issues in the PRs themselves, let's randomize
session_timezone as well.
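For illustration, a minimal sketch (the queries are hypothetical, not taken
from a particular test): a check that hard-codes a result obtained under the
default timezone breaks once session_timezone is randomized:
SELECT toDateTime(0);
-- '1970-01-01 00:00:00' only while the effective timezone is UTC
SELECT toDateTime(0) SETTINGS session_timezone = 'Asia/Novosibirsk';
-- renders a different wall-clock value, so a hard-coded expected result fails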
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
* Add zookeeper name in endpoint id
When we migrate a replicated table from one zookeeper cluster to
another (the reason for the migration is that the zookeeper load is
too high), we create a new table with the same zpath, but it fails
and the old table gets into trouble.
Here is some information:
1. Old table:
CREATE TABLE a1 (`id` UInt64)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/default/a1/{shard}', '{replica}')
ORDER BY (id);
2. New table:
CREATE TABLE a2 (`id` UInt64)
ENGINE = ReplicatedMergeTree('aux1:/clickhouse/tables/default/a1/{shard}', '{replica}')
ORDER BY (id);
3. Error info:
<Error> executeQuery: Code: 220. DB::Exception: Duplicate interserver IO endpoint:
DataPartsExchange:/clickhouse/tables/default/a1/01/replicas/02.
(DUPLICATE_INTERSERVER_IO_ENDPOINT)
<Error> InterserverIOHTTPHandler: Code: 221. DB::Exception: No interserver IO endpoint
named DataPartsExchange:/clickhouse/tables/default/a1/01/replicas/02.
(NO_SUCH_INTERSERVER_IO_ENDPOINT)
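With the zookeeper name included in the endpoint id, the two registrations
no longer collide. A hedged sketch of a check (the column list of
system.zookeeper_connection is assumed from the tests updated below):
SELECT name, host, port FROM system.zookeeper_connection;
-- expected to list both the 'default' and the 'aux1' connection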
* Revert "Add zookeeper name in endpoint id"
This reverts commit 9deb75b249619b7abdd38e3949ca8b3a76c9df8e.
* Add zookeeper name in endpoint id
When we migrate a replicated table from one zookeeper cluster to
another (the reason for the migration is that the zookeeper load is
too high), we create a new table with the same zpath, but it fails
and the old table gets into trouble.
* Fix incompatibility with a new setting
* Add a test, fix other issues
* Update 02442_auxiliary_zookeeper_endpoint_id.sql
* Update 02735_system_zookeeper_connection.reference
* Update 02735_system_zookeeper_connection.sql
* Update run.sh
* Remove the 'no-fasttest' tag
* Update 02442_auxiliary_zookeeper_endpoint_id.sql
---------
Co-authored-by: Alexander Tokmakov <tavplubix@clickhouse.com>
Co-authored-by: Alexander Tokmakov <tavplubix@gmail.com>
- Cleanup apt garbage in the container
- Download the same binary in docker and setup_minio.sh
- Use binary and not rpm for minio
- Sort packages in Dockerfiles
If 10 seconds are not enough for the server to finish, then
clickhouse-local (which runs afterwards) cannot obtain the logs, because
the status file is still locked, as in [1]:
Code: 76. DB::Exception: Cannot lock file /var/lib/clickhouse/status. Another server instance in same directory is already running. (CANNOT_OPEN_FILE)
[1]: https://s3.amazonaws.com/clickhouse-test-reports/35075/4a064e5b6f81136f2bf923d85001f25fa05d39ce/stateless_tests_flaky_check__address__actions_.html
So use a proper wait via "clickhouse stop" instead.
v2: Fix permissions of the pid file for replicated database servers
They do not use the default, /var/run/clickhouse-server, so their pid file
directory does not have the proper permissions.
Fixes: #36885
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
Changelog:
- remove query_thread_log (query_log should be enough)
- for s3 storage, /var/lib/clickhouse/data/system/*_log cannot be used, so
replace them with plain TSVWithNamesAndTypes; note also that, thanks to
schema inference, they are still pretty easy to query (see the sketch below)
- support replicated database env correctly
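For example, a hedged sketch of reading one of the exported files back,
e.g. with clickhouse-local (the file name is illustrative):
SELECT count() FROM file('query_log.tsv', 'TSVWithNamesAndTypes');
-- column names and types come from the file header, so no explicit schema is needed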
Co-authored-by: tavplubix <tavplubix@clickhouse.com>
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>