* Limit log frequency for "Skipping send data over distributed table" message
After SYSTEM STOP DISTRIBUTED SENDS, the server would otherwise print
this message constantly.
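For illustration, a minimal sketch of the scenario (the Distributed
table name dist is hypothetical):
SYSTEM STOP DISTRIBUTED SENDS dist;
INSERT INTO dist VALUES (1);
-- while sends are stopped, every background flush attempt skips
-- sending and logs the "Skipping send data over distributed table"
-- message, hence the need for rate limiting
SYSTEM START DISTRIBUTED SENDS dist; -- sends resume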
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
* Rename the directory monitor concept to async INSERT
Rename the following query settings (preserving backward
compatibility by keeping the old name as an alias):
- distributed_directory_monitor_sleep_time_ms -> distributed_async_insert_sleep_time_ms
- distributed_directory_monitor_max_sleep_time_ms -> distributed_async_insert_max_sleep_time_ms
- distributed_directory_monitor_batch_inserts -> distributed_async_insert_batch
- distributed_directory_monitor_split_batch_on_failure -> distributed_async_insert_split_batch_on_failure
Rename the following table settings (preserving backward
compatibility by keeping the old name as an alias):
- monitor_batch_inserts -> async_insert_batch
- monitor_split_batch_on_failure -> async_insert_split_batch_on_failure
- directory_monitor_sleep_time_ms -> async_insert_sleep_time_ms
- directory_monitor_max_sleep_time_ms -> async_insert_max_sleep_time_ms
And also update all the references:
$ gg -e directory_monitor_ -e monitor_ tests docs | cut -d: -f1 | sort -u | xargs sed \
    -e 's/distributed_directory_monitor_sleep_time_ms/distributed_async_insert_sleep_time_ms/g' \
    -e 's/distributed_directory_monitor_max_sleep_time_ms/distributed_async_insert_max_sleep_time_ms/g' \
    -e 's/distributed_directory_monitor_batch_inserts/distributed_async_insert_batch/g' \
    -e 's/distributed_directory_monitor_split_batch_on_failure/distributed_async_insert_split_batch_on_failure/g' \
    -e 's/monitor_batch_inserts/async_insert_batch/g' \
    -e 's/monitor_split_batch_on_failure/async_insert_split_batch_on_failure/g' \
    -e 's/monitor_sleep_time_ms/async_insert_sleep_time_ms/g' \
    -e 's/monitor_max_sleep_time_ms/async_insert_max_sleep_time_ms/g' \
    -i
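For example (a hedged sketch, arbitrary value): after this change the
old and new query-level names should be interchangeable:
SET distributed_async_insert_sleep_time_ms = 100;       -- new name
SET distributed_directory_monitor_sleep_time_ms = 100;  -- old alias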
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
* Rename async_insert for Distributed to background_insert
This avoids ambiguity between general async INSERTs and INSERTs into
Distributed tables, which do indeed happen in the background, so the
new term expresses it even better.
Mostly done with:
$ git di HEAD^ --name-only | xargs sed -i \
    -e 's/distributed_async_insert/distributed_background_insert/g' \
    -e 's/async_insert_batch/background_insert_batch/g' \
    -e 's/async_insert_split_batch_on_failure/background_insert_split_batch_on_failure/g' \
    -e 's/async_insert_sleep_time_ms/background_insert_sleep_time_ms/g' \
    -e 's/async_insert_max_sleep_time_ms/background_insert_max_sleep_time_ms/g'
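A hedged sketch of the resulting query-level names after this second
rename (arbitrary values):
SET distributed_background_insert_sleep_time_ms = 100;
SET distributed_background_insert_batch = 1;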
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
* Mark 02417_opentelemetry_insert_on_distributed_table as long
CI: https://s3.amazonaws.com/clickhouse-test-reports/55978/7a6abb03a0b507e29e999cb7e04f246a119c6f28/stateless_tests_flaky_check__asan_.html
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
---------
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
* Add zookeeper name in endpoint id
When we migrate a replicated table from one zookeeper cluster to
another (the reason we migrate is that zookeeper's load is too
high), we create a new table with the same zpath, but this fails
and the old table ends up in trouble.
Here is some information:
1. Old table:
CREATE TABLE a1 (`id` UInt64)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/default/a1/{shard}', '{replica}')
ORDER BY (id);
2. New table:
CREATE TABLE a2 (`id` UInt64)
ENGINE = ReplicatedMergeTree('aux1:/clickhouse/tables/default/a1/{shard}', '{replica}')
ORDER BY (id);
3. Error info:
<Error> executeQuery: Code: 220. DB::Exception: Duplicate interserver IO endpoint:
DataPartsExchange:/clickhouse/tables/default/a1/01/replicas/02.
(DUPLICATE_INTERSERVER_IO_ENDPOINT)
<Error> InterserverIOHTTPHandler: Code: 221. DB::Exception: No interserver IO endpoint
named DataPartsExchange:/clickhouse/tables/default/a1/01/replicas/02.
(NO_SUCH_INTERSERVER_IO_ENDPOINT)
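With the zookeeper name included in the endpoint id, the two tables
presumably register distinct interserver endpoints and no longer
collide. As a hedged sketch, the default and auxiliary (aux1 above)
connections can be inspected via:
SELECT name, host, port FROM system.zookeeper_connection;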
* Revert "Add zookeeper name in endpoint id"
This reverts commit 9deb75b249619b7abdd38e3949ca8b3a76c9df8e.
* Add zookeeper name in endpoint id
When we migrate a replicated table from one zookeeper cluster to
another (the reason we migrate is that zookeeper's load is too
high), we create a new table with the same zpath, but this fails
and the old table ends up in trouble.
* Fix incompatibility with a new setting
* Add a test, fix other issues
* Update 02442_auxiliary_zookeeper_endpoint_id.sql
* Update 02735_system_zookeeper_connection.reference
* Update 02735_system_zookeeper_connection.sql
* Update run.sh
* Remove the 'no-fasttest' tag
* Update 02442_auxiliary_zookeeper_endpoint_id.sql
---------
Co-authored-by: Alexander Tokmakov <tavplubix@clickhouse.com>
Co-authored-by: Alexander Tokmakov <tavplubix@gmail.com>