mirror of https://github.com/ClickHouse/ClickHouse.git
synced 2024-11-29 02:52:13 +00:00

Merge branch 'master' into pretty-type-names-default

commit 662c56dd55
@ -375,6 +375,7 @@
 * Do not interpret the `send_timeout` set on the client side as the `receive_timeout` on the server side and vice versa. [#56035](https://github.com/ClickHouse/ClickHouse/pull/56035) ([Azat Khuzhin](https://github.com/azat)).
 * Comparison of time intervals with different units will throw an exception. This closes [#55942](https://github.com/ClickHouse/ClickHouse/issues/55942). You might have occasionally relied on the previous behavior, when the underlying numeric values were compared regardless of the units. [#56090](https://github.com/ClickHouse/ClickHouse/pull/56090) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
 * Rewrote the experimental `S3Queue` table engine completely: changed the way we keep information in ZooKeeper, which allows making fewer ZooKeeper requests; added caching of the ZooKeeper state in cases when we know it will not change; made the polling of S3 less aggressive; changed the way the TTL and the maximum set of tracked files are maintained, which is now a background process. Added `system.s3queue` and `system.s3queue_log` tables. Closes [#54998](https://github.com/ClickHouse/ClickHouse/issues/54998). [#54422](https://github.com/ClickHouse/ClickHouse/pull/54422) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Arbitrary paths on the HTTP endpoint are no longer interpreted as a request to the `/query` endpoint. [#55521](https://github.com/ClickHouse/ClickHouse/pull/55521) ([Konstantin Bogdanov](https://github.com/thevar1able)).

 #### New Feature

 * Add function `arrayFold(accumulator, x1, ..., xn -> expression, initial, array1, ..., arrayn)`, which applies a lambda function to multiple arrays of the same cardinality and collects the result in an accumulator. [#49794](https://github.com/ClickHouse/ClickHouse/pull/49794) ([Lirikl](https://github.com/Lirikl)).
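A worked form of the `arrayFold` semantics described in the entry above (illustration only, not part of the changelog): with lambda $\lambda$, initial accumulator $\mathrm{init}$, and $k$ arrays of length $n$, the result is the element-wise left fold

$$
\mathrm{arrayFold}(\lambda, \mathrm{init}, A^{1}, \dots, A^{k}) \;=\; \lambda(\cdots \lambda(\lambda(\mathrm{init}, A^{1}_{1}, \dots, A^{k}_{1}), A^{1}_{2}, \dots, A^{k}_{2}) \cdots, A^{1}_{n}, \dots, A^{k}_{n}).
$$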
@ -34,7 +34,7 @@ RUN arch=${TARGETARCH:-amd64} \
 # lts / testing / prestable / etc
 ARG REPO_CHANNEL="stable"
 ARG REPOSITORY="https://packages.clickhouse.com/tgz/${REPO_CHANNEL}"
-ARG VERSION="23.12.1.1368"
+ARG VERSION="23.12.2.59"
 ARG PACKAGES="clickhouse-keeper"
 ARG DIRECT_DOWNLOAD_URLS=""
@ -32,7 +32,7 @@ RUN arch=${TARGETARCH:-amd64} \
 # lts / testing / prestable / etc
 ARG REPO_CHANNEL="stable"
 ARG REPOSITORY="https://packages.clickhouse.com/tgz/${REPO_CHANNEL}"
-ARG VERSION="23.12.1.1368"
+ARG VERSION="23.12.2.59"
 ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static"
 ARG DIRECT_DOWNLOAD_URLS=""
@ -30,7 +30,7 @@ RUN sed -i "s|http://archive.ubuntu.com|${apt_archive}|g" /etc/apt/sources.list

 ARG REPO_CHANNEL="stable"
 ARG REPOSITORY="deb [signed-by=/usr/share/keyrings/clickhouse-keyring.gpg] https://packages.clickhouse.com/deb ${REPO_CHANNEL} main"
-ARG VERSION="23.12.1.1368"
+ARG VERSION="23.12.2.59"
 ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static"

 # set non-empty deb_location_url url to create a docker image
docs/changelogs/v23.10.6.60-stable.md (new file, 51 lines)
@ -0,0 +1,51 @@
---
sidebar_position: 1
sidebar_label: 2024
---

# 2024 Changelog

### ClickHouse release v23.10.6.60-stable (68907bbe643) FIXME as compared to v23.10.5.20-stable (e84001e5c61)

#### Improvement
* Backported in [#58493](https://github.com/ClickHouse/ClickHouse/issues/58493): Fix transfer query to MySQL compatible query. Fixes [#57253](https://github.com/ClickHouse/ClickHouse/issues/57253). Fixes [#52654](https://github.com/ClickHouse/ClickHouse/issues/52654). Fixes [#56729](https://github.com/ClickHouse/ClickHouse/issues/56729). [#56456](https://github.com/ClickHouse/ClickHouse/pull/56456) ([flynn](https://github.com/ucasfl)).
* Backported in [#57659](https://github.com/ClickHouse/ClickHouse/issues/57659): Handle the SIGABRT case when getting the PostgreSQL table structure with an empty array. [#57618](https://github.com/ClickHouse/ClickHouse/pull/57618) ([Mike Kot (Михаил Кот)](https://github.com/myrrc)).

#### Build/Testing/Packaging Improvement
* Backported in [#57586](https://github.com/ClickHouse/ClickHouse/issues/57586): Fix issue caught in https://github.com/docker-library/official-images/pull/15846. [#57571](https://github.com/ClickHouse/ClickHouse/pull/57571) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).

#### Bug Fix (user-visible misbehavior in an official stable release)
* Flatten only true Nested type if flatten_nested=1, not all Array(Tuple) [#56132](https://github.com/ClickHouse/ClickHouse/pull/56132) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix ALTER COLUMN with ALIAS [#56493](https://github.com/ClickHouse/ClickHouse/pull/56493) ([Nikolay Degterinsky](https://github.com/evillique)).
* Prevent incompatible ALTER of projection columns [#56948](https://github.com/ClickHouse/ClickHouse/pull/56948) ([Amos Bird](https://github.com/amosbird)).
* Fix segfault after ALTER UPDATE with Nullable MATERIALIZED column [#57147](https://github.com/ClickHouse/ClickHouse/pull/57147) ([Nikolay Degterinsky](https://github.com/evillique)).
* Fix incorrect JOIN plan optimization with partially materialized normal projection [#57196](https://github.com/ClickHouse/ClickHouse/pull/57196) ([Amos Bird](https://github.com/amosbird)).
* Fix `ReadonlyReplica` metric for all cases [#57267](https://github.com/ClickHouse/ClickHouse/pull/57267) ([Antonio Andelic](https://github.com/antonio2368)).
* Background merges correctly use temporary data storage in the cache [#57275](https://github.com/ClickHouse/ClickHouse/pull/57275) ([vdimir](https://github.com/vdimir)).
* MergeTree mutations reuse source part index granularity [#57352](https://github.com/ClickHouse/ClickHouse/pull/57352) ([Maksim Kita](https://github.com/kitaisreal)).
* Fix function jsonMergePatch for partially const columns [#57379](https://github.com/ClickHouse/ClickHouse/pull/57379) ([Nikolay Degterinsky](https://github.com/evillique)).
* Fix working with read buffers in StreamingFormatExecutor [#57438](https://github.com/ClickHouse/ClickHouse/pull/57438) ([Kruglov Pavel](https://github.com/Avogar)).
* bugfix: correctly parse SYSTEM STOP LISTEN TCP SECURE [#57483](https://github.com/ClickHouse/ClickHouse/pull/57483) ([joelynch](https://github.com/joelynch)).
* Ignore ON CLUSTER clause in grant/revoke queries for management of replicated access entities. [#57538](https://github.com/ClickHouse/ClickHouse/pull/57538) ([MikhailBurdukov](https://github.com/MikhailBurdukov)).
* Disable system.kafka_consumers by default (due to possible live memory leak) [#57822](https://github.com/ClickHouse/ClickHouse/pull/57822) ([Azat Khuzhin](https://github.com/azat)).
* Fix invalid memory access in BLAKE3 (Rust) [#57876](https://github.com/ClickHouse/ClickHouse/pull/57876) ([Raúl Marín](https://github.com/Algunenano)).
* Normalize function names in CREATE INDEX [#57906](https://github.com/ClickHouse/ClickHouse/pull/57906) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Fix invalid preprocessing on Keeper [#58069](https://github.com/ClickHouse/ClickHouse/pull/58069) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix Integer overflow in Poco::UTF32Encoding [#58073](https://github.com/ClickHouse/ClickHouse/pull/58073) ([Andrey Fedotov](https://github.com/anfedotoff)).
* Remove parallel parsing for JSONCompactEachRow [#58181](https://github.com/ClickHouse/ClickHouse/pull/58181) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix parallel parsing for JSONCompactEachRow [#58250](https://github.com/ClickHouse/ClickHouse/pull/58250) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix lost blobs after dropping a replica with broken detached parts [#58333](https://github.com/ClickHouse/ClickHouse/pull/58333) ([Alexander Tokmakov](https://github.com/tavplubix)).
* MergeTreePrefetchedReadPool disable for LIMIT only queries [#58505](https://github.com/ClickHouse/ClickHouse/pull/58505) ([Maksim Kita](https://github.com/kitaisreal)).

#### NO CL CATEGORY
* Backported in [#57916](https://github.com/ClickHouse/ClickHouse/issues/57916):. [#57909](https://github.com/ClickHouse/ClickHouse/pull/57909) ([Alexey Milovidov](https://github.com/alexey-milovidov)).

#### NOT FOR CHANGELOG / INSIGNIFICANT
* Pin alpine version of integration tests helper container [#57669](https://github.com/ClickHouse/ClickHouse/pull/57669) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Remove heavy rust stable toolchain [#57905](https://github.com/ClickHouse/ClickHouse/pull/57905) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Fix docker image for integration tests (fixes CI) [#57952](https://github.com/ClickHouse/ClickHouse/pull/57952) ([Azat Khuzhin](https://github.com/azat)).
* Fix test_user_valid_until [#58409](https://github.com/ClickHouse/ClickHouse/pull/58409) ([Nikolay Degterinsky](https://github.com/evillique)).
docs/changelogs/v23.11.4.24-stable.md (new file, 26 lines)
@ -0,0 +1,26 @@
---
sidebar_position: 1
sidebar_label: 2024
---

# 2024 Changelog

### ClickHouse release v23.11.4.24-stable (e79d840d7fe) FIXME as compared to v23.11.3.23-stable (a14ab450b0e)

#### Bug Fix (user-visible misbehavior in an official stable release)
* Flatten only true Nested type if flatten_nested=1, not all Array(Tuple) [#56132](https://github.com/ClickHouse/ClickHouse/pull/56132) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix working with read buffers in StreamingFormatExecutor [#57438](https://github.com/ClickHouse/ClickHouse/pull/57438) ([Kruglov Pavel](https://github.com/Avogar)).
* Disable system.kafka_consumers by default (due to possible live memory leak) [#57822](https://github.com/ClickHouse/ClickHouse/pull/57822) ([Azat Khuzhin](https://github.com/azat)).
* Fix invalid preprocessing on Keeper [#58069](https://github.com/ClickHouse/ClickHouse/pull/58069) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix Integer overflow in Poco::UTF32Encoding [#58073](https://github.com/ClickHouse/ClickHouse/pull/58073) ([Andrey Fedotov](https://github.com/anfedotoff)).
* Remove parallel parsing for JSONCompactEachRow [#58181](https://github.com/ClickHouse/ClickHouse/pull/58181) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix parallel parsing for JSONCompactEachRow [#58250](https://github.com/ClickHouse/ClickHouse/pull/58250) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix lost blobs after dropping a replica with broken detached parts [#58333](https://github.com/ClickHouse/ClickHouse/pull/58333) ([Alexander Tokmakov](https://github.com/tavplubix)).
* MergeTreePrefetchedReadPool disable for LIMIT only queries [#58505](https://github.com/ClickHouse/ClickHouse/pull/58505) ([Maksim Kita](https://github.com/kitaisreal)).

#### NOT FOR CHANGELOG / INSIGNIFICANT
* Handle another case for preprocessing in Keeper [#58308](https://github.com/ClickHouse/ClickHouse/pull/58308) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix test_user_valid_until [#58409](https://github.com/ClickHouse/ClickHouse/pull/58409) ([Nikolay Degterinsky](https://github.com/evillique)).
docs/changelogs/v23.12.2.59-stable.md (new file, 32 lines)
@ -0,0 +1,32 @@
---
sidebar_position: 1
sidebar_label: 2024
---

# 2024 Changelog

### ClickHouse release v23.12.2.59-stable (17ab210e761) FIXME as compared to v23.12.1.1368-stable (a2faa65b080)

#### Backward Incompatible Change
* Backported in [#58389](https://github.com/ClickHouse/ClickHouse/issues/58389): The MergeTree setting `clean_deleted_rows` is deprecated; it has no effect anymore. The `CLEANUP` keyword for `OPTIMIZE` is not allowed by default (unless `allow_experimental_replacing_merge_with_cleanup` is enabled). [#58316](https://github.com/ClickHouse/ClickHouse/pull/58316) ([Alexander Tokmakov](https://github.com/tavplubix)).

#### Bug Fix (user-visible misbehavior in an official stable release)
* Flatten only true Nested type if flatten_nested=1, not all Array(Tuple) [#56132](https://github.com/ClickHouse/ClickHouse/pull/56132) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix working with read buffers in StreamingFormatExecutor [#57438](https://github.com/ClickHouse/ClickHouse/pull/57438) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix lost blobs after dropping a replica with broken detached parts [#58333](https://github.com/ClickHouse/ClickHouse/pull/58333) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Fix segfault when graphite table does not have agg function [#58453](https://github.com/ClickHouse/ClickHouse/pull/58453) ([Duc Canh Le](https://github.com/canhld94)).
* MergeTreePrefetchedReadPool disable for LIMIT only queries [#58505](https://github.com/ClickHouse/ClickHouse/pull/58505) ([Maksim Kita](https://github.com/kitaisreal)).

#### NO CL ENTRY
* NO CL ENTRY: 'Revert "Refreshable materialized views (takeover)"'. [#58296](https://github.com/ClickHouse/ClickHouse/pull/58296) ([Alexander Tokmakov](https://github.com/tavplubix)).

#### NOT FOR CHANGELOG / INSIGNIFICANT
* Fix an error in the release script - it didn't allow making 23.12. [#58288](https://github.com/ClickHouse/ClickHouse/pull/58288) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Update version_date.tsv and changelogs after v23.12.1.1368-stable [#58290](https://github.com/ClickHouse/ClickHouse/pull/58290) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Fix test_storage_s3_queue/test.py::test_drop_table [#58293](https://github.com/ClickHouse/ClickHouse/pull/58293) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Handle another case for preprocessing in Keeper [#58308](https://github.com/ClickHouse/ClickHouse/pull/58308) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix test_user_valid_until [#58409](https://github.com/ClickHouse/ClickHouse/pull/58409) ([Nikolay Degterinsky](https://github.com/evillique)).
docs/changelogs/v23.3.19.32-lts.md (new file, 36 lines)
@ -0,0 +1,36 @@
---
sidebar_position: 1
sidebar_label: 2024
---

# 2024 Changelog

### ClickHouse release v23.3.19.32-lts (c4d4ca8ec02) FIXME as compared to v23.3.18.15-lts (7228475d77a)

#### Backward Incompatible Change
* Backported in [#57840](https://github.com/ClickHouse/ClickHouse/issues/57840): Remove function `arrayFold` because it has a bug. This closes [#57816](https://github.com/ClickHouse/ClickHouse/issues/57816). This closes [#57458](https://github.com/ClickHouse/ClickHouse/issues/57458). [#57836](https://github.com/ClickHouse/ClickHouse/pull/57836) ([Alexey Milovidov](https://github.com/alexey-milovidov)).

#### Improvement
* Backported in [#58489](https://github.com/ClickHouse/ClickHouse/issues/58489): Fix transfer query to MySQL compatible query. Fixes [#57253](https://github.com/ClickHouse/ClickHouse/issues/57253). Fixes [#52654](https://github.com/ClickHouse/ClickHouse/issues/52654). Fixes [#56729](https://github.com/ClickHouse/ClickHouse/issues/56729). [#56456](https://github.com/ClickHouse/ClickHouse/pull/56456) ([flynn](https://github.com/ucasfl)).
* Backported in [#57653](https://github.com/ClickHouse/ClickHouse/issues/57653): Handle the SIGABRT case when getting the PostgreSQL table structure with an empty array. [#57618](https://github.com/ClickHouse/ClickHouse/pull/57618) ([Mike Kot (Михаил Кот)](https://github.com/myrrc)).

#### Build/Testing/Packaging Improvement
* Backported in [#57580](https://github.com/ClickHouse/ClickHouse/issues/57580): Fix issue caught in https://github.com/docker-library/official-images/pull/15846. [#57571](https://github.com/ClickHouse/ClickHouse/pull/57571) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).

#### Bug Fix (user-visible misbehavior in an official stable release)
* Prevent incompatible ALTER of projection columns [#56948](https://github.com/ClickHouse/ClickHouse/pull/56948) ([Amos Bird](https://github.com/amosbird)).
* Fix segfault after ALTER UPDATE with Nullable MATERIALIZED column [#57147](https://github.com/ClickHouse/ClickHouse/pull/57147) ([Nikolay Degterinsky](https://github.com/evillique)).
* Fix incorrect JOIN plan optimization with partially materialized normal projection [#57196](https://github.com/ClickHouse/ClickHouse/pull/57196) ([Amos Bird](https://github.com/amosbird)).
* MergeTree mutations reuse source part index granularity [#57352](https://github.com/ClickHouse/ClickHouse/pull/57352) ([Maksim Kita](https://github.com/kitaisreal)).
* Fix invalid memory access in BLAKE3 (Rust) [#57876](https://github.com/ClickHouse/ClickHouse/pull/57876) ([Raúl Marín](https://github.com/Algunenano)).
* Normalize function names in CREATE INDEX [#57906](https://github.com/ClickHouse/ClickHouse/pull/57906) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Fix invalid preprocessing on Keeper [#58069](https://github.com/ClickHouse/ClickHouse/pull/58069) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix Integer overflow in Poco::UTF32Encoding [#58073](https://github.com/ClickHouse/ClickHouse/pull/58073) ([Andrey Fedotov](https://github.com/anfedotoff)).
* Remove parallel parsing for JSONCompactEachRow [#58181](https://github.com/ClickHouse/ClickHouse/pull/58181) ([Alexey Milovidov](https://github.com/alexey-milovidov)).

#### NOT FOR CHANGELOG / INSIGNIFICANT
* Pin alpine version of integration tests helper container [#57669](https://github.com/ClickHouse/ClickHouse/pull/57669) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Fix docker image for integration tests (fixes CI) [#57952](https://github.com/ClickHouse/ClickHouse/pull/57952) ([Azat Khuzhin](https://github.com/azat)).
docs/changelogs/v23.8.9.54-lts.md (new file, 47 lines)
@ -0,0 +1,47 @@
---
sidebar_position: 1
sidebar_label: 2024
---

# 2024 Changelog

### ClickHouse release v23.8.9.54-lts (192a1d231fa) FIXME as compared to v23.8.8.20-lts (5e012a03bf2)

#### Improvement
* Backported in [#57668](https://github.com/ClickHouse/ClickHouse/issues/57668): Output valid JSON/XML on exception during HTTP query execution. Add setting `http_write_exception_in_output_format` to enable/disable this behaviour (enabled by default). [#52853](https://github.com/ClickHouse/ClickHouse/pull/52853) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#58491](https://github.com/ClickHouse/ClickHouse/issues/58491): Fix transfer query to MySQL compatible query. Fixes [#57253](https://github.com/ClickHouse/ClickHouse/issues/57253). Fixes [#52654](https://github.com/ClickHouse/ClickHouse/issues/52654). Fixes [#56729](https://github.com/ClickHouse/ClickHouse/issues/56729). [#56456](https://github.com/ClickHouse/ClickHouse/pull/56456) ([flynn](https://github.com/ucasfl)).
* Backported in [#57238](https://github.com/ClickHouse/ClickHouse/issues/57238): Fetching a part now waits until that part is fully committed on the remote replica. It is better not to send a part in the PreActive state; in the case of zero copy this is a mandatory restriction. [#56808](https://github.com/ClickHouse/ClickHouse/pull/56808) ([Sema Checherinda](https://github.com/CheSema)).
* Backported in [#57655](https://github.com/ClickHouse/ClickHouse/issues/57655): Handle the SIGABRT case when getting the PostgreSQL table structure with an empty array. [#57618](https://github.com/ClickHouse/ClickHouse/pull/57618) ([Mike Kot (Михаил Кот)](https://github.com/myrrc)).

#### Build/Testing/Packaging Improvement
* Backported in [#57582](https://github.com/ClickHouse/ClickHouse/issues/57582): Fix issue caught in https://github.com/docker-library/official-images/pull/15846. [#57571](https://github.com/ClickHouse/ClickHouse/pull/57571) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).

#### Bug Fix (user-visible misbehavior in an official stable release)
* Flatten only true Nested type if flatten_nested=1, not all Array(Tuple) [#56132](https://github.com/ClickHouse/ClickHouse/pull/56132) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix ALTER COLUMN with ALIAS [#56493](https://github.com/ClickHouse/ClickHouse/pull/56493) ([Nikolay Degterinsky](https://github.com/evillique)).
* Prevent incompatible ALTER of projection columns [#56948](https://github.com/ClickHouse/ClickHouse/pull/56948) ([Amos Bird](https://github.com/amosbird)).
* Fix segfault after ALTER UPDATE with Nullable MATERIALIZED column [#57147](https://github.com/ClickHouse/ClickHouse/pull/57147) ([Nikolay Degterinsky](https://github.com/evillique)).
* Fix incorrect JOIN plan optimization with partially materialized normal projection [#57196](https://github.com/ClickHouse/ClickHouse/pull/57196) ([Amos Bird](https://github.com/amosbird)).
* Fix `ReadonlyReplica` metric for all cases [#57267](https://github.com/ClickHouse/ClickHouse/pull/57267) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix working with read buffers in StreamingFormatExecutor [#57438](https://github.com/ClickHouse/ClickHouse/pull/57438) ([Kruglov Pavel](https://github.com/Avogar)).
* bugfix: correctly parse SYSTEM STOP LISTEN TCP SECURE [#57483](https://github.com/ClickHouse/ClickHouse/pull/57483) ([joelynch](https://github.com/joelynch)).
* Ignore ON CLUSTER clause in grant/revoke queries for management of replicated access entities. [#57538](https://github.com/ClickHouse/ClickHouse/pull/57538) ([MikhailBurdukov](https://github.com/MikhailBurdukov)).
* Disable system.kafka_consumers by default (due to possible live memory leak) [#57822](https://github.com/ClickHouse/ClickHouse/pull/57822) ([Azat Khuzhin](https://github.com/azat)).
* Fix invalid memory access in BLAKE3 (Rust) [#57876](https://github.com/ClickHouse/ClickHouse/pull/57876) ([Raúl Marín](https://github.com/Algunenano)).
* Normalize function names in CREATE INDEX [#57906](https://github.com/ClickHouse/ClickHouse/pull/57906) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Fix invalid preprocessing on Keeper [#58069](https://github.com/ClickHouse/ClickHouse/pull/58069) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix Integer overflow in Poco::UTF32Encoding [#58073](https://github.com/ClickHouse/ClickHouse/pull/58073) ([Andrey Fedotov](https://github.com/anfedotoff)).
* Remove parallel parsing for JSONCompactEachRow [#58181](https://github.com/ClickHouse/ClickHouse/pull/58181) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix parallel parsing for JSONCompactEachRow [#58250](https://github.com/ClickHouse/ClickHouse/pull/58250) ([Kruglov Pavel](https://github.com/Avogar)).

#### NO CL ENTRY
* NO CL ENTRY: 'Update PeekableWriteBuffer.cpp'. [#57701](https://github.com/ClickHouse/ClickHouse/pull/57701) ([Kruglov Pavel](https://github.com/Avogar)).

#### NOT FOR CHANGELOG / INSIGNIFICANT
* Pin alpine version of integration tests helper container [#57669](https://github.com/ClickHouse/ClickHouse/pull/57669) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Remove heavy rust stable toolchain [#57905](https://github.com/ClickHouse/ClickHouse/pull/57905) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Fix docker image for integration tests (fixes CI) [#57952](https://github.com/ClickHouse/ClickHouse/pull/57952) ([Azat Khuzhin](https://github.com/azat)).
@ -573,11 +573,12 @@ void RestorerFromBackup::createDatabase(const String & database_name) const
     create_database_query->if_not_exists = (restore_settings.create_table == RestoreTableCreationMode::kCreateIfNotExists);

     LOG_TRACE(log, "Creating database {}: {}", backQuoteIfNeed(database_name), serializeAST(*create_database_query));
+    auto query_context = Context::createCopy(context);
+    query_context->setSetting("allow_deprecated_database_ordinary", 1);
     try
     {
         /// Execute CREATE DATABASE query.
-        InterpreterCreateQuery interpreter{create_database_query, context};
+        InterpreterCreateQuery interpreter{create_database_query, query_context};
         interpreter.setInternal(true);
         interpreter.execute();
     }
@ -589,6 +589,7 @@
     M(707, GCP_ERROR) \
     M(708, ILLEGAL_STATISTIC) \
     M(709, CANNOT_GET_REPLICATED_DATABASE_SNAPSHOT) \
+    M(710, FAULT_INJECTED) \
     \
     M(999, KEEPER_EXCEPTION) \
     M(1000, POCO_EXCEPTION) \
@ -34,6 +34,8 @@ static struct InitFiu

 #define APPLY_FOR_FAILPOINTS(ONCE, REGULAR, PAUSEABLE_ONCE, PAUSEABLE) \
     ONCE(replicated_merge_tree_commit_zk_fail_after_op) \
+    ONCE(replicated_queue_fail_next_entry) \
+    REGULAR(replicated_queue_unfail_entries) \
     ONCE(replicated_merge_tree_insert_quorum_fail_0) \
     REGULAR(replicated_merge_tree_commit_zk_fail_when_recovering_from_hw_fault) \
     REGULAR(use_delayed_remote_source) \
@ -26,6 +26,8 @@ namespace DB
     M(UInt64, max_active_parts_loading_thread_pool_size, 64, "The number of threads to load active set of data parts (Active ones) at startup.", 0) \
     M(UInt64, max_outdated_parts_loading_thread_pool_size, 32, "The number of threads to load inactive set of data parts (Outdated ones) at startup.", 0) \
     M(UInt64, max_parts_cleaning_thread_pool_size, 128, "The number of threads for concurrent removal of inactive data parts.", 0) \
+    M(UInt64, max_mutations_bandwidth_for_server, 0, "The maximum read speed of all mutations on server in bytes per second. Zero means unlimited.", 0) \
+    M(UInt64, max_merges_bandwidth_for_server, 0, "The maximum read speed of all merges on server in bytes per second. Zero means unlimited.", 0) \
     M(UInt64, max_replicated_fetches_network_bandwidth_for_server, 0, "The maximum speed of data exchange over the network in bytes per second for replicated fetches. Zero means unlimited.", 0) \
     M(UInt64, max_replicated_sends_network_bandwidth_for_server, 0, "The maximum speed of data exchange over the network in bytes per second for replicated sends. Zero means unlimited.", 0) \
     M(UInt64, max_remote_read_network_bandwidth_for_server, 0, "The maximum speed of data exchange over the network in bytes per second for read. Zero means unlimited.", 0) \
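As a rough mental model of what the two new `M(...)` rows above declare (a sketch only; the real rows expand through ClickHouse's settings framework, not into plain struct fields):

```cpp
// Sketch only: conceptual equivalent of the two new M(...) rows above.
#include <cstdint>

struct ServerSettingsSketch
{
    uint64_t max_mutations_bandwidth_for_server = 0; /// max read bytes/s across all mutations; 0 = unlimited
    uint64_t max_merges_bandwidth_for_server = 0;    /// max read bytes/s across all merges; 0 = unlimited
};
```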
@ -92,9 +92,16 @@ void validate(const ASTCreateQuery & create_query)

 DatabasePtr DatabaseFactory::get(const ASTCreateQuery & create, const String & metadata_path, ContextPtr context)
 {
+    const auto engine_name = create.storage->engine->name;
     /// check if the database engine is a valid one before proceeding
-    if (!database_engines.contains(create.storage->engine->name))
-        throw Exception(ErrorCodes::UNKNOWN_DATABASE_ENGINE, "Unknown database engine: {}", create.storage->engine->name);
+    if (!database_engines.contains(engine_name))
+    {
+        auto hints = getHints(engine_name);
+        if (!hints.empty())
+            throw Exception(ErrorCodes::UNKNOWN_DATABASE_ENGINE, "Unknown database engine {}. Maybe you meant: {}", engine_name, toString(hints));
+        else
+            throw Exception(ErrorCodes::UNKNOWN_DATABASE_ENGINE, "Unknown database engine: {}", create.storage->engine->name);
+    }

     /// if the engine is found (i.e. registered with the factory instance), then validate if the
     /// supplied engine arguments, settings and table overrides are valid for the engine.
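The `getHints(engine_name)` call above comes from the `IHints<>` mixin added in the header hunks below; it matches the unknown name against the names returned by `getAllRegisteredNames()`. A self-contained sketch of the same idea, assuming hints are ranked by edit distance (the `levenshtein` helper here is illustrative, not ClickHouse's implementation):

```cpp
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

// Edit distance used to rank candidate names (illustrative helper).
static size_t levenshtein(const std::string & a, const std::string & b)
{
    std::vector<size_t> prev(b.size() + 1), cur(b.size() + 1);
    for (size_t j = 0; j <= b.size(); ++j) prev[j] = j;
    for (size_t i = 1; i <= a.size(); ++i)
    {
        cur[0] = i;
        for (size_t j = 1; j <= b.size(); ++j)
            cur[j] = std::min({prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (a[i - 1] != b[j - 1])});
        std::swap(prev, cur);
    }
    return prev[b.size()];
}

// Return registered names within a small edit distance of the unknown one.
static std::vector<std::string> getHints(const std::string & name, const std::vector<std::string> & registered)
{
    std::vector<std::string> hints;
    for (const auto & candidate : registered)
        if (levenshtein(name, candidate) <= 2)
            hints.push_back(candidate);
    return hints;
}

int main()
{
    const std::vector<std::string> engines{"Atomic", "Lazy", "Replicated", "MySQL"};
    for (const auto & hint : getHints("Atomik", engines))
        std::cout << "Maybe you meant: " << hint << '\n'; // prints "Atomic"
}
```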
@ -1,5 +1,6 @@
 #pragma once

+#include <Common/NamePrompter.h>
 #include <Interpreters/Context_fwd.h>
 #include <Databases/IDatabase.h>
 #include <Parsers/ASTCreateQuery.h>
@ -24,7 +25,7 @@ static inline ValueType safeGetLiteralValue(const ASTPtr &ast, const String &eng
     return ast->as<ASTLiteral>()->value.safeGet<ValueType>();
 }

-class DatabaseFactory : private boost::noncopyable
+class DatabaseFactory : private boost::noncopyable, public IHints<>
 {
 public:
@ -52,6 +53,14 @@ public:

     const DatabaseEngines & getDatabaseEngines() const { return database_engines; }

+    std::vector<String> getAllRegisteredNames() const override
+    {
+        std::vector<String> result;
+        auto getter = [](const auto & pair) { return pair.first; };
+        std::transform(database_engines.begin(), database_engines.end(), std::back_inserter(result), getter);
+        return result;
+    }
+
 private:
     DatabaseEngines database_engines;
@ -330,6 +330,9 @@ struct ContextSharedPart : boost::noncopyable

     mutable ThrottlerPtr backups_server_throttler; /// A server-wide throttler for BACKUPs

+    mutable ThrottlerPtr mutations_throttler; /// A server-wide throttler for mutations
+    mutable ThrottlerPtr merges_throttler; /// A server-wide throttler for merges
+
     MultiVersion<Macros> macros; /// Substitutions extracted from config.
     std::unique_ptr<DDLWorker> ddl_worker TSA_GUARDED_BY(mutex); /// Process ddl commands from zk.
     LoadTaskPtr ddl_worker_startup_task; /// To postpone `ddl_worker->startup()` after all tables startup
@ -738,6 +741,12 @@ struct ContextSharedPart : boost::noncopyable

         if (auto bandwidth = server_settings.max_backup_bandwidth_for_server)
             backups_server_throttler = std::make_shared<Throttler>(bandwidth);
+
+        if (auto bandwidth = server_settings.max_mutations_bandwidth_for_server)
+            mutations_throttler = std::make_shared<Throttler>(bandwidth);
+
+        if (auto bandwidth = server_settings.max_merges_bandwidth_for_server)
+            merges_throttler = std::make_shared<Throttler>(bandwidth);
     }
 };
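For intuition about what these `Throttler` objects do once they are handed to read settings (see the MergeTreeSequentialSource hunk further down), here is a self-contained sketch of a bytes-per-second throttler. It assumes `add(bytes)` sleeps whenever the caller is ahead of the configured rate, which is roughly the observable behavior of ClickHouse's `Throttler`; the real class is more elaborate (bursts, parent throttlers, profile events):

```cpp
#include <chrono>
#include <cstdint>
#include <thread>

// Self-contained sketch of a bytes-per-second throttler (assumed behavior).
class ThrottlerSketch
{
public:
    explicit ThrottlerSketch(uint64_t max_bytes_per_second) : rate(max_bytes_per_second) {}

    // Account for `bytes` of work; sleep long enough to keep the average rate.
    void add(uint64_t bytes)
    {
        total += bytes;
        auto min_elapsed = std::chrono::duration<double>(static_cast<double>(total) / rate);
        auto elapsed = std::chrono::steady_clock::now() - start;
        if (elapsed < min_elapsed)
            std::this_thread::sleep_for(min_elapsed - elapsed);
    }

private:
    const uint64_t rate;      // bytes per second
    uint64_t total = 0;       // bytes accounted so far
    std::chrono::steady_clock::time_point start = std::chrono::steady_clock::now();
};
```

Because the throttler is shared server-wide, all mutation (or merge) reads draw from one budget, which is exactly what `max_mutations_bandwidth_for_server` and `max_merges_bandwidth_for_server` promise.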
@ -3001,6 +3010,16 @@ ThrottlerPtr Context::getBackupsThrottler() const
     return throttler;
 }

+ThrottlerPtr Context::getMutationsThrottler() const
+{
+    return shared->mutations_throttler;
+}
+
+ThrottlerPtr Context::getMergesThrottler() const
+{
+    return shared->merges_throttler;
+}
+
 bool Context::hasDistributedDDL() const
 {
     return getConfigRef().has("distributed_ddl");
@ -1328,6 +1328,9 @@ public:

     ThrottlerPtr getBackupsThrottler() const;

+    ThrottlerPtr getMutationsThrottler() const;
+    ThrottlerPtr getMergesThrottler() const;
+
     /// Kitchen sink
     using ContextData::KitchenSink;
     using ContextData::kitchen_sink;
@ -1280,6 +1280,7 @@ void MutationsInterpreter::Source::read(
     VirtualColumns virtual_columns(std::move(required_columns), part);

     createReadFromPartStep(
+        MergeTreeSequentialSourceType::Mutation,
         plan, *data, storage_snapshot, part,
         std::move(virtual_columns.columns_to_read),
         apply_deleted_mask_, filter, context_,
@ -43,6 +43,8 @@ ReplicatedMergeMutateTaskBase::PrepareResult MergeFromLogEntryTask::prepare()
     LOG_TRACE(log, "Executing log entry to merge parts {} to {}",
         fmt::join(entry.source_parts, ", "), entry.new_part_name);

+    StorageMetadataPtr metadata_snapshot = storage.getInMemoryMetadataPtr();
+    int32_t metadata_version = metadata_snapshot->getMetadataVersion();
     const auto storage_settings_ptr = storage.getSettings();

     if (storage_settings_ptr->always_fetch_merged_part)
@ -129,6 +131,18 @@ ReplicatedMergeMutateTaskBase::PrepareResult MergeFromLogEntryTask::prepare()
         };
     }

+    int32_t part_metadata_version = source_part_or_covering->getMetadataVersion();
+    if (part_metadata_version > metadata_version)
+    {
+        LOG_DEBUG(log, "Source part metadata version {} is newer than the table metadata version {}. ALTER_METADATA is still in progress.",
+            part_metadata_version, metadata_version);
+        return PrepareResult{
+            .prepared_successfully = false,
+            .need_to_check_missing_part_in_fetch = false,
+            .part_log_writer = {}
+        };
+    }
+
     parts.push_back(source_part_or_covering);
 }
@ -176,8 +190,6 @@ ReplicatedMergeMutateTaskBase::PrepareResult MergeFromLogEntryTask::prepare()
     /// It will live until the whole task is being destroyed
     table_lock_holder = storage.lockForShare(RWLockImpl::NO_QUERY, storage_settings_ptr->lock_acquire_timeout_for_background_operations);

-    StorageMetadataPtr metadata_snapshot = storage.getInMemoryMetadataPtr();
-
     auto future_merged_part = std::make_shared<FutureMergedMutatedPart>(parts, entry.new_part_format);
     if (future_merged_part->name != entry.new_part_name)
     {
@ -570,6 +570,7 @@ void MergeTask::VerticalMergeStage::prepareVerticalMergeForOneColumn() const
     for (size_t part_num = 0; part_num < global_ctx->future_part->parts.size(); ++part_num)
     {
         Pipe pipe = createMergeTreeSequentialSource(
+            MergeTreeSequentialSourceType::Merge,
             *global_ctx->data,
             global_ctx->storage_snapshot,
             global_ctx->future_part->parts[part_num],
|
|||||||
for (const auto & part : global_ctx->future_part->parts)
|
for (const auto & part : global_ctx->future_part->parts)
|
||||||
{
|
{
|
||||||
Pipe pipe = createMergeTreeSequentialSource(
|
Pipe pipe = createMergeTreeSequentialSource(
|
||||||
|
MergeTreeSequentialSourceType::Merge,
|
||||||
*global_ctx->data,
|
*global_ctx->data,
|
||||||
global_ctx->storage_snapshot,
|
global_ctx->storage_snapshot,
|
||||||
part,
|
part,
|
||||||
|
@ -22,7 +22,9 @@ namespace ErrorCodes
 }

-/// Lightweight (in terms of logic) stream for reading single part from MergeTree
+/// Lightweight (in terms of logic) stream for reading a single part from
+/// MergeTree, used for merges and mutations.
+///
 /// NOTE:
 /// It doesn't filter out rows that are deleted with lightweight deletes.
 /// Use createMergeTreeSequentialSource to filter out those rows.
@ -30,6 +32,7 @@ class MergeTreeSequentialSource : public ISource
 {
 public:
     MergeTreeSequentialSource(
+        MergeTreeSequentialSourceType type,
         const MergeTreeData & storage_,
         const StorageSnapshotPtr & storage_snapshot_,
         MergeTreeData::DataPartPtr data_part_,
@ -85,6 +88,7 @@ private:

 MergeTreeSequentialSource::MergeTreeSequentialSource(
+    MergeTreeSequentialSourceType type,
     const MergeTreeData & storage_,
     const StorageSnapshotPtr & storage_snapshot_,
     MergeTreeData::DataPartPtr data_part_,
@ -144,10 +148,25 @@ MergeTreeSequentialSource::MergeTreeSequentialSource(
         columns_for_reader = data_part->getColumns().addTypes(columns_to_read);
     }

-    ReadSettings read_settings;
+    const auto & context = storage.getContext();
+    ReadSettings read_settings = context->getReadSettings();
+    read_settings.read_from_filesystem_cache_if_exists_otherwise_bypass_cache = true;
+    /// It does not make sense to use pthread_threadpool for background merges/mutations,
+    /// and it also preserves backward compatibility.
+    read_settings.local_fs_method = LocalFSReadMethod::pread;
     if (read_with_direct_io)
         read_settings.direct_io_threshold = 1;
-    read_settings.read_from_filesystem_cache_if_exists_otherwise_bypass_cache = true;
+
+    /// Configure throttling
+    switch (type)
+    {
+        case Mutation:
+            read_settings.local_throttler = context->getMutationsThrottler();
+            break;
+        case Merge:
+            read_settings.local_throttler = context->getMergesThrottler();
+            break;
+    }
+    read_settings.remote_throttler = read_settings.local_throttler;

     MergeTreeReaderSettings reader_settings =
     {
|
|||||||
|
|
||||||
|
|
||||||
Pipe createMergeTreeSequentialSource(
|
Pipe createMergeTreeSequentialSource(
|
||||||
|
MergeTreeSequentialSourceType type,
|
||||||
const MergeTreeData & storage,
|
const MergeTreeData & storage,
|
||||||
const StorageSnapshotPtr & storage_snapshot,
|
const StorageSnapshotPtr & storage_snapshot,
|
||||||
MergeTreeData::DataPartPtr data_part,
|
MergeTreeData::DataPartPtr data_part,
|
||||||
@ -262,7 +282,7 @@ Pipe createMergeTreeSequentialSource(
     if (need_to_filter_deleted_rows && !has_filter_column)
         columns_to_read.emplace_back(filter_column.name);

-    auto column_part_source = std::make_shared<MergeTreeSequentialSource>(
+    auto column_part_source = std::make_shared<MergeTreeSequentialSource>(type,
         storage, storage_snapshot, data_part, columns_to_read, std::move(mark_ranges),
         /*apply_deleted_mask=*/ false, read_with_direct_io, take_column_types_from_storage, quiet);
@ -290,6 +310,7 @@ class ReadFromPart final : public ISourceStep
 {
 public:
     ReadFromPart(
+        MergeTreeSequentialSourceType type_,
         const MergeTreeData & storage_,
         const StorageSnapshotPtr & storage_snapshot_,
         MergeTreeData::DataPartPtr data_part_,
@ -299,6 +320,7 @@ public:
         ContextPtr context_,
         Poco::Logger * log_)
         : ISourceStep(DataStream{.header = storage_snapshot_->getSampleBlockForColumns(columns_to_read_)})
+        , type(type_)
         , storage(storage_)
         , storage_snapshot(storage_snapshot_)
         , data_part(std::move(data_part_))
@ -335,7 +357,7 @@ public:
         }
     }

-    auto source = createMergeTreeSequentialSource(
+    auto source = createMergeTreeSequentialSource(type,
         storage,
         storage_snapshot,
         data_part,
@ -351,6 +373,7 @@ public:
     }

 private:
+    MergeTreeSequentialSourceType type;
     const MergeTreeData & storage;
     StorageSnapshotPtr storage_snapshot;
     MergeTreeData::DataPartPtr data_part;
@ -362,6 +385,7 @@ private:
 };

 void createReadFromPartStep(
+    MergeTreeSequentialSourceType type,
     QueryPlan & plan,
     const MergeTreeData & storage,
     const StorageSnapshotPtr & storage_snapshot,
@ -372,7 +396,7 @@ void createReadFromPartStep(
     ContextPtr context,
     Poco::Logger * log)
 {
-    auto reading = std::make_unique<ReadFromPart>(
+    auto reading = std::make_unique<ReadFromPart>(type,
         storage, storage_snapshot, std::move(data_part),
         std::move(columns_to_read), apply_deleted_mask,
         filter, std::move(context), log);
@ -8,9 +8,16 @@
 namespace DB
 {

+enum MergeTreeSequentialSourceType
+{
+    Mutation,
+    Merge,
+};
+
 /// Create stream for reading single part from MergeTree.
 /// If the part has lightweight delete mask then the deleted rows are filtered out.
 Pipe createMergeTreeSequentialSource(
+    MergeTreeSequentialSourceType type,
     const MergeTreeData & storage,
     const StorageSnapshotPtr & storage_snapshot,
     MergeTreeData::DataPartPtr data_part,
@ -25,6 +32,7 @@ Pipe createMergeTreeSequentialSource(
 class QueryPlan;

 void createReadFromPartStep(
+    MergeTreeSequentialSourceType type,
     QueryPlan & plan,
     const MergeTreeData & storage,
     const StorageSnapshotPtr & storage_snapshot,
@ -172,6 +172,9 @@ struct ReplicatedMergeTreeLogEntryData
     /// The quorum value (for GET_PART) is a non-zero value when the quorum write is enabled.
     size_t quorum = 0;

+    /// Used only in tests for permanent fault injection for a particular queue entry.
+    bool fault_injected = false;
+
     /// If this MUTATE_PART entry was caused by an alter(modify/drop) query.
     bool isAlterMutation() const
     {
@ -18,6 +18,7 @@
 #include <Common/thread_local_rng.h>
 #include <Common/typeid_cast.h>
 #include <Common/ThreadFuzzer.h>
+#include <Common/FailPoint.h>

 #include <Core/ServerUUID.h>
@ -147,6 +148,12 @@ namespace CurrentMetrics
 namespace DB
 {

+namespace FailPoints
+{
+    extern const char replicated_queue_fail_next_entry[];
+    extern const char replicated_queue_unfail_entries[];
+}
+
 namespace ErrorCodes
 {
     extern const int CANNOT_READ_ALL_DATA;
@ -191,6 +198,7 @@ namespace ErrorCodes
     extern const int TABLE_IS_DROPPED;
     extern const int CANNOT_BACKUP_TABLE;
     extern const int SUPPORT_IS_DISABLED;
+    extern const int FAULT_INJECTED;
 }

 namespace ActionLocks
@ -1737,14 +1745,12 @@ bool StorageReplicatedMergeTree::checkPartChecksumsAndAddCommitOps(

     if (replica_part_header.getColumnsHash() != local_part_header.getColumnsHash())
     {
-        /// Currently there are two (known) cases when it may happen:
+        /// Currently there is only one (known) case when it may happen:
         /// - KILL MUTATION query had removed mutation before all replicas have executed assigned MUTATE_PART entries.
         ///   Some replicas may skip this mutation and update part version without actually applying any changes.
         ///   It leads to mismatching checksum if changes were applied on other replicas.
-        /// - ALTER_METADATA and MERGE_PARTS were reordered on some replicas.
-        ///   It may lead to different number of columns in merged parts on these replicas.
         throw Exception(ErrorCodes::CHECKSUM_DOESNT_MATCH, "Part {} from {} has different columns hash "
-            "(it may rarely happen on race condition with KILL MUTATION or ALTER COLUMN).", part_name, replica);
+            "(it may rarely happen on race condition with KILL MUTATION).", part_name, replica);
     }

     replica_part_header.getChecksums().checkEqual(local_part_header.getChecksums(), true);
@ -1931,6 +1937,17 @@ MergeTreeData::MutableDataPartPtr StorageReplicatedMergeTree::attachPartHelperFo

 bool StorageReplicatedMergeTree::executeLogEntry(LogEntry & entry)
 {
+    fiu_do_on(FailPoints::replicated_queue_fail_next_entry,
+    {
+        entry.fault_injected = true;
+    });
+    fiu_do_on(FailPoints::replicated_queue_unfail_entries,
+    {
+        entry.fault_injected = false;
+    });
+    if (entry.fault_injected)
+        throw Exception(ErrorCodes::FAULT_INJECTED, "Injecting fault for log entry {}", entry.getDescriptionForLogs(format_version));
+
     if (entry.type == LogEntry::DROP_RANGE || entry.type == LogEntry::DROP_PART)
     {
         executeDropRange(entry);
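The `fiu_do_on` calls above are libfiu macros re-exported through `Common/FailPoint.h`. A minimal standalone sketch of the upstream libfiu pattern, assuming libfiu is installed and linked with `-lfiu` (the failpoint name is reused from the diff purely for illustration):

```cpp
#define FIU_ENABLE 1      // without this, libfiu's check macros compile to no-ops
#include <fiu.h>          // fiu_do_on
#include <fiu-control.h>  // fiu_init, fiu_enable
#include <iostream>

int main()
{
    fiu_init(0);  // initialize the fault-injection library
    // Arm the failpoint: every check of this name now reports "fail".
    fiu_enable("replicated_queue_fail_next_entry", 1, nullptr, 0);

    bool fault_injected = false;
    fiu_do_on("replicated_queue_fail_next_entry", fault_injected = true);
    std::cout << "fault injected: " << fault_injected << '\n';  // prints 1
}
```

This explains the pairing above: one failpoint marks a queue entry as permanently failing, the other clears the mark, and `executeLogEntry` throws `FAULT_INJECTED` while the mark is set.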
@ -31,4 +31,7 @@
         <allowed_disk>default</allowed_disk>
         <allowed_path>/backups/</allowed_path>
     </backups>
+
+    <max_mutations_bandwidth_for_server>1000000</max_mutations_bandwidth_for_server> <!-- 1M -->
+    <max_merges_bandwidth_for_server>1000000</max_merges_bandwidth_for_server> <!-- 1M -->
 </clickhouse>
@@ -34,8 +34,8 @@ node = cluster.add_instance(
     "node",
     stay_alive=True,
     main_configs=[
-        "configs/server_backups.xml",
-        "configs/server_overrides.xml",
+        "configs/static_overrides.xml",
+        "configs/dynamic_overrides.xml",
         "configs/ssl.xml",
     ],
     user_configs=[
@@ -64,7 +64,7 @@ def revert_config():
         [
             "bash",
            "-c",
-            f"echo '<clickhouse></clickhouse>' > /etc/clickhouse-server/config.d/server_overrides.xml",
+            f"echo '<clickhouse></clickhouse>' > /etc/clickhouse-server/config.d/dynamic_overrides.xml",
         ]
     )
     node.exec_in_container(
@@ -96,7 +96,7 @@ def node_update_config(mode, setting, value=None):
     if mode is None:
         return
     if mode == "server":
-        config_path = "/etc/clickhouse-server/config.d/server_overrides.xml"
+        config_path = "/etc/clickhouse-server/config.d/dynamic_overrides.xml"
         config_content = f"""
        <clickhouse><{setting}>{value}</{setting}></clickhouse>
        """
@@ -430,3 +430,32 @@ def test_write_throttling(policy, mode, setting, value, should_took):
     )
     _, took = elapsed(node.query, f"insert into data select * from numbers(1e6)")
     assert_took(took, should_took)
+
+
+def test_max_mutations_bandwidth_for_server():
+    node.query(
+        """
+        drop table if exists data;
+        create table data (key UInt64 CODEC(NONE)) engine=MergeTree() order by tuple() settings min_bytes_for_wide_part=1e9;
+        """
+    )
+    node.query("insert into data select * from numbers(1e6)")
+    _, took = elapsed(
+        node.query,
+        "alter table data update key = -key where 1 settings mutations_sync = 1",
+    )
+    # reading 1e6*8 bytes with 1M/s bandwidth should take (8-1)/1=7 seconds
+    assert_took(took, 7)
+
+
+def test_max_merges_bandwidth_for_server():
+    node.query(
+        """
+        drop table if exists data;
+        create table data (key UInt64 CODEC(NONE)) engine=MergeTree() order by tuple() settings min_bytes_for_wide_part=1e9;
+        """
+    )
+    node.query("insert into data select * from numbers(1e6)")
+    _, took = elapsed(node.query, "optimize table data final")
+    # reading 1e6*8 bytes with 1M/s bandwidth should take (8-1)/1=7 seconds
+    assert_took(took, 7)
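The asserted durations follow from the data size: `numbers(1e6)` produces 1e6 UInt64 values, about 8 MB uncompressed with `CODEC(NONE)`, so under the 1000000 bytes/s server caps added above a full rewrite is expected to take roughly (8-1)/1 = 7 seconds (the formula in the test comments presumably accounts for an initial unthrottled burst of about 1 MB). A sketch of the same measurement done by hand, reusing the statements from the tests:

    -- assumes max_mutations_bandwidth_for_server = 1000000 from the server config above
    CREATE TABLE data (key UInt64 CODEC(NONE)) ENGINE = MergeTree
        ORDER BY tuple() SETTINGS min_bytes_for_wide_part = 1e9;
    INSERT INTO data SELECT * FROM numbers(1e6);  -- ~8 MB of data
    -- should take about 7 seconds under the 1 MB/s cap
    ALTER TABLE data UPDATE key = -key WHERE 1 SETTINGS mutations_sync = 1;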
@@ -57,6 +57,31 @@ def test_drop_wrong_database_name(start):
     node.query("DROP DATABASE test;")


+def test_database_engine_name(start):
+    # test with a valid database engine
+    node.query(
+        """
+        CREATE DATABASE test_atomic ENGINE = Atomic;
+        CREATE TABLE test_atomic.table_test_atomic (i Int64) ENGINE = MergeTree() ORDER BY i;
+        INSERT INTO test_atomic.table_test_atomic SELECT 1;
+        """
+    )
+    assert 1 == int(node.query("SELECT * FROM test_atomic.table_test_atomic".strip()))
+    # test with an invalid database engine
+    with pytest.raises(
+        QueryRuntimeException,
+        match="DB::Exception: Unknown database engine Atomic123. Maybe you meant: \\['Atomic'\\].",
+    ):
+        node.query("CREATE DATABASE test_atomic123 ENGINE = Atomic123;")
+
+    node.query(
+        """
+        DROP TABLE test_atomic.table_test_atomic;
+        DROP DATABASE test_atomic;
+        """
+    )
+
+
 def test_wrong_table_name(start):
     node.query(
         """
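The expected message exercises ClickHouse's "Maybe you meant" hinting for unknown engine names; as the new test asserts, a mistyped engine fails like this:

    -- per the test above: Atomic123 is an intentionally invalid engine name
    CREATE DATABASE test_atomic123 ENGINE = Atomic123;
    -- DB::Exception: Unknown database engine Atomic123. Maybe you meant: ['Atomic'].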
@@ -0,0 +1,98 @@
+#!/usr/bin/env bash
+# Tags: no-parallel
+# Tag no-parallel: failpoint is in use
+
+CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
+# shellcheck source=../shell_config.sh
+. "$CUR_DIR"/../shell_config.sh
+
+set -e
+
+function wait_part()
+{
+    local table=$1 && shift
+    local part=$1 && shift
+
+    for ((i = 0; i < 100; ++i)); do
+        if [[ $($CLICKHOUSE_CLIENT -q "select count() from system.parts where database = '$CLICKHOUSE_DATABASE' and table = '$table' and active and name = '$part'") -eq 1 ]]; then
+            return
+        fi
+        sleep 0.1
+    done
+
+    echo "Part $table::$part did not appear" >&2
+}
+
+function restore_failpoints()
+{
+    # restore entry error with failpoints (to avoid endless errors in logs)
+    $CLICKHOUSE_CLIENT -nm -q "
+        system enable failpoint replicated_queue_unfail_entries;
+        system sync replica $failed_replica;
+        system disable failpoint replicated_queue_unfail_entries;
+    "
+}
+trap restore_failpoints EXIT
+
+$CLICKHOUSE_CLIENT -nm --insert_keeper_fault_injection_probability=0 -q "
+    drop table if exists data_r1;
+    drop table if exists data_r2;
+
+    create table data_r1 (key Int, value Int, index value_idx value type minmax) engine=ReplicatedMergeTree('/clickhouse/tables/{database}/data', '{table}') order by key;
+    create table data_r2 (key Int, value Int, index value_idx value type minmax) engine=ReplicatedMergeTree('/clickhouse/tables/{database}/data', '{table}') order by key;
+
+    insert into data_r1 (key) values (1); -- part all_0_0_0
+"
+
+# will fail ALTER_METADATA on one of the replicas
+$CLICKHOUSE_CLIENT -nm -q "
+    system enable failpoint replicated_queue_fail_next_entry;
+    alter table data_r1 drop index value_idx settings alter_sync=0; -- part all_0_0_0_1
+
+    system sync replica data_r1 pull;
+    system sync replica data_r2 pull;
+"
+
+# replica on which ALTER_METADATA succeeded
+success_replica=
+for ((i = 0; i < 100; ++i)); do
+    for table in data_r1 data_r2; do
+        mutations="$($CLICKHOUSE_CLIENT -q "select count() from system.mutations where database = '$CLICKHOUSE_DATABASE' and table = '$table' and is_done = 0")"
+        if [[ $mutations -eq 0 ]]; then
+            success_replica=$table
+        fi
+    done
+    if [[ -n $success_replica ]]; then
+        break
+    fi
+    sleep 0.1
+done
+case "$success_replica" in
+    data_r1) failed_replica=data_r2;;
+    data_r2) failed_replica=data_r1;;
+    *) echo "ALTER_METADATA did not succeed on any replica" >&2 && exit 1;;
+esac
+mutations_on_failed_replica="$($CLICKHOUSE_CLIENT -q "select count() from system.mutations where database = '$CLICKHOUSE_DATABASE' and table = '$failed_replica' and is_done = 0")"
+if [[ $mutations_on_failed_replica != 1 ]]; then
+    echo "Wrong number of mutations on failed replica $failed_replica, mutations $mutations_on_failed_replica" >&2
+fi
+
+# This will create MERGE_PARTS; on the failed replica the part will be fetched from the source replica (since it does not have all parts to execute the merge)
+$CLICKHOUSE_CLIENT -q "optimize table $success_replica final settings optimize_throw_if_noop=1, alter_sync=1" # part all_0_0_1_1
+
+$CLICKHOUSE_CLIENT -nm --insert_keeper_fault_injection_probability=0 -q "
+    insert into $success_replica (key) values (2); -- part all_2_2_0
+    optimize table $success_replica final settings optimize_throw_if_noop=1, alter_sync=1; -- part all_0_2_2_1
+    system sync replica $failed_replica pull;
+"
+
+# Wait for the part to be merged on the failed replica; that will trigger CHECKSUM_DOESNT_MATCH
+wait_part "$failed_replica" all_0_2_2_1
+
+# Already after the part is fetched there will be CHECKSUM_DOESNT_MATCH in case of ALTER_METADATA re-order, but let's restore the failpoints and sync the failed replica first.
+restore_failpoints
+trap '' EXIT
+
+$CLICKHOUSE_CLIENT -q "system flush logs"
+# check for error "Different number of files: 5 compressed (expected 3) and 2 uncompressed ones (expected 2). (CHECKSUM_DOESNT_MATCH)"
+$CLICKHOUSE_CLIENT -q "select part_name, merge_reason, event_type, errorCodeToName(error) from system.part_log where database = '$CLICKHOUSE_DATABASE' and error != 0 order by event_time_microseconds"
@@ -1,7 +1,10 @@
+v23.12.2.59-stable 2024-01-05
 v23.12.1.1368-stable 2023-12-28
+v23.11.4.24-stable 2024-01-05
 v23.11.3.23-stable 2023-12-21
 v23.11.2.11-stable 2023-12-13
 v23.11.1.2711-stable 2023-12-06
+v23.10.6.60-stable 2024-01-05
 v23.10.5.20-stable 2023-11-25
 v23.10.4.25-stable 2023-11-17
 v23.10.3.5-stable 2023-11-10
@@ -13,6 +16,7 @@ v23.9.4.11-stable 2023-11-08
 v23.9.3.12-stable 2023-10-31
 v23.9.2.56-stable 2023-10-19
 v23.9.1.1854-stable 2023-09-29
+v23.8.9.54-lts 2024-01-05
 v23.8.8.20-lts 2023-11-25
 v23.8.7.24-lts 2023-11-17
 v23.8.6.16-lts 2023-11-08
@@ -41,6 +45,7 @@ v23.4.4.16-stable 2023-06-17
 v23.4.3.48-stable 2023-06-12
 v23.4.2.11-stable 2023-05-02
 v23.4.1.1943-stable 2023-04-27
+v23.3.19.32-lts 2024-01-05
 v23.3.18.15-lts 2023-11-25
 v23.3.17.13-lts 2023-11-17
 v23.3.16.7-lts 2023-11-08