diff --git a/CHANGELOG.md b/CHANGELOG.md
index 283000f1804..1b36142cc9f 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -22,7 +22,7 @@
* The MergeTree setting `clean_deleted_rows` is deprecated, it has no effect anymore. The `CLEANUP` keyword for `OPTIMIZE` is not allowed by default (it can be unlocked with the `allow_experimental_replacing_merge_with_cleanup` setting). [#58267](https://github.com/ClickHouse/ClickHouse/pull/58267) ([Alexander Tokmakov](https://github.com/tavplubix)). This fixes [#57930](https://github.com/ClickHouse/ClickHouse/issues/57930). This closes [#54988](https://github.com/ClickHouse/ClickHouse/issues/54988). This closes [#54570](https://github.com/ClickHouse/ClickHouse/issues/54570). This closes [#50346](https://github.com/ClickHouse/ClickHouse/issues/50346). This closes [#47579](https://github.com/ClickHouse/ClickHouse/issues/47579). The feature has to be removed because it is not good. We have to remove it as quickly as possible, because there is no other option. [#57932](https://github.com/ClickHouse/ClickHouse/pull/57932) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
#### New Feature
-* Implement Refreshable Materialized Views, requested in [#33919](https://github.com/ClickHouse/ClickHouse/issues/57995). [#56946](https://github.com/ClickHouse/ClickHouse/pull/56946) ([Michael Kolupaev](https://github.com/al13n321), [Michael Guzov](https://github.com/koloshmet)).
+* Implement Refreshable Materialized Views, requested in [#33919](https://github.com/ClickHouse/ClickHouse/issues/33919). [#56946](https://github.com/ClickHouse/ClickHouse/pull/56946) ([Michael Kolupaev](https://github.com/al13n321), [Michael Guzov](https://github.com/koloshmet)).
* Introduce `PASTE JOIN`, which allows users to join tables without an `ON` clause, simply by row numbers. Example: `SELECT * FROM (SELECT number AS a FROM numbers(2)) AS t1 PASTE JOIN (SELECT number AS a FROM numbers(2) ORDER BY a DESC) AS t2`. [#57995](https://github.com/ClickHouse/ClickHouse/pull/57995) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
* The `ORDER BY` clause now supports specifying `ALL`, meaning that ClickHouse sorts by all columns in the `SELECT` clause. Example: `SELECT col1, col2 FROM tab WHERE [...] ORDER BY ALL`. [#57875](https://github.com/ClickHouse/ClickHouse/pull/57875) ([zhongyuankai](https://github.com/zhongyuankai)).
* Added a new mutation command `ALTER TABLE <table> APPLY DELETED MASK`, which allows enforcing the application of the mask written by lightweight delete and removing rows marked as deleted from disk. [#57433](https://github.com/ClickHouse/ClickHouse/pull/57433) ([Anton Popov](https://github.com/CurtizJ)).
@@ -375,6 +375,7 @@
* Do not interpret the `send_timeout` set on the client side as the `receive_timeout` on the server side and vice versa. [#56035](https://github.com/ClickHouse/ClickHouse/pull/56035) ([Azat Khuzhin](https://github.com/azat)).
* Comparison of time intervals with different units will throw an exception. This closes [#55942](https://github.com/ClickHouse/ClickHouse/issues/55942). You might have occasionally relied on the previous behavior, when the underlying numeric values were compared regardless of the units. [#56090](https://github.com/ClickHouse/ClickHouse/pull/56090) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Rewrote the experimental `S3Queue` table engine completely: changed the way we keep information in ZooKeeper, which allows making fewer ZooKeeper requests; added caching of the ZooKeeper state in cases when we know the state will not change; made the polling of S3 less aggressive; changed the way the TTL and the max set of tracked files are maintained, now it is a background process. Added `system.s3queue` and `system.s3queue_log` tables. Closes [#54998](https://github.com/ClickHouse/ClickHouse/issues/54998). [#54422](https://github.com/ClickHouse/ClickHouse/pull/54422) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Arbitrary paths on the HTTP endpoint are no longer interpreted as requests to the `/query` endpoint. [#55521](https://github.com/ClickHouse/ClickHouse/pull/55521) ([Konstantin Bogdanov](https://github.com/thevar1able)).
#### New Feature
* Add function `arrayFold(accumulator, x1, ..., xn -> expression, initial, array1, ..., arrayn)` which applies a lambda function to multiple arrays of the same cardinality and collects the result in an accumulator. [#49794](https://github.com/ClickHouse/ClickHouse/pull/49794) ([Lirikl](https://github.com/Lirikl)).
diff --git a/docker/keeper/Dockerfile b/docker/keeper/Dockerfile
index 145f5d13cc2..4b5e8cd3970 100644
--- a/docker/keeper/Dockerfile
+++ b/docker/keeper/Dockerfile
@@ -34,7 +34,7 @@ RUN arch=${TARGETARCH:-amd64} \
# lts / testing / prestable / etc
ARG REPO_CHANNEL="stable"
ARG REPOSITORY="https://packages.clickhouse.com/tgz/${REPO_CHANNEL}"
-ARG VERSION="23.12.1.1368"
+ARG VERSION="23.12.2.59"
ARG PACKAGES="clickhouse-keeper"
ARG DIRECT_DOWNLOAD_URLS=""
diff --git a/docker/server/Dockerfile.alpine b/docker/server/Dockerfile.alpine
index 26d65eb3ccc..452d8539a48 100644
--- a/docker/server/Dockerfile.alpine
+++ b/docker/server/Dockerfile.alpine
@@ -32,7 +32,7 @@ RUN arch=${TARGETARCH:-amd64} \
# lts / testing / prestable / etc
ARG REPO_CHANNEL="stable"
ARG REPOSITORY="https://packages.clickhouse.com/tgz/${REPO_CHANNEL}"
-ARG VERSION="23.12.1.1368"
+ARG VERSION="23.12.2.59"
ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static"
ARG DIRECT_DOWNLOAD_URLS=""
diff --git a/docker/server/Dockerfile.ubuntu b/docker/server/Dockerfile.ubuntu
index 5b96b208b11..0cefa3c14cb 100644
--- a/docker/server/Dockerfile.ubuntu
+++ b/docker/server/Dockerfile.ubuntu
@@ -30,7 +30,7 @@ RUN sed -i "s|http://archive.ubuntu.com|${apt_archive}|g" /etc/apt/sources.list
ARG REPO_CHANNEL="stable"
ARG REPOSITORY="deb [signed-by=/usr/share/keyrings/clickhouse-keyring.gpg] https://packages.clickhouse.com/deb ${REPO_CHANNEL} main"
-ARG VERSION="23.12.1.1368"
+ARG VERSION="23.12.2.59"
ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static"
# set non-empty deb_location_url url to create a docker image
diff --git a/docker/test/stateless/stress_tests.lib b/docker/test/stateless/stress_tests.lib
index 8f89c1b80dd..6f0dabb5207 100644
--- a/docker/test/stateless/stress_tests.lib
+++ b/docker/test/stateless/stress_tests.lib
@@ -236,6 +236,10 @@ function check_logs_for_critical_errors()
&& echo -e "S3_ERROR No such key thrown (see clickhouse-server.log or no_such_key_errors.txt)$FAIL$(trim_server_logs no_such_key_errors.txt)" >> /test_output/test_results.tsv \
|| echo -e "No lost s3 keys$OK" >> /test_output/test_results.tsv
+ rg -Fa "it is lost forever" /var/log/clickhouse-server/clickhouse-server*.log | grep 'SharedMergeTreePartCheckThread' > /dev/null \
+ && echo -e "Lost forever for SharedMergeTree$FAIL" >> /test_output/test_results.tsv \
+ || echo -e "No SharedMergeTree lost forever in clickhouse-server.log$OK" >> /test_output/test_results.tsv
+
# Remove file no_such_key_errors.txt if it's empty
[ -s /test_output/no_such_key_errors.txt ] || rm /test_output/no_such_key_errors.txt
diff --git a/docs/changelogs/v23.10.6.60-stable.md b/docs/changelogs/v23.10.6.60-stable.md
new file mode 100644
index 00000000000..5e1c126e729
--- /dev/null
+++ b/docs/changelogs/v23.10.6.60-stable.md
@@ -0,0 +1,51 @@
+---
+sidebar_position: 1
+sidebar_label: 2024
+---
+
+# 2024 Changelog
+
+### ClickHouse release v23.10.6.60-stable (68907bbe643) FIXME as compared to v23.10.5.20-stable (e84001e5c61)
+
+#### Improvement
+* Backported in [#58493](https://github.com/ClickHouse/ClickHouse/issues/58493): Fix transforming queries to MySQL-compatible queries. Fixes [#57253](https://github.com/ClickHouse/ClickHouse/issues/57253). Fixes [#52654](https://github.com/ClickHouse/ClickHouse/issues/52654). Fixes [#56729](https://github.com/ClickHouse/ClickHouse/issues/56729). [#56456](https://github.com/ClickHouse/ClickHouse/pull/56456) ([flynn](https://github.com/ucasfl)).
+* Backported in [#57659](https://github.com/ClickHouse/ClickHouse/issues/57659): Handle the SIGABRT case when getting the PostgreSQL table structure with an empty array. [#57618](https://github.com/ClickHouse/ClickHouse/pull/57618) ([Mike Kot (Михаил Кот)](https://github.com/myrrc)).
+
+#### Build/Testing/Packaging Improvement
+* Backported in [#57586](https://github.com/ClickHouse/ClickHouse/issues/57586): Fix issue caught in https://github.com/docker-library/official-images/pull/15846. [#57571](https://github.com/ClickHouse/ClickHouse/pull/57571) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
+
+#### Bug Fix (user-visible misbehavior in an official stable release)
+
+* Flatten only true Nested type if flatten_nested=1, not all Array(Tuple) [#56132](https://github.com/ClickHouse/ClickHouse/pull/56132) ([Kruglov Pavel](https://github.com/Avogar)).
+* Fix ALTER COLUMN with ALIAS [#56493](https://github.com/ClickHouse/ClickHouse/pull/56493) ([Nikolay Degterinsky](https://github.com/evillique)).
+* Prevent incompatible ALTER of projection columns [#56948](https://github.com/ClickHouse/ClickHouse/pull/56948) ([Amos Bird](https://github.com/amosbird)).
+* Fix segfault after ALTER UPDATE with Nullable MATERIALIZED column [#57147](https://github.com/ClickHouse/ClickHouse/pull/57147) ([Nikolay Degterinsky](https://github.com/evillique)).
+* Fix incorrect JOIN plan optimization with partially materialized normal projection [#57196](https://github.com/ClickHouse/ClickHouse/pull/57196) ([Amos Bird](https://github.com/amosbird)).
+* Fix `ReadonlyReplica` metric for all cases [#57267](https://github.com/ClickHouse/ClickHouse/pull/57267) ([Antonio Andelic](https://github.com/antonio2368)).
+* Background merges correctly use temporary data storage in the cache [#57275](https://github.com/ClickHouse/ClickHouse/pull/57275) ([vdimir](https://github.com/vdimir)).
+* MergeTree mutations reuse source part index granularity [#57352](https://github.com/ClickHouse/ClickHouse/pull/57352) ([Maksim Kita](https://github.com/kitaisreal)).
+* Fix function jsonMergePatch for partially const columns [#57379](https://github.com/ClickHouse/ClickHouse/pull/57379) ([Nikolay Degterinsky](https://github.com/evillique)).
+* Fix working with read buffers in StreamingFormatExecutor [#57438](https://github.com/ClickHouse/ClickHouse/pull/57438) ([Kruglov Pavel](https://github.com/Avogar)).
+* bugfix: correctly parse SYSTEM STOP LISTEN TCP SECURE [#57483](https://github.com/ClickHouse/ClickHouse/pull/57483) ([joelynch](https://github.com/joelynch)).
+* Ignore ON CLUSTER clause in grant/revoke queries for management of replicated access entities. [#57538](https://github.com/ClickHouse/ClickHouse/pull/57538) ([MikhailBurdukov](https://github.com/MikhailBurdukov)).
+* Disable system.kafka_consumers by default (due to possible live memory leak) [#57822](https://github.com/ClickHouse/ClickHouse/pull/57822) ([Azat Khuzhin](https://github.com/azat)).
+* Fix invalid memory access in BLAKE3 (Rust) [#57876](https://github.com/ClickHouse/ClickHouse/pull/57876) ([Raúl Marín](https://github.com/Algunenano)).
+* Normalize function names in CREATE INDEX [#57906](https://github.com/ClickHouse/ClickHouse/pull/57906) ([Alexander Tokmakov](https://github.com/tavplubix)).
+* Fix invalid preprocessing on Keeper [#58069](https://github.com/ClickHouse/ClickHouse/pull/58069) ([Antonio Andelic](https://github.com/antonio2368)).
+* Fix Integer overflow in Poco::UTF32Encoding [#58073](https://github.com/ClickHouse/ClickHouse/pull/58073) ([Andrey Fedotov](https://github.com/anfedotoff)).
+* Remove parallel parsing for JSONCompactEachRow [#58181](https://github.com/ClickHouse/ClickHouse/pull/58181) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
+* Fix parallel parsing for JSONCompactEachRow [#58250](https://github.com/ClickHouse/ClickHouse/pull/58250) ([Kruglov Pavel](https://github.com/Avogar)).
+* Fix lost blobs after dropping a replica with broken detached parts [#58333](https://github.com/ClickHouse/ClickHouse/pull/58333) ([Alexander Tokmakov](https://github.com/tavplubix)).
+* Disable MergeTreePrefetchedReadPool for LIMIT-only queries [#58505](https://github.com/ClickHouse/ClickHouse/pull/58505) ([Maksim Kita](https://github.com/kitaisreal)).
+
+#### NO CL CATEGORY
+
+* Backported in [#57916](https://github.com/ClickHouse/ClickHouse/issues/57916):. [#57909](https://github.com/ClickHouse/ClickHouse/pull/57909) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
+
+#### NOT FOR CHANGELOG / INSIGNIFICANT
+
+* Pin alpine version of integration tests helper container [#57669](https://github.com/ClickHouse/ClickHouse/pull/57669) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
+* Remove heavy rust stable toolchain [#57905](https://github.com/ClickHouse/ClickHouse/pull/57905) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
+* Fix docker image for integration tests (fixes CI) [#57952](https://github.com/ClickHouse/ClickHouse/pull/57952) ([Azat Khuzhin](https://github.com/azat)).
+* Fix test_user_valid_until [#58409](https://github.com/ClickHouse/ClickHouse/pull/58409) ([Nikolay Degterinsky](https://github.com/evillique)).
+
diff --git a/docs/changelogs/v23.11.4.24-stable.md b/docs/changelogs/v23.11.4.24-stable.md
new file mode 100644
index 00000000000..40096285b06
--- /dev/null
+++ b/docs/changelogs/v23.11.4.24-stable.md
@@ -0,0 +1,26 @@
+---
+sidebar_position: 1
+sidebar_label: 2024
+---
+
+# 2024 Changelog
+
+### ClickHouse release v23.11.4.24-stable (e79d840d7fe) FIXME as compared to v23.11.3.23-stable (a14ab450b0e)
+
+#### Bug Fix (user-visible misbehavior in an official stable release)
+
+* Flatten only true Nested type if flatten_nested=1, not all Array(Tuple) [#56132](https://github.com/ClickHouse/ClickHouse/pull/56132) ([Kruglov Pavel](https://github.com/Avogar)).
+* Fix working with read buffers in StreamingFormatExecutor [#57438](https://github.com/ClickHouse/ClickHouse/pull/57438) ([Kruglov Pavel](https://github.com/Avogar)).
+* Disable system.kafka_consumers by default (due to possible live memory leak) [#57822](https://github.com/ClickHouse/ClickHouse/pull/57822) ([Azat Khuzhin](https://github.com/azat)).
+* Fix invalid preprocessing on Keeper [#58069](https://github.com/ClickHouse/ClickHouse/pull/58069) ([Antonio Andelic](https://github.com/antonio2368)).
+* Fix Integer overflow in Poco::UTF32Encoding [#58073](https://github.com/ClickHouse/ClickHouse/pull/58073) ([Andrey Fedotov](https://github.com/anfedotoff)).
+* Remove parallel parsing for JSONCompactEachRow [#58181](https://github.com/ClickHouse/ClickHouse/pull/58181) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
+* Fix parallel parsing for JSONCompactEachRow [#58250](https://github.com/ClickHouse/ClickHouse/pull/58250) ([Kruglov Pavel](https://github.com/Avogar)).
+* Fix lost blobs after dropping a replica with broken detached parts [#58333](https://github.com/ClickHouse/ClickHouse/pull/58333) ([Alexander Tokmakov](https://github.com/tavplubix)).
+* Disable MergeTreePrefetchedReadPool for LIMIT-only queries [#58505](https://github.com/ClickHouse/ClickHouse/pull/58505) ([Maksim Kita](https://github.com/kitaisreal)).
+
+#### NOT FOR CHANGELOG / INSIGNIFICANT
+
+* Handle another case for preprocessing in Keeper [#58308](https://github.com/ClickHouse/ClickHouse/pull/58308) ([Antonio Andelic](https://github.com/antonio2368)).
+* Fix test_user_valid_until [#58409](https://github.com/ClickHouse/ClickHouse/pull/58409) ([Nikolay Degterinsky](https://github.com/evillique)).
+
diff --git a/docs/changelogs/v23.12.2.59-stable.md b/docs/changelogs/v23.12.2.59-stable.md
new file mode 100644
index 00000000000..6533f4e6b86
--- /dev/null
+++ b/docs/changelogs/v23.12.2.59-stable.md
@@ -0,0 +1,32 @@
+---
+sidebar_position: 1
+sidebar_label: 2024
+---
+
+# 2024 Changelog
+
+### ClickHouse release v23.12.2.59-stable (17ab210e761) FIXME as compared to v23.12.1.1368-stable (a2faa65b080)
+
+#### Backward Incompatible Change
+* Backported in [#58389](https://github.com/ClickHouse/ClickHouse/issues/58389): The MergeTree setting `clean_deleted_rows` is deprecated, it has no effect anymore. The `CLEANUP` keyword for `OPTIMIZE` is not allowed by default (unless `allow_experimental_replacing_merge_with_cleanup` is enabled). [#58316](https://github.com/ClickHouse/ClickHouse/pull/58316) ([Alexander Tokmakov](https://github.com/tavplubix)).
+
+#### Bug Fix (user-visible misbehavior in an official stable release)
+
+* Flatten only true Nested type if flatten_nested=1, not all Array(Tuple) [#56132](https://github.com/ClickHouse/ClickHouse/pull/56132) ([Kruglov Pavel](https://github.com/Avogar)).
+* Fix working with read buffers in StreamingFormatExecutor [#57438](https://github.com/ClickHouse/ClickHouse/pull/57438) ([Kruglov Pavel](https://github.com/Avogar)).
+* Fix lost blobs after dropping a replica with broken detached parts [#58333](https://github.com/ClickHouse/ClickHouse/pull/58333) ([Alexander Tokmakov](https://github.com/tavplubix)).
+* Fix segfault when graphite table does not have agg function [#58453](https://github.com/ClickHouse/ClickHouse/pull/58453) ([Duc Canh Le](https://github.com/canhld94)).
+* Disable MergeTreePrefetchedReadPool for LIMIT-only queries [#58505](https://github.com/ClickHouse/ClickHouse/pull/58505) ([Maksim Kita](https://github.com/kitaisreal)).
+
+#### NO CL ENTRY
+
+* NO CL ENTRY: 'Revert "Refreshable materialized views (takeover)"'. [#58296](https://github.com/ClickHouse/ClickHouse/pull/58296) ([Alexander Tokmakov](https://github.com/tavplubix)).
+
+#### NOT FOR CHANGELOG / INSIGNIFICANT
+
+* Fix an error in the release script - it didn't allow making 23.12. [#58288](https://github.com/ClickHouse/ClickHouse/pull/58288) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
+* Update version_date.tsv and changelogs after v23.12.1.1368-stable [#58290](https://github.com/ClickHouse/ClickHouse/pull/58290) ([robot-clickhouse](https://github.com/robot-clickhouse)).
+* Fix test_storage_s3_queue/test.py::test_drop_table [#58293](https://github.com/ClickHouse/ClickHouse/pull/58293) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Handle another case for preprocessing in Keeper [#58308](https://github.com/ClickHouse/ClickHouse/pull/58308) ([Antonio Andelic](https://github.com/antonio2368)).
+* Fix test_user_valid_until [#58409](https://github.com/ClickHouse/ClickHouse/pull/58409) ([Nikolay Degterinsky](https://github.com/evillique)).
+
diff --git a/docs/changelogs/v23.3.19.32-lts.md b/docs/changelogs/v23.3.19.32-lts.md
new file mode 100644
index 00000000000..4604c986fe6
--- /dev/null
+++ b/docs/changelogs/v23.3.19.32-lts.md
@@ -0,0 +1,36 @@
+---
+sidebar_position: 1
+sidebar_label: 2024
+---
+
+# 2024 Changelog
+
+### ClickHouse release v23.3.19.32-lts (c4d4ca8ec02) FIXME as compared to v23.3.18.15-lts (7228475d77a)
+
+#### Backward Incompatible Change
+* Backported in [#57840](https://github.com/ClickHouse/ClickHouse/issues/57840): Remove function `arrayFold` because it has a bug. This closes [#57816](https://github.com/ClickHouse/ClickHouse/issues/57816). This closes [#57458](https://github.com/ClickHouse/ClickHouse/issues/57458). [#57836](https://github.com/ClickHouse/ClickHouse/pull/57836) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
+
+#### Improvement
+* Backported in [#58489](https://github.com/ClickHouse/ClickHouse/issues/58489): Fix transforming queries to MySQL-compatible queries. Fixes [#57253](https://github.com/ClickHouse/ClickHouse/issues/57253). Fixes [#52654](https://github.com/ClickHouse/ClickHouse/issues/52654). Fixes [#56729](https://github.com/ClickHouse/ClickHouse/issues/56729). [#56456](https://github.com/ClickHouse/ClickHouse/pull/56456) ([flynn](https://github.com/ucasfl)).
+* Backported in [#57653](https://github.com/ClickHouse/ClickHouse/issues/57653): Handle the SIGABRT case when getting the PostgreSQL table structure with an empty array. [#57618](https://github.com/ClickHouse/ClickHouse/pull/57618) ([Mike Kot (Михаил Кот)](https://github.com/myrrc)).
+
+#### Build/Testing/Packaging Improvement
+* Backported in [#57580](https://github.com/ClickHouse/ClickHouse/issues/57580): Fix issue caught in https://github.com/docker-library/official-images/pull/15846. [#57571](https://github.com/ClickHouse/ClickHouse/pull/57571) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
+
+#### Bug Fix (user-visible misbehavior in an official stable release)
+
+* Prevent incompatible ALTER of projection columns [#56948](https://github.com/ClickHouse/ClickHouse/pull/56948) ([Amos Bird](https://github.com/amosbird)).
+* Fix segfault after ALTER UPDATE with Nullable MATERIALIZED column [#57147](https://github.com/ClickHouse/ClickHouse/pull/57147) ([Nikolay Degterinsky](https://github.com/evillique)).
+* Fix incorrect JOIN plan optimization with partially materialized normal projection [#57196](https://github.com/ClickHouse/ClickHouse/pull/57196) ([Amos Bird](https://github.com/amosbird)).
+* MergeTree mutations reuse source part index granularity [#57352](https://github.com/ClickHouse/ClickHouse/pull/57352) ([Maksim Kita](https://github.com/kitaisreal)).
+* Fix invalid memory access in BLAKE3 (Rust) [#57876](https://github.com/ClickHouse/ClickHouse/pull/57876) ([Raúl Marín](https://github.com/Algunenano)).
+* Normalize function names in CREATE INDEX [#57906](https://github.com/ClickHouse/ClickHouse/pull/57906) ([Alexander Tokmakov](https://github.com/tavplubix)).
+* Fix invalid preprocessing on Keeper [#58069](https://github.com/ClickHouse/ClickHouse/pull/58069) ([Antonio Andelic](https://github.com/antonio2368)).
+* Fix Integer overflow in Poco::UTF32Encoding [#58073](https://github.com/ClickHouse/ClickHouse/pull/58073) ([Andrey Fedotov](https://github.com/anfedotoff)).
+* Remove parallel parsing for JSONCompactEachRow [#58181](https://github.com/ClickHouse/ClickHouse/pull/58181) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
+
+#### NOT FOR CHANGELOG / INSIGNIFICANT
+
+* Pin alpine version of integration tests helper container [#57669](https://github.com/ClickHouse/ClickHouse/pull/57669) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
+* Fix docker image for integration tests (fixes CI) [#57952](https://github.com/ClickHouse/ClickHouse/pull/57952) ([Azat Khuzhin](https://github.com/azat)).
+
diff --git a/docs/changelogs/v23.8.9.54-lts.md b/docs/changelogs/v23.8.9.54-lts.md
new file mode 100644
index 00000000000..00607c60c39
--- /dev/null
+++ b/docs/changelogs/v23.8.9.54-lts.md
@@ -0,0 +1,47 @@
+---
+sidebar_position: 1
+sidebar_label: 2024
+---
+
+# 2024 Changelog
+
+### ClickHouse release v23.8.9.54-lts (192a1d231fa) FIXME as compared to v23.8.8.20-lts (5e012a03bf2)
+
+#### Improvement
+* Backported in [#57668](https://github.com/ClickHouse/ClickHouse/issues/57668): Output valid JSON/XML on exception during HTTP query execution. Add setting `http_write_exception_in_output_format` to enable/disable this behaviour (enabled by default). [#52853](https://github.com/ClickHouse/ClickHouse/pull/52853) ([Kruglov Pavel](https://github.com/Avogar)).
+* Backported in [#58491](https://github.com/ClickHouse/ClickHouse/issues/58491): Fix transforming queries to MySQL-compatible queries. Fixes [#57253](https://github.com/ClickHouse/ClickHouse/issues/57253). Fixes [#52654](https://github.com/ClickHouse/ClickHouse/issues/52654). Fixes [#56729](https://github.com/ClickHouse/ClickHouse/issues/56729). [#56456](https://github.com/ClickHouse/ClickHouse/pull/56456) ([flynn](https://github.com/ucasfl)).
+* Backported in [#57238](https://github.com/ClickHouse/ClickHouse/issues/57238): Fetching a part now waits until that part is fully committed on the remote replica. It is better not to send a part in the PreActive state; in the case of zero-copy replication this is a mandatory restriction. [#56808](https://github.com/ClickHouse/ClickHouse/pull/56808) ([Sema Checherinda](https://github.com/CheSema)).
+* Backported in [#57655](https://github.com/ClickHouse/ClickHouse/issues/57655): Handle the SIGABRT case when getting the PostgreSQL table structure with an empty array. [#57618](https://github.com/ClickHouse/ClickHouse/pull/57618) ([Mike Kot (Михаил Кот)](https://github.com/myrrc)).
+
+#### Build/Testing/Packaging Improvement
+* Backported in [#57582](https://github.com/ClickHouse/ClickHouse/issues/57582): Fix issue caught in https://github.com/docker-library/official-images/pull/15846. [#57571](https://github.com/ClickHouse/ClickHouse/pull/57571) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
+
+#### Bug Fix (user-visible misbehavior in an official stable release)
+
+* Flatten only true Nested type if flatten_nested=1, not all Array(Tuple) [#56132](https://github.com/ClickHouse/ClickHouse/pull/56132) ([Kruglov Pavel](https://github.com/Avogar)).
+* Fix ALTER COLUMN with ALIAS [#56493](https://github.com/ClickHouse/ClickHouse/pull/56493) ([Nikolay Degterinsky](https://github.com/evillique)).
+* Prevent incompatible ALTER of projection columns [#56948](https://github.com/ClickHouse/ClickHouse/pull/56948) ([Amos Bird](https://github.com/amosbird)).
+* Fix segfault after ALTER UPDATE with Nullable MATERIALIZED column [#57147](https://github.com/ClickHouse/ClickHouse/pull/57147) ([Nikolay Degterinsky](https://github.com/evillique)).
+* Fix incorrect JOIN plan optimization with partially materialized normal projection [#57196](https://github.com/ClickHouse/ClickHouse/pull/57196) ([Amos Bird](https://github.com/amosbird)).
+* Fix `ReadonlyReplica` metric for all cases [#57267](https://github.com/ClickHouse/ClickHouse/pull/57267) ([Antonio Andelic](https://github.com/antonio2368)).
+* Fix working with read buffers in StreamingFormatExecutor [#57438](https://github.com/ClickHouse/ClickHouse/pull/57438) ([Kruglov Pavel](https://github.com/Avogar)).
+* bugfix: correctly parse SYSTEM STOP LISTEN TCP SECURE [#57483](https://github.com/ClickHouse/ClickHouse/pull/57483) ([joelynch](https://github.com/joelynch)).
+* Ignore ON CLUSTER clause in grant/revoke queries for management of replicated access entities. [#57538](https://github.com/ClickHouse/ClickHouse/pull/57538) ([MikhailBurdukov](https://github.com/MikhailBurdukov)).
+* Disable system.kafka_consumers by default (due to possible live memory leak) [#57822](https://github.com/ClickHouse/ClickHouse/pull/57822) ([Azat Khuzhin](https://github.com/azat)).
+* Fix invalid memory access in BLAKE3 (Rust) [#57876](https://github.com/ClickHouse/ClickHouse/pull/57876) ([Raúl Marín](https://github.com/Algunenano)).
+* Normalize function names in CREATE INDEX [#57906](https://github.com/ClickHouse/ClickHouse/pull/57906) ([Alexander Tokmakov](https://github.com/tavplubix)).
+* Fix invalid preprocessing on Keeper [#58069](https://github.com/ClickHouse/ClickHouse/pull/58069) ([Antonio Andelic](https://github.com/antonio2368)).
+* Fix Integer overflow in Poco::UTF32Encoding [#58073](https://github.com/ClickHouse/ClickHouse/pull/58073) ([Andrey Fedotov](https://github.com/anfedotoff)).
+* Remove parallel parsing for JSONCompactEachRow [#58181](https://github.com/ClickHouse/ClickHouse/pull/58181) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
+* Fix parallel parsing for JSONCompactEachRow [#58250](https://github.com/ClickHouse/ClickHouse/pull/58250) ([Kruglov Pavel](https://github.com/Avogar)).
+
+#### NO CL ENTRY
+
+* NO CL ENTRY: 'Update PeekableWriteBuffer.cpp'. [#57701](https://github.com/ClickHouse/ClickHouse/pull/57701) ([Kruglov Pavel](https://github.com/Avogar)).
+
+#### NOT FOR CHANGELOG / INSIGNIFICANT
+
+* Pin alpine version of integration tests helper container [#57669](https://github.com/ClickHouse/ClickHouse/pull/57669) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
+* Remove heavy rust stable toolchain [#57905](https://github.com/ClickHouse/ClickHouse/pull/57905) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
+* Fix docker image for integration tests (fixes CI) [#57952](https://github.com/ClickHouse/ClickHouse/pull/57952) ([Azat Khuzhin](https://github.com/azat)).
+
diff --git a/docs/en/interfaces/formats.md b/docs/en/interfaces/formats.md
index 836b1f2f637..ed67af48af7 100644
--- a/docs/en/interfaces/formats.md
+++ b/docs/en/interfaces/formats.md
@@ -1262,6 +1262,7 @@ SELECT * FROM json_each_row_nested
- [input_format_import_nested_json](/docs/en/operations/settings/settings-formats.md/#input_format_import_nested_json) - map nested JSON data to nested tables (it works for JSONEachRow format). Default value - `false`.
- [input_format_json_read_bools_as_numbers](/docs/en/operations/settings/settings-formats.md/#input_format_json_read_bools_as_numbers) - allow to parse bools as numbers in JSON input formats. Default value - `true`.
+- [input_format_json_read_bools_as_strings](/docs/en/operations/settings/settings-formats.md/#input_format_json_read_bools_as_strings) - allow to parse bools as strings in JSON input formats. Default value - `true`.
- [input_format_json_read_numbers_as_strings](/docs/en/operations/settings/settings-formats.md/#input_format_json_read_numbers_as_strings) - allow to parse numbers as strings in JSON input formats. Default value - `true`.
- [input_format_json_read_arrays_as_strings](/docs/en/operations/settings/settings-formats.md/#input_format_json_read_arrays_as_strings) - allow to parse JSON arrays as strings in JSON input formats. Default value - `true`.
- [input_format_json_read_objects_as_strings](/docs/en/operations/settings/settings-formats.md/#input_format_json_read_objects_as_strings) - allow to parse JSON objects as strings in JSON input formats. Default value - `true`.
diff --git a/docs/en/interfaces/schema-inference.md b/docs/en/interfaces/schema-inference.md
index ef858796936..4db1d53987a 100644
--- a/docs/en/interfaces/schema-inference.md
+++ b/docs/en/interfaces/schema-inference.md
@@ -614,6 +614,26 @@ DESC format(JSONEachRow, $$
└───────┴─────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
+##### input_format_json_read_bools_as_strings
+
+Enabling this setting allows reading Bool values as strings.
+
+This setting is enabled by default.
+
+**Example:**
+
+```sql
+SET input_format_json_read_bools_as_strings = 1;
+DESC format(JSONEachRow, $$
+ {"value" : true}
+ {"value" : "Hello, World"}
+ $$)
+```
+```response
+┌─name──┬─type─────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
+│ value │ Nullable(String) │ │ │ │ │ │
+└───────┴──────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
+```
##### input_format_json_read_arrays_as_strings
Enabling this setting allows reading JSON array values as strings.
diff --git a/docs/en/operations/settings/settings-formats.md b/docs/en/operations/settings/settings-formats.md
index 3d76bd9df73..43a73844b79 100644
--- a/docs/en/operations/settings/settings-formats.md
+++ b/docs/en/operations/settings/settings-formats.md
@@ -377,6 +377,12 @@ Allow parsing bools as numbers in JSON input formats.
Enabled by default.
+## input_format_json_read_bools_as_strings {#input_format_json_read_bools_as_strings}
+
+Allow parsing bools as strings in JSON input formats.
+
+Enabled by default.
+
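+With this setting enabled, JSON booleans can be read into a `String` column as the strings `true`/`false`. A minimal sketch (the table name is hypothetical):
+
+```sql
+CREATE TABLE test_bools (value String) ENGINE = Memory;
+SET input_format_json_read_bools_as_strings = 1;
+INSERT INTO test_bools FORMAT JSONEachRow {"value" : true} {"value" : "Hello"};
+SELECT * FROM test_bools;
+```
+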
## input_format_json_read_numbers_as_strings {#input_format_json_read_numbers_as_strings}
Allow parsing numbers as strings in JSON input formats.
diff --git a/docs/en/operations/settings/settings.md b/docs/en/operations/settings/settings.md
index 6e087467bb9..d4ee8106320 100644
--- a/docs/en/operations/settings/settings.md
+++ b/docs/en/operations/settings/settings.md
@@ -3847,6 +3847,8 @@ Possible values:
- `none` — Is similar to throw, but distributed DDL query returns no result set.
- `null_status_on_timeout` — Returns `NULL` as execution status in some rows of result set instead of throwing `TIMEOUT_EXCEEDED` if query is not finished on the corresponding hosts.
- `never_throw` — Do not throw `TIMEOUT_EXCEEDED` and do not rethrow exceptions if query has failed on some hosts.
+- `null_status_on_timeout_only_active` — Similar to `null_status_on_timeout`, but doesn't wait for inactive replicas of the `Replicated` database.
+- `throw_only_active` — Similar to `throw`, but doesn't wait for inactive replicas of the `Replicated` database.
Default value: `throw`.
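+
+For example, to run a distributed DDL query without waiting for inactive replicas of a `Replicated` database (a minimal sketch; the database, table, and cluster names are hypothetical):
+
+```sql
+SET distributed_ddl_output_mode = 'throw_only_active';
+DROP TABLE IF EXISTS hypothetical_db.hypothetical_table ON CLUSTER hypothetical_cluster;
+```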
diff --git a/docs/en/operations/utilities/clickhouse-format.md b/docs/en/operations/utilities/clickhouse-format.md
index 101310cc65e..3e4295598aa 100644
--- a/docs/en/operations/utilities/clickhouse-format.md
+++ b/docs/en/operations/utilities/clickhouse-format.md
@@ -27,7 +27,7 @@ $ clickhouse-format --query "select number from numbers(10) where number%2 order
Result:
-```sql
+```
SELECT number
FROM numbers(10)
WHERE number % 2
@@ -49,22 +49,20 @@ SELECT sum(number) FROM numbers(5)
3. Multiqueries:
```bash
-$ clickhouse-format -n <<< "SELECT * FROM (SELECT 1 AS x UNION ALL SELECT 1 UNION DISTINCT SELECT 3);"
+$ clickhouse-format -n <<< "SELECT min(number) FROM numbers(5); SELECT max(number) FROM numbers(5);"
```
Result:
-```sql
-SELECT *
-FROM
-(
- SELECT 1 AS x
- UNION ALL
- SELECT 1
- UNION DISTINCT
- SELECT 3
-)
+```
+SELECT min(number)
+FROM numbers(5)
;
+
+SELECT max(number)
+FROM numbers(5)
+;
+
```
4. Obfuscating:
@@ -75,7 +73,7 @@ $ clickhouse-format --seed Hello --obfuscate <<< "SELECT cost_first_screen BETWE
Result:
-```sql
+```
SELECT treasury_mammoth_hazelnut BETWEEN nutmeg AND span, CASE WHEN chive >= 116 THEN switching ELSE ANYTHING END;
```
@@ -87,7 +85,7 @@ $ clickhouse-format --seed World --obfuscate <<< "SELECT cost_first_screen BETWE
Result:
-```sql
+```
SELECT horse_tape_summer BETWEEN folklore AND moccasins, CASE WHEN intestine >= 116 THEN nonconformist ELSE FORESTRY END;
```
@@ -99,7 +97,7 @@ $ clickhouse-format --backslash <<< "SELECT * FROM (SELECT 1 AS x UNION ALL SELE
Result:
-```sql
+```
SELECT * \
FROM \
( \
diff --git a/docs/en/sql-reference/functions/date-time-functions.md b/docs/en/sql-reference/functions/date-time-functions.md
index 0261589b968..5622097537e 100644
--- a/docs/en/sql-reference/functions/date-time-functions.md
+++ b/docs/en/sql-reference/functions/date-time-functions.md
@@ -1483,7 +1483,9 @@ For mode values with a meaning of “with 4 or more days this year,” weeks are
- Otherwise, it is the last week of the previous year, and the next week is week 1.
-For mode values with a meaning of “contains January 1”, the week contains January 1 is week 1. It does not matter how many days in the new year the week contained, even if it contained only one day.
+For mode values with a meaning of “contains January 1”, the week that contains January 1 is week 1.
+It does not matter how many days of the new year the week contains, even if it contains only one day.
+That is, if the last week of December contains January 1 of the next year, it will be week 1 of the next year.
**Syntax**
diff --git a/docs/en/sql-reference/functions/hash-functions.md b/docs/en/sql-reference/functions/hash-functions.md
index a23849c13aa..2c6a468af0e 100644
--- a/docs/en/sql-reference/functions/hash-functions.md
+++ b/docs/en/sql-reference/functions/hash-functions.md
@@ -1779,7 +1779,9 @@ Result:
## sqid
-Transforms numbers into YouTube-like short URL hash called [Sqid](https://sqids.org/).
+Transforms numbers into a [Sqid](https://sqids.org/), which is a YouTube-like ID string.
+The output alphabet is `abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789`.
+Do not use this function for hashing - the generated IDs can be decoded back into numbers.
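+
+For instance, a quick sketch (the exact ID string depends on the input numbers):
+
+```sql
+SELECT sqid(1, 2, 3, 4);
+```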
**Syntax**
diff --git a/docs/en/sql-reference/functions/rounding-functions.md b/docs/en/sql-reference/functions/rounding-functions.md
index 84839c2489c..3ede66cf316 100644
--- a/docs/en/sql-reference/functions/rounding-functions.md
+++ b/docs/en/sql-reference/functions/rounding-functions.md
@@ -53,7 +53,7 @@ The rounded number of the same type as the input number.
**Example of use with Float**
``` sql
-SELECT number / 2 AS x, round(x) FROM system.numbers LIMIT 3
+SELECT number / 2 AS x, round(x) FROM system.numbers LIMIT 3;
```
``` text
@@ -67,7 +67,22 @@ SELECT number / 2 AS x, round(x) FROM system.numbers LIMIT 3
**Example of use with Decimal**
``` sql
-SELECT cast(number / 2 AS Decimal(10,4)) AS x, round(x) FROM system.numbers LIMIT 3
+SELECT cast(number / 2 AS Decimal(10,4)) AS x, round(x) FROM system.numbers LIMIT 3;
+```
+
+``` text
+┌───x─┬─round(CAST(divide(number, 2), 'Decimal(10, 4)'))─┐
+│ 0 │ 0 │
+│ 0.5 │ 1 │
+│ 1 │ 1 │
+└─────┴──────────────────────────────────────────────────┘
+```
+
+If you want to keep the trailing zeros, you need to enable `output_format_decimal_trailing_zeros`:
+
+``` sql
+SELECT cast(number / 2 AS Decimal(10,4)) AS x, round(x) FROM system.numbers LIMIT 3 SETTINGS output_format_decimal_trailing_zeros = 1;
```
``` text
diff --git a/docs/ru/sql-reference/functions/date-time-functions.md b/docs/ru/sql-reference/functions/date-time-functions.md
index fa5728a097d..cbbb456aa80 100644
--- a/docs/ru/sql-reference/functions/date-time-functions.md
+++ b/docs/ru/sql-reference/functions/date-time-functions.md
@@ -578,7 +578,9 @@ SELECT
- В противном случае это последняя неделя предыдущего года, а следующая неделя - неделя 1.
-Для режимов со значением «содержит 1 января», неделя 1 – это неделя содержащая 1 января. Не имеет значения, сколько дней в новом году содержала неделя, даже если она содержала только один день.
+Для режимов со значением «содержит 1 января», неделя 1 – это неделя, содержащая 1 января.
+Не имеет значения, сколько дней нового года содержит эта неделя, даже если она содержит только один день.
+Так, если последняя неделя декабря содержит 1 января следующего года, то она считается неделей 1 следующего года.
**Пример**
diff --git a/programs/library-bridge/LibraryBridgeHandlers.cpp b/programs/library-bridge/LibraryBridgeHandlers.cpp
index 7c77e633a44..9642dd7ee63 100644
--- a/programs/library-bridge/LibraryBridgeHandlers.cpp
+++ b/programs/library-bridge/LibraryBridgeHandlers.cpp
@@ -2,7 +2,6 @@
#include "CatBoostLibraryHandler.h"
#include "CatBoostLibraryHandlerFactory.h"
-#include "Common/ProfileEvents.h"
#include "ExternalDictionaryLibraryHandler.h"
#include "ExternalDictionaryLibraryHandlerFactory.h"
@@ -45,7 +44,7 @@ namespace
response.setStatusAndReason(HTTPResponse::HTTP_INTERNAL_SERVER_ERROR);
if (!response.sent())
- *response.send() << message << '\n';
+ *response.send() << message << std::endl;
LOG_WARNING(&Poco::Logger::get("LibraryBridge"), fmt::runtime(message));
}
@@ -97,7 +96,7 @@ ExternalDictionaryLibraryBridgeRequestHandler::ExternalDictionaryLibraryBridgeRe
}
-void ExternalDictionaryLibraryBridgeRequestHandler::handleRequest(HTTPServerRequest & request, HTTPServerResponse & response, const ProfileEvents::Event & /*write_event*/)
+void ExternalDictionaryLibraryBridgeRequestHandler::handleRequest(HTTPServerRequest & request, HTTPServerResponse & response)
{
LOG_TRACE(log, "Request URI: {}", request.getURI());
HTMLForm params(getContext()->getSettingsRef(), request);
@@ -385,7 +384,7 @@ ExternalDictionaryLibraryBridgeExistsHandler::ExternalDictionaryLibraryBridgeExi
}
-void ExternalDictionaryLibraryBridgeExistsHandler::handleRequest(HTTPServerRequest & request, HTTPServerResponse & response, const ProfileEvents::Event & /*write_event*/)
+void ExternalDictionaryLibraryBridgeExistsHandler::handleRequest(HTTPServerRequest & request, HTTPServerResponse & response)
{
try
{
@@ -424,7 +423,7 @@ CatBoostLibraryBridgeRequestHandler::CatBoostLibraryBridgeRequestHandler(
}
-void CatBoostLibraryBridgeRequestHandler::handleRequest(HTTPServerRequest & request, HTTPServerResponse & response, const ProfileEvents::Event & /*write_event*/)
+void CatBoostLibraryBridgeRequestHandler::handleRequest(HTTPServerRequest & request, HTTPServerResponse & response)
{
LOG_TRACE(log, "Request URI: {}", request.getURI());
HTMLForm params(getContext()->getSettingsRef(), request);
@@ -622,7 +621,7 @@ CatBoostLibraryBridgeExistsHandler::CatBoostLibraryBridgeExistsHandler(size_t ke
}
-void CatBoostLibraryBridgeExistsHandler::handleRequest(HTTPServerRequest & request, HTTPServerResponse & response, const ProfileEvents::Event & /*write_event*/)
+void CatBoostLibraryBridgeExistsHandler::handleRequest(HTTPServerRequest & request, HTTPServerResponse & response)
{
try
{
diff --git a/programs/library-bridge/LibraryBridgeHandlers.h b/programs/library-bridge/LibraryBridgeHandlers.h
index 4f08d7a6084..16815e84723 100644
--- a/programs/library-bridge/LibraryBridgeHandlers.h
+++ b/programs/library-bridge/LibraryBridgeHandlers.h
@@ -20,7 +20,7 @@ class ExternalDictionaryLibraryBridgeRequestHandler : public HTTPRequestHandler,
public:
ExternalDictionaryLibraryBridgeRequestHandler(size_t keep_alive_timeout_, ContextPtr context_);
- void handleRequest(HTTPServerRequest & request, HTTPServerResponse & response, const ProfileEvents::Event & write_event) override;
+ void handleRequest(HTTPServerRequest & request, HTTPServerResponse & response) override;
private:
static constexpr inline auto FORMAT = "RowBinary";
@@ -36,7 +36,7 @@ class ExternalDictionaryLibraryBridgeExistsHandler : public HTTPRequestHandler,
public:
ExternalDictionaryLibraryBridgeExistsHandler(size_t keep_alive_timeout_, ContextPtr context_);
- void handleRequest(HTTPServerRequest & request, HTTPServerResponse & response, const ProfileEvents::Event & write_event) override;
+ void handleRequest(HTTPServerRequest & request, HTTPServerResponse & response) override;
private:
const size_t keep_alive_timeout;
@@ -65,7 +65,7 @@ class CatBoostLibraryBridgeRequestHandler : public HTTPRequestHandler, WithConte
public:
CatBoostLibraryBridgeRequestHandler(size_t keep_alive_timeout_, ContextPtr context_);
- void handleRequest(HTTPServerRequest & request, HTTPServerResponse & response, const ProfileEvents::Event & write_event) override;
+ void handleRequest(HTTPServerRequest & request, HTTPServerResponse & response) override;
private:
const size_t keep_alive_timeout;
@@ -79,7 +79,7 @@ class CatBoostLibraryBridgeExistsHandler : public HTTPRequestHandler, WithContex
public:
CatBoostLibraryBridgeExistsHandler(size_t keep_alive_timeout_, ContextPtr context_);
- void handleRequest(HTTPServerRequest & request, HTTPServerResponse & response, const ProfileEvents::Event & write_event) override;
+ void handleRequest(HTTPServerRequest & request, HTTPServerResponse & response) override;
private:
const size_t keep_alive_timeout;
diff --git a/programs/odbc-bridge/ColumnInfoHandler.cpp b/programs/odbc-bridge/ColumnInfoHandler.cpp
index 774883657b7..434abf0bf14 100644
--- a/programs/odbc-bridge/ColumnInfoHandler.cpp
+++ b/programs/odbc-bridge/ColumnInfoHandler.cpp
@@ -69,7 +69,7 @@ namespace
}
-void ODBCColumnsInfoHandler::handleRequest(HTTPServerRequest & request, HTTPServerResponse & response, const ProfileEvents::Event & /*write_event*/)
+void ODBCColumnsInfoHandler::handleRequest(HTTPServerRequest & request, HTTPServerResponse & response)
{
HTMLForm params(getContext()->getSettingsRef(), request, request.getStream());
LOG_TRACE(log, "Request URI: {}", request.getURI());
@@ -78,7 +78,7 @@ void ODBCColumnsInfoHandler::handleRequest(HTTPServerRequest & request, HTTPServ
{
response.setStatusAndReason(Poco::Net::HTTPResponse::HTTP_INTERNAL_SERVER_ERROR);
if (!response.sent())
- *response.send() << message << '\n';
+ *response.send() << message << std::endl;
LOG_WARNING(log, fmt::runtime(message));
};
diff --git a/programs/odbc-bridge/ColumnInfoHandler.h b/programs/odbc-bridge/ColumnInfoHandler.h
index e3087701182..3ba8b182ba6 100644
--- a/programs/odbc-bridge/ColumnInfoHandler.h
+++ b/programs/odbc-bridge/ColumnInfoHandler.h
@@ -23,7 +23,7 @@ public:
{
}
- void handleRequest(HTTPServerRequest & request, HTTPServerResponse & response, const ProfileEvents::Event & write_event) override;
+ void handleRequest(HTTPServerRequest & request, HTTPServerResponse & response) override;
private:
Poco::Logger * log;
diff --git a/programs/odbc-bridge/IdentifierQuoteHandler.cpp b/programs/odbc-bridge/IdentifierQuoteHandler.cpp
index a23efb112de..f622995bf15 100644
--- a/programs/odbc-bridge/IdentifierQuoteHandler.cpp
+++ b/programs/odbc-bridge/IdentifierQuoteHandler.cpp
@@ -21,7 +21,7 @@
namespace DB
{
-void IdentifierQuoteHandler::handleRequest(HTTPServerRequest & request, HTTPServerResponse & response, const ProfileEvents::Event & /*write_event*/)
+void IdentifierQuoteHandler::handleRequest(HTTPServerRequest & request, HTTPServerResponse & response)
{
HTMLForm params(getContext()->getSettingsRef(), request, request.getStream());
LOG_TRACE(log, "Request URI: {}", request.getURI());
@@ -30,7 +30,7 @@ void IdentifierQuoteHandler::handleRequest(HTTPServerRequest & request, HTTPServ
{
response.setStatusAndReason(Poco::Net::HTTPResponse::HTTP_INTERNAL_SERVER_ERROR);
if (!response.sent())
- response.send()->writeln(message);
+ *response.send() << message << std::endl;
LOG_WARNING(log, fmt::runtime(message));
};
diff --git a/programs/odbc-bridge/IdentifierQuoteHandler.h b/programs/odbc-bridge/IdentifierQuoteHandler.h
index ff5c02ca07b..d57bbc0ca8a 100644
--- a/programs/odbc-bridge/IdentifierQuoteHandler.h
+++ b/programs/odbc-bridge/IdentifierQuoteHandler.h
@@ -21,7 +21,7 @@ public:
{
}
- void handleRequest(HTTPServerRequest & request, HTTPServerResponse & response, const ProfileEvents::Event & write_event) override;
+ void handleRequest(HTTPServerRequest & request, HTTPServerResponse & response) override;
private:
Poco::Logger * log;
diff --git a/programs/odbc-bridge/MainHandler.cpp b/programs/odbc-bridge/MainHandler.cpp
index e350afa2b10..9130b3e0f47 100644
--- a/programs/odbc-bridge/MainHandler.cpp
+++ b/programs/odbc-bridge/MainHandler.cpp
@@ -46,12 +46,12 @@ void ODBCHandler::processError(HTTPServerResponse & response, const std::string
{
response.setStatusAndReason(HTTPResponse::HTTP_INTERNAL_SERVER_ERROR);
if (!response.sent())
- *response.send() << message << '\n';
+ *response.send() << message << std::endl;
LOG_WARNING(log, fmt::runtime(message));
}
-void ODBCHandler::handleRequest(HTTPServerRequest & request, HTTPServerResponse & response, const ProfileEvents::Event & /*write_event*/)
+void ODBCHandler::handleRequest(HTTPServerRequest & request, HTTPServerResponse & response)
{
HTMLForm params(getContext()->getSettingsRef(), request);
LOG_TRACE(log, "Request URI: {}", request.getURI());
diff --git a/programs/odbc-bridge/MainHandler.h b/programs/odbc-bridge/MainHandler.h
index 7977245ff82..bc0fca8b9a5 100644
--- a/programs/odbc-bridge/MainHandler.h
+++ b/programs/odbc-bridge/MainHandler.h
@@ -30,7 +30,7 @@ public:
{
}
- void handleRequest(HTTPServerRequest & request, HTTPServerResponse & response, const ProfileEvents::Event & write_event) override;
+ void handleRequest(HTTPServerRequest & request, HTTPServerResponse & response) override;
private:
Poco::Logger * log;
diff --git a/programs/odbc-bridge/PingHandler.cpp b/programs/odbc-bridge/PingHandler.cpp
index 80d0e2bf4a9..e3ab5e5cd00 100644
--- a/programs/odbc-bridge/PingHandler.cpp
+++ b/programs/odbc-bridge/PingHandler.cpp
@@ -6,7 +6,7 @@
namespace DB
{
-void PingHandler::handleRequest(HTTPServerRequest & /* request */, HTTPServerResponse & response, const ProfileEvents::Event & /*write_event*/)
+void PingHandler::handleRequest(HTTPServerRequest & /* request */, HTTPServerResponse & response)
{
try
{
diff --git a/programs/odbc-bridge/PingHandler.h b/programs/odbc-bridge/PingHandler.h
index c5447107e0c..c969ec55af7 100644
--- a/programs/odbc-bridge/PingHandler.h
+++ b/programs/odbc-bridge/PingHandler.h
@@ -10,7 +10,7 @@ class PingHandler : public HTTPRequestHandler
{
public:
explicit PingHandler(size_t keep_alive_timeout_) : keep_alive_timeout(keep_alive_timeout_) {}
- void handleRequest(HTTPServerRequest & request, HTTPServerResponse & response, const ProfileEvents::Event & write_event) override;
+ void handleRequest(HTTPServerRequest & request, HTTPServerResponse & response) override;
private:
size_t keep_alive_timeout;
diff --git a/programs/odbc-bridge/SchemaAllowedHandler.cpp b/programs/odbc-bridge/SchemaAllowedHandler.cpp
index c7025ca4311..020359f51fd 100644
--- a/programs/odbc-bridge/SchemaAllowedHandler.cpp
+++ b/programs/odbc-bridge/SchemaAllowedHandler.cpp
@@ -29,7 +29,7 @@ namespace
}
-void SchemaAllowedHandler::handleRequest(HTTPServerRequest & request, HTTPServerResponse & response, const ProfileEvents::Event & /*write_event*/)
+void SchemaAllowedHandler::handleRequest(HTTPServerRequest & request, HTTPServerResponse & response)
{
HTMLForm params(getContext()->getSettingsRef(), request, request.getStream());
LOG_TRACE(log, "Request URI: {}", request.getURI());
@@ -38,7 +38,7 @@ void SchemaAllowedHandler::handleRequest(HTTPServerRequest & request, HTTPServer
{
response.setStatusAndReason(Poco::Net::HTTPResponse::HTTP_INTERNAL_SERVER_ERROR);
if (!response.sent())
- *response.send() << message << '\n';
+ *response.send() << message << std::endl;
LOG_WARNING(log, fmt::runtime(message));
};
diff --git a/programs/odbc-bridge/SchemaAllowedHandler.h b/programs/odbc-bridge/SchemaAllowedHandler.h
index aa0b04b1d31..cb71a6fb5a2 100644
--- a/programs/odbc-bridge/SchemaAllowedHandler.h
+++ b/programs/odbc-bridge/SchemaAllowedHandler.h
@@ -24,7 +24,7 @@ public:
{
}
- void handleRequest(HTTPServerRequest & request, HTTPServerResponse & response, const ProfileEvents::Event & write_event) override;
+ void handleRequest(HTTPServerRequest & request, HTTPServerResponse & response) override;
private:
Poco::Logger * log;
diff --git a/programs/server/Server.cpp b/programs/server/Server.cpp
index 7ad7460f6f8..1fa3d1cfa73 100644
--- a/programs/server/Server.cpp
+++ b/programs/server/Server.cpp
@@ -152,18 +152,6 @@ namespace ProfileEvents
{
extern const Event MainConfigLoads;
extern const Event ServerStartupMilliseconds;
- extern const Event InterfaceNativeSendBytes;
- extern const Event InterfaceNativeReceiveBytes;
- extern const Event InterfaceHTTPSendBytes;
- extern const Event InterfaceHTTPReceiveBytes;
- extern const Event InterfacePrometheusSendBytes;
- extern const Event InterfacePrometheusReceiveBytes;
- extern const Event InterfaceInterserverSendBytes;
- extern const Event InterfaceInterserverReceiveBytes;
- extern const Event InterfaceMySQLSendBytes;
- extern const Event InterfaceMySQLReceiveBytes;
- extern const Event InterfacePostgreSQLSendBytes;
- extern const Event InterfacePostgreSQLReceiveBytes;
}
namespace fs = std::filesystem;
@@ -1272,11 +1260,11 @@ try
{
Settings::checkNoSettingNamesAtTopLevel(*config, config_path);
- ServerSettings server_settings_;
- server_settings_.loadSettingsFromConfig(*config);
+ ServerSettings new_server_settings;
+ new_server_settings.loadSettingsFromConfig(*config);
- size_t max_server_memory_usage = server_settings_.max_server_memory_usage;
- double max_server_memory_usage_to_ram_ratio = server_settings_.max_server_memory_usage_to_ram_ratio;
+ size_t max_server_memory_usage = new_server_settings.max_server_memory_usage;
+ double max_server_memory_usage_to_ram_ratio = new_server_settings.max_server_memory_usage_to_ram_ratio;
size_t current_physical_server_memory = getMemoryAmount(); /// With cgroups, the amount of memory available to the server can be changed dynamically.
size_t default_max_server_memory_usage = static_cast<size_t>(current_physical_server_memory * max_server_memory_usage_to_ram_ratio);
@@ -1306,9 +1294,9 @@ try
total_memory_tracker.setDescription("(total)");
total_memory_tracker.setMetric(CurrentMetrics::MemoryTracking);
- size_t merges_mutations_memory_usage_soft_limit = server_settings_.merges_mutations_memory_usage_soft_limit;
+ size_t merges_mutations_memory_usage_soft_limit = new_server_settings.merges_mutations_memory_usage_soft_limit;
- size_t default_merges_mutations_server_memory_usage = static_cast<size_t>(current_physical_server_memory * server_settings_.merges_mutations_memory_usage_to_ram_ratio);
+ size_t default_merges_mutations_server_memory_usage = static_cast<size_t>(current_physical_server_memory * new_server_settings.merges_mutations_memory_usage_to_ram_ratio);
if (merges_mutations_memory_usage_soft_limit == 0)
{
merges_mutations_memory_usage_soft_limit = default_merges_mutations_server_memory_usage;
@@ -1316,7 +1304,7 @@ try
" ({} available * {:.2f} merges_mutations_memory_usage_to_ram_ratio)",
formatReadableSizeWithBinarySuffix(merges_mutations_memory_usage_soft_limit),
formatReadableSizeWithBinarySuffix(current_physical_server_memory),
- server_settings_.merges_mutations_memory_usage_to_ram_ratio);
+ new_server_settings.merges_mutations_memory_usage_to_ram_ratio);
}
else if (merges_mutations_memory_usage_soft_limit > default_merges_mutations_server_memory_usage)
{
@@ -1325,7 +1313,7 @@ try
" ({} available * {:.2f} merges_mutations_memory_usage_to_ram_ratio)",
formatReadableSizeWithBinarySuffix(merges_mutations_memory_usage_soft_limit),
formatReadableSizeWithBinarySuffix(current_physical_server_memory),
- server_settings_.merges_mutations_memory_usage_to_ram_ratio);
+ new_server_settings.merges_mutations_memory_usage_to_ram_ratio);
}
LOG_INFO(log, "Merges and mutations memory limit is set to {}",
@@ -1334,7 +1322,7 @@ try
background_memory_tracker.setDescription("(background)");
background_memory_tracker.setMetric(CurrentMetrics::MergesMutationsMemoryTracking);
- total_memory_tracker.setAllowUseJemallocMemory(server_settings_.allow_use_jemalloc_memory);
+ total_memory_tracker.setAllowUseJemallocMemory(new_server_settings.allow_use_jemalloc_memory);
auto * global_overcommit_tracker = global_context->getGlobalOvercommitTracker();
total_memory_tracker.setOvercommitTracker(global_overcommit_tracker);
@@ -1358,26 +1346,26 @@ try
global_context->setRemoteHostFilter(*config);
global_context->setHTTPHeaderFilter(*config);
- global_context->setMaxTableSizeToDrop(server_settings_.max_table_size_to_drop);
- global_context->setMaxPartitionSizeToDrop(server_settings_.max_partition_size_to_drop);
- global_context->setMaxTableNumToWarn(server_settings_.max_table_num_to_warn);
- global_context->setMaxDatabaseNumToWarn(server_settings_.max_database_num_to_warn);
- global_context->setMaxPartNumToWarn(server_settings_.max_part_num_to_warn);
+ global_context->setMaxTableSizeToDrop(new_server_settings.max_table_size_to_drop);
+ global_context->setMaxPartitionSizeToDrop(new_server_settings.max_partition_size_to_drop);
+ global_context->setMaxTableNumToWarn(new_server_settings.max_table_num_to_warn);
+ global_context->setMaxDatabaseNumToWarn(new_server_settings.max_database_num_to_warn);
+ global_context->setMaxPartNumToWarn(new_server_settings.max_part_num_to_warn);
ConcurrencyControl::SlotCount concurrent_threads_soft_limit = ConcurrencyControl::Unlimited;
- if (server_settings_.concurrent_threads_soft_limit_num > 0 && server_settings_.concurrent_threads_soft_limit_num < concurrent_threads_soft_limit)
- concurrent_threads_soft_limit = server_settings_.concurrent_threads_soft_limit_num;
- if (server_settings_.concurrent_threads_soft_limit_ratio_to_cores > 0)
+ if (new_server_settings.concurrent_threads_soft_limit_num > 0 && new_server_settings.concurrent_threads_soft_limit_num < concurrent_threads_soft_limit)
+ concurrent_threads_soft_limit = new_server_settings.concurrent_threads_soft_limit_num;
+ if (new_server_settings.concurrent_threads_soft_limit_ratio_to_cores > 0)
{
- auto value = server_settings_.concurrent_threads_soft_limit_ratio_to_cores * std::thread::hardware_concurrency();
+ auto value = new_server_settings.concurrent_threads_soft_limit_ratio_to_cores * std::thread::hardware_concurrency();
if (value > 0 && value < concurrent_threads_soft_limit)
concurrent_threads_soft_limit = value;
}
ConcurrencyControl::instance().setMaxConcurrency(concurrent_threads_soft_limit);
- global_context->getProcessList().setMaxSize(server_settings_.max_concurrent_queries);
- global_context->getProcessList().setMaxInsertQueriesAmount(server_settings_.max_concurrent_insert_queries);
- global_context->getProcessList().setMaxSelectQueriesAmount(server_settings_.max_concurrent_select_queries);
+ global_context->getProcessList().setMaxSize(new_server_settings.max_concurrent_queries);
+ global_context->getProcessList().setMaxInsertQueriesAmount(new_server_settings.max_concurrent_insert_queries);
+ global_context->getProcessList().setMaxSelectQueriesAmount(new_server_settings.max_concurrent_select_queries);
if (config->has("keeper_server"))
global_context->updateKeeperConfiguration(*config);
@@ -1388,68 +1376,68 @@ try
/// This is done for backward compatibility.
if (global_context->areBackgroundExecutorsInitialized())
{
- auto new_pool_size = server_settings_.background_pool_size;
- auto new_ratio = server_settings_.background_merges_mutations_concurrency_ratio;
+ auto new_pool_size = new_server_settings.background_pool_size;
+ auto new_ratio = new_server_settings.background_merges_mutations_concurrency_ratio;
global_context->getMergeMutateExecutor()->increaseThreadsAndMaxTasksCount(new_pool_size, static_cast<size_t>(new_pool_size * new_ratio));
- global_context->getMergeMutateExecutor()->updateSchedulingPolicy(server_settings_.background_merges_mutations_scheduling_policy.toString());
+ global_context->getMergeMutateExecutor()->updateSchedulingPolicy(new_server_settings.background_merges_mutations_scheduling_policy.toString());
}
if (global_context->areBackgroundExecutorsInitialized())
{
- auto new_pool_size = server_settings_.background_move_pool_size;
+ auto new_pool_size = new_server_settings.background_move_pool_size;
global_context->getMovesExecutor()->increaseThreadsAndMaxTasksCount(new_pool_size, new_pool_size);
}
if (global_context->areBackgroundExecutorsInitialized())
{
- auto new_pool_size = server_settings_.background_fetches_pool_size;
+ auto new_pool_size = new_server_settings.background_fetches_pool_size;
global_context->getFetchesExecutor()->increaseThreadsAndMaxTasksCount(new_pool_size, new_pool_size);
}
if (global_context->areBackgroundExecutorsInitialized())
{
- auto new_pool_size = server_settings_.background_common_pool_size;
+ auto new_pool_size = new_server_settings.background_common_pool_size;
global_context->getCommonExecutor()->increaseThreadsAndMaxTasksCount(new_pool_size, new_pool_size);
}
- global_context->getBufferFlushSchedulePool().increaseThreadsCount(server_settings_.background_buffer_flush_schedule_pool_size);
- global_context->getSchedulePool().increaseThreadsCount(server_settings_.background_schedule_pool_size);
- global_context->getMessageBrokerSchedulePool().increaseThreadsCount(server_settings_.background_message_broker_schedule_pool_size);
- global_context->getDistributedSchedulePool().increaseThreadsCount(server_settings_.background_distributed_schedule_pool_size);
+ global_context->getBufferFlushSchedulePool().increaseThreadsCount(new_server_settings.background_buffer_flush_schedule_pool_size);
+ global_context->getSchedulePool().increaseThreadsCount(new_server_settings.background_schedule_pool_size);
+ global_context->getMessageBrokerSchedulePool().increaseThreadsCount(new_server_settings.background_message_broker_schedule_pool_size);
+ global_context->getDistributedSchedulePool().increaseThreadsCount(new_server_settings.background_distributed_schedule_pool_size);
- global_context->getAsyncLoader().setMaxThreads(TablesLoaderForegroundPoolId, server_settings_.tables_loader_foreground_pool_size);
- global_context->getAsyncLoader().setMaxThreads(TablesLoaderBackgroundLoadPoolId, server_settings_.tables_loader_background_pool_size);
- global_context->getAsyncLoader().setMaxThreads(TablesLoaderBackgroundStartupPoolId, server_settings_.tables_loader_background_pool_size);
+ global_context->getAsyncLoader().setMaxThreads(TablesLoaderForegroundPoolId, new_server_settings.tables_loader_foreground_pool_size);
+ global_context->getAsyncLoader().setMaxThreads(TablesLoaderBackgroundLoadPoolId, new_server_settings.tables_loader_background_pool_size);
+ global_context->getAsyncLoader().setMaxThreads(TablesLoaderBackgroundStartupPoolId, new_server_settings.tables_loader_background_pool_size);
getIOThreadPool().reloadConfiguration(
- server_settings.max_io_thread_pool_size,
- server_settings.max_io_thread_pool_free_size,
- server_settings.io_thread_pool_queue_size);
+ new_server_settings.max_io_thread_pool_size,
+ new_server_settings.max_io_thread_pool_free_size,
+ new_server_settings.io_thread_pool_queue_size);
getBackupsIOThreadPool().reloadConfiguration(
- server_settings.max_backups_io_thread_pool_size,
- server_settings.max_backups_io_thread_pool_free_size,
- server_settings.backups_io_thread_pool_queue_size);
+ new_server_settings.max_backups_io_thread_pool_size,
+ new_server_settings.max_backups_io_thread_pool_free_size,
+ new_server_settings.backups_io_thread_pool_queue_size);
getActivePartsLoadingThreadPool().reloadConfiguration(
- server_settings.max_active_parts_loading_thread_pool_size,
+ new_server_settings.max_active_parts_loading_thread_pool_size,
0, // We don't need any threads once all the parts are loaded
- server_settings.max_active_parts_loading_thread_pool_size);
+ new_server_settings.max_active_parts_loading_thread_pool_size);
getOutdatedPartsLoadingThreadPool().reloadConfiguration(
- server_settings.max_outdated_parts_loading_thread_pool_size,
+ new_server_settings.max_outdated_parts_loading_thread_pool_size,
0, // We don't need any threads once all the parts are loaded
- server_settings.max_outdated_parts_loading_thread_pool_size);
+ new_server_settings.max_outdated_parts_loading_thread_pool_size);
/// It could grow if we need to synchronously wait until all the data parts are loaded.
getOutdatedPartsLoadingThreadPool().setMaxTurboThreads(
- server_settings.max_active_parts_loading_thread_pool_size
+ new_server_settings.max_active_parts_loading_thread_pool_size
);
getPartsCleaningThreadPool().reloadConfiguration(
- server_settings.max_parts_cleaning_thread_pool_size,
+ new_server_settings.max_parts_cleaning_thread_pool_size,
0, // We don't need any threads once all the parts are deleted
- server_settings.max_parts_cleaning_thread_pool_size);
+ new_server_settings.max_parts_cleaning_thread_pool_size);
if (config->has("resources"))
{
@@ -2059,7 +2047,7 @@ std::unique_ptr Server::buildProtocolStackFromConfig(
auto create_factory = [&](const std::string & type, const std::string & conf_name) -> TCPServerConnectionFactory::Ptr
{
if (type == "tcp")
- return TCPServerConnectionFactory::Ptr(new TCPHandlerFactory(*this, false, false, ProfileEvents::InterfaceNativeReceiveBytes, ProfileEvents::InterfaceNativeSendBytes));
+ return TCPServerConnectionFactory::Ptr(new TCPHandlerFactory(*this, false, false));
if (type == "tls")
#if USE_SSL
@@ -2071,20 +2059,20 @@ std::unique_ptr Server::buildProtocolStackFromConfig(
if (type == "proxy1")
return TCPServerConnectionFactory::Ptr(new ProxyV1HandlerFactory(*this, conf_name));
if (type == "mysql")
- return TCPServerConnectionFactory::Ptr(new MySQLHandlerFactory(*this, ProfileEvents::InterfaceMySQLReceiveBytes, ProfileEvents::InterfaceMySQLSendBytes));
+ return TCPServerConnectionFactory::Ptr(new MySQLHandlerFactory(*this));
if (type == "postgres")
- return TCPServerConnectionFactory::Ptr(new PostgreSQLHandlerFactory(*this, ProfileEvents::InterfacePostgreSQLReceiveBytes, ProfileEvents::InterfacePostgreSQLSendBytes));
+ return TCPServerConnectionFactory::Ptr(new PostgreSQLHandlerFactory(*this));
if (type == "http")
return TCPServerConnectionFactory::Ptr(
- new HTTPServerConnectionFactory(httpContext(), http_params, createHandlerFactory(*this, config, async_metrics, "HTTPHandler-factory"), ProfileEvents::InterfaceHTTPReceiveBytes, ProfileEvents::InterfaceHTTPSendBytes)
+ new HTTPServerConnectionFactory(httpContext(), http_params, createHandlerFactory(*this, config, async_metrics, "HTTPHandler-factory"))
);
if (type == "prometheus")
return TCPServerConnectionFactory::Ptr(
- new HTTPServerConnectionFactory(httpContext(), http_params, createHandlerFactory(*this, config, async_metrics, "PrometheusHandler-factory"), ProfileEvents::InterfacePrometheusReceiveBytes, ProfileEvents::InterfacePrometheusSendBytes)
+ new HTTPServerConnectionFactory(httpContext(), http_params, createHandlerFactory(*this, config, async_metrics, "PrometheusHandler-factory"))
);
if (type == "interserver")
return TCPServerConnectionFactory::Ptr(
- new HTTPServerConnectionFactory(httpContext(), http_params, createHandlerFactory(*this, config, async_metrics, "InterserverIOHTTPHandler-factory"), ProfileEvents::InterfaceInterserverReceiveBytes, ProfileEvents::InterfaceInterserverSendBytes)
+ new HTTPServerConnectionFactory(httpContext(), http_params, createHandlerFactory(*this, config, async_metrics, "InterserverIOHTTPHandler-factory"))
);
throw Exception(ErrorCodes::INVALID_CONFIG_PARAMETER, "Protocol configuration error, unknown protocol name '{}'", type);
@@ -2217,7 +2205,7 @@ void Server::createServers(
port_name,
"http://" + address.toString(),
std::make_unique<HTTPServer>(
- httpContext(), createHandlerFactory(*this, config, async_metrics, "HTTPHandler-factory"), server_pool, socket, http_params, ProfileEvents::InterfaceHTTPReceiveBytes, ProfileEvents::InterfaceHTTPSendBytes));
+ httpContext(), createHandlerFactory(*this, config, async_metrics, "HTTPHandler-factory"), server_pool, socket, http_params));
});
}
@@ -2237,7 +2225,7 @@ void Server::createServers(
port_name,
"https://" + address.toString(),
std::make_unique<HTTPServer>(
- httpContext(), createHandlerFactory(*this, config, async_metrics, "HTTPSHandler-factory"), server_pool, socket, http_params, ProfileEvents::InterfaceHTTPReceiveBytes, ProfileEvents::InterfaceHTTPSendBytes));
+ httpContext(), createHandlerFactory(*this, config, async_metrics, "HTTPSHandler-factory"), server_pool, socket, http_params));
#else
UNUSED(port);
throw Exception(ErrorCodes::SUPPORT_IS_DISABLED, "HTTPS protocol is disabled because Poco library was built without NetSSL support.");
@@ -2260,7 +2248,7 @@ void Server::createServers(
port_name,
"native protocol (tcp): " + address.toString(),
std::make_unique<TCPServer>(
- new TCPHandlerFactory(*this, /* secure */ false, /* proxy protocol */ false, ProfileEvents::InterfaceNativeReceiveBytes, ProfileEvents::InterfaceNativeSendBytes),
+ new TCPHandlerFactory(*this, /* secure */ false, /* proxy protocol */ false),
server_pool,
socket,
new Poco::Net::TCPServerParams));
@@ -2282,7 +2270,7 @@ void Server::createServers(
port_name,
"native protocol (tcp) with PROXY: " + address.toString(),
std::make_unique<TCPServer>(
- new TCPHandlerFactory(*this, /* secure */ false, /* proxy protocol */ true, ProfileEvents::InterfaceNativeReceiveBytes, ProfileEvents::InterfaceNativeSendBytes),
+ new TCPHandlerFactory(*this, /* secure */ false, /* proxy protocol */ true),
server_pool,
socket,
new Poco::Net::TCPServerParams));
@@ -2305,7 +2293,7 @@ void Server::createServers(
port_name,
"secure native protocol (tcp_secure): " + address.toString(),
std::make_unique<TCPServer>(
- new TCPHandlerFactory(*this, /* secure */ true, /* proxy protocol */ false, ProfileEvents::InterfaceNativeReceiveBytes, ProfileEvents::InterfaceNativeSendBytes),
+ new TCPHandlerFactory(*this, /* secure */ true, /* proxy protocol */ false),
server_pool,
socket,
new Poco::Net::TCPServerParams));
@@ -2329,7 +2317,7 @@ void Server::createServers(
listen_host,
port_name,
"MySQL compatibility protocol: " + address.toString(),
- std::make_unique<TCPServer>(new MySQLHandlerFactory(*this, ProfileEvents::InterfaceMySQLReceiveBytes, ProfileEvents::InterfaceMySQLSendBytes), server_pool, socket, new Poco::Net::TCPServerParams));
+ std::make_unique<TCPServer>(new MySQLHandlerFactory(*this), server_pool, socket, new Poco::Net::TCPServerParams));
});
}
@@ -2346,7 +2334,7 @@ void Server::createServers(
listen_host,
port_name,
"PostgreSQL compatibility protocol: " + address.toString(),
- std::make_unique<TCPServer>(new PostgreSQLHandlerFactory(*this, ProfileEvents::InterfacePostgreSQLReceiveBytes, ProfileEvents::InterfacePostgreSQLSendBytes), server_pool, socket, new Poco::Net::TCPServerParams));
+ std::make_unique<TCPServer>(new PostgreSQLHandlerFactory(*this), server_pool, socket, new Poco::Net::TCPServerParams));
});
}
@@ -2380,7 +2368,7 @@ void Server::createServers(
port_name,
"Prometheus: http://" + address.toString(),
std::make_unique<HTTPServer>(
- httpContext(), createHandlerFactory(*this, config, async_metrics, "PrometheusHandler-factory"), server_pool, socket, http_params, ProfileEvents::InterfacePrometheusReceiveBytes, ProfileEvents::InterfacePrometheusSendBytes));
+ httpContext(), createHandlerFactory(*this, config, async_metrics, "PrometheusHandler-factory"), server_pool, socket, http_params));
});
}
}
@@ -2426,9 +2414,7 @@ void Server::createInterserverServers(
createHandlerFactory(*this, config, async_metrics, "InterserverIOHTTPHandler-factory"),
server_pool,
socket,
- http_params,
- ProfileEvents::InterfaceInterserverReceiveBytes,
- ProfileEvents::InterfaceInterserverSendBytes));
+ http_params));
});
}
@@ -2451,9 +2437,7 @@ void Server::createInterserverServers(
createHandlerFactory(*this, config, async_metrics, "InterserverIOHTTPSHandler-factory"),
server_pool,
socket,
- http_params,
- ProfileEvents::InterfaceInterserverReceiveBytes,
- ProfileEvents::InterfaceInterserverSendBytes));
+ http_params));
#else
UNUSED(port);
throw Exception(ErrorCodes::SUPPORT_IS_DISABLED, "SSL support for TCP protocol is disabled because Poco library was built without NetSSL support.");
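The `Server.cpp` hunks above all make the same point: on config reload, every consumer must read from the `ServerSettings` object freshly parsed out of the updated configuration, not from the copy captured at startup. A minimal sketch of the pattern, assuming the surrounding reload callback (`loadSettingsFromConfig` is the actual entry point; the pool call mirrors the hunk above):

```cpp
// Inside the config-reload callback: parse the updated config into a fresh object...
ServerSettings new_server_settings;
new_server_settings.loadSettingsFromConfig(*config);

// ...and route every consumer through it. Reading the stale startup-time copy
// (the old `server_settings` / `server_settings_`) silently ignores changed values.
getIOThreadPool().reloadConfiguration(
    new_server_settings.max_io_thread_pool_size,
    new_server_settings.max_io_thread_pool_free_size,
    new_server_settings.io_thread_pool_queue_size);
```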
diff --git a/programs/server/config.xml b/programs/server/config.xml
index 52a1c528040..1be20c5cad8 100644
--- a/programs/server/config.xml
+++ b/programs/server/config.xml
@@ -1379,6 +1379,9 @@
+
+
+
diff --git a/src/Access/SettingsProfilesCache.cpp b/src/Access/SettingsProfilesCache.cpp
index f03e68ba455..275b3aeb6b5 100644
--- a/src/Access/SettingsProfilesCache.cpp
+++ b/src/Access/SettingsProfilesCache.cpp
@@ -140,8 +140,7 @@ void SettingsProfilesCache::mergeSettingsAndConstraintsFor(EnabledSettings & ena
auto info = std::make_shared<SettingsProfilesInfo>(access_control);
- info->profiles = merged_settings.toProfileIDs();
- substituteProfiles(merged_settings, info->profiles_with_implicit, info->names_of_profiles);
+ substituteProfiles(merged_settings, info->profiles, info->profiles_with_implicit, info->names_of_profiles);
info->settings = merged_settings.toSettingsChanges();
info->constraints = merged_settings.toSettingsConstraints(access_control);
@@ -152,9 +151,12 @@ void SettingsProfilesCache::mergeSettingsAndConstraintsFor(EnabledSettings & ena
void SettingsProfilesCache::substituteProfiles(
SettingsProfileElements & elements,
+ std::vector<UUID> & profiles,
std::vector<UUID> & substituted_profiles,
std::unordered_map<UUID, String> & names_of_substituted_profiles) const
{
+ profiles = elements.toProfileIDs();
+
/// We should substitute profiles in reverse order because the same profile can occur
/// in `elements` multiple times (with some other settings in between) and in this case
/// the last occurrence should override all the previous ones.
@@ -184,6 +186,11 @@ void SettingsProfilesCache::substituteProfiles(
names_of_substituted_profiles.emplace(profile_id, profile->getName());
}
std::reverse(substituted_profiles.begin(), substituted_profiles.end());
+
+ std::erase_if(profiles, [&substituted_profiles_set](const UUID & profile_id)
+ {
+ return !substituted_profiles_set.contains(profile_id);
+ });
}
std::shared_ptr SettingsProfilesCache::getEnabledSettings(
@@ -225,13 +232,13 @@ std::shared_ptr SettingsProfilesCache::getSettingsPr
if (auto pos = this->profile_infos_cache.get(profile_id))
return *pos;
- SettingsProfileElements elements = all_profiles[profile_id]->elements;
+ SettingsProfileElements elements;
+ auto & element = elements.emplace_back();
+ element.parent_profile = profile_id;
auto info = std::make_shared<SettingsProfilesInfo>(access_control);
- info->profiles.push_back(profile_id);
- info->profiles_with_implicit.push_back(profile_id);
- substituteProfiles(elements, info->profiles_with_implicit, info->names_of_profiles);
+ substituteProfiles(elements, info->profiles, info->profiles_with_implicit, info->names_of_profiles);
info->settings = elements.toSettingsChanges();
info->constraints.merge(elements.toSettingsConstraints(access_control));
diff --git a/src/Access/SettingsProfilesCache.h b/src/Access/SettingsProfilesCache.h
index 28914596ccc..afc3c3e13a5 100644
--- a/src/Access/SettingsProfilesCache.h
+++ b/src/Access/SettingsProfilesCache.h
@@ -37,7 +37,11 @@ private:
void profileRemoved(const UUID & profile_id);
void mergeSettingsAndConstraints();
void mergeSettingsAndConstraintsFor(EnabledSettings & enabled) const;
- void substituteProfiles(SettingsProfileElements & elements, std::vector<UUID> & substituted_profiles, std::unordered_map<UUID, String> & names_of_substituted_profiles) const;
+
+ void substituteProfiles(SettingsProfileElements & elements,
+ std::vector<UUID> & profiles,
+ std::vector<UUID> & substituted_profiles,
+ std::unordered_map<UUID, String> & names_of_substituted_profiles) const;
const AccessControl & access_control;
std::unordered_map all_profiles;
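The `std::erase_if` introduced in `substituteProfiles` is the C++20 one-pass erase-remove: it drops every profile ID that did not survive substitution while preserving the relative order of the rest. A self-contained illustration of that filtering step (plain `int` IDs stand in for `UUID`):

```cpp
#include <cstdio>
#include <unordered_set>
#include <vector>

int main()
{
    std::vector<int> profiles{1, 2, 3, 4};
    std::unordered_set<int> substituted_profiles_set{2, 4};

    // Keep only the IDs that were actually substituted, preserving their order.
    std::erase_if(profiles, [&](int id) { return !substituted_profiles_set.contains(id); });

    for (int id : profiles)
        std::printf("%d\n", id); // prints 2, then 4
}
```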
diff --git a/src/AggregateFunctions/AggregateFunctionLargestTriangleThreeBuckets.cpp b/src/AggregateFunctions/AggregateFunctionLargestTriangleThreeBuckets.cpp
index 850a7c688ad..d5abdbc12fb 100644
--- a/src/AggregateFunctions/AggregateFunctionLargestTriangleThreeBuckets.cpp
+++ b/src/AggregateFunctions/AggregateFunctionLargestTriangleThreeBuckets.cpp
@@ -14,8 +14,9 @@
#include
#include
#include
-#include
#include
+#include
+#include
#include
#include
@@ -48,7 +49,7 @@ struct LargestTriangleThreeBucketsData : public StatisticalSample
// sort the this->x and this->y in ascending order of this->x using index
std::vector<size_t> index(this->x.size());
- std::iota(index.begin(), index.end(), 0);
+ iota(index.data(), index.size(), size_t(0));
::sort(index.begin(), index.end(), [&](size_t i1, size_t i2) { return this->x[i1] < this->x[i2]; });
SampleX temp_x{};
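This is the first of many hunks in this patch that swap `std::iota(v.begin(), v.end(), 0)` for ClickHouse's own `iota(v.data(), v.size(), T(0))`: a raw pointer plus an explicit count and index type removes the untyped `0` literal and gives the compiler a trivially vectorizable loop. Roughly, the helper behaves like this sketch (the real one in `Common/iota.h` constrains `T` and is explicitly instantiated):

```cpp
// Hypothetical shape of the helper: fill `count` elements starting at `first_value`.
template <typename T>
void iota(T * begin, size_t count, T first_value)
{
    for (size_t i = 0; i < count; ++i)
        begin[i] = first_value + static_cast<T>(i); // friendly to the auto-vectorizer
}
```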
diff --git a/src/AggregateFunctions/AggregateFunctionMax.cpp b/src/AggregateFunctions/AggregateFunctionMax.cpp
index e74224a24c3..e9cd651b8db 100644
--- a/src/AggregateFunctions/AggregateFunctionMax.cpp
+++ b/src/AggregateFunctions/AggregateFunctionMax.cpp
@@ -1,7 +1,8 @@
#include
#include
#include
-#include
+#include
+#include
namespace DB
{
@@ -19,7 +20,7 @@ public:
explicit AggregateFunctionsSingleValueMax(const DataTypePtr & type) : Parent(type) { }
/// Specializations for native numeric types
- ALWAYS_INLINE inline void addBatchSinglePlace(
+ void addBatchSinglePlace(
size_t row_begin,
size_t row_end,
AggregateDataPtr __restrict place,
@@ -27,7 +28,7 @@ public:
Arena * arena,
ssize_t if_argument_pos) const override;
- ALWAYS_INLINE inline void addBatchSinglePlaceNotNull(
+ void addBatchSinglePlaceNotNull(
size_t row_begin,
size_t row_end,
AggregateDataPtr __restrict place,
@@ -53,10 +54,10 @@ void AggregateFunctionsSingleValueMax
if (if_argument_pos >= 0) \
{ \
const auto & flags = assert_cast<const ColumnUInt8 &>(*columns[if_argument_pos]).getData(); \
- opt = findNumericMaxIf(column.getData().data(), flags.data(), row_begin, row_end); \
+ opt = findExtremeMaxIf(column.getData().data(), flags.data(), row_begin, row_end); \
} \
else \
- opt = findNumericMax(column.getData().data(), row_begin, row_end); \
+ opt = findExtremeMax(column.getData().data(), row_begin, row_end); \
if (opt.has_value()) \
this->data(place).changeIfGreater(opt.value()); \
}
@@ -74,7 +75,57 @@ void AggregateFunctionsSingleValueMax::addBatchSinglePlace(
Arena * arena,
ssize_t if_argument_pos) const
{
- return Parent::addBatchSinglePlace(row_begin, row_end, place, columns, arena, if_argument_pos);
+ if constexpr (!is_any_of<typename Data::Impl, SingleValueDataString, SingleValueDataGeneric>)
+ {
+ /// Leave other numeric types (large integers, decimals, etc) to keep doing the comparison as it's
+ /// faster than doing a permutation
+ return Parent::addBatchSinglePlace(row_begin, row_end, place, columns, arena, if_argument_pos);
+ }
+
+ constexpr int nan_direction_hint = 1;
+ auto const & column = *columns[0];
+ if (if_argument_pos >= 0)
+ {
+ size_t index = row_begin;
+ const auto & if_flags = assert_cast<const ColumnUInt8 &>(*columns[if_argument_pos]).getData();
+ while (index < row_end && if_flags[index] == 0)
+ index++;
+ if (index >= row_end)
+ return;
+
+ for (size_t i = index + 1; i < row_end; i++)
+ {
+ if ((if_flags[i] != 0) && (column.compareAt(i, index, column, nan_direction_hint) > 0))
+ index = i;
+ }
+ this->data(place).changeIfGreater(column, index, arena);
+ }
+ else
+ {
+ if (row_begin >= row_end)
+ return;
+
+ /// TODO: Introduce row_begin and row_end to getPermutation
+ if (row_begin != 0 || row_end != column.size())
+ {
+ size_t index = row_begin;
+ for (size_t i = index + 1; i < row_end; i++)
+ {
+ if (column.compareAt(i, index, column, nan_direction_hint) > 0)
+ index = i;
+ }
+ this->data(place).changeIfGreater(column, index, arena);
+ }
+ else
+ {
+ constexpr IColumn::PermutationSortDirection direction = IColumn::PermutationSortDirection::Descending;
+ constexpr IColumn::PermutationSortStability stability = IColumn::PermutationSortStability::Unstable;
+ IColumn::Permutation permutation;
+ constexpr UInt64 limit = 1;
+ column.getPermutation(direction, stability, limit, nan_direction_hint, permutation);
+ this->data(place).changeIfGreater(column, permutation[0], arena);
+ }
+ }
}
// NOLINTBEGIN(bugprone-macro-parentheses)
@@ -97,10 +148,10 @@ void AggregateFunctionsSingleValueMax
auto final_flags = std::make_unique<UInt8[]>(row_end); \
for (size_t i = row_begin; i < row_end; ++i) \
final_flags[i] = (!null_map[i]) & !!if_flags[i]; \
- opt = findNumericMaxIf(column.getData().data(), final_flags.get(), row_begin, row_end); \
+ opt = findExtremeMaxIf(column.getData().data(), final_flags.get(), row_begin, row_end); \
} \
else \
- opt = findNumericMaxNotNull(column.getData().data(), null_map, row_begin, row_end); \
+ opt = findExtremeMaxNotNull(column.getData().data(), null_map, row_begin, row_end); \
if (opt.has_value()) \
this->data(place).changeIfGreater(opt.value()); \
}
@@ -119,7 +170,46 @@ void AggregateFunctionsSingleValueMax::addBatchSinglePlaceNotNull(
Arena * arena,
ssize_t if_argument_pos) const
{
- return Parent::addBatchSinglePlaceNotNull(row_begin, row_end, place, columns, null_map, arena, if_argument_pos);
+ if constexpr (!is_any_of<typename Data::Impl, SingleValueDataString, SingleValueDataGeneric>)
+ {
+ /// Leave other numeric types (large integers, decimals, etc) to keep doing the comparison as it's
+ /// faster than doing a permutation
+ return Parent::addBatchSinglePlaceNotNull(row_begin, row_end, place, columns, null_map, arena, if_argument_pos);
+ }
+
+ constexpr int nan_direction_hint = 1;
+ auto const & column = *columns[0];
+ if (if_argument_pos >= 0)
+ {
+ size_t index = row_begin;
+ const auto & if_flags = assert_cast<const ColumnUInt8 &>(*columns[if_argument_pos]).getData();
+ while ((index < row_end) && (if_flags[index] == 0 || null_map[index] != 0))
+ index++;
+ if (index >= row_end)
+ return;
+
+ for (size_t i = index + 1; i < row_end; i++)
+ {
+ if ((if_flags[i] != 0) && (null_map[i] == 0) && (column.compareAt(i, index, column, nan_direction_hint) > 0))
+ index = i;
+ }
+ this->data(place).changeIfGreater(column, index, arena);
+ }
+ else
+ {
+ size_t index = row_begin;
+ while ((index < row_end) && (null_map[index] != 0))
+ index++;
+ if (index >= row_end)
+ return;
+
+ for (size_t i = index + 1; i < row_end; i++)
+ {
+ if ((null_map[i] == 0) && (column.compareAt(i, index, column, nan_direction_hint) > 0))
+ index = i;
+ }
+ this->data(place).changeIfGreater(column, index, arena);
+ }
}
AggregateFunctionPtr createAggregateFunctionMax(
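The specialized `addBatchSinglePlace` above changes the strategy for string and generic values: rather than feeding rows into the aggregate state one by one, it first locates the index of the batch maximum with `compareAt` (or, when the range covers the whole column, asks `getPermutation` for just the top row via `limit = 1`) and updates the state once. Distilled into a standalone sketch (illustrative, not the ClickHouse API; note the bounds check deliberately precedes the flag read):

```cpp
#include <cstddef>
#include <optional>

// Index of the maximum element in [row_begin, row_end), honoring an optional filter.
template <typename Column>
std::optional<size_t> argMaxInBatch(const Column & column, const unsigned char * if_flags,
                                    size_t row_begin, size_t row_end)
{
    size_t index = row_begin;
    while (index < row_end && if_flags && if_flags[index] == 0) // skip the filtered-out prefix
        ++index;
    if (index >= row_end)
        return std::nullopt;

    for (size_t i = index + 1; i < row_end; ++i)
        if ((!if_flags || if_flags[i] != 0)
            && column.compareAt(i, index, column, /* nan_direction_hint */ 1) > 0)
            index = i;
    return index;
}
```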
diff --git a/src/AggregateFunctions/AggregateFunctionMin.cpp b/src/AggregateFunctions/AggregateFunctionMin.cpp
index 48758aa74b0..d767bd5c563 100644
--- a/src/AggregateFunctions/AggregateFunctionMin.cpp
+++ b/src/AggregateFunctions/AggregateFunctionMin.cpp
@@ -1,7 +1,8 @@
#include
#include
#include
-#include
+#include
+#include
namespace DB
@@ -20,7 +21,7 @@ public:
explicit AggregateFunctionsSingleValueMin(const DataTypePtr & type) : Parent(type) { }
/// Specializations for native numeric types
- ALWAYS_INLINE inline void addBatchSinglePlace(
+ void addBatchSinglePlace(
size_t row_begin,
size_t row_end,
AggregateDataPtr __restrict place,
@@ -28,7 +29,7 @@ public:
Arena * arena,
ssize_t if_argument_pos) const override;
- ALWAYS_INLINE inline void addBatchSinglePlaceNotNull(
+ void addBatchSinglePlaceNotNull(
size_t row_begin,
size_t row_end,
AggregateDataPtr __restrict place,
@@ -54,10 +55,10 @@ public:
if (if_argument_pos >= 0) \
{ \
const auto & flags = assert_cast<const ColumnUInt8 &>(*columns[if_argument_pos]).getData(); \
- opt = findNumericMinIf(column.getData().data(), flags.data(), row_begin, row_end); \
+ opt = findExtremeMinIf(column.getData().data(), flags.data(), row_begin, row_end); \
} \
else \
- opt = findNumericMin(column.getData().data(), row_begin, row_end); \
+ opt = findExtremeMin(column.getData().data(), row_begin, row_end); \
if (opt.has_value()) \
this->data(place).changeIfLess(opt.value()); \
}
@@ -75,7 +76,57 @@ void AggregateFunctionsSingleValueMin::addBatchSinglePlace(
Arena * arena,
ssize_t if_argument_pos) const
{
- return Parent::addBatchSinglePlace(row_begin, row_end, place, columns, arena, if_argument_pos);
+ if constexpr (!is_any_of<typename Data::Impl, SingleValueDataString, SingleValueDataGeneric>)
+ {
+ /// Leave other numeric types (large integers, decimals, etc) to keep doing the comparison as it's
+ /// faster than doing a permutation
+ return Parent::addBatchSinglePlace(row_begin, row_end, place, columns, arena, if_argument_pos);
+ }
+
+ constexpr int nan_direction_hint = 1;
+ auto const & column = *columns[0];
+ if (if_argument_pos >= 0)
+ {
+ size_t index = row_begin;
+ const auto & if_flags = assert_cast<const ColumnUInt8 &>(*columns[if_argument_pos]).getData();
+ while (index < row_end && if_flags[index] == 0)
+ index++;
+ if (index >= row_end)
+ return;
+
+ for (size_t i = index + 1; i < row_end; i++)
+ {
+ if ((if_flags[i] != 0) && (column.compareAt(i, index, column, nan_direction_hint) < 0))
+ index = i;
+ }
+ this->data(place).changeIfLess(column, index, arena);
+ }
+ else
+ {
+ if (row_begin >= row_end)
+ return;
+
+ /// TODO: Introduce row_begin and row_end to getPermutation
+ if (row_begin != 0 || row_end != column.size())
+ {
+ size_t index = row_begin;
+ for (size_t i = index + 1; i < row_end; i++)
+ {
+ if (column.compareAt(i, index, column, nan_direction_hint) < 0)
+ index = i;
+ }
+ this->data(place).changeIfLess(column, index, arena);
+ }
+ else
+ {
+ constexpr IColumn::PermutationSortDirection direction = IColumn::PermutationSortDirection::Ascending;
+ constexpr IColumn::PermutationSortStability stability = IColumn::PermutationSortStability::Unstable;
+ IColumn::Permutation permutation;
+ constexpr UInt64 limit = 1;
+ column.getPermutation(direction, stability, limit, nan_direction_hint, permutation);
+ this->data(place).changeIfLess(column, permutation[0], arena);
+ }
+ }
}
// NOLINTBEGIN(bugprone-macro-parentheses)
@@ -98,10 +149,10 @@ void AggregateFunctionsSingleValueMin::addBatchSinglePlace(
auto final_flags = std::make_unique<UInt8[]>(row_end); \
for (size_t i = row_begin; i < row_end; ++i) \
final_flags[i] = (!null_map[i]) & !!if_flags[i]; \
- opt = findNumericMinIf(column.getData().data(), final_flags.get(), row_begin, row_end); \
+ opt = findExtremeMinIf(column.getData().data(), final_flags.get(), row_begin, row_end); \
} \
else \
- opt = findNumericMinNotNull(column.getData().data(), null_map, row_begin, row_end); \
+ opt = findExtremeMinNotNull(column.getData().data(), null_map, row_begin, row_end); \
if (opt.has_value()) \
this->data(place).changeIfLess(opt.value()); \
}
@@ -120,7 +171,46 @@ void AggregateFunctionsSingleValueMin::addBatchSinglePlaceNotNull(
Arena * arena,
ssize_t if_argument_pos) const
{
- return Parent::addBatchSinglePlaceNotNull(row_begin, row_end, place, columns, null_map, arena, if_argument_pos);
+ if constexpr (!is_any_of<typename Data::Impl, SingleValueDataString, SingleValueDataGeneric>)
+ {
+ /// Leave other numeric types (large integers, decimals, etc) to keep doing the comparison as it's
+ /// faster than doing a permutation
+ return Parent::addBatchSinglePlaceNotNull(row_begin, row_end, place, columns, null_map, arena, if_argument_pos);
+ }
+
+ constexpr int nan_direction_hint = 1;
+ auto const & column = *columns[0];
+ if (if_argument_pos >= 0)
+ {
+ size_t index = row_begin;
+ const auto & if_flags = assert_cast<const ColumnUInt8 &>(*columns[if_argument_pos]).getData();
+ while ((index < row_end) && (if_flags[index] == 0 || null_map[index] != 0))
+ index++;
+ if (index >= row_end)
+ return;
+
+ for (size_t i = index + 1; i < row_end; i++)
+ {
+ if ((if_flags[i] != 0) && (null_map[i] == 0) && (column.compareAt(i, index, column, nan_direction_hint) < 0))
+ index = i;
+ }
+ this->data(place).changeIfLess(column, index, arena);
+ }
+ else
+ {
+ size_t index = row_begin;
+ while ((index < row_end) && (null_map[index] != 0))
+ index++;
+ if (index >= row_end)
+ return;
+
+ for (size_t i = index + 1; i < row_end; i++)
+ {
+ if ((null_map[i] == 0) && (column.compareAt(i, index, column, nan_direction_hint) < 0))
+ index = i;
+ }
+ this->data(place).changeIfLess(column, index, arena);
+ }
}
AggregateFunctionPtr createAggregateFunctionMin(
diff --git a/src/AggregateFunctions/AggregateFunctionMinMaxAny.h b/src/AggregateFunctions/AggregateFunctionMinMaxAny.h
index b69a0b100a3..dec70861543 100644
--- a/src/AggregateFunctions/AggregateFunctionMinMaxAny.h
+++ b/src/AggregateFunctions/AggregateFunctionMinMaxAny.h
@@ -965,6 +965,7 @@ template
struct AggregateFunctionMinData : Data
{
using Self = AggregateFunctionMinData;
+ using Impl = Data;
bool changeIfBetter(const IColumn & column, size_t row_num, Arena * arena) { return this->changeIfLess(column, row_num, arena); }
bool changeIfBetter(const Self & to, Arena * arena) { return this->changeIfLess(to, arena); }
@@ -993,6 +994,7 @@ template
struct AggregateFunctionMaxData : Data
{
using Self = AggregateFunctionMaxData;
+ using Impl = Data;
bool changeIfBetter(const IColumn & column, size_t row_num, Arena * arena) { return this->changeIfGreater(column, row_num, arena); }
bool changeIfBetter(const Self & to, Arena * arena) { return this->changeIfGreater(to, arena); }
diff --git a/src/AggregateFunctions/QuantilesCommon.h b/src/AggregateFunctions/QuantilesCommon.h
index 3dda0119485..afbca84b827 100644
--- a/src/AggregateFunctions/QuantilesCommon.h
+++ b/src/AggregateFunctions/QuantilesCommon.h
@@ -6,6 +6,7 @@
#include
#include
+#include
namespace DB
@@ -63,10 +64,9 @@ struct QuantileLevels
if (isNaN(levels[i]) || levels[i] < 0 || levels[i] > 1)
throw Exception(ErrorCodes::PARAMETER_OUT_OF_BOUND, "Quantile level is out of range [0..1]");
-
- permutation[i] = i;
}
+ iota(permutation.data(), size, Permutation::value_type(0));
::sort(permutation.begin(), permutation.end(), [this] (size_t a, size_t b) { return levels[a] < levels[b]; });
}
};
diff --git a/src/AggregateFunctions/StatCommon.h b/src/AggregateFunctions/StatCommon.h
index 23054e25189..8b1395ea95c 100644
--- a/src/AggregateFunctions/StatCommon.h
+++ b/src/AggregateFunctions/StatCommon.h
@@ -7,6 +7,7 @@
#include
#include
+#include
#include
#include
@@ -30,7 +31,7 @@ std::pair computeRanksAndTieCorrection(const Values & value
const size_t size = values.size();
/// Save initial positions, then sort indices according to the values.
std::vector<size_t> indexes(size);
- std::iota(indexes.begin(), indexes.end(), 0);
+ iota(indexes.data(), indexes.size(), size_t(0));
std::sort(indexes.begin(), indexes.end(),
[&] (size_t lhs, size_t rhs) { return values[lhs] < values[rhs]; });
diff --git a/src/AggregateFunctions/findNumeric.cpp b/src/AggregateFunctions/findNumeric.cpp
deleted file mode 100644
index bbad8c1fe3d..00000000000
--- a/src/AggregateFunctions/findNumeric.cpp
+++ /dev/null
@@ -1,15 +0,0 @@
-#include
-
-namespace DB
-{
-#define INSTANTIATION(T) \
- template std::optional findNumericMin(const T * __restrict ptr, size_t start, size_t end); \
- template std::optional findNumericMinNotNull(const T * __restrict ptr, const UInt8 * __restrict condition_map, size_t start, size_t end); \
- template std::optional findNumericMinIf(const T * __restrict ptr, const UInt8 * __restrict condition_map, size_t start, size_t end); \
- template std::optional findNumericMax(const T * __restrict ptr, size_t start, size_t end); \
- template std::optional findNumericMaxNotNull(const T * __restrict ptr, const UInt8 * __restrict condition_map, size_t start, size_t end); \
- template std::optional findNumericMaxIf(const T * __restrict ptr, const UInt8 * __restrict condition_map, size_t start, size_t end);
-
-FOR_BASIC_NUMERIC_TYPES(INSTANTIATION)
-#undef INSTANTIATION
-}
diff --git a/src/Analyzer/IQueryTreeNode.h b/src/Analyzer/IQueryTreeNode.h
index 922eaabe75c..b07aa2d31b0 100644
--- a/src/Analyzer/IQueryTreeNode.h
+++ b/src/Analyzer/IQueryTreeNode.h
@@ -143,9 +143,17 @@ public:
return alias;
}
+ const String & getOriginalAlias() const
+ {
+ return original_alias.empty() ? alias : original_alias;
+ }
+
/// Set node alias
void setAlias(String alias_value)
{
+ if (original_alias.empty())
+ original_alias = std::move(alias);
+
alias = std::move(alias_value);
}
@@ -276,6 +284,9 @@ protected:
private:
String alias;
+ /// An alias from query. Alias can be replaced by query passes,
+ /// but we need to keep the original one to support additional_table_filters.
+ String original_alias;
ASTPtr original_ast;
};
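Taken in isolation, the new `original_alias` acts as a write-once backup: the first `setAlias` call stashes whatever alias the node already had, later calls keep rewriting `alias`, and `getOriginalAlias` falls back to the current alias only while no backup exists. A hypothetical usage trace:

```cpp
node->setAlias("t");          // original_alias stays empty; alias = "t"
node->setAlias("__table1");   // original_alias = "t"; alias = "__table1"
node->getAlias();             // "__table1" -- what the rewritten query uses
node->getOriginalAlias();     // "t" -- what additional_table_filters matches against
```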
diff --git a/src/Analyzer/Passes/FuseFunctionsPass.cpp b/src/Analyzer/Passes/FuseFunctionsPass.cpp
index e77b3ddcb20..443e13b7d9d 100644
--- a/src/Analyzer/Passes/FuseFunctionsPass.cpp
+++ b/src/Analyzer/Passes/FuseFunctionsPass.cpp
@@ -1,5 +1,6 @@
#include
+#include
#include
#include
#include
@@ -184,7 +185,7 @@ FunctionNodePtr createFusedQuantilesNode(std::vector & nodes
{
/// Sort nodes and parameters in ascending order of quantile level
std::vector<size_t> permutation(nodes.size());
- std::iota(permutation.begin(), permutation.end(), 0);
+ iota(permutation.data(), permutation.size(), size_t(0));
std::sort(permutation.begin(), permutation.end(), [&](size_t i, size_t j) { return parameters[i].get() < parameters[j].get(); });
std::vector new_nodes;
diff --git a/src/Analyzer/Passes/QueryAnalysisPass.cpp b/src/Analyzer/Passes/QueryAnalysisPass.cpp
index 3290d918a8b..4ad9581b5b6 100644
--- a/src/Analyzer/Passes/QueryAnalysisPass.cpp
+++ b/src/Analyzer/Passes/QueryAnalysisPass.cpp
@@ -52,6 +52,7 @@
#include
+#include
#include
#include
#include
@@ -1198,7 +1199,7 @@ private:
static void mergeWindowWithParentWindow(const QueryTreeNodePtr & window_node, const QueryTreeNodePtr & parent_window_node, IdentifierResolveScope & scope);
- static void replaceNodesWithPositionalArguments(QueryTreeNodePtr & node_list, const QueryTreeNodes & projection_nodes, IdentifierResolveScope & scope);
+ void replaceNodesWithPositionalArguments(QueryTreeNodePtr & node_list, const QueryTreeNodes & projection_nodes, IdentifierResolveScope & scope);
static void convertLimitOffsetExpression(QueryTreeNodePtr & expression_node, const String & expression_description, IdentifierResolveScope & scope);
@@ -2168,7 +2169,12 @@ void QueryAnalyzer::replaceNodesWithPositionalArguments(QueryTreeNodePtr & node_
scope.scope_node->formatASTForErrorMessage());
--positional_argument_number;
- *node_to_replace = projection_nodes[positional_argument_number];
+ *node_to_replace = projection_nodes[positional_argument_number]->clone();
+ if (auto it = resolved_expressions.find(projection_nodes[positional_argument_number]);
+ it != resolved_expressions.end())
+ {
+ resolved_expressions[*node_to_replace] = it->second;
+ }
}
}
@@ -7366,6 +7372,7 @@ void QueryAnalysisPass::run(QueryTreeNodePtr query_tree_node, ContextPtr context
{
QueryAnalyzer analyzer;
analyzer.resolve(query_tree_node, table_expression, context);
+ createUniqueTableAliases(query_tree_node, table_expression, context);
}
}
diff --git a/src/Analyzer/Utils.cpp b/src/Analyzer/Utils.cpp
index f75022220e7..53fcf534f64 100644
--- a/src/Analyzer/Utils.cpp
+++ b/src/Analyzer/Utils.cpp
@@ -326,7 +326,7 @@ void addTableExpressionOrJoinIntoTablesInSelectQuery(ASTPtr & tables_in_select_q
}
}
-QueryTreeNodes extractTableExpressions(const QueryTreeNodePtr & join_tree_node)
+QueryTreeNodes extractTableExpressions(const QueryTreeNodePtr & join_tree_node, bool add_array_join)
{
QueryTreeNodes result;
@@ -357,6 +357,8 @@ QueryTreeNodes extractTableExpressions(const QueryTreeNodePtr & join_tree_node)
{
auto & array_join_node = node_to_process->as<ArrayJoinNode &>();
nodes_to_process.push_front(array_join_node.getTableExpression());
+ if (add_array_join)
+ result.push_back(std::move(node_to_process));
break;
}
case QueryTreeNodeType::JOIN:
diff --git a/src/Analyzer/Utils.h b/src/Analyzer/Utils.h
index e3316f5ad6b..d3eb6ba3cc2 100644
--- a/src/Analyzer/Utils.h
+++ b/src/Analyzer/Utils.h
@@ -51,7 +51,7 @@ std::optional tryExtractConstantFromConditionNode(const QueryTreeNodePtr &
void addTableExpressionOrJoinIntoTablesInSelectQuery(ASTPtr & tables_in_select_query_ast, const QueryTreeNodePtr & table_expression, const IQueryTreeNode::ConvertToASTOptions & convert_to_ast_options);
/// Extract table, table function, query, union from join tree
-QueryTreeNodes extractTableExpressions(const QueryTreeNodePtr & join_tree_node);
+QueryTreeNodes extractTableExpressions(const QueryTreeNodePtr & join_tree_node, bool add_array_join = false);
/// Extract left table expression from join tree
QueryTreeNodePtr extractLeftTableExpression(const QueryTreeNodePtr & join_tree_node);
diff --git a/src/Analyzer/createUniqueTableAliases.cpp b/src/Analyzer/createUniqueTableAliases.cpp
new file mode 100644
index 00000000000..8f850fe8dec
--- /dev/null
+++ b/src/Analyzer/createUniqueTableAliases.cpp
@@ -0,0 +1,141 @@
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+namespace DB
+{
+
+namespace
+{
+
+class CreateUniqueTableAliasesVisitor : public InDepthQueryTreeVisitorWithContext<CreateUniqueTableAliasesVisitor>
+{
+public:
+ using Base = InDepthQueryTreeVisitorWithContext<CreateUniqueTableAliasesVisitor>;
+
+ explicit CreateUniqueTableAliasesVisitor(const ContextPtr & context)
+ : Base(context)
+ {
+ // Insert a fake node on top of the stack.
+ scope_nodes_stack.push_back(std::make_shared<LambdaNode>(Names{}, nullptr));
+ }
+
+ void enterImpl(QueryTreeNodePtr & node)
+ {
+ auto node_type = node->getNodeType();
+
+ switch (node_type)
+ {
+ case QueryTreeNodeType::QUERY:
+ [[fallthrough]];
+ case QueryTreeNodeType::UNION:
+ {
+ /// Queries like `(SELECT 1) as t` have invalid syntax. To avoid creating such queries (e.g. in StorageDistributed)
+ /// we need to remove aliases for top level queries.
+ /// N.B. Subquery depth starts counting from 1, so the following condition checks whether this is a top-level query.
+ if (getSubqueryDepth() == 1)
+ {
+ node->removeAlias();
+ break;
+ }
+ [[fallthrough]];
+ }
+ case QueryTreeNodeType::TABLE:
+ [[fallthrough]];
+ case QueryTreeNodeType::TABLE_FUNCTION:
+ [[fallthrough]];
+ case QueryTreeNodeType::ARRAY_JOIN:
+ {
+ auto & alias = table_expression_to_alias[node];
+ if (alias.empty())
+ {
+ scope_to_nodes_with_aliases[scope_nodes_stack.back()].push_back(node);
+ alias = fmt::format("__table{}", ++next_id);
+ node->setAlias(alias);
+ }
+ break;
+ }
+ default:
+ break;
+ }
+
+ switch (node_type)
+ {
+ case QueryTreeNodeType::QUERY:
+ [[fallthrough]];
+ case QueryTreeNodeType::UNION:
+ [[fallthrough]];
+ case QueryTreeNodeType::LAMBDA:
+ scope_nodes_stack.push_back(node);
+ break;
+ default:
+ break;
+ }
+ }
+
+ void leaveImpl(QueryTreeNodePtr & node)
+ {
+ if (scope_nodes_stack.back() == node)
+ {
+ if (auto it = scope_to_nodes_with_aliases.find(scope_nodes_stack.back());
+ it != scope_to_nodes_with_aliases.end())
+ {
+ for (const auto & node_with_alias : it->second)
+ {
+ table_expression_to_alias.erase(node_with_alias);
+ }
+ scope_to_nodes_with_aliases.erase(it);
+ }
+ scope_nodes_stack.pop_back();
+ }
+
+ /// Here we revisit subquery for IN function. Reasons:
+ /// * For remote query execution, query tree may be traversed a few times.
+ /// In such a case, it is possible to get AST like
+ /// `IN ((SELECT ... FROM table AS __table4) AS __table1)` which result in
+ /// `Multiple expressions for the alias` exception
+ /// * Tables in subqueries could have different aliases => different tree hashes,
+ /// which is important to be able to find a set in PreparedSets
+ /// See 01253_subquery_in_aggregate_function_JustStranger.
+ ///
+ /// So, we revisit this subquery to make aliases stable.
+ /// This should be safe because columns from the IN subquery can't be used in the main query anyway.
+ if (node->getNodeType() == QueryTreeNodeType::FUNCTION)
+ {
+ auto * function_node = node->as<FunctionNode>();
+ if (isNameOfInFunction(function_node->getFunctionName()))
+ {
+ auto arg = function_node->getArguments().getNodes().back();
+ /// Avoid aliasing IN `table`
+ if (arg->getNodeType() != QueryTreeNodeType::TABLE)
+ CreateUniqueTableAliasesVisitor(getContext()).visit(function_node->getArguments().getNodes().back());
+ }
+ }
+ }
+
+private:
+ size_t next_id = 0;
+
+ // Stack of nodes which create scopes: QUERY, UNION and LAMBDA.
+ QueryTreeNodes scope_nodes_stack;
+
+ std::unordered_map<QueryTreeNodePtr, QueryTreeNodes> scope_to_nodes_with_aliases;
+
+ // We need to use raw pointer as a key, not a QueryTreeNodePtrWithHash.
+ std::unordered_map<QueryTreeNodePtr, String> table_expression_to_alias;
+};
+
+}
+
+
+void createUniqueTableAliases(QueryTreeNodePtr & node, const QueryTreeNodePtr & /*table_expression*/, const ContextPtr & context)
+{
+ CreateUniqueTableAliasesVisitor(context).visit(node);
+}
+
+}
diff --git a/src/Analyzer/createUniqueTableAliases.h b/src/Analyzer/createUniqueTableAliases.h
new file mode 100644
index 00000000000..d57a198498c
--- /dev/null
+++ b/src/Analyzer/createUniqueTableAliases.h
@@ -0,0 +1,18 @@
+#pragma once
+
+#include
+#include
+
+class IQueryTreeNode;
+using QueryTreeNodePtr = std::shared_ptr<IQueryTreeNode>;
+
+namespace DB
+{
+
+/*
+ * For each table expression in the Query Tree generate and add a unique alias.
+ * If a table expression had an alias in the initial query tree, it is overridden.
+ */
+void createUniqueTableAliases(QueryTreeNodePtr & node, const QueryTreeNodePtr & table_expression, const ContextPtr & context);
+
+}
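Net effect of the new pass: after analysis every table expression carries a deterministic `__table{N}` alias, so a query such as `SELECT * FROM t1 JOIN t2 USING (id)` behaves as if written `FROM t1 AS __table1 JOIN t2 AS __table2`, while the user-visible aliases stay reachable through `getOriginalAlias` (which is why `IQueryTreeNode` learned to remember them above).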
diff --git a/src/Backups/RestorerFromBackup.cpp b/src/Backups/RestorerFromBackup.cpp
index 4e580e493a7..a33773f19ab 100644
--- a/src/Backups/RestorerFromBackup.cpp
+++ b/src/Backups/RestorerFromBackup.cpp
@@ -573,11 +573,12 @@ void RestorerFromBackup::createDatabase(const String & database_name) const
create_database_query->if_not_exists = (restore_settings.create_table == RestoreTableCreationMode::kCreateIfNotExists);
LOG_TRACE(log, "Creating database {}: {}", backQuoteIfNeed(database_name), serializeAST(*create_database_query));
-
+ auto query_context = Context::createCopy(context);
+ query_context->setSetting("allow_deprecated_database_ordinary", 1);
try
{
/// Execute CREATE DATABASE query.
- InterpreterCreateQuery interpreter{create_database_query, context};
+ InterpreterCreateQuery interpreter{create_database_query, query_context};
interpreter.setInternal(true);
interpreter.execute();
}
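The restore fix above follows a standard pattern worth spelling out: to run a single internal statement with a setting override, copy the context first so the override never leaks into the shared session or global context. The same calls from the hunk, shown bare:

```cpp
auto query_context = Context::createCopy(context);
query_context->setSetting("allow_deprecated_database_ordinary", 1); // scoped to this copy
InterpreterCreateQuery interpreter{create_database_query, query_context};
interpreter.setInternal(true);
interpreter.execute();
```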
diff --git a/src/Columns/ColumnAggregateFunction.cpp b/src/Columns/ColumnAggregateFunction.cpp
index 0ec5db6c69d..2018015b46d 100644
--- a/src/Columns/ColumnAggregateFunction.cpp
+++ b/src/Columns/ColumnAggregateFunction.cpp
@@ -1,18 +1,19 @@
#include
#include
#include
-#include
-#include
+#include
#include
#include
-#include
-#include
-#include
+#include
#include
-#include
#include
-#include
+#include
#include
+#include
+#include
+#include
+#include
+#include
namespace DB
@@ -626,8 +627,7 @@ void ColumnAggregateFunction::getPermutation(PermutationSortDirection /*directio
{
size_t s = data.size();
res.resize(s);
- for (size_t i = 0; i < s; ++i)
- res[i] = i;
+ iota(res.data(), s, IColumn::Permutation::value_type(0));
}
void ColumnAggregateFunction::updatePermutation(PermutationSortDirection, PermutationSortStability,
diff --git a/src/Columns/ColumnConst.cpp b/src/Columns/ColumnConst.cpp
index 10e960ea244..9aa0f5cfa49 100644
--- a/src/Columns/ColumnConst.cpp
+++ b/src/Columns/ColumnConst.cpp
@@ -2,9 +2,10 @@
#include
#include
-#include
-#include
#include
+#include
+#include
+#include
#include
@@ -128,8 +129,7 @@ void ColumnConst::getPermutation(PermutationSortDirection /*direction*/, Permuta
size_t /*limit*/, int /*nan_direction_hint*/, Permutation & res) const
{
res.resize(s);
- for (size_t i = 0; i < s; ++i)
- res[i] = i;
+ iota(res.data(), s, IColumn::Permutation::value_type(0));
}
void ColumnConst::updatePermutation(PermutationSortDirection /*direction*/, PermutationSortStability /*stability*/,
diff --git a/src/Columns/ColumnDecimal.cpp b/src/Columns/ColumnDecimal.cpp
index baccfc69147..20fc5d8e1fe 100644
--- a/src/Columns/ColumnDecimal.cpp
+++ b/src/Columns/ColumnDecimal.cpp
@@ -1,10 +1,11 @@
-#include
#include
-#include
-#include
-#include
+#include
#include
#include
+#include
+#include
+#include
+#include
#include
@@ -163,8 +164,7 @@ void ColumnDecimal::getPermutation(IColumn::PermutationSortDirection directio
if (limit >= data_size)
limit = 0;
- for (size_t i = 0; i < data_size; ++i)
- res[i] = i;
+ iota(res.data(), data_size, IColumn::Permutation::value_type(0));
if constexpr (is_arithmetic_v && !is_big_int_v)
{
@@ -183,8 +183,7 @@ void ColumnDecimal::getPermutation(IColumn::PermutationSortDirection directio
/// Thresholds on size. Lower threshold is arbitrary. Upper threshold is chosen by the type for histogram counters.
if (data_size >= 256 && data_size <= std::numeric_limits<UInt32>::max() && use_radix_sort)
{
- for (size_t i = 0; i < data_size; ++i)
- res[i] = i;
+ iota(res.data(), data_size, IColumn::Permutation::value_type(0));
bool try_sort = false;
diff --git a/src/Columns/ColumnObject.cpp b/src/Columns/ColumnObject.cpp
index 2052ec3c968..f7176568a1b 100644
--- a/src/Columns/ColumnObject.cpp
+++ b/src/Columns/ColumnObject.cpp
@@ -2,6 +2,7 @@
#include
#include
#include
+#include
#include
#include
#include
@@ -838,7 +839,7 @@ MutableColumnPtr ColumnObject::cloneResized(size_t new_size) const
void ColumnObject::getPermutation(PermutationSortDirection, PermutationSortStability, size_t, int, Permutation & res) const
{
res.resize(num_rows);
- std::iota(res.begin(), res.end(), 0);
+ iota(res.data(), res.size(), size_t(0));
}
void ColumnObject::compareColumn(const IColumn & rhs, size_t rhs_row_num,
diff --git a/src/Columns/ColumnSparse.cpp b/src/Columns/ColumnSparse.cpp
index 057c0cd7112..02e6e9e56b4 100644
--- a/src/Columns/ColumnSparse.cpp
+++ b/src/Columns/ColumnSparse.cpp
@@ -1,11 +1,12 @@
-#include
-#include
#include
+#include
#include
-#include
-#include
-#include
+#include
#include
+#include
+#include
+#include
+#include
#include
#include
@@ -499,8 +500,7 @@ void ColumnSparse::getPermutationImpl(IColumn::PermutationSortDirection directio
res.resize(_size);
if (offsets->empty())
{
- for (size_t i = 0; i < _size; ++i)
- res[i] = i;
+ iota(res.data(), _size, IColumn::Permutation::value_type(0));
return;
}
diff --git a/src/Columns/ColumnTuple.cpp b/src/Columns/ColumnTuple.cpp
index d8992125be4..356bb0493d2 100644
--- a/src/Columns/ColumnTuple.cpp
+++ b/src/Columns/ColumnTuple.cpp
@@ -1,16 +1,17 @@
#include
-#include
-#include
#include
+#include
#include
-#include
+#include
#include
#include
+#include
+#include
#include
#include
+#include
#include
-#include
namespace DB
@@ -378,8 +379,7 @@ void ColumnTuple::getPermutationImpl(IColumn::PermutationSortDirection direction
{
size_t rows = size();
res.resize(rows);
- for (size_t i = 0; i < rows; ++i)
- res[i] = i;
+ iota(res.data(), rows, IColumn::Permutation::value_type(0));
if (limit >= rows)
limit = 0;
diff --git a/src/Columns/ColumnVector.cpp b/src/Columns/ColumnVector.cpp
index 37e62c76596..b1cf449dfde 100644
--- a/src/Columns/ColumnVector.cpp
+++ b/src/Columns/ColumnVector.cpp
@@ -1,24 +1,25 @@
#include "ColumnVector.h"
-#include
#include
+#include
#include
#include
-#include
#include
+#include
+#include
+#include
+#include
+#include
#include
#include
#include
#include
#include
#include
-#include
#include
+#include
#include
-#include
-#include
-#include
-#include
+#include
#include
#include
@@ -244,8 +245,7 @@ void ColumnVector::getPermutation(IColumn::PermutationSortDirection direction
if (limit >= data_size)
limit = 0;
- for (size_t i = 0; i < data_size; ++i)
- res[i] = i;
+ iota(res.data(), data_size, IColumn::Permutation::value_type(0));
if constexpr (is_arithmetic_v && !is_big_int_v)
{
diff --git a/src/Columns/IColumnDummy.cpp b/src/Columns/IColumnDummy.cpp
index 01091a87049..7c237536f94 100644
--- a/src/Columns/IColumnDummy.cpp
+++ b/src/Columns/IColumnDummy.cpp
@@ -1,7 +1,8 @@
-#include
-#include
-#include
#include
+#include
+#include
+#include
+#include
namespace DB
@@ -87,8 +88,7 @@ void IColumnDummy::getPermutation(IColumn::PermutationSortDirection /*direction*
size_t /*limit*/, int /*nan_direction_hint*/, Permutation & res) const
{
res.resize(s);
- for (size_t i = 0; i < s; ++i)
- res[i] = i;
+ iota(res.data(), s, IColumn::Permutation::value_type(0));
}
ColumnPtr IColumnDummy::replicate(const Offsets & offsets) const
diff --git a/src/Columns/IColumnImpl.h b/src/Columns/IColumnImpl.h
index 0eab9452813..8e0bf0014f2 100644
--- a/src/Columns/IColumnImpl.h
+++ b/src/Columns/IColumnImpl.h
@@ -6,10 +6,11 @@
* implementation.
*/
-#include
-#include
-#include
#include
+#include
+#include
+#include
+#include
namespace DB
@@ -299,8 +300,7 @@ void IColumn::getPermutationImpl(
if (limit >= data_size)
limit = 0;
- for (size_t i = 0; i < data_size; ++i)
- res[i] = i;
+ iota(res.data(), data_size, Permutation::value_type(0));
if (limit)
{
diff --git a/src/Columns/tests/gtest_column_sparse.cpp b/src/Columns/tests/gtest_column_sparse.cpp
index c3450ff91b4..02b15a2f5c4 100644
--- a/src/Columns/tests/gtest_column_sparse.cpp
+++ b/src/Columns/tests/gtest_column_sparse.cpp
@@ -1,6 +1,7 @@
#include
#include
+#include
#include
#include
#include
@@ -191,7 +192,7 @@ TEST(ColumnSparse, Permute)
auto [sparse_src, full_src] = createColumns(n, k);
IColumn::Permutation perm(n);
- std::iota(perm.begin(), perm.end(), 0);
+ iota(perm.data(), perm.size(), size_t(0));
std::shuffle(perm.begin(), perm.end(), rng);
auto sparse_dst = sparse_src->permute(perm, limit);
diff --git a/src/Columns/tests/gtest_column_stable_permutation.cpp b/src/Columns/tests/gtest_column_stable_permutation.cpp
index df898cffa04..0dabd4d1fc2 100644
--- a/src/Columns/tests/gtest_column_stable_permutation.cpp
+++ b/src/Columns/tests/gtest_column_stable_permutation.cpp
@@ -9,7 +9,6 @@
#include
#include