commit 16e682e0ae: Merge branch 'master' into Support_parameterized_view_with_analyzer
(mirror of https://github.com/ClickHouse/ClickHouse.git)

@@ -22,7 +22,7 @@
* The MergeTree setting `clean_deleted_rows` is deprecated, it has no effect anymore. The `CLEANUP` keyword for `OPTIMIZE` is not allowed by default (it can be unlocked with the `allow_experimental_replacing_merge_with_cleanup` setting). [#58267](https://github.com/ClickHouse/ClickHouse/pull/58267) ([Alexander Tokmakov](https://github.com/tavplubix)). This fixes [#57930](https://github.com/ClickHouse/ClickHouse/issues/57930). This closes [#54988](https://github.com/ClickHouse/ClickHouse/issues/54988). This closes [#54570](https://github.com/ClickHouse/ClickHouse/issues/54570). This closes [#50346](https://github.com/ClickHouse/ClickHouse/issues/50346). This closes [#47579](https://github.com/ClickHouse/ClickHouse/issues/47579). The feature has to be removed because it is not good. We have to remove it as quickly as possible, because there is no other option. [#57932](https://github.com/ClickHouse/ClickHouse/pull/57932) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
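A minimal sketch of the gated path (the table and its schema here are illustrative, not from this commit): `CLEANUP` applies only to `ReplacingMergeTree` tables that explicitly enable the experimental setting.

```sql
-- Illustrative table; CLEANUP acts on the is_deleted column of ReplacingMergeTree.
CREATE TABLE tbl (key UInt64, value String, version UInt64, is_deleted UInt8)
ENGINE = ReplacingMergeTree(version, is_deleted)
ORDER BY key
SETTINGS allow_experimental_replacing_merge_with_cleanup = 1;

OPTIMIZE TABLE tbl FINAL CLEANUP; -- physically drops rows marked with is_deleted = 1
```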

#### New Feature

* Implement Refreshable Materialized Views, requested in [#33919](https://github.com/ClickHouse/ClickHouse/issues/57995). [#56946](https://github.com/ClickHouse/ClickHouse/pull/56946) ([Michael Kolupaev](https://github.com/al13n321), [Michael Guzov](https://github.com/koloshmet)).
* Implement Refreshable Materialized Views, requested in [#33919](https://github.com/ClickHouse/ClickHouse/issues/33919). [#56946](https://github.com/ClickHouse/ClickHouse/pull/56946) ([Michael Kolupaev](https://github.com/al13n321), [Michael Guzov](https://github.com/koloshmet)).
* Introduce `PASTE JOIN`, which allows users to join tables without `ON` clause simply by row numbers. Example: `SELECT * FROM (SELECT number AS a FROM numbers(2)) AS t1 PASTE JOIN (SELECT number AS a FROM numbers(2) ORDER BY a DESC) AS t2`. [#57995](https://github.com/ClickHouse/ClickHouse/pull/57995) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
* The `ORDER BY` clause now supports specifying `ALL`, meaning that ClickHouse sorts by all columns in the `SELECT` clause. Example: `SELECT col1, col2 FROM tab WHERE [...] ORDER BY ALL`. [#57875](https://github.com/ClickHouse/ClickHouse/pull/57875) ([zhongyuankai](https://github.com/zhongyuankai)).
* Added a new mutation command `ALTER TABLE <table> APPLY DELETED MASK`, which forces the mask written by lightweight delete to be applied and removes the rows marked as deleted from disk. [#57433](https://github.com/ClickHouse/ClickHouse/pull/57433) ([Anton Popov](https://github.com/CurtizJ)).
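A short usage sketch (the table name `tbl` is illustrative): the lightweight delete only masks rows, and the new command forces the mask to be applied.

```sql
DELETE FROM tbl WHERE value = '';   -- lightweight delete: rows are masked, not removed
ALTER TABLE tbl APPLY DELETED MASK; -- mutation that rewrites parts without the masked rows
```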

@@ -375,6 +375,7 @@

* Do not interpret the `send_timeout` set on the client side as the `receive_timeout` on the server side, and vice versa. [#56035](https://github.com/ClickHouse/ClickHouse/pull/56035) ([Azat Khuzhin](https://github.com/azat)).
* Comparison of time intervals with different units will throw an exception. This closes [#55942](https://github.com/ClickHouse/ClickHouse/issues/55942). You might have occasionally relied on the previous behavior, when the underlying numeric values were compared regardless of the units. [#56090](https://github.com/ClickHouse/ClickHouse/pull/56090) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Rewrote the experimental `S3Queue` table engine completely: changed the way we keep information in ZooKeeper, which allows making fewer ZooKeeper requests; added caching of the ZooKeeper state in cases when we know it will not change; made polling from S3 less aggressive; and changed the way the TTL and the maximum set of tracked files are maintained, now as a background process. Added the `system.s3queue` and `system.s3queue_log` tables. Closes [#54998](https://github.com/ClickHouse/ClickHouse/issues/54998). [#54422](https://github.com/ClickHouse/ClickHouse/pull/54422) ([Kseniia Sumarokova](https://github.com/kssenii)).
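The two new introspection tables can be queried directly; a quick sketch (the column sets are whatever the engine exposes):

```sql
SELECT * FROM system.s3queue LIMIT 5;     -- in-memory state of currently tracked files
SELECT * FROM system.s3queue_log LIMIT 5; -- persistent log of processed files
```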

* Arbitrary paths on HTTP endpoint are no longer interpreted as a request to the `/query` endpoint. [#55521](https://github.com/ClickHouse/ClickHouse/pull/55521) ([Konstantin Bogdanov](https://github.com/thevar1able)).

#### New Feature

* Add function `arrayFold(accumulator, x1, ..., xn -> expression, initial, array1, ..., arrayn)` which applies a lambda function to multiple arrays of the same cardinality and collects the result in an accumulator. [#49794](https://github.com/ClickHouse/ClickHouse/pull/49794) ([Lirikl](https://github.com/Lirikl)).
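A minimal sketch of a left fold (a running sum over one array). The lambda takes the accumulator first; the placement of the initial value relative to the arrays has differed between the entry above and later releases, so verify against the documentation for your version:

```sql
-- acc is the accumulator, x the current element; toUInt64(0) is the initial value.
SELECT arrayFold((acc, x) -> acc + x, [1, 2, 3, 4], toUInt64(0)) AS total; -- 10
```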

@@ -34,7 +34,7 @@ RUN arch=${TARGETARCH:-amd64} \

# lts / testing / prestable / etc
ARG REPO_CHANNEL="stable"
ARG REPOSITORY="https://packages.clickhouse.com/tgz/${REPO_CHANNEL}"
ARG VERSION="23.12.1.1368"
ARG VERSION="23.12.2.59"
ARG PACKAGES="clickhouse-keeper"
ARG DIRECT_DOWNLOAD_URLS=""

@@ -32,7 +32,7 @@ RUN arch=${TARGETARCH:-amd64} \

# lts / testing / prestable / etc
ARG REPO_CHANNEL="stable"
ARG REPOSITORY="https://packages.clickhouse.com/tgz/${REPO_CHANNEL}"
ARG VERSION="23.12.1.1368"
ARG VERSION="23.12.2.59"
ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static"
ARG DIRECT_DOWNLOAD_URLS=""

@@ -30,7 +30,7 @@ RUN sed -i "s|http://archive.ubuntu.com|${apt_archive}|g" /etc/apt/sources.list

ARG REPO_CHANNEL="stable"
ARG REPOSITORY="deb [signed-by=/usr/share/keyrings/clickhouse-keyring.gpg] https://packages.clickhouse.com/deb ${REPO_CHANNEL} main"
ARG VERSION="23.12.1.1368"
ARG VERSION="23.12.2.59"
ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static"

# set non-empty deb_location_url url to create a docker image

docs/changelogs/v23.10.6.60-stable.md (new file, 51 lines)
@@ -0,0 +1,51 @@

---
sidebar_position: 1
sidebar_label: 2024
---

# 2024 Changelog

### ClickHouse release v23.10.6.60-stable (68907bbe643) FIXME as compared to v23.10.5.20-stable (e84001e5c61)

#### Improvement
* Backported in [#58493](https://github.com/ClickHouse/ClickHouse/issues/58493): Fix transfer query to MySQL compatible query. Fixes [#57253](https://github.com/ClickHouse/ClickHouse/issues/57253). Fixes [#52654](https://github.com/ClickHouse/ClickHouse/issues/52654). Fixes [#56729](https://github.com/ClickHouse/ClickHouse/issues/56729). [#56456](https://github.com/ClickHouse/ClickHouse/pull/56456) ([flynn](https://github.com/ucasfl)).
* Backported in [#57659](https://github.com/ClickHouse/ClickHouse/issues/57659): Handle the SIGABRT case when getting a PostgreSQL table structure with an empty array. [#57618](https://github.com/ClickHouse/ClickHouse/pull/57618) ([Mike Kot (Михаил Кот)](https://github.com/myrrc)).

#### Build/Testing/Packaging Improvement
* Backported in [#57586](https://github.com/ClickHouse/ClickHouse/issues/57586): Fix issue caught in https://github.com/docker-library/official-images/pull/15846. [#57571](https://github.com/ClickHouse/ClickHouse/pull/57571) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).

#### Bug Fix (user-visible misbehavior in an official stable release)

* Flatten only true Nested type if flatten_nested=1, not all Array(Tuple) [#56132](https://github.com/ClickHouse/ClickHouse/pull/56132) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix ALTER COLUMN with ALIAS [#56493](https://github.com/ClickHouse/ClickHouse/pull/56493) ([Nikolay Degterinsky](https://github.com/evillique)).
* Prevent incompatible ALTER of projection columns [#56948](https://github.com/ClickHouse/ClickHouse/pull/56948) ([Amos Bird](https://github.com/amosbird)).
* Fix segfault after ALTER UPDATE with Nullable MATERIALIZED column [#57147](https://github.com/ClickHouse/ClickHouse/pull/57147) ([Nikolay Degterinsky](https://github.com/evillique)).
* Fix incorrect JOIN plan optimization with partially materialized normal projection [#57196](https://github.com/ClickHouse/ClickHouse/pull/57196) ([Amos Bird](https://github.com/amosbird)).
* Fix `ReadonlyReplica` metric for all cases [#57267](https://github.com/ClickHouse/ClickHouse/pull/57267) ([Antonio Andelic](https://github.com/antonio2368)).
* Background merges correctly use temporary data storage in the cache [#57275](https://github.com/ClickHouse/ClickHouse/pull/57275) ([vdimir](https://github.com/vdimir)).
* MergeTree mutations reuse source part index granularity [#57352](https://github.com/ClickHouse/ClickHouse/pull/57352) ([Maksim Kita](https://github.com/kitaisreal)).
* Fix function jsonMergePatch for partially const columns [#57379](https://github.com/ClickHouse/ClickHouse/pull/57379) ([Nikolay Degterinsky](https://github.com/evillique)).
* Fix working with read buffers in StreamingFormatExecutor [#57438](https://github.com/ClickHouse/ClickHouse/pull/57438) ([Kruglov Pavel](https://github.com/Avogar)).
* bugfix: correctly parse SYSTEM STOP LISTEN TCP SECURE [#57483](https://github.com/ClickHouse/ClickHouse/pull/57483) ([joelynch](https://github.com/joelynch)).
* Ignore ON CLUSTER clause in grant/revoke queries for management of replicated access entities. [#57538](https://github.com/ClickHouse/ClickHouse/pull/57538) ([MikhailBurdukov](https://github.com/MikhailBurdukov)).
* Disable system.kafka_consumers by default (due to possible live memory leak) [#57822](https://github.com/ClickHouse/ClickHouse/pull/57822) ([Azat Khuzhin](https://github.com/azat)).
* Fix invalid memory access in BLAKE3 (Rust) [#57876](https://github.com/ClickHouse/ClickHouse/pull/57876) ([Raúl Marín](https://github.com/Algunenano)).
* Normalize function names in CREATE INDEX [#57906](https://github.com/ClickHouse/ClickHouse/pull/57906) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Fix invalid preprocessing on Keeper [#58069](https://github.com/ClickHouse/ClickHouse/pull/58069) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix Integer overflow in Poco::UTF32Encoding [#58073](https://github.com/ClickHouse/ClickHouse/pull/58073) ([Andrey Fedotov](https://github.com/anfedotoff)).
* Remove parallel parsing for JSONCompactEachRow [#58181](https://github.com/ClickHouse/ClickHouse/pull/58181) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix parallel parsing for JSONCompactEachRow [#58250](https://github.com/ClickHouse/ClickHouse/pull/58250) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix lost blobs after dropping a replica with broken detached parts [#58333](https://github.com/ClickHouse/ClickHouse/pull/58333) ([Alexander Tokmakov](https://github.com/tavplubix)).
* MergeTreePrefetchedReadPool disable for LIMIT only queries [#58505](https://github.com/ClickHouse/ClickHouse/pull/58505) ([Maksim Kita](https://github.com/kitaisreal)).

#### NO CL CATEGORY

* Backported in [#57916](https://github.com/ClickHouse/ClickHouse/issues/57916):. [#57909](https://github.com/ClickHouse/ClickHouse/pull/57909) ([Alexey Milovidov](https://github.com/alexey-milovidov)).

#### NOT FOR CHANGELOG / INSIGNIFICANT

* Pin alpine version of integration tests helper container [#57669](https://github.com/ClickHouse/ClickHouse/pull/57669) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Remove heavy rust stable toolchain [#57905](https://github.com/ClickHouse/ClickHouse/pull/57905) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Fix docker image for integration tests (fixes CI) [#57952](https://github.com/ClickHouse/ClickHouse/pull/57952) ([Azat Khuzhin](https://github.com/azat)).
* Fix test_user_valid_until [#58409](https://github.com/ClickHouse/ClickHouse/pull/58409) ([Nikolay Degterinsky](https://github.com/evillique)).

docs/changelogs/v23.11.4.24-stable.md (new file, 26 lines)
@@ -0,0 +1,26 @@

---
sidebar_position: 1
sidebar_label: 2024
---

# 2024 Changelog

### ClickHouse release v23.11.4.24-stable (e79d840d7fe) FIXME as compared to v23.11.3.23-stable (a14ab450b0e)

#### Bug Fix (user-visible misbehavior in an official stable release)

* Flatten only true Nested type if flatten_nested=1, not all Array(Tuple) [#56132](https://github.com/ClickHouse/ClickHouse/pull/56132) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix working with read buffers in StreamingFormatExecutor [#57438](https://github.com/ClickHouse/ClickHouse/pull/57438) ([Kruglov Pavel](https://github.com/Avogar)).
* Disable system.kafka_consumers by default (due to possible live memory leak) [#57822](https://github.com/ClickHouse/ClickHouse/pull/57822) ([Azat Khuzhin](https://github.com/azat)).
* Fix invalid preprocessing on Keeper [#58069](https://github.com/ClickHouse/ClickHouse/pull/58069) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix Integer overflow in Poco::UTF32Encoding [#58073](https://github.com/ClickHouse/ClickHouse/pull/58073) ([Andrey Fedotov](https://github.com/anfedotoff)).
* Remove parallel parsing for JSONCompactEachRow [#58181](https://github.com/ClickHouse/ClickHouse/pull/58181) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix parallel parsing for JSONCompactEachRow [#58250](https://github.com/ClickHouse/ClickHouse/pull/58250) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix lost blobs after dropping a replica with broken detached parts [#58333](https://github.com/ClickHouse/ClickHouse/pull/58333) ([Alexander Tokmakov](https://github.com/tavplubix)).
* MergeTreePrefetchedReadPool disable for LIMIT only queries [#58505](https://github.com/ClickHouse/ClickHouse/pull/58505) ([Maksim Kita](https://github.com/kitaisreal)).

#### NOT FOR CHANGELOG / INSIGNIFICANT

* Handle another case for preprocessing in Keeper [#58308](https://github.com/ClickHouse/ClickHouse/pull/58308) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix test_user_valid_until [#58409](https://github.com/ClickHouse/ClickHouse/pull/58409) ([Nikolay Degterinsky](https://github.com/evillique)).

docs/changelogs/v23.12.2.59-stable.md (new file, 32 lines)
@@ -0,0 +1,32 @@

---
sidebar_position: 1
sidebar_label: 2024
---

# 2024 Changelog

### ClickHouse release v23.12.2.59-stable (17ab210e761) FIXME as compared to v23.12.1.1368-stable (a2faa65b080)

#### Backward Incompatible Change
* Backported in [#58389](https://github.com/ClickHouse/ClickHouse/issues/58389): The MergeTree setting `clean_deleted_rows` is deprecated, it has no effect anymore. The `CLEANUP` keyword for `OPTIMIZE` is not allowed by default (unless `allow_experimental_replacing_merge_with_cleanup` is enabled). [#58316](https://github.com/ClickHouse/ClickHouse/pull/58316) ([Alexander Tokmakov](https://github.com/tavplubix)).

#### Bug Fix (user-visible misbehavior in an official stable release)

* Flatten only true Nested type if flatten_nested=1, not all Array(Tuple) [#56132](https://github.com/ClickHouse/ClickHouse/pull/56132) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix working with read buffers in StreamingFormatExecutor [#57438](https://github.com/ClickHouse/ClickHouse/pull/57438) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix lost blobs after dropping a replica with broken detached parts [#58333](https://github.com/ClickHouse/ClickHouse/pull/58333) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Fix segfault when graphite table does not have agg function [#58453](https://github.com/ClickHouse/ClickHouse/pull/58453) ([Duc Canh Le](https://github.com/canhld94)).
* MergeTreePrefetchedReadPool disable for LIMIT only queries [#58505](https://github.com/ClickHouse/ClickHouse/pull/58505) ([Maksim Kita](https://github.com/kitaisreal)).

#### NO CL ENTRY

* NO CL ENTRY: 'Revert "Refreshable materialized views (takeover)"'. [#58296](https://github.com/ClickHouse/ClickHouse/pull/58296) ([Alexander Tokmakov](https://github.com/tavplubix)).

#### NOT FOR CHANGELOG / INSIGNIFICANT

* Fix an error in the release script - it didn't allow making 23.12. [#58288](https://github.com/ClickHouse/ClickHouse/pull/58288) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Update version_date.tsv and changelogs after v23.12.1.1368-stable [#58290](https://github.com/ClickHouse/ClickHouse/pull/58290) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Fix test_storage_s3_queue/test.py::test_drop_table [#58293](https://github.com/ClickHouse/ClickHouse/pull/58293) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Handle another case for preprocessing in Keeper [#58308](https://github.com/ClickHouse/ClickHouse/pull/58308) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix test_user_valid_until [#58409](https://github.com/ClickHouse/ClickHouse/pull/58409) ([Nikolay Degterinsky](https://github.com/evillique)).

docs/changelogs/v23.3.19.32-lts.md (new file, 36 lines)
@@ -0,0 +1,36 @@

---
sidebar_position: 1
sidebar_label: 2024
---

# 2024 Changelog

### ClickHouse release v23.3.19.32-lts (c4d4ca8ec02) FIXME as compared to v23.3.18.15-lts (7228475d77a)

#### Backward Incompatible Change
* Backported in [#57840](https://github.com/ClickHouse/ClickHouse/issues/57840): Remove function `arrayFold` because it has a bug. This closes [#57816](https://github.com/ClickHouse/ClickHouse/issues/57816). This closes [#57458](https://github.com/ClickHouse/ClickHouse/issues/57458). [#57836](https://github.com/ClickHouse/ClickHouse/pull/57836) ([Alexey Milovidov](https://github.com/alexey-milovidov)).

#### Improvement
* Backported in [#58489](https://github.com/ClickHouse/ClickHouse/issues/58489): Fix transfer query to MySQL compatible query. Fixes [#57253](https://github.com/ClickHouse/ClickHouse/issues/57253). Fixes [#52654](https://github.com/ClickHouse/ClickHouse/issues/52654). Fixes [#56729](https://github.com/ClickHouse/ClickHouse/issues/56729). [#56456](https://github.com/ClickHouse/ClickHouse/pull/56456) ([flynn](https://github.com/ucasfl)).
* Backported in [#57653](https://github.com/ClickHouse/ClickHouse/issues/57653): Handle the SIGABRT case when getting a PostgreSQL table structure with an empty array. [#57618](https://github.com/ClickHouse/ClickHouse/pull/57618) ([Mike Kot (Михаил Кот)](https://github.com/myrrc)).

#### Build/Testing/Packaging Improvement
* Backported in [#57580](https://github.com/ClickHouse/ClickHouse/issues/57580): Fix issue caught in https://github.com/docker-library/official-images/pull/15846. [#57571](https://github.com/ClickHouse/ClickHouse/pull/57571) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).

#### Bug Fix (user-visible misbehavior in an official stable release)

* Prevent incompatible ALTER of projection columns [#56948](https://github.com/ClickHouse/ClickHouse/pull/56948) ([Amos Bird](https://github.com/amosbird)).
* Fix segfault after ALTER UPDATE with Nullable MATERIALIZED column [#57147](https://github.com/ClickHouse/ClickHouse/pull/57147) ([Nikolay Degterinsky](https://github.com/evillique)).
* Fix incorrect JOIN plan optimization with partially materialized normal projection [#57196](https://github.com/ClickHouse/ClickHouse/pull/57196) ([Amos Bird](https://github.com/amosbird)).
* MergeTree mutations reuse source part index granularity [#57352](https://github.com/ClickHouse/ClickHouse/pull/57352) ([Maksim Kita](https://github.com/kitaisreal)).
* Fix invalid memory access in BLAKE3 (Rust) [#57876](https://github.com/ClickHouse/ClickHouse/pull/57876) ([Raúl Marín](https://github.com/Algunenano)).
* Normalize function names in CREATE INDEX [#57906](https://github.com/ClickHouse/ClickHouse/pull/57906) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Fix invalid preprocessing on Keeper [#58069](https://github.com/ClickHouse/ClickHouse/pull/58069) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix Integer overflow in Poco::UTF32Encoding [#58073](https://github.com/ClickHouse/ClickHouse/pull/58073) ([Andrey Fedotov](https://github.com/anfedotoff)).
* Remove parallel parsing for JSONCompactEachRow [#58181](https://github.com/ClickHouse/ClickHouse/pull/58181) ([Alexey Milovidov](https://github.com/alexey-milovidov)).

#### NOT FOR CHANGELOG / INSIGNIFICANT

* Pin alpine version of integration tests helper container [#57669](https://github.com/ClickHouse/ClickHouse/pull/57669) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Fix docker image for integration tests (fixes CI) [#57952](https://github.com/ClickHouse/ClickHouse/pull/57952) ([Azat Khuzhin](https://github.com/azat)).

docs/changelogs/v23.8.9.54-lts.md (new file, 47 lines)
@@ -0,0 +1,47 @@

---
sidebar_position: 1
sidebar_label: 2024
---

# 2024 Changelog

### ClickHouse release v23.8.9.54-lts (192a1d231fa) FIXME as compared to v23.8.8.20-lts (5e012a03bf2)

#### Improvement
* Backported in [#57668](https://github.com/ClickHouse/ClickHouse/issues/57668): Output valid JSON/XML on exception during HTTP query execution. Add setting `http_write_exception_in_output_format` to enable/disable this behaviour (enabled by default). [#52853](https://github.com/ClickHouse/ClickHouse/pull/52853) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#58491](https://github.com/ClickHouse/ClickHouse/issues/58491): Fix transfer query to MySQL compatible query. Fixes [#57253](https://github.com/ClickHouse/ClickHouse/issues/57253). Fixes [#52654](https://github.com/ClickHouse/ClickHouse/issues/52654). Fixes [#56729](https://github.com/ClickHouse/ClickHouse/issues/56729). [#56456](https://github.com/ClickHouse/ClickHouse/pull/56456) ([flynn](https://github.com/ucasfl)).
* Backported in [#57238](https://github.com/ClickHouse/ClickHouse/issues/57238): Fetching a part now waits until that part is fully committed on the remote replica. It is better not to send a part in the PreActive state; in the case of zero-copy replication this is a mandatory restriction. [#56808](https://github.com/ClickHouse/ClickHouse/pull/56808) ([Sema Checherinda](https://github.com/CheSema)).
* Backported in [#57655](https://github.com/ClickHouse/ClickHouse/issues/57655): Handle the SIGABRT case when getting a PostgreSQL table structure with an empty array. [#57618](https://github.com/ClickHouse/ClickHouse/pull/57618) ([Mike Kot (Михаил Кот)](https://github.com/myrrc)).

#### Build/Testing/Packaging Improvement
* Backported in [#57582](https://github.com/ClickHouse/ClickHouse/issues/57582): Fix issue caught in https://github.com/docker-library/official-images/pull/15846. [#57571](https://github.com/ClickHouse/ClickHouse/pull/57571) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).

#### Bug Fix (user-visible misbehavior in an official stable release)

* Flatten only true Nested type if flatten_nested=1, not all Array(Tuple) [#56132](https://github.com/ClickHouse/ClickHouse/pull/56132) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix ALTER COLUMN with ALIAS [#56493](https://github.com/ClickHouse/ClickHouse/pull/56493) ([Nikolay Degterinsky](https://github.com/evillique)).
* Prevent incompatible ALTER of projection columns [#56948](https://github.com/ClickHouse/ClickHouse/pull/56948) ([Amos Bird](https://github.com/amosbird)).
* Fix segfault after ALTER UPDATE with Nullable MATERIALIZED column [#57147](https://github.com/ClickHouse/ClickHouse/pull/57147) ([Nikolay Degterinsky](https://github.com/evillique)).
* Fix incorrect JOIN plan optimization with partially materialized normal projection [#57196](https://github.com/ClickHouse/ClickHouse/pull/57196) ([Amos Bird](https://github.com/amosbird)).
* Fix `ReadonlyReplica` metric for all cases [#57267](https://github.com/ClickHouse/ClickHouse/pull/57267) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix working with read buffers in StreamingFormatExecutor [#57438](https://github.com/ClickHouse/ClickHouse/pull/57438) ([Kruglov Pavel](https://github.com/Avogar)).
* bugfix: correctly parse SYSTEM STOP LISTEN TCP SECURE [#57483](https://github.com/ClickHouse/ClickHouse/pull/57483) ([joelynch](https://github.com/joelynch)).
* Ignore ON CLUSTER clause in grant/revoke queries for management of replicated access entities. [#57538](https://github.com/ClickHouse/ClickHouse/pull/57538) ([MikhailBurdukov](https://github.com/MikhailBurdukov)).
* Disable system.kafka_consumers by default (due to possible live memory leak) [#57822](https://github.com/ClickHouse/ClickHouse/pull/57822) ([Azat Khuzhin](https://github.com/azat)).
* Fix invalid memory access in BLAKE3 (Rust) [#57876](https://github.com/ClickHouse/ClickHouse/pull/57876) ([Raúl Marín](https://github.com/Algunenano)).
* Normalize function names in CREATE INDEX [#57906](https://github.com/ClickHouse/ClickHouse/pull/57906) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Fix invalid preprocessing on Keeper [#58069](https://github.com/ClickHouse/ClickHouse/pull/58069) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix Integer overflow in Poco::UTF32Encoding [#58073](https://github.com/ClickHouse/ClickHouse/pull/58073) ([Andrey Fedotov](https://github.com/anfedotoff)).
* Remove parallel parsing for JSONCompactEachRow [#58181](https://github.com/ClickHouse/ClickHouse/pull/58181) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix parallel parsing for JSONCompactEachRow [#58250](https://github.com/ClickHouse/ClickHouse/pull/58250) ([Kruglov Pavel](https://github.com/Avogar)).

#### NO CL ENTRY

* NO CL ENTRY: 'Update PeekableWriteBuffer.cpp'. [#57701](https://github.com/ClickHouse/ClickHouse/pull/57701) ([Kruglov Pavel](https://github.com/Avogar)).

#### NOT FOR CHANGELOG / INSIGNIFICANT

* Pin alpine version of integration tests helper container [#57669](https://github.com/ClickHouse/ClickHouse/pull/57669) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Remove heavy rust stable toolchain [#57905](https://github.com/ClickHouse/ClickHouse/pull/57905) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Fix docker image for integration tests (fixes CI) [#57952](https://github.com/ClickHouse/ClickHouse/pull/57952) ([Azat Khuzhin](https://github.com/azat)).

@@ -1262,6 +1262,7 @@ SELECT * FROM json_each_row_nested

- [input_format_import_nested_json](/docs/en/operations/settings/settings-formats.md/#input_format_import_nested_json) - map nested JSON data to nested tables (it works for JSONEachRow format). Default value - `false`.
- [input_format_json_read_bools_as_numbers](/docs/en/operations/settings/settings-formats.md/#input_format_json_read_bools_as_numbers) - allow to parse bools as numbers in JSON input formats. Default value - `true`.
- [input_format_json_read_bools_as_strings](/docs/en/operations/settings/settings-formats.md/#input_format_json_read_bools_as_strings) - allow to parse bools as strings in JSON input formats. Default value - `true`.
- [input_format_json_read_numbers_as_strings](/docs/en/operations/settings/settings-formats.md/#input_format_json_read_numbers_as_strings) - allow to parse numbers as strings in JSON input formats. Default value - `true`.
- [input_format_json_read_arrays_as_strings](/docs/en/operations/settings/settings-formats.md/#input_format_json_read_arrays_as_strings) - allow to parse JSON arrays as strings in JSON input formats. Default value - `true`.
- [input_format_json_read_objects_as_strings](/docs/en/operations/settings/settings-formats.md/#input_format_json_read_objects_as_strings) - allow to parse JSON objects as strings in JSON input formats. Default value - `true`.

@@ -614,6 +614,26 @@ DESC format(JSONEachRow, $$

└───────┴─────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```

##### input_format_json_read_bools_as_strings

Enabling this setting allows reading Bool values as strings.

This setting is enabled by default.

**Example:**

```sql
SET input_format_json_read_bools_as_strings = 1;
DESC format(JSONEachRow, $$
{"value" : true}
{"value" : "Hello, World"}
$$)
```
```response
┌─name──┬─type─────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ value │ Nullable(String) │              │                    │         │                  │                │
└───────┴──────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```

##### input_format_json_read_arrays_as_strings

Enabling this setting allows reading JSON array values as strings.

@@ -377,6 +377,12 @@ Allow parsing bools as numbers in JSON input formats.

Enabled by default.

## input_format_json_read_bools_as_strings {#input_format_json_read_bools_as_strings}

Allow parsing bools as strings in JSON input formats.

Enabled by default.

## input_format_json_read_numbers_as_strings {#input_format_json_read_numbers_as_strings}

Allow parsing numbers as strings in JSON input formats.
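A quick sketch of what this setting enables, in the `DESC format` style used elsewhere in these docs (mixed values force the inferred type to `Nullable(String)`):

```sql
SET input_format_json_read_numbers_as_strings = 1;
DESC format(JSONEachRow, $$
{"value" : 123}
{"value" : "Hello"}
$$)
```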

@@ -27,7 +27,7 @@ $ clickhouse-format --query "select number from numbers(10) where number%2 order

Result:

```sql
```bash
SELECT number
FROM numbers(10)
WHERE number % 2

@@ -49,22 +49,20 @@ SELECT sum(number) FROM numbers(5)

3. Multiqueries:

```bash
$ clickhouse-format -n <<< "SELECT * FROM (SELECT 1 AS x UNION ALL SELECT 1 UNION DISTINCT SELECT 3);"
$ clickhouse-format -n <<< "SELECT min(number) FROM numbers(5); SELECT max(number) FROM numbers(5);"
```

Result:

```sql
SELECT *
FROM
(
    SELECT 1 AS x
    UNION ALL
    SELECT 1
    UNION DISTINCT
    SELECT 3
)
```
SELECT min(number)
FROM numbers(5)
;

SELECT max(number)
FROM numbers(5)
;

```

4. Obfuscating:

@@ -75,7 +73,7 @@ $ clickhouse-format --seed Hello --obfuscate <<< "SELECT cost_first_screen BETWE

Result:

```sql
```
SELECT treasury_mammoth_hazelnut BETWEEN nutmeg AND span, CASE WHEN chive >= 116 THEN switching ELSE ANYTHING END;
```

@@ -87,7 +85,7 @@ $ clickhouse-format --seed World --obfuscate <<< "SELECT cost_first_screen BETWE

Result:

```sql
```
SELECT horse_tape_summer BETWEEN folklore AND moccasins, CASE WHEN intestine >= 116 THEN nonconformist ELSE FORESTRY END;
```

@@ -99,7 +97,7 @@ $ clickhouse-format --backslash <<< "SELECT * FROM (SELECT 1 AS x UNION ALL SELE

Result:

```sql
```
SELECT * \
FROM \
( \

@@ -143,9 +143,17 @@ public:

        return alias;
    }

    const String & getOriginalAlias() const
    {
        return original_alias.empty() ? alias : original_alias;
    }

    /// Set node alias
    void setAlias(String alias_value)
    {
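        /// Remember the alias from the original query the first time it is overwritten; getOriginalAlias() returns it.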
        if (original_alias.empty())
            original_alias = std::move(alias);

        alias = std::move(alias_value);
    }

@@ -276,6 +284,9 @@ protected:

private:
    String alias;
    /// An alias from the query. The alias can be replaced by query passes,
    /// but we need to keep the original one to support additional_table_filters.
    String original_alias;
    ASTPtr original_ast;
};

@@ -52,6 +52,7 @@

#include <Processors/Executors/PullingAsyncPipelineExecutor.h>

#include <Analyzer/createUniqueTableAliases.h>
#include <Analyzer/Utils.h>
#include <Analyzer/SetUtils.h>
#include <Analyzer/AggregationUtils.h>

@@ -1204,7 +1205,7 @@ private:

    static void mergeWindowWithParentWindow(const QueryTreeNodePtr & window_node, const QueryTreeNodePtr & parent_window_node, IdentifierResolveScope & scope);

    static void replaceNodesWithPositionalArguments(QueryTreeNodePtr & node_list, const QueryTreeNodes & projection_nodes, IdentifierResolveScope & scope);
    void replaceNodesWithPositionalArguments(QueryTreeNodePtr & node_list, const QueryTreeNodes & projection_nodes, IdentifierResolveScope & scope);

    static void convertLimitOffsetExpression(QueryTreeNodePtr & expression_node, const String & expression_description, IdentifierResolveScope & scope);

@@ -2174,7 +2175,12 @@ void QueryAnalyzer::replaceNodesWithPositionalArguments(QueryTreeNodePtr & node_

            scope.scope_node->formatASTForErrorMessage());

        --positional_argument_number;
        *node_to_replace = projection_nodes[positional_argument_number];
        *node_to_replace = projection_nodes[positional_argument_number]->clone();
        if (auto it = resolved_expressions.find(projection_nodes[positional_argument_number]);
            it != resolved_expressions.end())
        {
            resolved_expressions[*node_to_replace] = it->second;
        }
    }
}
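For context, this is the path that resolves positional references in `GROUP BY`/`ORDER BY`; a small SQL illustration (relies on `enable_positional_arguments`, which is on by default in recent releases):

```sql
SELECT number % 3 AS bucket, count() AS c
FROM numbers(100)
GROUP BY 1       -- `1` is replaced with a clone of the first projection node (`bucket`)
ORDER BY 2 DESC; -- `2` refers to `c`
```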

@@ -7413,6 +7419,7 @@ void QueryAnalysisPass::run(QueryTreeNodePtr query_tree_node, ContextPtr context

{
    QueryAnalyzer analyzer;
    analyzer.resolve(query_tree_node, table_expression, context);
    createUniqueTableAliases(query_tree_node, table_expression, context);
}

}

@@ -326,7 +326,7 @@ void addTableExpressionOrJoinIntoTablesInSelectQuery(ASTPtr & tables_in_select_q

    }
}

QueryTreeNodes extractTableExpressions(const QueryTreeNodePtr & join_tree_node)
QueryTreeNodes extractTableExpressions(const QueryTreeNodePtr & join_tree_node, bool add_array_join)
{
    QueryTreeNodes result;

@@ -357,6 +357,8 @@ QueryTreeNodes extractTableExpressions(const QueryTreeNodePtr & join_tree_node)

        {
            auto & array_join_node = node_to_process->as<ArrayJoinNode &>();
            nodes_to_process.push_front(array_join_node.getTableExpression());
            if (add_array_join)
                result.push_back(std::move(node_to_process));
            break;
        }
        case QueryTreeNodeType::JOIN:

@@ -51,7 +51,7 @@ std::optional<bool> tryExtractConstantFromConditionNode(const QueryTreeNodePtr &

void addTableExpressionOrJoinIntoTablesInSelectQuery(ASTPtr & tables_in_select_query_ast, const QueryTreeNodePtr & table_expression, const IQueryTreeNode::ConvertToASTOptions & convert_to_ast_options);

/// Extract table, table function, query, union from join tree
QueryTreeNodes extractTableExpressions(const QueryTreeNodePtr & join_tree_node);
QueryTreeNodes extractTableExpressions(const QueryTreeNodePtr & join_tree_node, bool add_array_join = false);

/// Extract left table expression from join tree
QueryTreeNodePtr extractLeftTableExpression(const QueryTreeNodePtr & join_tree_node);

src/Analyzer/createUniqueTableAliases.cpp (new file, 141 lines)
@@ -0,0 +1,141 @@

#include <memory>
#include <unordered_map>
#include <Analyzer/createUniqueTableAliases.h>
#include <Analyzer/FunctionNode.h>
#include <Analyzer/InDepthQueryTreeVisitor.h>
#include <Analyzer/IQueryTreeNode.h>
#include <Analyzer/LambdaNode.h>
#include <Analyzer/Utils.h>

namespace DB
{

namespace
{

class CreateUniqueTableAliasesVisitor : public InDepthQueryTreeVisitorWithContext<CreateUniqueTableAliasesVisitor>
{
public:
    using Base = InDepthQueryTreeVisitorWithContext<CreateUniqueTableAliasesVisitor>;

    explicit CreateUniqueTableAliasesVisitor(const ContextPtr & context)
        : Base(context)
    {
        // Insert a fake node on top of the stack.
        scope_nodes_stack.push_back(std::make_shared<LambdaNode>(Names{}, nullptr));
    }

    void enterImpl(QueryTreeNodePtr & node)
    {
        auto node_type = node->getNodeType();

        switch (node_type)
        {
            case QueryTreeNodeType::QUERY:
                [[fallthrough]];
            case QueryTreeNodeType::UNION:
            {
                /// Queries like `(SELECT 1) as t` have invalid syntax. To avoid creating such queries (e.g. in StorageDistributed)
                /// we need to remove aliases for top level queries.
                /// N.B. Subquery depth starts counting from 1, so the following condition checks if it's a top-level query.
                if (getSubqueryDepth() == 1)
                {
                    node->removeAlias();
                    break;
                }
                [[fallthrough]];
            }
            case QueryTreeNodeType::TABLE:
                [[fallthrough]];
            case QueryTreeNodeType::TABLE_FUNCTION:
                [[fallthrough]];
            case QueryTreeNodeType::ARRAY_JOIN:
            {
                auto & alias = table_expression_to_alias[node];
                if (alias.empty())
                {
                    scope_to_nodes_with_aliases[scope_nodes_stack.back()].push_back(node);
                    alias = fmt::format("__table{}", ++next_id);
                    node->setAlias(alias);
                }
                break;
            }
            default:
                break;
        }

        switch (node_type)
        {
            case QueryTreeNodeType::QUERY:
                [[fallthrough]];
            case QueryTreeNodeType::UNION:
                [[fallthrough]];
            case QueryTreeNodeType::LAMBDA:
                scope_nodes_stack.push_back(node);
                break;
            default:
                break;
        }
    }

    void leaveImpl(QueryTreeNodePtr & node)
    {
        if (scope_nodes_stack.back() == node)
        {
            if (auto it = scope_to_nodes_with_aliases.find(scope_nodes_stack.back());
                it != scope_to_nodes_with_aliases.end())
            {
                for (const auto & node_with_alias : it->second)
                {
                    table_expression_to_alias.erase(node_with_alias);
                }
                scope_to_nodes_with_aliases.erase(it);
            }
            scope_nodes_stack.pop_back();
        }

        /// Here we revisit subquery for IN function. Reasons:
        /// * For remote query execution, query tree may be traversed a few times.
        ///   In such a case, it is possible to get AST like
        ///   `IN ((SELECT ... FROM table AS __table4) AS __table1)` which results in
        ///   a `Multiple expressions for the alias` exception.
        /// * Tables in subqueries could have different aliases => different tree hashes,
        ///   which is important to be able to find a set in PreparedSets.
        ///   See 01253_subquery_in_aggregate_function_JustStranger.
        ///
        /// So, we revisit this subquery to make aliases stable.
        /// This should be safe because columns from an IN subquery can't be used in the main query anyway.
        if (node->getNodeType() == QueryTreeNodeType::FUNCTION)
        {
            auto * function_node = node->as<FunctionNode>();
            if (isNameOfInFunction(function_node->getFunctionName()))
            {
                auto arg = function_node->getArguments().getNodes().back();
                /// Avoid aliasing IN `table`
                if (arg->getNodeType() != QueryTreeNodeType::TABLE)
                    CreateUniqueTableAliasesVisitor(getContext()).visit(function_node->getArguments().getNodes().back());
            }
        }
    }

private:
    size_t next_id = 0;

    // Stack of nodes which create scopes: QUERY, UNION and LAMBDA.
    QueryTreeNodes scope_nodes_stack;

    std::unordered_map<QueryTreeNodePtr, QueryTreeNodes> scope_to_nodes_with_aliases;

    // We need to use raw pointer as a key, not a QueryTreeNodePtrWithHash.
    std::unordered_map<QueryTreeNodePtr, String> table_expression_to_alias;
};

}

void createUniqueTableAliases(QueryTreeNodePtr & node, const QueryTreeNodePtr & /*table_expression*/, const ContextPtr & context)
{
    CreateUniqueTableAliasesVisitor(context).visit(node);
}

}

src/Analyzer/createUniqueTableAliases.h (new file, 18 lines)
@@ -0,0 +1,18 @@

#pragma once

#include <memory>
#include <Interpreters/Context_fwd.h>

class IQueryTreeNode;
using QueryTreeNodePtr = std::shared_ptr<IQueryTreeNode>;

namespace DB
{

/*
 * For each table expression in the query tree, generate and add a unique alias.
 * If the table expression had an alias in the initial query tree, override it.
 */
void createUniqueTableAliases(QueryTreeNodePtr & node, const QueryTreeNodePtr & table_expression, const ContextPtr & context);

}
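To illustrate the effect of the pass (hypothetical table `t`; the alias names follow the `__table{N}` pattern generated above):

```sql
-- Each table expression receives a deterministic unique alias such as __table1, __table2.
EXPLAIN QUERY TREE
SELECT a.key
FROM t AS a
INNER JOIN t AS b ON a.key = b.key;
```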

@@ -573,11 +573,12 @@ void RestorerFromBackup::createDatabase(const String & database_name) const

    create_database_query->if_not_exists = (restore_settings.create_table == RestoreTableCreationMode::kCreateIfNotExists);

    LOG_TRACE(log, "Creating database {}: {}", backQuoteIfNeed(database_name), serializeAST(*create_database_query));

    auto query_context = Context::createCopy(context);
    query_context->setSetting("allow_deprecated_database_ordinary", 1);
    try
    {
        /// Execute CREATE DATABASE query.
        InterpreterCreateQuery interpreter{create_database_query, context};
        InterpreterCreateQuery interpreter{create_database_query, query_context};
        interpreter.setInternal(true);
        interpreter.execute();
    }

@@ -589,6 +589,7 @@

    M(707, GCP_ERROR) \
    M(708, ILLEGAL_STATISTIC) \
    M(709, CANNOT_GET_REPLICATED_DATABASE_SNAPSHOT) \
    M(710, FAULT_INJECTED) \
    \
    M(999, KEEPER_EXCEPTION) \
    M(1000, POCO_EXCEPTION) \

@@ -34,6 +34,8 @@ static struct InitFiu

#define APPLY_FOR_FAILPOINTS(ONCE, REGULAR, PAUSEABLE_ONCE, PAUSEABLE) \
    ONCE(replicated_merge_tree_commit_zk_fail_after_op) \
    ONCE(replicated_queue_fail_next_entry) \
    REGULAR(replicated_queue_unfail_entries) \
    ONCE(replicated_merge_tree_insert_quorum_fail_0) \
    REGULAR(replicated_merge_tree_commit_zk_fail_when_recovering_from_hw_fault) \
    REGULAR(use_delayed_remote_source) \

@@ -26,6 +26,8 @@ namespace DB

    M(UInt64, max_active_parts_loading_thread_pool_size, 64, "The number of threads to load active set of data parts (Active ones) at startup.", 0) \
    M(UInt64, max_outdated_parts_loading_thread_pool_size, 32, "The number of threads to load inactive set of data parts (Outdated ones) at startup.", 0) \
    M(UInt64, max_parts_cleaning_thread_pool_size, 128, "The number of threads for concurrent removal of inactive data parts.", 0) \
    M(UInt64, max_mutations_bandwidth_for_server, 0, "The maximum read speed of all mutations on server in bytes per second. Zero means unlimited.", 0) \
    M(UInt64, max_merges_bandwidth_for_server, 0, "The maximum read speed of all merges on server in bytes per second. Zero means unlimited.", 0) \
    M(UInt64, max_replicated_fetches_network_bandwidth_for_server, 0, "The maximum speed of data exchange over the network in bytes per second for replicated fetches. Zero means unlimited.", 0) \
    M(UInt64, max_replicated_sends_network_bandwidth_for_server, 0, "The maximum speed of data exchange over the network in bytes per second for replicated sends. Zero means unlimited.", 0) \
    M(UInt64, max_remote_read_network_bandwidth_for_server, 0, "The maximum speed of data exchange over the network in bytes per second for read. Zero means unlimited.", 0) \

@@ -157,7 +157,7 @@ class IColumn;

    M(Bool, allow_suspicious_fixed_string_types, false, "In CREATE TABLE statement allows creating columns of type FixedString(n) with n > 256. FixedString with length >= 256 is suspicious and most likely indicates misusage", 0) \
    M(Bool, allow_suspicious_indices, false, "Reject primary/secondary indexes and sorting keys with identical expressions", 0) \
    M(Bool, allow_suspicious_ttl_expressions, false, "Reject TTL expressions that don't depend on any of table's columns. It indicates a user error most of the time.", 0) \
    M(Bool, compile_expressions, true, "Compile some scalar functions and operators to native code.", 0) \
    M(Bool, compile_expressions, false, "Compile some scalar functions and operators to native code.", 0) \
    M(UInt64, min_count_to_compile_expression, 3, "The number of identical expressions before they are JIT-compiled", 0) \
    M(Bool, compile_aggregate_expressions, true, "Compile aggregate functions to native code.", 0) \
    M(UInt64, min_count_to_compile_aggregate_expression, 3, "The number of identical aggregate expressions before they are JIT-compiled", 0) \

@@ -709,7 +709,6 @@ class IColumn;

    M(Bool, query_plan_execute_functions_after_sorting, true, "Allow to re-order functions after sorting", 0) \
    M(Bool, query_plan_reuse_storage_ordering_for_window_functions, true, "Allow to use the storage sorting for window functions", 0) \
    M(Bool, query_plan_lift_up_union, true, "Allow to move UNIONs up so that more parts of the query plan can be optimized", 0) \
    M(Bool, query_plan_optimize_primary_key, true, "Analyze primary key using query plan (instead of AST)", 0) \
    M(Bool, query_plan_read_in_order, true, "Use query plan for read-in-order optimization", 0) \
    M(Bool, query_plan_aggregation_in_order, true, "Use query plan for aggregation-in-order optimization", 0) \
    M(Bool, query_plan_remove_redundant_sorting, true, "Remove redundant sorting in query plan. For example, sorting steps related to ORDER BY clauses in subqueries", 0) \

@@ -845,7 +844,7 @@ class IColumn;

    M(Timezone, session_timezone, "", "This setting can be removed in the future due to potential caveats. It is experimental and is not suitable for production usage. The default timezone for current session or query. The server default timezone if empty.", 0) \
    M(Bool, allow_create_index_without_type, false, "Allow CREATE INDEX query without TYPE. Query will be ignored. Made for SQL compatibility tests.", 0) \
    M(Bool, create_index_ignore_unique, false, "Ignore UNIQUE keyword in CREATE UNIQUE INDEX. Made for SQL compatibility tests.", 0) \
    M(Bool, print_pretty_type_names, false, "Print pretty type names in DESCRIBE query and toTypeName() function", 0) \
    M(Bool, print_pretty_type_names, true, "Print pretty type names in DESCRIBE query and toTypeName() function", 0) \
    M(Bool, create_table_empty_primary_key_by_default, false, "Allow to create *MergeTree tables with empty primary key when ORDER BY and PRIMARY KEY not specified", 0) \
    M(Bool, allow_named_collection_override_by_default, true, "Allow named collections' fields override by default.", 0)\
    M(Bool, allow_experimental_shared_merge_tree, false, "Only available in ClickHouse Cloud", 0) \

@@ -918,6 +917,7 @@ class IColumn;

    MAKE_OBSOLETE(M, Bool, optimize_move_functions_out_of_any, false) \
    MAKE_OBSOLETE(M, Bool, allow_experimental_undrop_table_query, true) \
    MAKE_OBSOLETE(M, Bool, allow_experimental_s3queue, true) \
    MAKE_OBSOLETE(M, Bool, query_plan_optimize_primary_key, true) \

    /** The section above is for obsolete settings. Do not add anything there. */

@@ -983,6 +983,7 @@ class IColumn;

    M(SchemaInferenceMode, schema_inference_mode, "default", "Mode of schema inference. 'default' - assume that all files have the same schema and schema can be inferred from any file, 'union' - files can have different schemas and the resulting schema should be a union of schemas of all files", 0) \
    M(Bool, schema_inference_make_columns_nullable, true, "If set to true, all inferred types will be Nullable in schema inference for formats without information about nullability.", 0) \
    M(Bool, input_format_json_read_bools_as_numbers, true, "Allow to parse bools as numbers in JSON input formats", 0) \
    M(Bool, input_format_json_read_bools_as_strings, true, "Allow to parse bools as strings in JSON input formats", 0) \
    M(Bool, input_format_json_try_infer_numbers_from_strings, false, "Try to infer numbers from string fields while schema inference", 0) \
    M(Bool, input_format_json_validate_types_from_metadata, true, "For JSON/JSONCompact/JSONColumnsWithMetadata input formats this controls whether format parser should check if data types from input metadata match data types of the corresponding columns from the table", 0) \
    M(Bool, input_format_json_read_numbers_as_strings, true, "Allow to parse numbers as strings in JSON input formats", 0) \

@@ -81,6 +81,8 @@ namespace SettingsChangesHistory

/// It's used to implement `compatibility` setting (see https://github.com/ClickHouse/ClickHouse/issues/35972)
static std::map<ClickHouseVersion, SettingsChangesHistory::SettingsChanges> settings_changes_history =
{
    {"24.1", {{"print_pretty_type_names", false, true, "Better user experience."},
              {"input_format_json_read_bools_as_strings", false, true, "Allow to read bools as strings in JSON formats by default"}}},
    {"23.12", {{"allow_suspicious_ttl_expressions", true, false, "It is a new setting, and in previous versions the behavior was equivalent to allowing."},
               {"input_format_parquet_allow_missing_columns", false, true, "Allow missing columns in Parquet files by default"},
               {"input_format_orc_allow_missing_columns", false, true, "Allow missing columns in ORC files by default"},

@@ -85,10 +85,7 @@ std::string DataTypeMap::doGetName() const

std::string DataTypeMap::doGetPrettyName(size_t indent) const
{
    WriteBufferFromOwnString s;
    s << "Map(\n"
      << fourSpaceIndent(indent + 1) << key_type->getPrettyName(indent + 1) << ",\n"
      << fourSpaceIndent(indent + 1) << value_type->getPrettyName(indent + 1) << '\n'
      << fourSpaceIndent(indent) << ')';
    s << "Map(" << key_type->getPrettyName(indent) << ", " << value_type->getPrettyName(indent) << ')';
    return s.str();
}

@@ -98,21 +98,38 @@ std::string DataTypeTuple::doGetPrettyName(size_t indent) const

{
    size_t size = elems.size();
    WriteBufferFromOwnString s;
    s << "Tuple(\n";

    for (size_t i = 0; i != size; ++i)
    /// If the Tuple is named, we will output it in multiple lines with indentation.
    if (have_explicit_names)
    {
        if (i != 0)
            s << ",\n";
        s << "Tuple(\n";

        s << fourSpaceIndent(indent + 1);
        if (have_explicit_names)
            s << backQuoteIfNeed(names[i]) << ' ';
        for (size_t i = 0; i != size; ++i)
        {
            if (i != 0)
                s << ",\n";

            s << elems[i]->getPrettyName(indent + 1);
            s << fourSpaceIndent(indent + 1)
                << backQuoteIfNeed(names[i]) << ' '
                << elems[i]->getPrettyName(indent + 1);
        }

        s << ')';
    }
    else
    {
        s << "Tuple(";

        for (size_t i = 0; i != size; ++i)
        {
            if (i != 0)
                s << ", ";
            s << elems[i]->getPrettyName(indent);
        }

        s << ')';
    }

    s << '\n' << fourSpaceIndent(indent) << ')';
    return s.str();
}

@@ -335,6 +335,22 @@ void SerializationString::deserializeTextJSON(IColumn & column, ReadBuffer & ist

    {
        read(column, [&](ColumnString::Chars & data) { readJSONArrayInto(data, istr); });
    }
    else if (settings.json.read_bools_as_strings && !istr.eof() && (*istr.position() == 't' || *istr.position() == 'f'))
    {
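        /// Store the literal text of the JSON bool ("true"/"false") into the string column.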
        String str_value;
        if (*istr.position() == 't')
        {
            assertString("true", istr);
            str_value = "true";
        }
        else if (*istr.position() == 'f')
        {
            assertString("false", istr);
            str_value = "false";
        }

        read(column, [&](ColumnString::Chars & data) { data.insert(str_value.begin(), str_value.end()); });
    }
    else if (settings.json.read_numbers_as_strings && !istr.eof() && *istr.position() != '"')
    {
        String field;

@@ -92,9 +92,16 @@ void validate(const ASTCreateQuery & create_query)

DatabasePtr DatabaseFactory::get(const ASTCreateQuery & create, const String & metadata_path, ContextPtr context)
{
    const auto engine_name = create.storage->engine->name;
    /// check if the database engine is a valid one before proceeding
    if (!database_engines.contains(create.storage->engine->name))
        throw Exception(ErrorCodes::UNKNOWN_DATABASE_ENGINE, "Unknown database engine: {}", create.storage->engine->name);
    if (!database_engines.contains(engine_name))
    {
        auto hints = getHints(engine_name);
        if (!hints.empty())
            throw Exception(ErrorCodes::UNKNOWN_DATABASE_ENGINE, "Unknown database engine {}. Maybe you meant: {}", engine_name, toString(hints));
        else
            throw Exception(ErrorCodes::UNKNOWN_DATABASE_ENGINE, "Unknown database engine: {}", create.storage->engine->name);
    }

    /// if the engine is found (i.e. registered with the factory instance), then validate if the
    /// supplied engine arguments, settings and table overrides are valid for the engine.

@@ -1,5 +1,6 @@

#pragma once

#include <Common/NamePrompter.h>
#include <Interpreters/Context_fwd.h>
#include <Databases/IDatabase.h>
#include <Parsers/ASTCreateQuery.h>

@@ -24,7 +25,7 @@ static inline ValueType safeGetLiteralValue(const ASTPtr &ast, const String &eng

    return ast->as<ASTLiteral>()->value.safeGet<ValueType>();
}

class DatabaseFactory : private boost::noncopyable
class DatabaseFactory : private boost::noncopyable, public IHints<>
{
public:

@@ -52,6 +53,14 @@ public:

    const DatabaseEngines & getDatabaseEngines() const { return database_engines; }

    std::vector<String> getAllRegisteredNames() const override
    {
        std::vector<String> result;
        auto getter = [](const auto & pair) { return pair.first; };
        std::transform(database_engines.begin(), database_engines.end(), std::back_inserter(result), getter);
        return result;
    }

private:
    DatabaseEngines database_engines;

@@ -450,10 +450,11 @@ String getAdditionalFormatInfoByEscapingRule(const FormatSettings & settings, Fo

        break;
    case FormatSettings::EscapingRule::JSON:
        result += fmt::format(
            ", try_infer_numbers_from_strings={}, read_bools_as_numbers={}, read_objects_as_strings={}, read_numbers_as_strings={}, "
            ", try_infer_numbers_from_strings={}, read_bools_as_numbers={}, read_bools_as_strings={}, read_objects_as_strings={}, read_numbers_as_strings={}, "
            "read_arrays_as_strings={}, try_infer_objects_as_tuples={}, infer_incomplete_types_as_strings={}, try_infer_objects={}",
            settings.json.try_infer_numbers_from_strings,
            settings.json.read_bools_as_numbers,
            settings.json.read_bools_as_strings,
            settings.json.read_objects_as_strings,
            settings.json.read_numbers_as_strings,
            settings.json.read_arrays_as_strings,

@@ -111,6 +111,7 @@ FormatSettings getFormatSettings(ContextPtr context, const Settings & settings)

    format_settings.json.quote_denormals = settings.output_format_json_quote_denormals;
    format_settings.json.quote_decimals = settings.output_format_json_quote_decimals;
    format_settings.json.read_bools_as_numbers = settings.input_format_json_read_bools_as_numbers;
    format_settings.json.read_bools_as_strings = settings.input_format_json_read_bools_as_strings;
    format_settings.json.read_numbers_as_strings = settings.input_format_json_read_numbers_as_strings;
    format_settings.json.read_objects_as_strings = settings.input_format_json_read_objects_as_strings;
    format_settings.json.read_arrays_as_strings = settings.input_format_json_read_arrays_as_strings;
@ -204,6 +204,7 @@ struct FormatSettings
|
||||
bool ignore_unknown_keys_in_named_tuple = false;
|
||||
bool serialize_as_strings = false;
|
||||
bool read_bools_as_numbers = true;
|
||||
bool read_bools_as_strings = true;
|
||||
bool read_numbers_as_strings = true;
|
||||
bool read_objects_as_strings = true;
|
||||
bool read_arrays_as_strings = true;
|
||||
|
@ -377,6 +377,22 @@ namespace
|
||||
type_indexes.erase(TypeIndex::UInt8);
|
||||
}
|
||||
|
||||
/// If we have Bool and String types convert all numbers to String.
|
||||
/// It's applied only when setting input_format_json_read_bools_as_strings is enabled.
|
||||
void transformJSONBoolsAndStringsToString(DataTypes & data_types, TypeIndexesSet & type_indexes)
|
||||
{
|
||||
if (!type_indexes.contains(TypeIndex::String) || !type_indexes.contains(TypeIndex::UInt8))
|
||||
return;
|
||||
|
||||
for (auto & type : data_types)
|
||||
{
|
||||
if (isBool(type))
|
||||
type = std::make_shared<DataTypeString>();
|
||||
}
|
||||
|
||||
type_indexes.erase(TypeIndex::UInt8);
|
||||
}
|
||||
|
||||
/// If we have type Nothing/Nullable(Nothing) and some other non Nothing types,
|
||||
/// convert all Nothing/Nullable(Nothing) types to the first non Nothing.
|
||||
/// For example, when we have [Nothing, Array(Int64)] it will convert it to [Array(Int64), Array(Int64)]
|
||||
@ -628,6 +644,10 @@ namespace
|
||||
if (settings.json.read_bools_as_numbers)
|
||||
transformBoolsAndNumbersToNumbers(data_types, type_indexes);
|
||||
|
||||
/// Convert Bool to String if needed.
|
||||
if (settings.json.read_bools_as_strings)
|
||||
transformJSONBoolsAndStringsToString(data_types, type_indexes);
|
||||
|
||||
if (settings.json.try_infer_objects_as_tuples)
|
||||
mergeJSONPaths(data_types, type_indexes, settings, json_info);
|
||||
};
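
For illustration (not part of the diff): a toy model of what transformJSONBoolsAndStringsToString does during schema inference — when a JSON key was seen both as a bool and as a string, the bool candidates collapse to String. Plain enums stand in for the real DataTypePtr machinery:

#include <algorithm>
#include <iostream>
#include <vector>

enum class Type { Bool, String, Int64 };

/// Only when both Bool and String were inferred for the same column
/// do the Bool candidates become String.
void transformBoolsAndStringsToString(std::vector<Type> & types)
{
    bool has_bool = std::find(types.begin(), types.end(), Type::Bool) != types.end();
    bool has_string = std::find(types.begin(), types.end(), Type::String) != types.end();
    if (!has_bool || !has_string)
        return;
    for (auto & t : types)
        if (t == Type::Bool)
            t = Type::String;
}

int main()
{
    std::vector<Type> inferred{Type::Bool, Type::String};  /// e.g. {"x": true} then {"x": "true"}
    transformBoolsAndStringsToString(inferred);
    std::cout << (inferred[0] == Type::String && inferred[1] == Type::String ? "merged to String" : "unchanged") << '\n';
}
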
@ -1382,8 +1382,12 @@ void skipJSONField(ReadBuffer & buf, StringRef name_of_field)
}
else
{
throw Exception(ErrorCodes::INCORRECT_DATA, "Unexpected symbol '{}' for key '{}'",
std::string(*buf.position(), 1), name_of_field.toString());
throw Exception(
ErrorCodes::INCORRECT_DATA,
"Cannot read JSON field here: '{}'. Unexpected symbol '{}'{}",
String(buf.position(), std::min(buf.available(), size_t(10))),
std::string(1, *buf.position()),
name_of_field.empty() ? "" : " for key " + name_of_field.toString());
}
}
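
Besides the clearer message (which now previews up to 10 bytes of the buffer via std::min(buf.available(), size_t(10))), the new code fixes a subtle argument-order bug: std::string(*buf.position(), 1) selects the (count, char) constructor, so the old exception built a string whose length was the offending symbol's byte value. A standalone demonstration (not part of the diff):

#include <iostream>
#include <string>

int main()
{
    char symbol = '?';                 /// byte value 63
    std::string wrong(symbol, 1);      /// (count, char): 63 copies of '\x01' — arguments in the wrong order
    std::string right(1, symbol);      /// "?": one copy of the offending symbol
    std::cout << wrong.size() << ' ' << right << '\n';  /// prints "63 ?"
}
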
@ -1753,7 +1757,7 @@ void readQuotedField(String & s, ReadBuffer & buf)
void readJSONField(String & s, ReadBuffer & buf)
{
s.clear();
auto parse_func = [](ReadBuffer & in) { skipJSONField(in, "json_field"); };
auto parse_func = [](ReadBuffer & in) { skipJSONField(in, ""); };
readParsedValueInto(s, buf, parse_func);
}

@ -1419,7 +1419,7 @@ FutureSetPtr ActionsMatcher::makeSet(const ASTFunction & node, Data & data, bool
return set;
}

FutureSetPtr external_table_set;
FutureSetFromSubqueryPtr external_table_set;

/// A special case is if the name of the table is specified on the right side of the IN statement,
/// and the table has the type Set (a previously prepared set).

@ -664,26 +664,26 @@ void Aggregator::compileAggregateFunctionsIfNeeded()
for (size_t i = 0; i < aggregate_functions.size(); ++i)
{
const auto * function = aggregate_functions[i];
bool function_is_compilable = function->isCompilable();
if (!function_is_compilable)
continue;

size_t offset_of_aggregate_function = offsets_of_aggregate_states[i];
AggregateFunctionWithOffset function_to_compile

if (function->isCompilable())
{
.function = function,
.aggregate_data_offset = offset_of_aggregate_function
};
AggregateFunctionWithOffset function_to_compile
{
.function = function,
.aggregate_data_offset = offset_of_aggregate_function
};

functions_to_compile.emplace_back(std::move(function_to_compile));
functions_to_compile.emplace_back(std::move(function_to_compile));

functions_description += function->getDescription();
functions_description += ' ';
functions_description += function->getDescription();
functions_description += ' ';

functions_description += std::to_string(offset_of_aggregate_function);
functions_description += ' ';
functions_description += std::to_string(offset_of_aggregate_function);
functions_description += ' ';
}

is_aggregate_function_compiled[i] = true;
is_aggregate_function_compiled[i] = function->isCompilable();
}

if (functions_to_compile.empty())

@ -1685,13 +1685,14 @@ bool Aggregator::executeOnBlock(Columns columns,
/// For the case when there are no keys (all aggregate into one row).
if (result.type == AggregatedDataVariants::Type::without_key)
{
#if USE_EMBEDDED_COMPILER
if (compiled_aggregate_functions_holder && !hasSparseArguments(aggregate_functions_instructions.data()))
{
executeWithoutKeyImpl<true>(result.without_key, row_begin, row_end, aggregate_functions_instructions.data(), result.aggregates_pool);
}
else
#endif
/// TODO: Enable compilation after investigation
// #if USE_EMBEDDED_COMPILER
// if (compiled_aggregate_functions_holder)
// {
// executeWithoutKeyImpl<true>(result.without_key, row_begin, row_end, aggregate_functions_instructions.data(), result.aggregates_pool);
// }
// else
// #endif
{
executeWithoutKeyImpl<false>(result.without_key, row_begin, row_end, aggregate_functions_instructions.data(), result.aggregates_pool);
}

@ -330,6 +330,9 @@ struct ContextSharedPart : boost::noncopyable

mutable ThrottlerPtr backups_server_throttler; /// A server-wide throttler for BACKUPs

mutable ThrottlerPtr mutations_throttler; /// A server-wide throttler for mutations
mutable ThrottlerPtr merges_throttler; /// A server-wide throttler for merges

MultiVersion<Macros> macros; /// Substitutions extracted from config.
std::unique_ptr<DDLWorker> ddl_worker TSA_GUARDED_BY(mutex); /// Process ddl commands from zk.
LoadTaskPtr ddl_worker_startup_task; /// To postpone `ddl_worker->startup()` after all tables startup

@ -738,6 +741,12 @@ struct ContextSharedPart : boost::noncopyable

if (auto bandwidth = server_settings.max_backup_bandwidth_for_server)
backups_server_throttler = std::make_shared<Throttler>(bandwidth);

if (auto bandwidth = server_settings.max_mutations_bandwidth_for_server)
mutations_throttler = std::make_shared<Throttler>(bandwidth);

if (auto bandwidth = server_settings.max_merges_bandwidth_for_server)
merges_throttler = std::make_shared<Throttler>(bandwidth);
}
};

@ -3001,6 +3010,16 @@ ThrottlerPtr Context::getBackupsThrottler() const
return throttler;
}

ThrottlerPtr Context::getMutationsThrottler() const
{
return shared->mutations_throttler;
}

ThrottlerPtr Context::getMergesThrottler() const
{
return shared->merges_throttler;
}

bool Context::hasDistributedDDL() const
{
return getConfigRef().has("distributed_ddl");
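
For illustration (not part of the diff): the rough idea behind Throttler(bandwidth) as used above — track how long the reported bytes should have taken at the target speed and sleep off any surplus. A deliberately simplified sketch, not ClickHouse's actual Throttler:

#include <chrono>
#include <thread>

/// Minimal bytes-per-second limiter: reads report their bytes, and any
/// excess speed over the budget is slept away.
class SimpleThrottler
{
public:
    explicit SimpleThrottler(size_t max_bytes_per_second)
        : max_speed(max_bytes_per_second), start(std::chrono::steady_clock::now()) {}

    void add(size_t bytes)
    {
        total_bytes += bytes;
        auto min_duration = std::chrono::duration<double>(double(total_bytes) / max_speed);
        auto elapsed = std::chrono::steady_clock::now() - start;
        if (min_duration > elapsed)
            std::this_thread::sleep_for(min_duration - elapsed);
    }

private:
    size_t max_speed;
    size_t total_bytes = 0;
    std::chrono::steady_clock::time_point start;
};

int main()
{
    SimpleThrottler throttler(1'000'000);  /// ~1 MB/s, as max_mutations_bandwidth_for_server would cap
    for (int i = 0; i < 5; ++i)
        throttler.add(500'000);            /// each "mutation read" reports its bytes
}
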
@ -1328,6 +1328,9 @@ public:

ThrottlerPtr getBackupsThrottler() const;

ThrottlerPtr getMutationsThrottler() const;
ThrottlerPtr getMergesThrottler() const;

/// Kitchen sink
using ContextData::KitchenSink;
using ContextData::kitchen_sink;

@ -82,8 +82,8 @@ private:

using DDLGuardPtr = std::unique_ptr<DDLGuard>;

class FutureSet;
using FutureSetPtr = std::shared_ptr<FutureSet>;
class FutureSetFromSubquery;
using FutureSetFromSubqueryPtr = std::shared_ptr<FutureSetFromSubquery>;

/// Creates temporary table in `_temporary_and_external_tables` with randomly generated unique StorageID.
/// Such table can be accessed from everywhere by its ID.

@ -116,7 +116,7 @@ struct TemporaryTableHolder : boost::noncopyable, WithContext

IDatabase * temporary_tables = nullptr;
UUID id = UUIDHelpers::Nil;
FutureSetPtr future_set;
FutureSetFromSubqueryPtr future_set;
};

///TODO maybe remove shared_ptr from here?
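
For illustration (not part of the diff): the pattern behind narrowing FutureSetPtr to FutureSetFromSubqueryPtr here and below — holding the concrete shared_ptr type exposes subclass-only operations such as buildSetInplace without a downcast:

#include <memory>

struct FutureSet { virtual ~FutureSet() = default; };
struct FutureSetFromSubquery : FutureSet
{
    void buildSetInplace() {}   /// subclass-only operation
};

/// Before: the holder kept std::shared_ptr<FutureSet>, so subclass calls needed a cast.
/// After: keeping the concrete type makes the capability visible and checked at compile time.
struct TemporaryTableHolderSketch
{
    std::shared_ptr<FutureSetFromSubquery> future_set;
};

int main()
{
    TemporaryTableHolderSketch holder;
    holder.future_set = std::make_shared<FutureSetFromSubquery>();
    holder.future_set->buildSetInplace();   /// no dynamic_pointer_cast needed
}
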
@ -2378,12 +2378,25 @@ std::optional<UInt64> InterpreterSelectQuery::getTrivialCount(UInt64 max_paralle
else
{
// It's possible to optimize count() given only partition predicates
SelectQueryInfo temp_query_info;
temp_query_info.query = query_ptr;
temp_query_info.syntax_analyzer_result = syntax_analyzer_result;
temp_query_info.prepared_sets = query_analyzer->getPreparedSets();
ActionsDAG::NodeRawConstPtrs filter_nodes;
if (analysis_result.hasPrewhere())
{
auto & prewhere_info = analysis_result.prewhere_info;
filter_nodes.push_back(&prewhere_info->prewhere_actions->findInOutputs(prewhere_info->prewhere_column_name));

return storage->totalRowsByPartitionPredicate(temp_query_info, context);
if (prewhere_info->row_level_filter)
filter_nodes.push_back(&prewhere_info->row_level_filter->findInOutputs(prewhere_info->row_level_column_name));
}
if (analysis_result.hasWhere())
{
filter_nodes.push_back(&analysis_result.before_where->findInOutputs(analysis_result.where_column_name));
}

auto filter_actions_dag = ActionsDAG::buildFilterActionsDAG(filter_nodes, {}, context);
if (!filter_actions_dag)
return {};

return storage->totalRowsByPartitionPredicate(filter_actions_dag, context);
}
}
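
For illustration (not part of the diff): conceptually, buildFilterActionsDAG folds the collected PREWHERE, row-level and WHERE nodes into one conjunctive predicate that the storage can evaluate against partition values. A toy stand-in using plain callables:

#include <functional>
#include <iostream>
#include <utility>
#include <vector>

using Predicate = std::function<bool(int /*partition_key*/)>;

/// Combine every collected condition into a single conjunction,
/// analogous to folding the filter nodes into one DAG.
Predicate combineFilters(std::vector<Predicate> filters)
{
    return [filters = std::move(filters)](int key)
    {
        for (const auto & f : filters)
            if (!f(key))
                return false;
        return true;
    };
}

int main()
{
    /// e.g. PREWHERE key >= 10 and WHERE key < 20 collected as two filter nodes
    auto combined = combineFilters({[](int k) { return k >= 10; },
                                    [](int k) { return k < 20; }});
    std::cout << combined(15) << ' ' << combined(25) << '\n';  /// prints "1 0"
}
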
@ -67,8 +67,7 @@ static void compileFunction(llvm::Module & module, const IFunctionBase & functio
{
const auto & function_argument_types = function.getArgumentTypes();

auto & context = module.getContext();
llvm::IRBuilder<> b(context);
llvm::IRBuilder<> b(module.getContext());
auto * size_type = b.getIntNTy(sizeof(size_t) * 8);
auto * data_type = llvm::StructType::get(b.getInt8PtrTy(), b.getInt8PtrTy());
auto * func_type = llvm::FunctionType::get(b.getVoidTy(), { size_type, data_type->getPointerTo() }, /*isVarArg=*/false);

@ -76,8 +75,6 @@ static void compileFunction(llvm::Module & module, const IFunctionBase & functio
/// Create function in module

auto * func = llvm::Function::Create(func_type, llvm::Function::ExternalLinkage, function.getName(), module);
func->setAttributes(llvm::AttributeList::get(context, {{2, llvm::Attribute::get(context, llvm::Attribute::AttrKind::NoAlias)}}));

auto * args = func->args().begin();
llvm::Value * rows_count_arg = args++;
llvm::Value * columns_arg = args++;

@ -199,9 +196,6 @@ static void compileCreateAggregateStatesFunctions(llvm::Module & module, const s
auto * create_aggregate_states_function_type = llvm::FunctionType::get(b.getVoidTy(), { aggregate_data_places_type }, false);
auto * create_aggregate_states_function = llvm::Function::Create(create_aggregate_states_function_type, llvm::Function::ExternalLinkage, name, module);

create_aggregate_states_function->setAttributes(
llvm::AttributeList::get(context, {{1, llvm::Attribute::get(context, llvm::Attribute::AttrKind::NoAlias)}}));

auto * arguments = create_aggregate_states_function->args().begin();
llvm::Value * aggregate_data_place_arg = arguments++;

@ -247,11 +241,6 @@ static void compileAddIntoAggregateStatesFunctions(llvm::Module & module,
auto * add_into_aggregate_states_func_declaration = llvm::FunctionType::get(b.getVoidTy(), { size_type, size_type, column_type->getPointerTo(), places_type }, false);
auto * add_into_aggregate_states_func = llvm::Function::Create(add_into_aggregate_states_func_declaration, llvm::Function::ExternalLinkage, name, module);

add_into_aggregate_states_func->setAttributes(llvm::AttributeList::get(
context,
{{3, llvm::Attribute::get(context, llvm::Attribute::AttrKind::NoAlias)},
{4, llvm::Attribute::get(context, llvm::Attribute::AttrKind::NoAlias)}}));

auto * arguments = add_into_aggregate_states_func->args().begin();
llvm::Value * row_start_arg = arguments++;
llvm::Value * row_end_arg = arguments++;

@ -307,7 +296,7 @@ static void compileAddIntoAggregateStatesFunctions(llvm::Module & module,
llvm::Value * aggregation_place = nullptr;

if (places_argument_type == AddIntoAggregateStatesPlacesArgumentType::MultiplePlaces)
aggregation_place = b.CreateLoad(b.getInt8Ty()->getPointerTo(), b.CreateInBoundsGEP(b.getInt8Ty()->getPointerTo(), places_arg, counter_phi));
aggregation_place = b.CreateLoad(b.getInt8Ty()->getPointerTo(), b.CreateGEP(b.getInt8Ty()->getPointerTo(), places_arg, counter_phi));
else
aggregation_place = places_arg;

@ -324,7 +313,7 @@ static void compileAddIntoAggregateStatesFunctions(llvm::Module & module,
auto & column = columns[previous_columns_size + column_argument_index];
const auto & argument_type = arguments_types[column_argument_index];

auto * column_data_element = b.CreateLoad(column.data_element_type, b.CreateInBoundsGEP(column.data_element_type, column.data_ptr, counter_phi));
auto * column_data_element = b.CreateLoad(column.data_element_type, b.CreateGEP(column.data_element_type, column.data_ptr, counter_phi));

if (!argument_type->isNullable())
{

@ -332,7 +321,7 @@ static void compileAddIntoAggregateStatesFunctions(llvm::Module & module,
continue;
}

auto * column_null_data_with_offset = b.CreateInBoundsGEP(b.getInt8Ty(), column.null_data_ptr, counter_phi);
auto * column_null_data_with_offset = b.CreateGEP(b.getInt8Ty(), column.null_data_ptr, counter_phi);
auto * is_null = b.CreateICmpNE(b.CreateLoad(b.getInt8Ty(), column_null_data_with_offset), b.getInt8(0));
auto * nullable_unitialized = llvm::Constant::getNullValue(toNullableType(b, column.data_element_type));
auto * first_insert = b.CreateInsertValue(nullable_unitialized, column_data_element, {0});

@ -365,8 +354,7 @@ static void compileAddIntoAggregateStatesFunctions(llvm::Module & module,

static void compileMergeAggregatesStates(llvm::Module & module, const std::vector<AggregateFunctionWithOffset> & functions, const std::string & name)
{
auto & context = module.getContext();
llvm::IRBuilder<> b(context);
llvm::IRBuilder<> b(module.getContext());

auto * aggregate_data_place_type = b.getInt8Ty()->getPointerTo();
auto * aggregate_data_places_type = aggregate_data_place_type->getPointerTo();

@ -377,11 +365,6 @@ static void compileMergeAggregatesStates(llvm::Module & module, const std::vecto
auto * merge_aggregates_states_func
= llvm::Function::Create(merge_aggregates_states_func_declaration, llvm::Function::ExternalLinkage, name, module);

merge_aggregates_states_func->setAttributes(llvm::AttributeList::get(
context,
{{1, llvm::Attribute::get(context, llvm::Attribute::AttrKind::NoAlias)},
{2, llvm::Attribute::get(context, llvm::Attribute::AttrKind::NoAlias)}}));

auto * arguments = merge_aggregates_states_func->args().begin();
llvm::Value * aggregate_data_places_dst_arg = arguments++;
llvm::Value * aggregate_data_places_src_arg = arguments++;

@ -443,11 +426,6 @@ static void compileInsertAggregatesIntoResultColumns(llvm::Module & module, cons
auto * insert_aggregates_into_result_func_declaration = llvm::FunctionType::get(b.getVoidTy(), { size_type, size_type, column_type->getPointerTo(), aggregate_data_places_type }, false);
auto * insert_aggregates_into_result_func = llvm::Function::Create(insert_aggregates_into_result_func_declaration, llvm::Function::ExternalLinkage, name, module);

insert_aggregates_into_result_func->setAttributes(llvm::AttributeList::get(
context,
{{3, llvm::Attribute::get(context, llvm::Attribute::AttrKind::NoAlias)},
{4, llvm::Attribute::get(context, llvm::Attribute::AttrKind::NoAlias)}}));

auto * arguments = insert_aggregates_into_result_func->args().begin();
llvm::Value * row_start_arg = arguments++;
llvm::Value * row_end_arg = arguments++;

@ -482,7 +460,7 @@ static void compileInsertAggregatesIntoResultColumns(llvm::Module & module, cons
auto * counter_phi = b.CreatePHI(row_start_arg->getType(), 2);
counter_phi->addIncoming(row_start_arg, entry);

auto * aggregate_data_place = b.CreateLoad(b.getInt8Ty()->getPointerTo(), b.CreateInBoundsGEP(b.getInt8Ty()->getPointerTo(), aggregate_data_places_arg, counter_phi));
auto * aggregate_data_place = b.CreateLoad(b.getInt8Ty()->getPointerTo(), b.CreateGEP(b.getInt8Ty()->getPointerTo(), aggregate_data_places_arg, counter_phi));

for (size_t i = 0; i < functions.size(); ++i)
{

@ -492,11 +470,11 @@ static void compileInsertAggregatesIntoResultColumns(llvm::Module & module, cons
const auto * aggregate_function_ptr = functions[i].function;
auto * final_value = aggregate_function_ptr->compileGetResult(b, aggregation_place_with_offset);

auto * result_column_data_element = b.CreateInBoundsGEP(columns[i].data_element_type, columns[i].data_ptr, counter_phi);
auto * result_column_data_element = b.CreateGEP(columns[i].data_element_type, columns[i].data_ptr, counter_phi);
if (columns[i].null_data_ptr)
{
b.CreateStore(b.CreateExtractValue(final_value, {0}), result_column_data_element);
auto * result_column_is_null_element = b.CreateInBoundsGEP(b.getInt8Ty(), columns[i].null_data_ptr, counter_phi);
auto * result_column_is_null_element = b.CreateGEP(b.getInt8Ty(), columns[i].null_data_ptr, counter_phi);
b.CreateStore(b.CreateSelect(b.CreateExtractValue(final_value, {1}), b.getInt8(1), b.getInt8(0)), result_column_is_null_element);
}
else

@ -1280,6 +1280,7 @@ void MutationsInterpreter::Source::read(
VirtualColumns virtual_columns(std::move(required_columns), part);

createReadFromPartStep(
MergeTreeSequentialSourceType::Mutation,
plan, *data, storage_snapshot, part,
std::move(virtual_columns.columns_to_read),
apply_deleted_mask_, filter, context_,

@ -97,7 +97,7 @@ FutureSetFromSubquery::FutureSetFromSubquery(
String key,
std::unique_ptr<QueryPlan> source_,
StoragePtr external_table_,
FutureSetPtr external_table_set_,
std::shared_ptr<FutureSetFromSubquery> external_table_set_,
const Settings & settings,
bool in_subquery_)
: external_table(std::move(external_table_))

@ -168,6 +168,24 @@ std::unique_ptr<QueryPlan> FutureSetFromSubquery::build(const ContextPtr & conte
return plan;
}

void FutureSetFromSubquery::buildSetInplace(const ContextPtr & context)
{
if (external_table_set)
external_table_set->buildSetInplace(context);

auto plan = build(context);

if (!plan)
return;

auto builder = plan->buildQueryPipeline(QueryPlanOptimizationSettings::fromContext(context), BuildQueryPipelineSettings::fromContext(context));
auto pipeline = QueryPipelineBuilder::getPipeline(std::move(*builder));
pipeline.complete(std::make_shared<EmptySink>(Block()));

CompletedPipelineExecutor executor(pipeline);
executor.execute();
}

SetPtr FutureSetFromSubquery::buildOrderedSetInplace(const ContextPtr & context)
{
if (!context->getSettingsRef().use_index_for_in_with_subqueries)

@ -233,7 +251,7 @@ String PreparedSets::toString(const PreparedSets::Hash & key, const DataTypes &
return buf.str();
}

FutureSetPtr PreparedSets::addFromTuple(const Hash & key, Block block, const Settings & settings)
FutureSetFromTuplePtr PreparedSets::addFromTuple(const Hash & key, Block block, const Settings & settings)
{
auto from_tuple = std::make_shared<FutureSetFromTuple>(std::move(block), settings);
const auto & set_types = from_tuple->getTypes();

@ -247,7 +265,7 @@ FutureSetPtr PreparedSets::addFromTuple(const Hash & key, Block block, const Set
return from_tuple;
}

FutureSetPtr PreparedSets::addFromStorage(const Hash & key, SetPtr set_)
FutureSetFromStoragePtr PreparedSets::addFromStorage(const Hash & key, SetPtr set_)
{
auto from_storage = std::make_shared<FutureSetFromStorage>(std::move(set_));
auto [it, inserted] = sets_from_storage.emplace(key, from_storage);

@ -258,11 +276,11 @@ FutureSetPtr PreparedSets::addFromStorage(const Hash & key, SetPtr set_)
return from_storage;
}

FutureSetPtr PreparedSets::addFromSubquery(
FutureSetFromSubqueryPtr PreparedSets::addFromSubquery(
const Hash & key,
std::unique_ptr<QueryPlan> source,
StoragePtr external_table,
FutureSetPtr external_table_set,
FutureSetFromSubqueryPtr external_table_set,
const Settings & settings,
bool in_subquery)
{

@ -282,7 +300,7 @@ FutureSetPtr PreparedSets::addFromSubquery(
return from_subquery;
}

FutureSetPtr PreparedSets::addFromSubquery(
FutureSetFromSubqueryPtr PreparedSets::addFromSubquery(
const Hash & key,
QueryTreeNodePtr query_tree,
const Settings & settings)

@ -300,7 +318,7 @@ FutureSetPtr PreparedSets::addFromSubquery(
return from_subquery;
}

FutureSetPtr PreparedSets::findTuple(const Hash & key, const DataTypes & types) const
FutureSetFromTuplePtr PreparedSets::findTuple(const Hash & key, const DataTypes & types) const
{
auto it = sets_from_tuple.find(key);
if (it == sets_from_tuple.end())

@ -69,6 +69,8 @@ private:
SetPtr set;
};

using FutureSetFromStoragePtr = std::shared_ptr<FutureSetFromStorage>;

/// Set from tuple is filled as well as set from storage.
/// Additionally, it can be converted to set useful for PK.
class FutureSetFromTuple final : public FutureSet

@ -86,6 +88,8 @@ private:
SetKeyColumns set_key_columns;
};

using FutureSetFromTuplePtr = std::shared_ptr<FutureSetFromTuple>;

/// Set from subquery can be built inplace for PK or in CreatingSet step.
/// If use_index_for_in_with_subqueries_max_values is reached, set for PK won't be created,
/// but ordinary set would be created instead.

@ -96,7 +100,7 @@ public:
String key,
std::unique_ptr<QueryPlan> source_,
StoragePtr external_table_,
FutureSetPtr external_table_set_,
std::shared_ptr<FutureSetFromSubquery> external_table_set_,
const Settings & settings,
bool in_subquery_);

@ -110,6 +114,7 @@ public:
SetPtr buildOrderedSetInplace(const ContextPtr & context) override;

std::unique_ptr<QueryPlan> build(const ContextPtr & context);
void buildSetInplace(const ContextPtr & context);

QueryTreeNodePtr detachQueryTree() { return std::move(query_tree); }
void setQueryPlan(std::unique_ptr<QueryPlan> source_);

@ -119,7 +124,7 @@ public:
private:
SetAndKeyPtr set_and_key;
StoragePtr external_table;
FutureSetPtr external_table_set;
std::shared_ptr<FutureSetFromSubquery> external_table_set;

std::unique_ptr<QueryPlan> source;
QueryTreeNodePtr query_tree;

@ -130,6 +135,8 @@ private:
// with new analyzer it's not a case
};

using FutureSetFromSubqueryPtr = std::shared_ptr<FutureSetFromSubquery>;

/// Container for all the sets used in query.
class PreparedSets
{

@ -141,32 +148,32 @@ public:
UInt64 operator()(const Hash & key) const { return key.low64 ^ key.high64; }
};

using SetsFromTuple = std::unordered_map<Hash, std::vector<std::shared_ptr<FutureSetFromTuple>>, Hashing>;
using SetsFromStorage = std::unordered_map<Hash, std::shared_ptr<FutureSetFromStorage>, Hashing>;
using SetsFromSubqueries = std::unordered_map<Hash, std::shared_ptr<FutureSetFromSubquery>, Hashing>;
using SetsFromTuple = std::unordered_map<Hash, std::vector<FutureSetFromTuplePtr>, Hashing>;
using SetsFromStorage = std::unordered_map<Hash, FutureSetFromStoragePtr, Hashing>;
using SetsFromSubqueries = std::unordered_map<Hash, FutureSetFromSubqueryPtr, Hashing>;

FutureSetPtr addFromStorage(const Hash & key, SetPtr set_);
FutureSetPtr addFromTuple(const Hash & key, Block block, const Settings & settings);
FutureSetFromStoragePtr addFromStorage(const Hash & key, SetPtr set_);
FutureSetFromTuplePtr addFromTuple(const Hash & key, Block block, const Settings & settings);

FutureSetPtr addFromSubquery(
FutureSetFromSubqueryPtr addFromSubquery(
const Hash & key,
std::unique_ptr<QueryPlan> source,
StoragePtr external_table,
FutureSetPtr external_table_set,
FutureSetFromSubqueryPtr external_table_set,
const Settings & settings,
bool in_subquery = false);

FutureSetPtr addFromSubquery(
FutureSetFromSubqueryPtr addFromSubquery(
const Hash & key,
QueryTreeNodePtr query_tree,
const Settings & settings);

FutureSetPtr findTuple(const Hash & key, const DataTypes & types) const;
std::shared_ptr<FutureSetFromStorage> findStorage(const Hash & key) const;
std::shared_ptr<FutureSetFromSubquery> findSubquery(const Hash & key) const;
FutureSetFromTuplePtr findTuple(const Hash & key, const DataTypes & types) const;
FutureSetFromStoragePtr findStorage(const Hash & key) const;
FutureSetFromSubqueryPtr findSubquery(const Hash & key) const;
void markAsINSubquery(const Hash & key);

using Subqueries = std::vector<std::shared_ptr<FutureSetFromSubquery>>;
using Subqueries = std::vector<FutureSetFromSubqueryPtr>;
Subqueries getSubqueries() const;
bool hasSubqueries() const { return !sets_from_subqueries.empty(); }

@ -36,7 +36,6 @@ struct RequiredSourceColumnsData

bool has_table_join = false;
bool has_array_join = false;
bool visit_index_hint = false;

bool addColumnAliasIfAny(const IAST & ast);
void addColumnIdentifier(const ASTIdentifier & node);

@ -72,11 +72,6 @@ void RequiredSourceColumnsMatcher::visit(const ASTPtr & ast, Data & data)
}
if (auto * t = ast->as<ASTFunction>())
{
/// "indexHint" is a special function for index analysis.
/// Everything that is inside it is not calculated. See KeyCondition
if (!data.visit_index_hint && t->name == "indexHint")
return;

data.addColumnAliasIfAny(*ast);
visit(*t, ast, data);
return;

@ -995,13 +995,12 @@ void TreeRewriterResult::collectSourceColumns(bool add_special)
/// Calculate which columns are required to execute the expression.
/// Then, delete all other columns from the list of available columns.
/// After execution, columns will only contain the list of columns needed to read from the table.
bool TreeRewriterResult::collectUsedColumns(const ASTPtr & query, bool is_select, bool visit_index_hint, bool no_throw)
bool TreeRewriterResult::collectUsedColumns(const ASTPtr & query, bool is_select, bool no_throw)
{
/// We calculate required_source_columns with source_columns modifications and swap them on exit
required_source_columns = source_columns;

RequiredSourceColumnsVisitor::Data columns_context;
columns_context.visit_index_hint = visit_index_hint;
RequiredSourceColumnsVisitor(columns_context).visit(query);

NameSet source_column_names;

@ -1385,7 +1384,7 @@ TreeRewriterResultPtr TreeRewriter::analyzeSelect(
result.window_function_asts = getWindowFunctions(query, *select_query);
result.expressions_with_window_function = getExpressionsWithWindowFunctions(query);

result.collectUsedColumns(query, true, settings.query_plan_optimize_primary_key);
result.collectUsedColumns(query, true);

if (!result.missed_subcolumns.empty())
{

@ -1422,7 +1421,7 @@ TreeRewriterResultPtr TreeRewriter::analyzeSelect(
result.aggregates = getAggregates(query, *select_query);
result.window_function_asts = getWindowFunctions(query, *select_query);
result.expressions_with_window_function = getExpressionsWithWindowFunctions(query);
result.collectUsedColumns(query, true, settings.query_plan_optimize_primary_key);
result.collectUsedColumns(query, true);
}
}

@ -1499,7 +1498,7 @@ TreeRewriterResultPtr TreeRewriter::analyze(
else
assertNoAggregates(query, "in wrong place");

bool is_ok = result.collectUsedColumns(query, false, settings.query_plan_optimize_primary_key, no_throw);
bool is_ok = result.collectUsedColumns(query, false, no_throw);
if (!is_ok)
return {};

@ -88,7 +88,7 @@ struct TreeRewriterResult
bool add_special = true);

void collectSourceColumns(bool add_special);
bool collectUsedColumns(const ASTPtr & query, bool is_select, bool visit_index_hint, bool no_throw = false);
bool collectUsedColumns(const ASTPtr & query, bool is_select, bool no_throw = false);
Names requiredSourceColumns() const { return required_source_columns.getNames(); }
const Names & requiredSourceColumnsForAccessCheck() const { return required_source_columns_before_expanding_alias_columns; }
NameSet getArrayJoinSourceNameSet() const;

@ -1057,7 +1057,7 @@ void addBuildSubqueriesForSetsStepIfNeeded(
Planner subquery_planner(
query_tree,
subquery_options,
planner_context->getGlobalPlannerContext());
std::make_shared<GlobalPlannerContext>()); //planner_context->getGlobalPlannerContext());
subquery_planner.buildQueryPlanIfNeeded();

subquery->setQueryPlan(std::make_unique<QueryPlan>(std::move(subquery_planner).extractQueryPlan()));

@ -20,12 +20,15 @@ const ColumnIdentifier & GlobalPlannerContext::createColumnIdentifier(const Quer
return createColumnIdentifier(column_node_typed.getColumn(), column_source_node);
}

const ColumnIdentifier & GlobalPlannerContext::createColumnIdentifier(const NameAndTypePair & column, const QueryTreeNodePtr & /*column_source_node*/)
const ColumnIdentifier & GlobalPlannerContext::createColumnIdentifier(const NameAndTypePair & column, const QueryTreeNodePtr & column_source_node)
{
std::string column_identifier;

column_identifier += column.name;
column_identifier += '_' + std::to_string(column_identifiers.size());
const auto & source_alias = column_source_node->getAlias();
if (!source_alias.empty())
column_identifier = source_alias + "." + column.name;
else
column_identifier = column.name;

auto [it, inserted] = column_identifiers.emplace(column_identifier);
assert(inserted);
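
For illustration (not part of the diff): the new naming scheme — identifiers are qualified by the source alias instead of a global counter, so a plan shows t1.id rather than id_3. A minimal sketch:

#include <iostream>
#include <string>

/// Mirrors the new createColumnIdentifier logic: qualify by the source alias when present.
std::string makeColumnIdentifier(const std::string & source_alias, const std::string & column_name)
{
    return source_alias.empty() ? column_name : source_alias + "." + column_name;
}

int main()
{
    std::cout << makeColumnIdentifier("t1", "id") << '\n';  /// "t1.id" (was "id_<counter>" before)
    std::cout << makeColumnIdentifier("", "value") << '\n'; /// "value"
}
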
@ -817,7 +817,7 @@ JoinTreeQueryPlan buildQueryPlanForTableExpression(QueryTreeNodePtr table_expres
}
}

const auto & table_expression_alias = table_expression->getAlias();
const auto & table_expression_alias = table_expression->getOriginalAlias();
auto additional_filters_info = buildAdditionalFiltersIfNeeded(storage, table_expression_alias, table_expression_query_info, planner_context);
add_filter(additional_filters_info, "additional filter");

@ -1058,6 +1058,18 @@ JoinTreeQueryPlan buildQueryPlanForJoinNode(const QueryTreeNodePtr & join_table_
auto right_plan = std::move(right_join_tree_query_plan.query_plan);
auto right_plan_output_columns = right_plan.getCurrentDataStream().header.getColumnsWithTypeAndName();

// {
// WriteBufferFromOwnString buf;
// left_plan.explainPlan(buf, {.header = true, .actions = true});
// std::cerr << "left plan \n "<< buf.str() << std::endl;
// }

// {
// WriteBufferFromOwnString buf;
// right_plan.explainPlan(buf, {.header = true, .actions = true});
// std::cerr << "right plan \n "<< buf.str() << std::endl;
// }

JoinClausesAndActions join_clauses_and_actions;
JoinKind join_kind = join_node.getKind();
JoinStrictness join_strictness = join_node.getStrictness();

@ -20,6 +20,7 @@

#include <Analyzer/Utils.h>
#include <Analyzer/FunctionNode.h>
#include <Analyzer/ColumnNode.h>
#include <Analyzer/ConstantNode.h>
#include <Analyzer/TableNode.h>
#include <Analyzer/TableFunctionNode.h>

@ -113,41 +114,96 @@ String JoinClause::dump() const
namespace
{

std::optional<JoinTableSide> extractJoinTableSideFromExpression(const ActionsDAG::Node * expression_root_node,
const std::unordered_set<const ActionsDAG::Node *> & join_expression_dag_input_nodes,
const NameSet & left_table_expression_columns_names,
const NameSet & right_table_expression_columns_names,
using TableExpressionSet = std::unordered_set<const IQueryTreeNode *>;

TableExpressionSet extractTableExpressionsSet(const QueryTreeNodePtr & node)
{
TableExpressionSet res;
for (const auto & expr : extractTableExpressions(node, true))
res.insert(expr.get());

return res;
}

std::optional<JoinTableSide> extractJoinTableSideFromExpression(//const ActionsDAG::Node * expression_root_node,
const IQueryTreeNode * expression_root_node,
//const std::unordered_set<const ActionsDAG::Node *> & join_expression_dag_input_nodes,
const TableExpressionSet & left_table_expressions,
const TableExpressionSet & right_table_expressions,
const JoinNode & join_node)
{
std::optional<JoinTableSide> table_side;
std::vector<const ActionsDAG::Node *> nodes_to_process;
std::vector<const IQueryTreeNode *> nodes_to_process;
nodes_to_process.push_back(expression_root_node);

// std::cerr << "==== extractJoinTableSideFromExpression\n";
// std::cerr << "inp nodes" << std::endl;
// for (const auto * node : join_expression_dag_input_nodes)
// std::cerr << reinterpret_cast<const void *>(node) << ' ' << node->result_name << std::endl;

// std::cerr << "l names" << std::endl;
// for (const auto & l : left_table_expression_columns_names)
// std::cerr << l << std::endl;

// std::cerr << "r names" << std::endl;
// for (const auto & r : right_table_expression_columns_names)
// std::cerr << r << std::endl;

// const auto * left_table_expr = join_node.getLeftTableExpression().get();
// const auto * right_table_expr = join_node.getRightTableExpression().get();

while (!nodes_to_process.empty())
{
const auto * node_to_process = nodes_to_process.back();
nodes_to_process.pop_back();

for (const auto & child : node_to_process->children)
nodes_to_process.push_back(child);
//std::cerr << "... " << reinterpret_cast<const void *>(node_to_process) << ' ' << node_to_process->result_name << std::endl;

if (!join_expression_dag_input_nodes.contains(node_to_process))
if (const auto * function_node = node_to_process->as<FunctionNode>())
{
for (const auto & child : function_node->getArguments())
nodes_to_process.push_back(child.get());

continue;
}

const auto * column_node = node_to_process->as<ColumnNode>();
if (!column_node)
continue;

const auto & input_name = node_to_process->result_name;
// if (!join_expression_dag_input_nodes.contains(node_to_process))
// continue;

bool left_table_expression_contains_input = left_table_expression_columns_names.contains(input_name);
bool right_table_expression_contains_input = right_table_expression_columns_names.contains(input_name);
const auto & input_name = column_node->getColumnName();

if (!left_table_expression_contains_input && !right_table_expression_contains_input)
// bool left_table_expression_contains_input = left_table_expression_columns_names.contains(input_name);
// bool right_table_expression_contains_input = right_table_expression_columns_names.contains(input_name);

// if (!left_table_expression_contains_input && !right_table_expression_contains_input)
// throw Exception(ErrorCodes::INVALID_JOIN_ON_EXPRESSION,
// "JOIN {} actions has column {} that do not exist in left {} or right {} table expression columns",
// join_node.formatASTForErrorMessage(),
// input_name,
// boost::join(left_table_expression_columns_names, ", "),
// boost::join(right_table_expression_columns_names, ", "));

const auto * column_source = column_node->getColumnSource().get();
if (!column_source)
throw Exception(ErrorCodes::LOGICAL_ERROR, "No source for column {} in JOIN {}", input_name, join_node.formatASTForErrorMessage());

bool is_column_from_left_expr = left_table_expressions.contains(column_source);
bool is_column_from_right_expr = right_table_expressions.contains(column_source);

if (!is_column_from_left_expr && !is_column_from_right_expr)
throw Exception(ErrorCodes::INVALID_JOIN_ON_EXPRESSION,
"JOIN {} actions has column {} that do not exist in left {} or right {} table expression columns",
join_node.formatASTForErrorMessage(),
input_name,
boost::join(left_table_expression_columns_names, ", "),
boost::join(right_table_expression_columns_names, ", "));
column_source->formatASTForErrorMessage(),
join_node.getLeftTableExpression()->formatASTForErrorMessage(),
join_node.getRightTableExpression()->formatASTForErrorMessage());

auto input_table_side = left_table_expression_contains_input ? JoinTableSide::Left : JoinTableSide::Right;
auto input_table_side = is_column_from_left_expr ? JoinTableSide::Left : JoinTableSide::Right;
if (table_side && (*table_side) != input_table_side)
throw Exception(ErrorCodes::INVALID_JOIN_ON_EXPRESSION,
"JOIN {} join expression contains column from left and right table",

@ -159,29 +215,58 @@ std::optional<JoinTableSide> extractJoinTableSideFromExpression(const ActionsDAG
return table_side;
}
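
For illustration (not part of the diff): the rewritten resolver classifies columns by the table expression that owns them rather than by per-side name lookups. A generic sketch of the same walk (simplified: mixed sides return nullopt here, while the real code throws INVALID_JOIN_ON_EXPRESSION):

#include <memory>
#include <optional>
#include <unordered_set>
#include <vector>

enum class Side { Left, Right };

struct Node
{
    const void * source = nullptr;          /// set for "column" nodes: the owning table expression
    std::vector<std::shared_ptr<Node>> children;
};

/// Depth-first walk over the expression: every column's source decides its side.
std::optional<Side> extractSide(const Node * root, const std::unordered_set<const void *> & left_sources)
{
    std::optional<Side> side;
    std::vector<const Node *> stack{root};
    while (!stack.empty())
    {
        const Node * node = stack.back();
        stack.pop_back();
        for (const auto & child : node->children)
            stack.push_back(child.get());
        if (!node->source)
            continue;                        /// functions/constants carry no side themselves
        Side node_side = left_sources.count(node->source) ? Side::Left : Side::Right;
        if (side && *side != node_side)
            return std::nullopt;             /// simplification: the real code throws here
        side = node_side;
    }
    return side;
}

int main()
{
    int t1 = 0;
    auto col = std::make_shared<Node>();
    col->source = &t1;                       /// a column owned by table expression t1
    Node expr;
    expr.children = {col};                   /// e.g. lower(t1.name)
    std::unordered_set<const void *> left_sources{&t1};
    return extractSide(&expr, left_sources) == Side::Left ? 0 : 1;
}
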
void buildJoinClause(ActionsDAGPtr join_expression_dag,
const std::unordered_set<const ActionsDAG::Node *> & join_expression_dag_input_nodes,
const ActionsDAG::Node * join_expressions_actions_node,
const NameSet & left_table_expression_columns_names,
const NameSet & right_table_expression_columns_names,
const ActionsDAG::Node * appendExpression(
ActionsDAGPtr & dag,
const QueryTreeNodePtr & expression,
const PlannerContextPtr & planner_context,
const JoinNode & join_node)
{
PlannerActionsVisitor join_expression_visitor(planner_context);
auto join_expression_dag_node_raw_pointers = join_expression_visitor.visit(dag, expression);
if (join_expression_dag_node_raw_pointers.size() != 1)
throw Exception(ErrorCodes::LOGICAL_ERROR,
"JOIN {} ON clause contains multiple expressions",
join_node.formatASTForErrorMessage());

return join_expression_dag_node_raw_pointers[0];
}

void buildJoinClause(
ActionsDAGPtr & left_dag,
ActionsDAGPtr & right_dag,
const PlannerContextPtr & planner_context,
//ActionsDAGPtr join_expression_dag,
//const std::unordered_set<const ActionsDAG::Node *> & join_expression_dag_input_nodes,
//const ActionsDAG::Node * join_expressions_actions_node,
const QueryTreeNodePtr & join_expression,
const TableExpressionSet & left_table_expressions,
const TableExpressionSet & right_table_expressions,
const JoinNode & join_node,
JoinClause & join_clause)
{
std::string function_name;

if (join_expressions_actions_node->function)
function_name = join_expressions_actions_node->function->getName();
//std::cerr << join_expression_dag->dumpDAG() << std::endl;
auto * function_node = join_expression->as<FunctionNode>();
if (function_node)
function_name = function_node->getFunction()->getName();

// if (join_expressions_actions_node->function)
// function_name = join_expressions_actions_node->function->getName();

/// For 'and' function go into children
if (function_name == "and")
{
for (const auto & child : join_expressions_actions_node->children)
for (const auto & child : function_node->getArguments())
{
buildJoinClause(join_expression_dag,
join_expression_dag_input_nodes,
buildJoinClause(//join_expression_dag,
//join_expression_dag_input_nodes,
left_dag,
right_dag,
planner_context,
child,
left_table_expression_columns_names,
right_table_expression_columns_names,
left_table_expressions,
right_table_expressions,
join_node,
join_clause);
}

@ -194,45 +279,49 @@ void buildJoinClause(ActionsDAGPtr join_expression_dag,

if (function_name == "equals" || function_name == "isNotDistinctFrom" || is_asof_join_inequality)
{
const auto * left_child = join_expressions_actions_node->children.at(0);
const auto * right_child = join_expressions_actions_node->children.at(1);
const auto left_child = function_node->getArguments().getNodes().at(0);//join_expressions_actions_node->children.at(0);
const auto right_child = function_node->getArguments().getNodes().at(1); //join_expressions_actions_node->children.at(1);

auto left_expression_side_optional = extractJoinTableSideFromExpression(left_child,
join_expression_dag_input_nodes,
left_table_expression_columns_names,
right_table_expression_columns_names,
auto left_expression_side_optional = extractJoinTableSideFromExpression(left_child.get(),
//join_expression_dag_input_nodes,
left_table_expressions,
right_table_expressions,
join_node);

auto right_expression_side_optional = extractJoinTableSideFromExpression(right_child,
join_expression_dag_input_nodes,
left_table_expression_columns_names,
right_table_expression_columns_names,
auto right_expression_side_optional = extractJoinTableSideFromExpression(right_child.get(),
//join_expression_dag_input_nodes,
left_table_expressions,
right_table_expressions,
join_node);

if (!left_expression_side_optional && !right_expression_side_optional)
{
throw Exception(ErrorCodes::INVALID_JOIN_ON_EXPRESSION,
"JOIN {} ON expression {} with constants is not supported",
join_node.formatASTForErrorMessage(),
join_expressions_actions_node->result_name);
"JOIN {} ON expression with constants is not supported",
join_node.formatASTForErrorMessage());
}
else if (left_expression_side_optional && !right_expression_side_optional)
{
join_clause.addCondition(*left_expression_side_optional, join_expressions_actions_node);
auto & dag = *left_expression_side_optional == JoinTableSide::Left ? left_dag : right_dag;
const auto * node = appendExpression(dag, join_expression, planner_context, join_node);
join_clause.addCondition(*left_expression_side_optional, node);
}
else if (!left_expression_side_optional && right_expression_side_optional)
{
join_clause.addCondition(*right_expression_side_optional, join_expressions_actions_node);
auto & dag = *right_expression_side_optional == JoinTableSide::Left ? left_dag : right_dag;
const auto * node = appendExpression(dag, join_expression, planner_context, join_node);
join_clause.addCondition(*right_expression_side_optional, node);
}
else
{
// std::cerr << "===============\n";
auto left_expression_side = *left_expression_side_optional;
auto right_expression_side = *right_expression_side_optional;

if (left_expression_side != right_expression_side)
{
const ActionsDAG::Node * left_key = left_child;
const ActionsDAG::Node * right_key = right_child;
auto left_key = left_child;
auto right_key = right_child;

if (left_expression_side == JoinTableSide::Right)
{

@ -241,6 +330,9 @@ void buildJoinClause(ActionsDAGPtr join_expression_dag,
asof_inequality = reverseASOFJoinInequality(asof_inequality);
}

const auto * left_node = appendExpression(left_dag, left_key, planner_context, join_node);
const auto * right_node = appendExpression(right_dag, right_key, planner_context, join_node);

if (is_asof_join_inequality)
{
if (join_clause.hasASOF())

@ -250,55 +342,66 @@ void buildJoinClause(ActionsDAGPtr join_expression_dag,
join_node.formatASTForErrorMessage());
}

join_clause.addASOFKey(left_key, right_key, asof_inequality);
join_clause.addASOFKey(left_node, right_node, asof_inequality);
}
else
{
bool null_safe_comparison = function_name == "isNotDistinctFrom";
join_clause.addKey(left_key, right_key, null_safe_comparison);
join_clause.addKey(left_node, right_node, null_safe_comparison);
}
}
else
{
join_clause.addCondition(left_expression_side, join_expressions_actions_node);
auto & dag = left_expression_side == JoinTableSide::Left ? left_dag : right_dag;
const auto * node = appendExpression(dag, join_expression, planner_context, join_node);
join_clause.addCondition(left_expression_side, node);
}
}

return;
}

auto expression_side_optional = extractJoinTableSideFromExpression(join_expressions_actions_node,
join_expression_dag_input_nodes,
left_table_expression_columns_names,
right_table_expression_columns_names,
auto expression_side_optional = extractJoinTableSideFromExpression(//join_expressions_actions_node,
//join_expression_dag_input_nodes,
join_expression.get(),
left_table_expressions,
right_table_expressions,
join_node);

if (!expression_side_optional)
expression_side_optional = JoinTableSide::Right;

auto expression_side = *expression_side_optional;
join_clause.addCondition(expression_side, join_expressions_actions_node);
auto & dag = expression_side == JoinTableSide::Left ? left_dag : right_dag;
const auto * node = appendExpression(dag, join_expression, planner_context, join_node);
join_clause.addCondition(expression_side, node);
}
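
For illustration (not part of the diff): the net effect of the refactoring above — every ON condition is appended to the ActionsDAG of the side that owns it, and an equality contributes one key node per side. A compact model of the routing:

#include <iostream>
#include <string>
#include <utility>
#include <vector>

enum class Side { Left, Right };

struct JoinDags
{
    std::vector<std::string> left, right;

    /// Analogue of: auto & dag = side == JoinTableSide::Left ? left_dag : right_dag;
    ///              appendExpression(dag, expression, ...)
    void addCondition(Side side, std::string expression)
    {
        (side == Side::Left ? left : right).push_back(std::move(expression));
    }
};

int main()
{
    /// ON t1.a = t2.b AND t1.x > 0: the equality contributes a key to each side,
    /// the one-sided comparison becomes a left-side filter condition.
    JoinDags dags;
    dags.addCondition(Side::Left, "a");
    dags.addCondition(Side::Right, "b");
    dags.addCondition(Side::Left, "x > 0");
    std::cout << dags.left.size() << " left, " << dags.right.size() << " right\n";  /// 2 left, 1 right
}
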
JoinClausesAndActions buildJoinClausesAndActions(const ColumnsWithTypeAndName & join_expression_input_columns,
JoinClausesAndActions buildJoinClausesAndActions(//const ColumnsWithTypeAndName & join_expression_input_columns,
const ColumnsWithTypeAndName & left_table_expression_columns,
const ColumnsWithTypeAndName & right_table_expression_columns,
const JoinNode & join_node,
const PlannerContextPtr & planner_context)
{
ActionsDAGPtr join_expression_actions = std::make_shared<ActionsDAG>(join_expression_input_columns);
//ActionsDAGPtr join_expression_actions = std::make_shared<ActionsDAG>(join_expression_input_columns);

ActionsDAGPtr left_join_actions = std::make_shared<ActionsDAG>(left_table_expression_columns);
ActionsDAGPtr right_join_actions = std::make_shared<ActionsDAG>(right_table_expression_columns);

// LOG_TRACE(&Poco::Logger::get("Planner"), "buildJoinClausesAndActions cols {} ", left_join_actions->dumpDAG());
// LOG_TRACE(&Poco::Logger::get("Planner"), "buildJoinClausesAndActions cols {} ", right_join_actions->dumpDAG());

/** In ActionsDAG if input node has constant representation additional constant column is added.
* That way we cannot simply check that node has INPUT type during resolution of expression join table side.
* Put all nodes after actions dag initialization in set.
* To check if actions dag node is input column, we check if set contains it.
*/
const auto & join_expression_actions_nodes = join_expression_actions->getNodes();
// const auto & join_expression_actions_nodes = join_expression_actions->getNodes();

std::unordered_set<const ActionsDAG::Node *> join_expression_dag_input_nodes;
join_expression_dag_input_nodes.reserve(join_expression_actions_nodes.size());
for (const auto & node : join_expression_actions_nodes)
join_expression_dag_input_nodes.insert(&node);
// std::unordered_set<const ActionsDAG::Node *> join_expression_dag_input_nodes;
// join_expression_dag_input_nodes.reserve(join_expression_actions_nodes.size());
// for (const auto & node : join_expression_actions_nodes)
// join_expression_dag_input_nodes.insert(&node);

/** It is possible to have constant value in JOIN ON section, that we need to ignore during DAG construction.
* If we do not ignore it, this function will be replaced by underlying constant.

@ -308,6 +411,9 @@ JoinClausesAndActions buildJoinClausesAndActions(const ColumnsWithTypeAndName &
* ON (t1.id = t2.id) AND 1 != 1 AND (t1.value >= t1.value);
*/
auto join_expression = join_node.getJoinExpression();
// LOG_TRACE(&Poco::Logger::get("Planner"), "buildJoinClausesAndActions expr {} ", join_expression->formatConvertedASTForErrorMessage());
// LOG_TRACE(&Poco::Logger::get("Planner"), "buildJoinClausesAndActions expr {} ", join_expression->dumpTree());

auto * constant_join_expression = join_expression->as<ConstantNode>();

if (constant_join_expression && constant_join_expression->hasSourceExpression())

@ -319,18 +425,18 @@ JoinClausesAndActions buildJoinClausesAndActions(const ColumnsWithTypeAndName &
"JOIN {} join expression expected function",
join_node.formatASTForErrorMessage());

PlannerActionsVisitor join_expression_visitor(planner_context);
auto join_expression_dag_node_raw_pointers = join_expression_visitor.visit(join_expression_actions, join_expression);
if (join_expression_dag_node_raw_pointers.size() != 1)
throw Exception(ErrorCodes::LOGICAL_ERROR,
"JOIN {} ON clause contains multiple expressions",
join_node.formatASTForErrorMessage());
// PlannerActionsVisitor join_expression_visitor(planner_context);
// auto join_expression_dag_node_raw_pointers = join_expression_visitor.visit(join_expression_actions, join_expression);
// if (join_expression_dag_node_raw_pointers.size() != 1)
// throw Exception(ErrorCodes::LOGICAL_ERROR,
// "JOIN {} ON clause contains multiple expressions",
// join_node.formatASTForErrorMessage());

const auto * join_expressions_actions_root_node = join_expression_dag_node_raw_pointers[0];
if (!join_expressions_actions_root_node->function)
throw Exception(ErrorCodes::INVALID_JOIN_ON_EXPRESSION,
"JOIN {} join expression expected function",
join_node.formatASTForErrorMessage());
// const auto * join_expressions_actions_root_node = join_expression_dag_node_raw_pointers[0];
// if (!join_expressions_actions_root_node->function)
// throw Exception(ErrorCodes::INVALID_JOIN_ON_EXPRESSION,
// "JOIN {} join expression expected function",
// join_node.formatASTForErrorMessage());

size_t left_table_expression_columns_size = left_table_expression_columns.size();

@ -360,21 +466,27 @@ JoinClausesAndActions buildJoinClausesAndActions(const ColumnsWithTypeAndName &
join_right_actions_names_set.insert(right_table_expression_column.name);
}

JoinClausesAndActions result;
result.join_expression_actions = join_expression_actions;
auto join_left_table_expressions = extractTableExpressionsSet(join_node.getLeftTableExpression());
auto join_right_table_expressions = extractTableExpressionsSet(join_node.getRightTableExpression());

const auto & function_name = join_expressions_actions_root_node->function->getName();
JoinClausesAndActions result;
//result.join_expression_actions = join_expression_actions;

const auto & function_name = function_node->getFunction()->getName();
if (function_name == "or")
{
for (const auto & child : join_expressions_actions_root_node->children)
for (const auto & child : function_node->getArguments())
{
result.join_clauses.emplace_back();

buildJoinClause(join_expression_actions,
join_expression_dag_input_nodes,
buildJoinClause(//join_expression_actions,
//join_expression_dag_input_nodes,
left_join_actions,
right_join_actions,
planner_context,
child,
join_left_actions_names_set,
join_right_actions_names_set,
join_left_table_expressions,
join_right_table_expressions,
join_node,
result.join_clauses.back());
}

@ -383,11 +495,15 @@ JoinClausesAndActions buildJoinClausesAndActions(const ColumnsWithTypeAndName &
{
result.join_clauses.emplace_back();

buildJoinClause(join_expression_actions,
join_expression_dag_input_nodes,
join_expressions_actions_root_node,
join_left_actions_names_set,
join_right_actions_names_set,
buildJoinClause(
left_join_actions,
right_join_actions,
planner_context,
//join_expression_actions,
//join_expression_dag_input_nodes,
join_expression, //join_expressions_actions_root_node,
join_left_table_expressions,
join_right_table_expressions,
join_node,
result.join_clauses.back());
}

@ -412,12 +528,12 @@ JoinClausesAndActions buildJoinClausesAndActions(const ColumnsWithTypeAndName &
const ActionsDAG::Node * dag_filter_condition_node = nullptr;

if (left_filter_condition_nodes.size() > 1)
dag_filter_condition_node = &join_expression_actions->addFunction(and_function, left_filter_condition_nodes, {});
dag_filter_condition_node = &left_join_actions->addFunction(and_function, left_filter_condition_nodes, {});
else
dag_filter_condition_node = left_filter_condition_nodes[0];

join_clause.getLeftFilterConditionNodes() = {dag_filter_condition_node};
join_expression_actions->addOrReplaceInOutputs(*dag_filter_condition_node);
left_join_actions->addOrReplaceInOutputs(*dag_filter_condition_node);

add_necessary_name_if_needed(JoinTableSide::Left, dag_filter_condition_node->result_name);
}

@ -428,12 +544,12 @@ JoinClausesAndActions buildJoinClausesAndActions(const ColumnsWithTypeAndName &
const ActionsDAG::Node * dag_filter_condition_node = nullptr;

if (right_filter_condition_nodes.size() > 1)
dag_filter_condition_node = &join_expression_actions->addFunction(and_function, right_filter_condition_nodes, {});
dag_filter_condition_node = &right_join_actions->addFunction(and_function, right_filter_condition_nodes, {});
else
dag_filter_condition_node = right_filter_condition_nodes[0];

join_clause.getRightFilterConditionNodes() = {dag_filter_condition_node};
join_expression_actions->addOrReplaceInOutputs(*dag_filter_condition_node);
right_join_actions->addOrReplaceInOutputs(*dag_filter_condition_node);

add_necessary_name_if_needed(JoinTableSide::Right, dag_filter_condition_node->result_name);
}

@ -470,10 +586,10 @@ JoinClausesAndActions buildJoinClausesAndActions(const ColumnsWithTypeAndName &
}

if (!left_key_node->result_type->equals(*common_type))
left_key_node = &join_expression_actions->addCast(*left_key_node, common_type, {});
left_key_node = &left_join_actions->addCast(*left_key_node, common_type, {});

if (!right_key_node->result_type->equals(*common_type))
right_key_node = &join_expression_actions->addCast(*right_key_node, common_type, {});
right_key_node = &right_join_actions->addCast(*right_key_node, common_type, {});
}

if (join_clause.isNullsafeCompareKey(i) && left_key_node->result_type->isNullable() && right_key_node->result_type->isNullable())

@ -490,22 +606,29 @@ JoinClausesAndActions buildJoinClausesAndActions(const ColumnsWithTypeAndName &
* SELECT * FROM t1 JOIN t2 ON tuple(t1.a) == tuple(t2.b)
*/
auto wrap_nullsafe_function = FunctionFactory::instance().get("tuple", planner_context->getQueryContext());
left_key_node = &join_expression_actions->addFunction(wrap_nullsafe_function, {left_key_node}, {});
right_key_node = &join_expression_actions->addFunction(wrap_nullsafe_function, {right_key_node}, {});
left_key_node = &left_join_actions->addFunction(wrap_nullsafe_function, {left_key_node}, {});
right_key_node = &right_join_actions->addFunction(wrap_nullsafe_function, {right_key_node}, {});
}

join_expression_actions->addOrReplaceInOutputs(*left_key_node);
join_expression_actions->addOrReplaceInOutputs(*right_key_node);
left_join_actions->addOrReplaceInOutputs(*left_key_node);
right_join_actions->addOrReplaceInOutputs(*right_key_node);

add_necessary_name_if_needed(JoinTableSide::Left, left_key_node->result_name);
add_necessary_name_if_needed(JoinTableSide::Right, right_key_node->result_name);
}
}

result.left_join_expressions_actions = join_expression_actions->clone();
result.left_join_expressions_actions = left_join_actions->clone();
result.left_join_tmp_expression_actions = std::move(left_join_actions);
||||
result.left_join_tmp_expression_actions = std::move(left_join_actions);
|
||||
result.left_join_expressions_actions->removeUnusedActions(join_left_actions_names);
|
||||
|
||||
result.right_join_expressions_actions = join_expression_actions->clone();
|
||||
// for (const auto & name : join_right_actions_names)
|
||||
// std::cerr << ".. " << name << std::endl;
|
||||
|
||||
// std::cerr << right_join_actions->dumpDAG() << std::endl;
|
||||
|
||||
result.right_join_expressions_actions = right_join_actions->clone();
|
||||
result.right_join_tmp_expression_actions = std::move(right_join_actions);
|
||||
result.right_join_expressions_actions->removeUnusedActions(join_right_actions_names);
|
||||
|
||||
return result;
|
||||
@ -525,10 +648,10 @@ JoinClausesAndActions buildJoinClausesAndActions(
|
||||
"JOIN {} join does not have ON section",
|
||||
join_node_typed.formatASTForErrorMessage());
|
||||
|
||||
auto join_expression_input_columns = left_table_expression_columns;
|
||||
join_expression_input_columns.insert(join_expression_input_columns.end(), right_table_expression_columns.begin(), right_table_expression_columns.end());
|
||||
// auto join_expression_input_columns = left_table_expression_columns;
|
||||
// join_expression_input_columns.insert(join_expression_input_columns.end(), right_table_expression_columns.begin(), right_table_expression_columns.end());
|
||||
|
||||
return buildJoinClausesAndActions(join_expression_input_columns, left_table_expression_columns, right_table_expression_columns, join_node_typed, planner_context);
|
||||
return buildJoinClausesAndActions(/*join_expression_input_columns,*/ left_table_expression_columns, right_table_expression_columns, join_node_typed, planner_context);
|
||||
}
|
||||
|
||||
std::optional<bool> tryExtractConstantFromJoinNode(const QueryTreeNodePtr & join_node)
|
||||
|
@ -165,7 +165,8 @@ struct JoinClausesAndActions
|
||||
/// Join clauses. Actions dag nodes point into join_expression_actions.
|
||||
JoinClauses join_clauses;
|
||||
/// Whole JOIN ON section expressions
|
||||
ActionsDAGPtr join_expression_actions;
|
||||
ActionsDAGPtr left_join_tmp_expression_actions;
|
||||
ActionsDAGPtr right_join_tmp_expression_actions;
|
||||
/// Left join expressions actions
|
||||
ActionsDAGPtr left_join_expressions_actions;
|
||||
/// Right join expressions actions
|
||||
|
@ -357,6 +357,7 @@ QueryTreeNodePtr mergeConditionNodes(const QueryTreeNodes & condition_nodes, con
|
||||
|
||||
QueryTreeNodePtr replaceTableExpressionsWithDummyTables(const QueryTreeNodePtr & query_node,
|
||||
const ContextPtr & context,
|
||||
//PlannerContext & planner_context,
|
||||
ResultReplacementMap * result_replacement_map)
|
||||
{
|
||||
auto & query_node_typed = query_node->as<QueryNode &>();
|
||||
@ -406,6 +407,13 @@ QueryTreeNodePtr replaceTableExpressionsWithDummyTables(const QueryTreeNodePtr &
|
||||
if (result_replacement_map)
|
||||
result_replacement_map->emplace(table_expression, dummy_table_node);
|
||||
|
||||
dummy_table_node->setAlias(table_expression->getAlias());
|
||||
|
||||
// auto & src_table_expression_data = planner_context.getOrCreateTableExpressionData(table_expression);
|
||||
// auto & dst_table_expression_data = planner_context.getOrCreateTableExpressionData(dummy_table_node);
|
||||
|
||||
// dst_table_expression_data = src_table_expression_data;
|
||||
|
||||
replacement_map.emplace(table_expression.get(), std::move(dummy_table_node));
|
||||
}
|
||||
|
||||
|
@ -436,7 +436,6 @@ AggregateProjectionCandidates getAggregateProjectionCandidates(
|
||||
AggregateProjectionCandidates candidates;
|
||||
|
||||
const auto & parts = reading.getParts();
|
||||
const auto & query_info = reading.getQueryInfo();
|
||||
|
||||
const auto metadata = reading.getStorageMetadata();
|
||||
ContextPtr context = reading.getContext();
|
||||
@ -481,8 +480,7 @@ AggregateProjectionCandidates getAggregateProjectionCandidates(
|
||||
auto block = reading.getMergeTreeData().getMinMaxCountProjectionBlock(
|
||||
metadata,
|
||||
candidate.dag->getRequiredColumnsNames(),
|
||||
dag.filter_node != nullptr,
|
||||
query_info,
|
||||
(dag.filter_node ? dag.dag : nullptr),
|
||||
parts,
|
||||
max_added_blocks.get(),
|
||||
context);
|
||||
|
@ -23,6 +23,8 @@
|
||||
#include <Processors/Transforms/ReverseTransform.h>
|
||||
#include <QueryPipeline/QueryPipelineBuilder.h>
|
||||
#include <Storages/MergeTree/MergeTreeDataSelectExecutor.h>
|
||||
#include <Storages/MergeTree/MergeTreeIndexAnnoy.h>
|
||||
#include <Storages/MergeTree/MergeTreeIndexUSearch.h>
|
||||
#include <Storages/MergeTree/MergeTreeReadPool.h>
|
||||
#include <Storages/MergeTree/MergeTreePrefetchedReadPool.h>
|
||||
#include <Storages/MergeTree/MergeTreeReadPoolInOrder.h>
|
||||
@ -1337,26 +1339,12 @@ static void buildIndexes(
|
||||
const Names & primary_key_column_names = primary_key.column_names;
|
||||
|
||||
const auto & settings = context->getSettingsRef();
|
||||
if (settings.query_plan_optimize_primary_key)
|
||||
{
|
||||
NameSet array_join_name_set;
|
||||
if (query_info.syntax_analyzer_result)
|
||||
array_join_name_set = query_info.syntax_analyzer_result->getArrayJoinSourceNameSet();
|
||||
|
||||
indexes.emplace(ReadFromMergeTree::Indexes{{
|
||||
filter_actions_dag,
|
||||
context,
|
||||
primary_key_column_names,
|
||||
primary_key.expression}, {}, {}, {}, {}, false, {}});
|
||||
}
|
||||
else
|
||||
{
|
||||
indexes.emplace(ReadFromMergeTree::Indexes{{
|
||||
query_info,
|
||||
context,
|
||||
primary_key_column_names,
|
||||
primary_key.expression}, {}, {}, {}, {}, false, {}});
|
||||
}
|
||||
indexes.emplace(ReadFromMergeTree::Indexes{{
|
||||
filter_actions_dag,
|
||||
context,
|
||||
primary_key_column_names,
|
||||
primary_key.expression}, {}, {}, {}, {}, false, {}});
|
||||
|
||||
if (metadata_snapshot->hasPartitionKey())
|
||||
{
|
||||
@ -1369,11 +1357,7 @@ static void buildIndexes(
|
||||
}
|
||||
|
||||
/// TODO Support row_policy_filter and additional_filters
|
||||
if (settings.allow_experimental_analyzer)
|
||||
indexes->part_values = MergeTreeDataSelectExecutor::filterPartsByVirtualColumns(data, parts, filter_actions_dag, context);
|
||||
else
|
||||
indexes->part_values = MergeTreeDataSelectExecutor::filterPartsByVirtualColumns(data, parts, query_info.query, context);
|
||||
|
||||
indexes->part_values = MergeTreeDataSelectExecutor::filterPartsByVirtualColumns(data, parts, filter_actions_dag, context);
|
||||
MergeTreeDataSelectExecutor::buildKeyConditionFromPartOffset(indexes->part_offset_condition, filter_actions_dag, context);
|
||||
|
||||
indexes->use_skip_indexes = settings.use_skip_indexes;
|
||||
@ -1385,14 +1369,18 @@ static void buildIndexes(
|
||||
if (!indexes->use_skip_indexes)
|
||||
return;
|
||||
|
||||
const SelectQueryInfo * info = &query_info;
|
||||
std::optional<SelectQueryInfo> info_copy;
|
||||
if (settings.allow_experimental_analyzer)
|
||||
auto get_query_info = [&]() -> const SelectQueryInfo &
|
||||
{
|
||||
info_copy.emplace(query_info);
|
||||
info_copy->filter_actions_dag = filter_actions_dag;
|
||||
info = &*info_copy;
|
||||
}
|
||||
if (settings.allow_experimental_analyzer)
|
||||
{
|
||||
info_copy.emplace(query_info);
|
||||
info_copy->filter_actions_dag = filter_actions_dag;
|
||||
return *info_copy;
|
||||
}
|
||||
|
||||
return query_info;
|
||||
};
|
||||
|
||||
std::unordered_set<std::string> ignored_index_names;
|
||||
|
||||
@ -1433,14 +1421,30 @@ static void buildIndexes(
|
||||
if (inserted)
|
||||
{
|
||||
skip_indexes.merged_indices.emplace_back();
|
||||
skip_indexes.merged_indices.back().condition = index_helper->createIndexMergedCondition(*info, metadata_snapshot);
|
||||
skip_indexes.merged_indices.back().condition = index_helper->createIndexMergedCondition(get_query_info(), metadata_snapshot);
|
||||
}
|
||||
|
||||
skip_indexes.merged_indices[it->second].addIndex(index_helper);
|
||||
}
|
||||
else
|
||||
{
|
||||
auto condition = index_helper->createIndexCondition(*info, context);
|
||||
MergeTreeIndexConditionPtr condition;
|
||||
if (index_helper->isVectorSearch())
|
||||
{
|
||||
#ifdef ENABLE_ANNOY
|
||||
if (const auto * annoy = typeid_cast<const MergeTreeIndexAnnoy *>(index_helper.get()))
|
||||
condition = annoy->createIndexCondition(get_query_info(), context);
|
||||
#endif
|
||||
#ifdef ENABLE_USEARCH
|
||||
if (const auto * usearch = typeid_cast<const MergeTreeIndexUSearch *>(index_helper.get()))
|
||||
condition = usearch->createIndexCondition(get_query_info(), context);
|
||||
#endif
|
||||
if (!condition)
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Unknown vector search index {}", index_helper->index.name);
|
||||
}
|
||||
else
|
||||
condition = index_helper->createIndexCondition(filter_actions_dag, context);
|
||||
|
||||
if (!condition->alwaysUnknownOrTrue())
|
||||
skip_indexes.useful_indices.emplace_back(index_helper, condition);
|
||||
}
|
||||
@ -1473,34 +1477,15 @@ MergeTreeDataSelectAnalysisResultPtr ReadFromMergeTree::selectRangesToRead(
|
||||
Poco::Logger * log,
|
||||
std::optional<Indexes> & indexes)
|
||||
{
|
||||
const auto & settings = context->getSettingsRef();
|
||||
if (settings.allow_experimental_analyzer || settings.query_plan_optimize_primary_key)
|
||||
{
|
||||
auto updated_query_info_with_filter_dag = query_info;
|
||||
updated_query_info_with_filter_dag.filter_actions_dag = buildFilterDAG(context, prewhere_info, added_filter_nodes, query_info);
|
||||
|
||||
return selectRangesToReadImpl(
|
||||
std::move(parts),
|
||||
std::move(alter_conversions),
|
||||
metadata_snapshot_base,
|
||||
metadata_snapshot,
|
||||
updated_query_info_with_filter_dag,
|
||||
context,
|
||||
num_streams,
|
||||
max_block_numbers_to_read,
|
||||
data,
|
||||
real_column_names,
|
||||
sample_factor_column_queried,
|
||||
log,
|
||||
indexes);
|
||||
}
|
||||
auto updated_query_info_with_filter_dag = query_info;
|
||||
updated_query_info_with_filter_dag.filter_actions_dag = buildFilterDAG(context, prewhere_info, added_filter_nodes, query_info);
|
||||
|
||||
return selectRangesToReadImpl(
|
||||
std::move(parts),
|
||||
std::move(alter_conversions),
|
||||
metadata_snapshot_base,
|
||||
metadata_snapshot,
|
||||
query_info,
|
||||
updated_query_info_with_filter_dag,
|
||||
context,
|
||||
num_streams,
|
||||
max_block_numbers_to_read,
|
||||
|
@ -30,19 +30,9 @@ void ReadFromStorageStep::applyFilters()
|
||||
if (!context)
|
||||
return;
|
||||
|
||||
std::shared_ptr<const KeyCondition> key_condition;
|
||||
if (!context->getSettingsRef().allow_experimental_analyzer)
|
||||
{
|
||||
for (const auto & processor : pipe.getProcessors())
|
||||
if (auto * source = dynamic_cast<SourceWithKeyCondition *>(processor.get()))
|
||||
source->setKeyCondition(query_info, context);
|
||||
}
|
||||
else
|
||||
{
|
||||
for (const auto & processor : pipe.getProcessors())
|
||||
if (auto * source = dynamic_cast<SourceWithKeyCondition *>(processor.get()))
|
||||
source->setKeyCondition(filter_nodes.nodes, context);
|
||||
}
|
||||
for (const auto & processor : pipe.getProcessors())
|
||||
if (auto * source = dynamic_cast<SourceWithKeyCondition *>(processor.get()))
|
||||
source->setKeyCondition(filter_nodes.nodes, context);
|
||||
}
|
||||
|
||||
}
|
||||
|
@ -16,15 +16,6 @@ protected:
|
||||
/// Represents pushed down filters in source
|
||||
std::shared_ptr<const KeyCondition> key_condition;
|
||||
|
||||
void setKeyConditionImpl(const SelectQueryInfo & query_info, ContextPtr context, const Block & keys)
|
||||
{
|
||||
key_condition = std::make_shared<const KeyCondition>(
|
||||
query_info,
|
||||
context,
|
||||
keys.getNames(),
|
||||
std::make_shared<ExpressionActions>(std::make_shared<ActionsDAG>(keys.getColumnsWithTypeAndName())));
|
||||
}
|
||||
|
||||
void setKeyConditionImpl(const ActionsDAG::NodeRawConstPtrs & nodes, ContextPtr context, const Block & keys)
|
||||
{
|
||||
std::unordered_map<std::string, DB::ColumnWithTypeAndName> node_name_to_input_column;
|
||||
@ -46,10 +37,7 @@ public:
|
||||
/// Set key_condition directly. It is used for filter push down in source.
|
||||
virtual void setKeyCondition(const std::shared_ptr<const KeyCondition> & key_condition_) { key_condition = key_condition_; }
|
||||
|
||||
/// Set key_condition created by query_info and context. It is used for filter push down when allow_experimental_analyzer is false.
|
||||
virtual void setKeyCondition(const SelectQueryInfo & /*query_info*/, ContextPtr /*context*/) { }
|
||||
|
||||
/// Set key_condition created by nodes and context. It is used for filter push down when allow_experimental_analyzer is true.
|
||||
/// Set key_condition created by nodes and context.
|
||||
virtual void setKeyCondition(const ActionsDAG::NodeRawConstPtrs & /*nodes*/, ContextPtr /*context*/) { }
|
||||
};
|
||||
}
|
||||
|
@ -29,10 +29,14 @@
|
||||
#include <Parsers/ASTLiteral.h>
|
||||
#include <QueryPipeline/Pipe.h>
|
||||
#include <QueryPipeline/QueryPipeline.h>
|
||||
#include <QueryPipeline/QueryPipelineBuilder.h>
|
||||
#include <Processors/ISource.h>
|
||||
#include <Processors/Formats/IInputFormat.h>
|
||||
#include <Processors/Executors/PullingPipelineExecutor.h>
|
||||
#include <Processors/Transforms/AddingDefaultsTransform.h>
|
||||
#include <Processors/QueryPlan/QueryPlan.h>
|
||||
#include <Processors/QueryPlan/SourceStepWithFilter.h>
|
||||
#include <Processors/Sources/NullSource.h>
|
||||
#include <Storages/AlterCommands.h>
|
||||
#include <Storages/HDFS/ReadBufferFromHDFS.h>
|
||||
#include <Storages/HDFS/AsynchronousReadBufferFromHDFS.h>
|
||||
@ -123,7 +127,6 @@ public:
|
||||
String compression_method_,
|
||||
Block sample_block_,
|
||||
ContextPtr context_,
|
||||
const SelectQueryInfo & query_info_,
|
||||
UInt64 max_block_size_,
|
||||
const StorageHive & storage_,
|
||||
const Names & text_input_field_names_ = {})
|
||||
@ -140,7 +143,6 @@ public:
|
||||
, text_input_field_names(text_input_field_names_)
|
||||
, format_settings(getFormatSettings(getContext()))
|
||||
, read_settings(getContext()->getReadSettings())
|
||||
, query_info(query_info_)
|
||||
{
|
||||
to_read_block = sample_block;
|
||||
|
||||
@ -395,7 +397,6 @@ private:
|
||||
const Names & text_input_field_names;
|
||||
FormatSettings format_settings;
|
||||
ReadSettings read_settings;
|
||||
SelectQueryInfo query_info;
|
||||
|
||||
HiveFilePtr current_file;
|
||||
String current_path;
|
||||
@ -574,7 +575,7 @@ static HiveFilePtr createHiveFile(
|
||||
|
||||
HiveFiles StorageHive::collectHiveFilesFromPartition(
|
||||
const Apache::Hadoop::Hive::Partition & partition,
|
||||
const SelectQueryInfo & query_info,
|
||||
const ActionsDAGPtr & filter_actions_dag,
|
||||
const HiveTableMetadataPtr & hive_table_metadata,
|
||||
const HDFSFSPtr & fs,
|
||||
const ContextPtr & context_,
|
||||
@ -638,7 +639,7 @@ HiveFiles StorageHive::collectHiveFilesFromPartition(
|
||||
for (size_t i = 0; i < partition_names.size(); ++i)
|
||||
ranges.emplace_back(fields[i]);
|
||||
|
||||
const KeyCondition partition_key_condition(query_info, getContext(), partition_names, partition_minmax_idx_expr);
|
||||
const KeyCondition partition_key_condition(filter_actions_dag, getContext(), partition_names, partition_minmax_idx_expr);
|
||||
if (!partition_key_condition.checkInHyperrectangle(ranges, partition_types).can_be_true)
|
||||
return {};
|
||||
}
|
||||
@ -648,7 +649,7 @@ HiveFiles StorageHive::collectHiveFilesFromPartition(
|
||||
hive_files.reserve(file_infos.size());
|
||||
for (const auto & file_info : file_infos)
|
||||
{
|
||||
auto hive_file = getHiveFileIfNeeded(file_info, fields, query_info, hive_table_metadata, context_, prune_level);
|
||||
auto hive_file = getHiveFileIfNeeded(file_info, fields, filter_actions_dag, hive_table_metadata, context_, prune_level);
|
||||
if (hive_file)
|
||||
{
|
||||
LOG_TRACE(
|
||||
@ -672,7 +673,7 @@ StorageHive::listDirectory(const String & path, const HiveTableMetadataPtr & hiv
|
||||
HiveFilePtr StorageHive::getHiveFileIfNeeded(
|
||||
const FileInfo & file_info,
|
||||
const FieldVector & fields,
|
||||
const SelectQueryInfo & query_info,
|
||||
const ActionsDAGPtr & filter_actions_dag,
|
||||
const HiveTableMetadataPtr & hive_table_metadata,
|
||||
const ContextPtr & context_,
|
||||
PruneLevel prune_level) const
|
||||
@ -706,7 +707,7 @@ HiveFilePtr StorageHive::getHiveFileIfNeeded(
|
||||
|
||||
if (prune_level >= PruneLevel::File)
|
||||
{
|
||||
const KeyCondition hivefile_key_condition(query_info, getContext(), hivefile_name_types.getNames(), hivefile_minmax_idx_expr);
|
||||
const KeyCondition hivefile_key_condition(filter_actions_dag, getContext(), hivefile_name_types.getNames(), hivefile_minmax_idx_expr);
|
||||
if (hive_file->useFileMinMaxIndex())
|
||||
{
|
||||
/// Load file level minmax index and apply
|
||||
@ -758,10 +759,77 @@ bool StorageHive::supportsSubsetOfColumns() const
|
||||
return format_name == "Parquet" || format_name == "ORC";
|
||||
}
|
||||
|
||||
Pipe StorageHive::read(
|
||||
class ReadFromHive : public SourceStepWithFilter
|
||||
{
|
||||
public:
|
||||
std::string getName() const override { return "ReadFromHive"; }
|
||||
void initializePipeline(QueryPipelineBuilder & pipeline, const BuildQueryPipelineSettings &) override;
|
||||
void applyFilters() override;
|
||||
|
||||
ReadFromHive(
|
||||
Block header,
|
||||
std::shared_ptr<StorageHive> storage_,
|
||||
std::shared_ptr<StorageHiveSource::SourcesInfo> sources_info_,
|
||||
HDFSBuilderWrapper builder_,
|
||||
HDFSFSPtr fs_,
|
||||
HiveMetastoreClient::HiveTableMetadataPtr hive_table_metadata_,
|
||||
Block sample_block_,
|
||||
Poco::Logger * log_,
|
||||
ContextPtr context_,
|
||||
size_t max_block_size_,
|
||||
size_t num_streams_)
|
||||
: SourceStepWithFilter(DataStream{.header = std::move(header)})
|
||||
, storage(std::move(storage_))
|
||||
, sources_info(std::move(sources_info_))
|
||||
, builder(std::move(builder_))
|
||||
, fs(std::move(fs_))
|
||||
, hive_table_metadata(std::move(hive_table_metadata_))
|
||||
, sample_block(std::move(sample_block_))
|
||||
, log(log_)
|
||||
, context(std::move(context_))
|
||||
, max_block_size(max_block_size_)
|
||||
, num_streams(num_streams_)
|
||||
{
|
||||
}
|
||||
|
||||
private:
|
||||
std::shared_ptr<StorageHive> storage;
|
||||
std::shared_ptr<StorageHiveSource::SourcesInfo> sources_info;
|
||||
HDFSBuilderWrapper builder;
|
||||
HDFSFSPtr fs;
|
||||
HiveMetastoreClient::HiveTableMetadataPtr hive_table_metadata;
|
||||
Block sample_block;
|
||||
Poco::Logger * log;
|
||||
|
||||
ContextPtr context;
|
||||
size_t max_block_size;
|
||||
size_t num_streams;
|
||||
|
||||
std::optional<HiveFiles> hive_files;
|
||||
|
||||
void createFiles(const ActionsDAGPtr & filter_actions_dag);
|
||||
};
|
||||
|
||||
void ReadFromHive::applyFilters()
|
||||
{
|
||||
auto filter_actions_dag = ActionsDAG::buildFilterActionsDAG(filter_nodes.nodes, {}, context);
|
||||
createFiles(filter_actions_dag);
|
||||
}
|
||||
|
||||
void ReadFromHive::createFiles(const ActionsDAGPtr & filter_actions_dag)
|
||||
{
|
||||
if (hive_files)
|
||||
return;
|
||||
|
||||
hive_files = storage->collectHiveFiles(num_streams, filter_actions_dag, hive_table_metadata, fs, context);
|
||||
LOG_INFO(log, "Collect {} hive files to read", hive_files->size());
|
||||
}
|
||||
|
||||
void StorageHive::read(
|
||||
QueryPlan & query_plan,
|
||||
const Names & column_names,
|
||||
const StorageSnapshotPtr & storage_snapshot,
|
||||
SelectQueryInfo & query_info,
|
||||
SelectQueryInfo &,
|
||||
ContextPtr context_,
|
||||
QueryProcessingStage::Enum /* processed_stage */,
|
||||
size_t max_block_size,
|
||||
@ -774,15 +842,7 @@ Pipe StorageHive::read(
|
||||
auto hive_metastore_client = HiveMetastoreClientFactory::instance().getOrCreate(hive_metastore_url);
|
||||
auto hive_table_metadata = hive_metastore_client->getTableMetadata(hive_database, hive_table);
|
||||
|
||||
/// Collect Hive files to read
|
||||
HiveFiles hive_files = collectHiveFiles(num_streams, query_info, hive_table_metadata, fs, context_);
|
||||
LOG_INFO(log, "Collect {} hive files to read", hive_files.size());
|
||||
|
||||
if (hive_files.empty())
|
||||
return {};
|
||||
|
||||
auto sources_info = std::make_shared<StorageHiveSource::SourcesInfo>();
|
||||
sources_info->hive_files = std::move(hive_files);
|
||||
sources_info->database_name = hive_database;
|
||||
sources_info->table_name = hive_table;
|
||||
sources_info->hive_metastore_client = hive_metastore_client;
|
||||
@ -822,6 +882,36 @@ Pipe StorageHive::read(
|
||||
sources_info->need_file_column = true;
|
||||
}
|
||||
|
||||
auto this_ptr = std::static_pointer_cast<StorageHive>(shared_from_this());
|
||||
|
||||
auto reading = std::make_unique<ReadFromHive>(
|
||||
StorageHiveSource::getHeader(sample_block, sources_info),
|
||||
std::move(this_ptr),
|
||||
std::move(sources_info),
|
||||
std::move(builder),
|
||||
std::move(fs),
|
||||
std::move(hive_table_metadata),
|
||||
std::move(sample_block),
|
||||
log,
|
||||
context_,
|
||||
max_block_size,
|
||||
num_streams);
|
||||
|
||||
query_plan.addStep(std::move(reading));
|
||||
}
|
||||
|
||||
void ReadFromHive::initializePipeline(QueryPipelineBuilder & pipeline, const BuildQueryPipelineSettings &)
|
||||
{
|
||||
createFiles(nullptr);
|
||||
|
||||
if (hive_files->empty())
|
||||
{
|
||||
pipeline.init(Pipe(std::make_shared<NullSource>(getOutputStream().header)));
|
||||
return;
|
||||
}
|
||||
|
||||
sources_info->hive_files = std::move(*hive_files);
|
||||
|
||||
if (num_streams > sources_info->hive_files.size())
|
||||
num_streams = sources_info->hive_files.size();
|
||||
|
||||
@ -830,22 +920,29 @@ Pipe StorageHive::read(
|
||||
{
|
||||
pipes.emplace_back(std::make_shared<StorageHiveSource>(
|
||||
sources_info,
|
||||
hdfs_namenode_url,
|
||||
format_name,
|
||||
compression_method,
|
||||
storage->hdfs_namenode_url,
|
||||
storage->format_name,
|
||||
storage->compression_method,
|
||||
sample_block,
|
||||
context_,
|
||||
query_info,
|
||||
context,
|
||||
max_block_size,
|
||||
*this,
|
||||
text_input_field_names));
|
||||
*storage,
|
||||
storage->text_input_field_names));
|
||||
}
|
||||
return Pipe::unitePipes(std::move(pipes));
|
||||
|
||||
auto pipe = Pipe::unitePipes(std::move(pipes));
|
||||
if (pipe.empty())
|
||||
pipe = Pipe(std::make_shared<NullSource>(getOutputStream().header));
|
||||
|
||||
for (const auto & processor : pipe.getProcessors())
|
||||
processors.emplace_back(processor);
|
||||
|
||||
pipeline.init(std::move(pipe));
|
||||
}
|
||||
|
||||
HiveFiles StorageHive::collectHiveFiles(
|
||||
size_t max_threads,
|
||||
const SelectQueryInfo & query_info,
|
||||
const ActionsDAGPtr & filter_actions_dag,
|
||||
const HiveTableMetadataPtr & hive_table_metadata,
|
||||
const HDFSFSPtr & fs,
|
||||
const ContextPtr & context_,
|
||||
@ -871,7 +968,7 @@ HiveFiles StorageHive::collectHiveFiles(
|
||||
[&]()
|
||||
{
|
||||
auto hive_files_in_partition
|
||||
= collectHiveFilesFromPartition(partition, query_info, hive_table_metadata, fs, context_, prune_level);
|
||||
= collectHiveFilesFromPartition(partition, filter_actions_dag, hive_table_metadata, fs, context_, prune_level);
|
||||
if (!hive_files_in_partition.empty())
|
||||
{
|
||||
std::lock_guard lock(hive_files_mutex);
|
||||
@ -897,7 +994,7 @@ HiveFiles StorageHive::collectHiveFiles(
|
||||
pool.scheduleOrThrowOnError(
|
||||
[&]()
|
||||
{
|
||||
auto hive_file = getHiveFileIfNeeded(file_info, {}, query_info, hive_table_metadata, context_, prune_level);
|
||||
auto hive_file = getHiveFileIfNeeded(file_info, {}, filter_actions_dag, hive_table_metadata, context_, prune_level);
|
||||
if (hive_file)
|
||||
{
|
||||
std::lock_guard lock(hive_files_mutex);
|
||||
@ -925,13 +1022,12 @@ NamesAndTypesList StorageHive::getVirtuals() const
|
||||
std::optional<UInt64> StorageHive::totalRows(const Settings & settings) const
|
||||
{
|
||||
/// query_info is not used when prune_level == PruneLevel::None
|
||||
SelectQueryInfo query_info;
|
||||
return totalRowsImpl(settings, query_info, getContext(), PruneLevel::None);
|
||||
return totalRowsImpl(settings, nullptr, getContext(), PruneLevel::None);
|
||||
}
|
||||
|
||||
std::optional<UInt64> StorageHive::totalRowsByPartitionPredicate(const SelectQueryInfo & query_info, ContextPtr context_) const
|
||||
std::optional<UInt64> StorageHive::totalRowsByPartitionPredicate(const ActionsDAGPtr & filter_actions_dag, ContextPtr context_) const
|
||||
{
|
||||
return totalRowsImpl(context_->getSettingsRef(), query_info, context_, PruneLevel::Partition);
|
||||
return totalRowsImpl(context_->getSettingsRef(), filter_actions_dag, context_, PruneLevel::Partition);
|
||||
}
|
||||
|
||||
void StorageHive::checkAlterIsPossible(const AlterCommands & commands, ContextPtr /*local_context*/) const
|
||||
@ -946,7 +1042,7 @@ void StorageHive::checkAlterIsPossible(const AlterCommands & commands, ContextPt
|
||||
}
|
||||
|
||||
std::optional<UInt64>
|
||||
StorageHive::totalRowsImpl(const Settings & settings, const SelectQueryInfo & query_info, ContextPtr context_, PruneLevel prune_level) const
|
||||
StorageHive::totalRowsImpl(const Settings & settings, const ActionsDAGPtr & filter_actions_dag, ContextPtr context_, PruneLevel prune_level) const
|
||||
{
|
||||
/// Row-based format like Text doesn't support totalRowsByPartitionPredicate
|
||||
if (!supportsSubsetOfColumns())
|
||||
@ -958,7 +1054,7 @@ StorageHive::totalRowsImpl(const Settings & settings, const SelectQueryInfo & qu
|
||||
HDFSFSPtr fs = createHDFSFS(builder.get());
|
||||
HiveFiles hive_files = collectHiveFiles(
|
||||
settings.max_threads,
|
||||
query_info,
|
||||
filter_actions_dag,
|
||||
hive_table_metadata,
|
||||
fs,
|
||||
context_,
|
||||
|
@ -42,10 +42,11 @@ public:
|
||||
|
||||
bool supportsSubcolumns() const override { return true; }
|
||||
|
||||
Pipe read(
|
||||
void read(
|
||||
QueryPlan & query_plan,
|
||||
const Names & column_names,
|
||||
const StorageSnapshotPtr & storage_snapshot,
|
||||
SelectQueryInfo & query_info,
|
||||
SelectQueryInfo &,
|
||||
ContextPtr context,
|
||||
QueryProcessingStage::Enum processed_stage,
|
||||
size_t max_block_size,
|
||||
@ -58,9 +59,12 @@ public:
|
||||
bool supportsSubsetOfColumns() const;
|
||||
|
||||
std::optional<UInt64> totalRows(const Settings & settings) const override;
|
||||
std::optional<UInt64> totalRowsByPartitionPredicate(const SelectQueryInfo & query_info, ContextPtr context_) const override;
|
||||
std::optional<UInt64> totalRowsByPartitionPredicate(const ActionsDAGPtr & filter_actions_dag, ContextPtr context_) const override;
|
||||
void checkAlterIsPossible(const AlterCommands & commands, ContextPtr local_context) const override;
|
||||
|
||||
protected:
|
||||
friend class ReadFromHive;
|
||||
|
||||
private:
|
||||
using FileFormat = IHiveFile::FileFormat;
|
||||
using FileInfo = HiveMetastoreClient::FileInfo;
|
||||
@ -88,7 +92,7 @@ private:
|
||||
|
||||
HiveFiles collectHiveFiles(
|
||||
size_t max_threads,
|
||||
const SelectQueryInfo & query_info,
|
||||
const ActionsDAGPtr & filter_actions_dag,
|
||||
const HiveTableMetadataPtr & hive_table_metadata,
|
||||
const HDFSFSPtr & fs,
|
||||
const ContextPtr & context_,
|
||||
@ -96,7 +100,7 @@ private:
|
||||
|
||||
HiveFiles collectHiveFilesFromPartition(
|
||||
const Apache::Hadoop::Hive::Partition & partition,
|
||||
const SelectQueryInfo & query_info,
|
||||
const ActionsDAGPtr & filter_actions_dag,
|
||||
const HiveTableMetadataPtr & hive_table_metadata,
|
||||
const HDFSFSPtr & fs,
|
||||
const ContextPtr & context_,
|
||||
@ -105,7 +109,7 @@ private:
|
||||
HiveFilePtr getHiveFileIfNeeded(
|
||||
const FileInfo & file_info,
|
||||
const FieldVector & fields,
|
||||
const SelectQueryInfo & query_info,
|
||||
const ActionsDAGPtr & filter_actions_dag,
|
||||
const HiveTableMetadataPtr & hive_table_metadata,
|
||||
const ContextPtr & context_,
|
||||
PruneLevel prune_level = PruneLevel::Max) const;
|
||||
@ -113,7 +117,7 @@ private:
|
||||
void lazyInitialize();
|
||||
|
||||
std::optional<UInt64>
|
||||
totalRowsImpl(const Settings & settings, const SelectQueryInfo & query_info, ContextPtr context_, PruneLevel prune_level) const;
|
||||
totalRowsImpl(const Settings & settings, const ActionsDAGPtr & filter_actions_dag, ContextPtr context_, PruneLevel prune_level) const;
|
||||
|
||||
String hive_metastore_url;
|
||||
|
||||
|
@ -669,7 +669,7 @@ public:
|
||||
virtual std::optional<UInt64> totalRows(const Settings &) const { return {}; }
|
||||
|
||||
/// Same as above but also take partition predicate into account.
|
||||
virtual std::optional<UInt64> totalRowsByPartitionPredicate(const SelectQueryInfo &, ContextPtr) const { return {}; }
|
||||
virtual std::optional<UInt64> totalRowsByPartitionPredicate(const ActionsDAGPtr &, ContextPtr) const { return {}; }
|
||||
|
||||
/// If it is possible to quickly determine exact number of bytes for the table on storage:
|
||||
/// - memory (approximated, resident)
|
||||
|
@ -762,92 +762,6 @@ void KeyCondition::getAllSpaceFillingCurves()
|
||||
}
|
||||
}
|
||||
|
||||
KeyCondition::KeyCondition(
|
||||
const ASTPtr & query,
|
||||
const ASTs & additional_filter_asts,
|
||||
Block block_with_constants,
|
||||
PreparedSetsPtr prepared_sets,
|
||||
ContextPtr context,
|
||||
const Names & key_column_names,
|
||||
const ExpressionActionsPtr & key_expr_,
|
||||
NameSet array_joined_column_names_,
|
||||
bool single_point_,
|
||||
bool strict_)
|
||||
: key_expr(key_expr_)
|
||||
, key_subexpr_names(getAllSubexpressionNames(*key_expr))
|
||||
, array_joined_column_names(std::move(array_joined_column_names_))
|
||||
, single_point(single_point_)
|
||||
, strict(strict_)
|
||||
{
|
||||
size_t key_index = 0;
|
||||
for (const auto & name : key_column_names)
|
||||
{
|
||||
if (!key_columns.contains(name))
|
||||
{
|
||||
key_columns[name] = key_columns.size();
|
||||
key_indices.push_back(key_index);
|
||||
}
|
||||
++key_index;
|
||||
}
|
||||
|
||||
if (context->getSettingsRef().analyze_index_with_space_filling_curves)
|
||||
getAllSpaceFillingCurves();
|
||||
|
||||
ASTPtr filter_node;
|
||||
if (query)
|
||||
filter_node = buildFilterNode(query, additional_filter_asts);
|
||||
|
||||
if (!filter_node)
|
||||
{
|
||||
has_filter = false;
|
||||
rpn.emplace_back(RPNElement::FUNCTION_UNKNOWN);
|
||||
return;
|
||||
}
|
||||
|
||||
has_filter = true;
|
||||
|
||||
/** When non-strictly monotonic functions are employed in functional index (e.g. ORDER BY toStartOfHour(dateTime)),
|
||||
* the use of NOT operator in predicate will result in the indexing algorithm leave out some data.
|
||||
* This is caused by rewriting in KeyCondition::tryParseAtomFromAST of relational operators to less strict
|
||||
* when parsing the AST into internal RPN representation.
|
||||
* To overcome the problem, before parsing the AST we transform it to its semantically equivalent form where all NOT's
|
||||
* are pushed down and applied (when possible) to leaf nodes.
|
||||
*/
|
||||
auto inverted_filter_node = DB::cloneASTWithInversionPushDown(filter_node);
|
||||
|
||||
RPNBuilder<RPNElement> builder(
|
||||
inverted_filter_node,
|
||||
std::move(context),
|
||||
std::move(block_with_constants),
|
||||
std::move(prepared_sets),
|
||||
[&](const RPNBuilderTreeNode & node, RPNElement & out) { return extractAtomFromTree(node, out); });
|
||||
|
||||
rpn = std::move(builder).extractRPN();
|
||||
|
||||
findHyperrectanglesForArgumentsOfSpaceFillingCurves();
|
||||
}
|
||||
|
||||
KeyCondition::KeyCondition(
|
||||
const SelectQueryInfo & query_info,
|
||||
ContextPtr context,
|
||||
const Names & key_column_names,
|
||||
const ExpressionActionsPtr & key_expr_,
|
||||
bool single_point_,
|
||||
bool strict_)
|
||||
: KeyCondition(
|
||||
query_info.query,
|
||||
query_info.filter_asts,
|
||||
KeyCondition::getBlockWithConstants(query_info.query, query_info.syntax_analyzer_result, context),
|
||||
query_info.prepared_sets,
|
||||
context,
|
||||
key_column_names,
|
||||
key_expr_,
|
||||
query_info.syntax_analyzer_result ? query_info.syntax_analyzer_result->getArrayJoinSourceNameSet() : NameSet{},
|
||||
single_point_,
|
||||
strict_)
|
||||
{
|
||||
}
|
||||
|
||||
KeyCondition::KeyCondition(
|
||||
ActionsDAGPtr filter_dag,
|
||||
ContextPtr context,
|
||||
@ -883,6 +797,13 @@ KeyCondition::KeyCondition(
|
||||
|
||||
has_filter = true;
|
||||
|
||||
/** When non-strictly monotonic functions are employed in functional index (e.g. ORDER BY toStartOfHour(dateTime)),
|
||||
* the use of NOT operator in predicate will result in the indexing algorithm leave out some data.
|
||||
* This is caused by rewriting in KeyCondition::tryParseAtomFromAST of relational operators to less strict
|
||||
* when parsing the AST into internal RPN representation.
|
||||
* To overcome the problem, before parsing the AST we transform it to its semantically equivalent form where all NOT's
|
||||
* are pushed down and applied (when possible) to leaf nodes.
|
||||
*/
|
||||
auto inverted_dag = cloneASTWithInversionPushDown({filter_dag->getOutputs().at(0)}, context);
|
||||
assert(inverted_dag->getOutputs().size() == 1);
|
||||
|
||||
|
@ -39,30 +39,6 @@ struct ActionDAGNodes;
|
||||
class KeyCondition
|
||||
{
|
||||
public:
|
||||
/// Construct key condition from AST SELECT query WHERE, PREWHERE and additional filters
|
||||
KeyCondition(
|
||||
const ASTPtr & query,
|
||||
const ASTs & additional_filter_asts,
|
||||
Block block_with_constants,
|
||||
PreparedSetsPtr prepared_sets_,
|
||||
ContextPtr context,
|
||||
const Names & key_column_names,
|
||||
const ExpressionActionsPtr & key_expr,
|
||||
NameSet array_joined_column_names,
|
||||
bool single_point_ = false,
|
||||
bool strict_ = false);
|
||||
|
||||
/** Construct key condition from AST SELECT query WHERE, PREWHERE and additional filters.
|
||||
* Select query, additional filters, prepared sets are initialized using query info.
|
||||
*/
|
||||
KeyCondition(
|
||||
const SelectQueryInfo & query_info,
|
||||
ContextPtr context,
|
||||
const Names & key_column_names,
|
||||
const ExpressionActionsPtr & key_expr_,
|
||||
bool single_point_ = false,
|
||||
bool strict_ = false);
|
||||
|
||||
/// Construct key condition from ActionsDAG nodes
|
||||
KeyCondition(
|
||||
ActionsDAGPtr filter_dag,
|
||||
|
@ -43,6 +43,8 @@ ReplicatedMergeMutateTaskBase::PrepareResult MergeFromLogEntryTask::prepare()
|
||||
LOG_TRACE(log, "Executing log entry to merge parts {} to {}",
|
||||
fmt::join(entry.source_parts, ", "), entry.new_part_name);
|
||||
|
||||
StorageMetadataPtr metadata_snapshot = storage.getInMemoryMetadataPtr();
|
||||
int32_t metadata_version = metadata_snapshot->getMetadataVersion();
|
||||
const auto storage_settings_ptr = storage.getSettings();
|
||||
|
||||
if (storage_settings_ptr->always_fetch_merged_part)
|
||||
@ -129,6 +131,18 @@ ReplicatedMergeMutateTaskBase::PrepareResult MergeFromLogEntryTask::prepare()
|
||||
};
|
||||
}
|
||||
|
||||
int32_t part_metadata_version = source_part_or_covering->getMetadataVersion();
|
||||
if (part_metadata_version > metadata_version)
|
||||
{
|
||||
LOG_DEBUG(log, "Source part metadata version {} is newer then the table metadata version {}. ALTER_METADATA is still in progress.",
|
||||
part_metadata_version, metadata_version);
|
||||
return PrepareResult{
|
||||
.prepared_successfully = false,
|
||||
.need_to_check_missing_part_in_fetch = false,
|
||||
.part_log_writer = {}
|
||||
};
|
||||
}
|
||||
|
||||
parts.push_back(source_part_or_covering);
|
||||
}
|
||||
|
||||
@ -176,8 +190,6 @@ ReplicatedMergeMutateTaskBase::PrepareResult MergeFromLogEntryTask::prepare()
|
||||
/// It will live until the whole task is being destroyed
|
||||
table_lock_holder = storage.lockForShare(RWLockImpl::NO_QUERY, storage_settings_ptr->lock_acquire_timeout_for_background_operations);
|
||||
|
||||
StorageMetadataPtr metadata_snapshot = storage.getInMemoryMetadataPtr();
|
||||
|
||||
auto future_merged_part = std::make_shared<FutureMergedMutatedPart>(parts, entry.new_part_format);
|
||||
if (future_merged_part->name != entry.new_part_name)
|
||||
{
|
||||
|
@ -570,6 +570,7 @@ void MergeTask::VerticalMergeStage::prepareVerticalMergeForOneColumn() const
|
||||
for (size_t part_num = 0; part_num < global_ctx->future_part->parts.size(); ++part_num)
|
||||
{
|
||||
Pipe pipe = createMergeTreeSequentialSource(
|
||||
MergeTreeSequentialSourceType::Merge,
|
||||
*global_ctx->data,
|
||||
global_ctx->storage_snapshot,
|
||||
global_ctx->future_part->parts[part_num],
|
||||
@ -925,6 +926,7 @@ void MergeTask::ExecuteAndFinalizeHorizontalPart::createMergedStream()
|
||||
for (const auto & part : global_ctx->future_part->parts)
|
||||
{
|
||||
Pipe pipe = createMergeTreeSequentialSource(
|
||||
MergeTreeSequentialSourceType::Merge,
|
||||
*global_ctx->data,
|
||||
global_ctx->storage_snapshot,
|
||||
part,
|
||||
|
@ -1075,26 +1075,30 @@ Block MergeTreeData::getBlockWithVirtualPartColumns(const MergeTreeData::DataPar
|
||||
|
||||
|
||||
std::optional<UInt64> MergeTreeData::totalRowsByPartitionPredicateImpl(
|
||||
const SelectQueryInfo & query_info, ContextPtr local_context, const DataPartsVector & parts) const
|
||||
const ActionsDAGPtr & filter_actions_dag, ContextPtr local_context, const DataPartsVector & parts) const
|
||||
{
|
||||
if (parts.empty())
|
||||
return 0u;
|
||||
auto metadata_snapshot = getInMemoryMetadataPtr();
|
||||
ASTPtr expression_ast;
|
||||
Block virtual_columns_block = getBlockWithVirtualPartColumns(parts, true /* one_part */);
|
||||
|
||||
// Generate valid expressions for filtering
|
||||
bool valid = VirtualColumnUtils::prepareFilterBlockWithQuery(query_info.query, local_context, virtual_columns_block, expression_ast);
|
||||
auto filter_dag = VirtualColumnUtils::splitFilterDagForAllowedInputs(filter_actions_dag->getOutputs().at(0), nullptr);
|
||||
|
||||
PartitionPruner partition_pruner(metadata_snapshot, query_info, local_context, true /* strict */);
|
||||
// Generate valid expressions for filtering
|
||||
bool valid = true;
|
||||
for (const auto * input : filter_dag->getInputs())
|
||||
if (!virtual_columns_block.has(input->result_name))
|
||||
valid = false;
|
||||
|
||||
PartitionPruner partition_pruner(metadata_snapshot, filter_dag, local_context, true /* strict */);
|
||||
if (partition_pruner.isUseless() && !valid)
|
||||
return {};
|
||||
|
||||
std::unordered_set<String> part_values;
|
||||
if (valid && expression_ast)
|
||||
if (valid)
|
||||
{
|
||||
virtual_columns_block = getBlockWithVirtualPartColumns(parts, false /* one_part */);
|
||||
VirtualColumnUtils::filterBlockWithQuery(query_info.query, virtual_columns_block, local_context, expression_ast);
|
||||
VirtualColumnUtils::filterBlockWithDAG(filter_dag, virtual_columns_block, local_context);
|
||||
part_values = VirtualColumnUtils::extractSingleValueFromBlock<String>(virtual_columns_block, "_part");
|
||||
if (part_values.empty())
|
||||
return 0;
|
||||
@ -3985,8 +3989,15 @@ MergeTreeData::PartsToRemoveFromZooKeeper MergeTreeData::removePartsInRangeFromW
|
||||
/// FIXME refactor removePartsFromWorkingSet(...), do not remove parts twice
|
||||
removePartsFromWorkingSet(txn, parts_to_remove, clear_without_timeout, lock);
|
||||
|
||||
/// We can only create a covering part for a blocks range that starts with 0 (otherwise we may get "intersecting parts"
|
||||
/// if we remove a range from the middle when dropping a part).
|
||||
/// Maybe we could do it by incrementing mutation version to get a name for the empty covering part,
|
||||
/// but it's okay to simply avoid creating it for DROP PART (for a part in the middle).
|
||||
/// NOTE: Block numbers in ReplicatedMergeTree start from 0. For MergeTree, is_new_syntax is always false.
|
||||
assert(!create_empty_part || supportsReplication());
|
||||
bool range_in_the_middle = drop_range.min_block;
|
||||
bool is_new_syntax = format_version >= MERGE_TREE_DATA_MIN_FORMAT_VERSION_WITH_CUSTOM_PARTITIONING;
|
||||
if (create_empty_part && !parts_to_remove.empty() && is_new_syntax)
|
||||
if (create_empty_part && !parts_to_remove.empty() && is_new_syntax && !range_in_the_middle)
|
||||
{
|
||||
/// We are going to remove a lot of parts from zookeeper just after returning from this function.
|
||||
/// And we will remove parts from disk later (because some queries may use them).
|
||||
@ -3995,12 +4006,9 @@ MergeTreeData::PartsToRemoveFromZooKeeper MergeTreeData::removePartsInRangeFromW
|
||||
/// We don't need to commit it to zk, and don't even need to activate it.
|
||||
|
||||
MergeTreePartInfo empty_info = drop_range;
|
||||
empty_info.level = empty_info.mutation = 0;
|
||||
if (!empty_info.min_block)
|
||||
empty_info.min_block = MergeTreePartInfo::MAX_BLOCK_NUMBER;
|
||||
empty_info.min_block = empty_info.level = empty_info.mutation = 0;
|
||||
for (const auto & part : parts_to_remove)
|
||||
{
|
||||
empty_info.min_block = std::min(empty_info.min_block, part->info.min_block);
|
||||
empty_info.level = std::max(empty_info.level, part->info.level);
|
||||
empty_info.mutation = std::max(empty_info.mutation, part->info.mutation);
|
||||
}
|
||||
@ -6617,8 +6625,7 @@ using PartitionIdToMaxBlock = std::unordered_map<String, Int64>;
|
||||
Block MergeTreeData::getMinMaxCountProjectionBlock(
|
||||
const StorageMetadataPtr & metadata_snapshot,
|
||||
const Names & required_columns,
|
||||
bool has_filter,
|
||||
const SelectQueryInfo & query_info,
|
||||
const ActionsDAGPtr & filter_dag,
|
||||
const DataPartsVector & parts,
|
||||
const PartitionIdToMaxBlock * max_block_numbers_to_read,
|
||||
ContextPtr query_context) const
|
||||
@ -6668,7 +6675,7 @@ Block MergeTreeData::getMinMaxCountProjectionBlock(
|
||||
Block virtual_columns_block;
|
||||
auto virtual_block = getSampleBlockWithVirtualColumns();
|
||||
bool has_virtual_column = std::any_of(required_columns.begin(), required_columns.end(), [&](const auto & name) { return virtual_block.has(name); });
|
||||
if (has_virtual_column || has_filter)
|
||||
if (has_virtual_column || filter_dag)
|
||||
{
|
||||
virtual_columns_block = getBlockWithVirtualPartColumns(parts, false /* one_part */, true /* ignore_empty */);
|
||||
if (virtual_columns_block.rows() == 0)
|
||||
@ -6680,7 +6687,7 @@ Block MergeTreeData::getMinMaxCountProjectionBlock(
|
||||
std::optional<PartitionPruner> partition_pruner;
|
||||
std::optional<KeyCondition> minmax_idx_condition;
|
||||
DataTypes minmax_columns_types;
|
||||
if (has_filter)
|
||||
if (filter_dag)
|
||||
{
|
||||
if (metadata_snapshot->hasPartitionKey())
|
||||
{
|
||||
@ -6689,16 +6696,15 @@ Block MergeTreeData::getMinMaxCountProjectionBlock(
|
||||
minmax_columns_types = getMinMaxColumnsTypes(partition_key);
|
||||
|
||||
minmax_idx_condition.emplace(
|
||||
query_info, query_context, minmax_columns_names,
|
||||
filter_dag, query_context, minmax_columns_names,
|
||||
getMinMaxExpr(partition_key, ExpressionActionsSettings::fromContext(query_context)));
|
||||
partition_pruner.emplace(metadata_snapshot, query_info, query_context, false /* strict */);
|
||||
partition_pruner.emplace(metadata_snapshot, filter_dag, query_context, false /* strict */);
|
||||
}
|
||||
|
||||
const auto * predicate = filter_dag->getOutputs().at(0);
|
||||
|
||||
// Generate valid expressions for filtering
|
||||
ASTPtr expression_ast;
|
||||
VirtualColumnUtils::prepareFilterBlockWithQuery(query_info.query, query_context, virtual_columns_block, expression_ast);
|
||||
if (expression_ast)
|
||||
VirtualColumnUtils::filterBlockWithQuery(query_info.query, virtual_columns_block, query_context, expression_ast);
|
||||
VirtualColumnUtils::filterBlockWithPredicate(predicate, virtual_columns_block, query_context);
|
||||
|
||||
rows = virtual_columns_block.rows();
|
||||
part_name_column = virtual_columns_block.getByName("_part").column;
|
||||
|
@ -404,8 +404,7 @@ public:
|
||||
Block getMinMaxCountProjectionBlock(
|
||||
const StorageMetadataPtr & metadata_snapshot,
|
||||
const Names & required_columns,
|
||||
bool has_filter,
|
||||
const SelectQueryInfo & query_info,
|
||||
const ActionsDAGPtr & filter_dag,
|
||||
const DataPartsVector & parts,
|
||||
const PartitionIdToMaxBlock * max_block_numbers_to_read,
|
||||
ContextPtr query_context) const;
|
||||
@ -1222,7 +1221,7 @@ protected:
|
||||
boost::iterator_range<DataPartIteratorByStateAndInfo> range, const ColumnsDescription & storage_columns);
|
||||
|
||||
std::optional<UInt64> totalRowsByPartitionPredicateImpl(
|
||||
const SelectQueryInfo & query_info, ContextPtr context, const DataPartsVector & parts) const;
|
||||
const ActionsDAGPtr & filter_actions_dag, ContextPtr context, const DataPartsVector & parts) const;
|
||||
|
||||
static decltype(auto) getStateModifier(DataPartState state)
|
||||
{
|
||||
|
@ -784,7 +784,7 @@ void MergeTreeDataSelectExecutor::buildKeyConditionFromPartOffset(
|
||||
= {ColumnWithTypeAndName(part_offset_type->createColumn(), part_offset_type, "_part_offset"),
|
||||
ColumnWithTypeAndName(part_type->createColumn(), part_type, "_part")};
|
||||
|
||||
auto dag = VirtualColumnUtils::splitFilterDagForAllowedInputs(filter_dag->getOutputs().at(0), sample);
|
||||
auto dag = VirtualColumnUtils::splitFilterDagForAllowedInputs(filter_dag->getOutputs().at(0), &sample);
|
||||
if (!dag)
|
||||
return;
|
||||
|
||||
@ -810,7 +810,7 @@ std::optional<std::unordered_set<String>> MergeTreeDataSelectExecutor::filterPar
|
||||
if (!filter_dag)
|
||||
return {};
|
||||
auto sample = data.getSampleBlockWithVirtualColumns();
|
||||
auto dag = VirtualColumnUtils::splitFilterDagForAllowedInputs(filter_dag->getOutputs().at(0), sample);
|
||||
auto dag = VirtualColumnUtils::splitFilterDagForAllowedInputs(filter_dag->getOutputs().at(0), &sample);
|
||||
if (!dag)
|
||||
return {};
|
||||
|
||||
@ -819,34 +819,6 @@ std::optional<std::unordered_set<String>> MergeTreeDataSelectExecutor::filterPar
|
||||
return VirtualColumnUtils::extractSingleValueFromBlock<String>(virtual_columns_block, "_part");
|
||||
}
|
||||
|
||||
|
||||
std::optional<std::unordered_set<String>> MergeTreeDataSelectExecutor::filterPartsByVirtualColumns(
|
||||
const MergeTreeData & data,
|
||||
const MergeTreeData::DataPartsVector & parts,
|
||||
const ASTPtr & query,
|
||||
ContextPtr context)
|
||||
{
|
||||
std::unordered_set<String> part_values;
|
||||
ASTPtr expression_ast;
|
||||
auto virtual_columns_block = data.getBlockWithVirtualPartColumns(parts, true /* one_part */);
|
||||
|
||||
if (virtual_columns_block.rows() == 0)
|
||||
return {};
|
||||
|
||||
// Generate valid expressions for filtering
|
||||
VirtualColumnUtils::prepareFilterBlockWithQuery(query, context, virtual_columns_block, expression_ast);
|
||||
|
||||
// If there is still something left, fill the virtual block and do the filtering.
|
||||
if (expression_ast)
|
||||
{
|
||||
virtual_columns_block = data.getBlockWithVirtualPartColumns(parts, false /* one_part */);
|
||||
VirtualColumnUtils::filterBlockWithQuery(query, virtual_columns_block, context, expression_ast);
|
||||
return VirtualColumnUtils::extractSingleValueFromBlock<String>(virtual_columns_block, "_part");
|
||||
}
|
||||
|
||||
return {};
|
||||
}
|
||||
|
||||
void MergeTreeDataSelectExecutor::filterPartsByPartition(
|
||||
const std::optional<PartitionPruner> & partition_pruner,
|
||||
const std::optional<KeyCondition> & minmax_idx_condition,
|
||||
|
@ -169,12 +169,6 @@ public:
|
||||
/// If possible, filter using expression on virtual columns.
|
||||
/// Example: SELECT count() FROM table WHERE _part = 'part_name'
|
||||
/// If expression found, return a set with allowed part names (std::nullopt otherwise).
|
||||
static std::optional<std::unordered_set<String>> filterPartsByVirtualColumns(
|
||||
const MergeTreeData & data,
|
||||
const MergeTreeData::DataPartsVector & parts,
|
||||
const ASTPtr & query,
|
||||
ContextPtr context);
|
||||
|
||||
static std::optional<std::unordered_set<String>> filterPartsByVirtualColumns(
|
||||
const MergeTreeData & data,
|
||||
const MergeTreeData::DataPartsVector & parts,
|
||||
|
@ -23,6 +23,7 @@ namespace ErrorCodes
|
||||
extern const int INCORRECT_NUMBER_OF_COLUMNS;
|
||||
extern const int INCORRECT_QUERY;
|
||||
extern const int LOGICAL_ERROR;
|
||||
extern const int NOT_IMPLEMENTED;
|
||||
}
|
||||
|
||||
template <typename Distance>
|
||||
@ -331,6 +332,11 @@ MergeTreeIndexConditionPtr MergeTreeIndexAnnoy::createIndexCondition(const Selec
|
||||
return std::make_shared<MergeTreeIndexConditionAnnoy>(index, query, distance_function, context);
|
||||
};
|
||||
|
||||
MergeTreeIndexConditionPtr MergeTreeIndexAnnoy::createIndexCondition(const ActionsDAGPtr &, ContextPtr) const
|
||||
{
|
||||
throw Exception(ErrorCodes::NOT_IMPLEMENTED, "MergeTreeIndexAnnoy cannot be created with ActionsDAG");
|
||||
}
|
||||
|
||||
MergeTreeIndexPtr annoyIndexCreator(const IndexDescription & index)
|
||||
{
|
||||
static constexpr auto DEFAULT_DISTANCE_FUNCTION = DISTANCE_FUNCTION_L2;
|
||||
|
@ -88,7 +88,7 @@ private:
|
||||
};
|
||||
|
||||
|
||||
class MergeTreeIndexAnnoy : public IMergeTreeIndex
|
||||
class MergeTreeIndexAnnoy final : public IMergeTreeIndex
|
||||
{
|
||||
public:
|
||||
|
||||
@ -98,7 +98,9 @@ public:
|
||||
|
||||
MergeTreeIndexGranulePtr createIndexGranule() const override;
|
||||
MergeTreeIndexAggregatorPtr createIndexAggregator(const MergeTreeWriterSettings & settings) const override;
|
||||
MergeTreeIndexConditionPtr createIndexCondition(const SelectQueryInfo & query, ContextPtr context) const override;
|
||||
MergeTreeIndexConditionPtr createIndexCondition(const SelectQueryInfo & query, ContextPtr context) const;
|
||||
MergeTreeIndexConditionPtr createIndexCondition(const ActionsDAGPtr &, ContextPtr) const override;
|
||||
bool isVectorSearch() const override { return true; }
|
||||
|
||||
private:
|
||||
const UInt64 trees;
|
||||
|
@ -43,9 +43,9 @@ MergeTreeIndexAggregatorPtr MergeTreeIndexBloomFilter::createIndexAggregator(con
|
||||
return std::make_shared<MergeTreeIndexAggregatorBloomFilter>(bits_per_row, hash_functions, index.column_names);
|
||||
}
|
||||
|
||||
MergeTreeIndexConditionPtr MergeTreeIndexBloomFilter::createIndexCondition(const SelectQueryInfo & query_info, ContextPtr context) const
|
||||
MergeTreeIndexConditionPtr MergeTreeIndexBloomFilter::createIndexCondition(const ActionsDAGPtr & filter_actions_dag, ContextPtr context) const
|
||||
{
|
||||
return std::make_shared<MergeTreeIndexConditionBloomFilter>(query_info, context, index.sample_block, hash_functions);
|
||||
return std::make_shared<MergeTreeIndexConditionBloomFilter>(filter_actions_dag, context, index.sample_block, hash_functions);
|
||||
}
|
||||
|
||||
static void assertIndexColumnsType(const Block & header)
|
||||
|
@ -20,7 +20,7 @@ public:
|
||||
|
||||
MergeTreeIndexAggregatorPtr createIndexAggregator(const MergeTreeWriterSettings & settings) const override;
|
||||
|
||||
MergeTreeIndexConditionPtr createIndexCondition(const SelectQueryInfo & query_info, ContextPtr context) const override;
|
||||
MergeTreeIndexConditionPtr createIndexCondition(const ActionsDAGPtr & filter_actions_dag, ContextPtr context) const override;
|
||||
|
||||
private:
|
||||
size_t bits_per_row;
|
||||
|
@ -97,39 +97,18 @@ bool maybeTrueOnBloomFilter(const IColumn * hash_column, const BloomFilterPtr &
|
||||
}
|
||||
|
||||
MergeTreeIndexConditionBloomFilter::MergeTreeIndexConditionBloomFilter(
|
||||
const SelectQueryInfo & info_, ContextPtr context_, const Block & header_, size_t hash_functions_)
|
||||
: WithContext(context_), header(header_), query_info(info_), hash_functions(hash_functions_)
|
||||
const ActionsDAGPtr & filter_actions_dag, ContextPtr context_, const Block & header_, size_t hash_functions_)
|
||||
: WithContext(context_), header(header_), hash_functions(hash_functions_)
|
||||
{
|
||||
if (context_->getSettingsRef().allow_experimental_analyzer)
|
||||
{
|
||||
if (!query_info.filter_actions_dag)
|
||||
{
|
||||
rpn.push_back(RPNElement::FUNCTION_UNKNOWN);
|
||||
return;
|
||||
}
|
||||
|
||||
RPNBuilder<RPNElement> builder(
|
||||
query_info.filter_actions_dag->getOutputs().at(0),
|
||||
context_,
|
||||
[&](const RPNBuilderTreeNode & node, RPNElement & out) { return extractAtomFromTree(node, out); });
|
||||
rpn = std::move(builder).extractRPN();
|
||||
return;
|
||||
}
|
||||
|
||||
ASTPtr filter_node = buildFilterNode(query_info.query);
|
||||
|
||||
if (!filter_node)
|
||||
if (!filter_actions_dag)
|
||||
{
|
||||
rpn.push_back(RPNElement::FUNCTION_UNKNOWN);
|
||||
return;
|
||||
}
|
||||
|
||||
auto block_with_constants = KeyCondition::getBlockWithConstants(query_info.query, query_info.syntax_analyzer_result, context_);
|
||||
RPNBuilder<RPNElement> builder(
|
||||
filter_node,
|
||||
filter_actions_dag->getOutputs().at(0),
|
||||
context_,
|
||||
std::move(block_with_constants),
|
||||
query_info.prepared_sets,
|
||||
[&](const RPNBuilderTreeNode & node, RPNElement & out) { return extractAtomFromTree(node, out); });
|
||||
rpn = std::move(builder).extractRPN();
|
||||
}
|
||||
|
@@ -44,7 +44,7 @@ public:
         std::vector<std::pair<size_t, ColumnPtr>> predicate;
     };
 
-    MergeTreeIndexConditionBloomFilter(const SelectQueryInfo & info_, ContextPtr context_, const Block & header_, size_t hash_functions_);
+    MergeTreeIndexConditionBloomFilter(const ActionsDAGPtr & filter_actions_dag, ContextPtr context_, const Block & header_, size_t hash_functions_);
 
     bool alwaysUnknownOrTrue() const override;
 

@@ -58,7 +58,6 @@ public:
 
 private:
     const Block & header;
-    const SelectQueryInfo & query_info;
     const size_t hash_functions;
     std::vector<RPNElement> rpn;
 
@@ -138,7 +138,7 @@ void MergeTreeIndexAggregatorFullText::update(const Block & block, size_t * pos,
 }
 
 MergeTreeConditionFullText::MergeTreeConditionFullText(
-    const SelectQueryInfo & query_info,
+    const ActionsDAGPtr & filter_actions_dag,
     ContextPtr context,
     const Block & index_sample_block,
     const BloomFilterParameters & params_,

@@ -147,38 +147,16 @@ MergeTreeConditionFullText::MergeTreeConditionFullText(
     , index_data_types(index_sample_block.getNamesAndTypesList().getTypes())
     , params(params_)
     , token_extractor(token_extactor_)
-    , prepared_sets(query_info.prepared_sets)
 {
-    if (context->getSettingsRef().allow_experimental_analyzer)
-    {
-        if (!query_info.filter_actions_dag)
-        {
-            rpn.push_back(RPNElement::FUNCTION_UNKNOWN);
-            return;
-        }
-
-        RPNBuilder<RPNElement> builder(
-            query_info.filter_actions_dag->getOutputs().at(0),
-            context,
-            [&](const RPNBuilderTreeNode & node, RPNElement & out) { return extractAtomFromTree(node, out); });
-        rpn = std::move(builder).extractRPN();
-        return;
-    }
-
-    ASTPtr filter_node = buildFilterNode(query_info.query);
-
-    if (!filter_node)
+    if (!filter_actions_dag)
     {
         rpn.push_back(RPNElement::FUNCTION_UNKNOWN);
         return;
     }
 
-    auto block_with_constants = KeyCondition::getBlockWithConstants(query_info.query, query_info.syntax_analyzer_result, context);
     RPNBuilder<RPNElement> builder(
-        filter_node,
+        filter_actions_dag->getOutputs().at(0),
         context,
-        std::move(block_with_constants),
-        query_info.prepared_sets,
         [&](const RPNBuilderTreeNode & node, RPNElement & out) { return extractAtomFromTree(node, out); });
     rpn = std::move(builder).extractRPN();
 }
@@ -747,9 +725,9 @@ MergeTreeIndexAggregatorPtr MergeTreeIndexFullText::createIndexAggregator(const
 }
 
 MergeTreeIndexConditionPtr MergeTreeIndexFullText::createIndexCondition(
-    const SelectQueryInfo & query, ContextPtr context) const
+    const ActionsDAGPtr & filter_dag, ContextPtr context) const
 {
-    return std::make_shared<MergeTreeConditionFullText>(query, context, index.sample_block, params, token_extractor.get());
+    return std::make_shared<MergeTreeConditionFullText>(filter_dag, context, index.sample_block, params, token_extractor.get());
 }
 
 MergeTreeIndexPtr bloomFilterIndexCreator(

@@ -62,7 +62,7 @@ class MergeTreeConditionFullText final : public IMergeTreeIndexCondition
 {
 public:
     MergeTreeConditionFullText(
-        const SelectQueryInfo & query_info,
+        const ActionsDAGPtr & filter_actions_dag,
         ContextPtr context,
         const Block & index_sample_block,
         const BloomFilterParameters & params_,

@@ -144,9 +144,6 @@ private:
     BloomFilterParameters params;
     TokenExtractorPtr token_extractor;
     RPN rpn;
-
-    /// Sets from syntax analyzer.
-    PreparedSetsPtr prepared_sets;
 };
 
 class MergeTreeIndexFullText final : public IMergeTreeIndex

@@ -166,7 +163,7 @@ public:
     MergeTreeIndexAggregatorPtr createIndexAggregator(const MergeTreeWriterSettings & settings) const override;
 
     MergeTreeIndexConditionPtr createIndexCondition(
-        const SelectQueryInfo & query, ContextPtr context) const override;
+        const ActionsDAGPtr & filter_dag, ContextPtr context) const override;
 
     BloomFilterParameters params;
     /// Function for selecting next token.
@@ -79,7 +79,7 @@ MergeTreeIndexAggregatorPtr MergeTreeIndexHypothesis::createIndexAggregator(cons
 }
 
 MergeTreeIndexConditionPtr MergeTreeIndexHypothesis::createIndexCondition(
-    const SelectQueryInfo &, ContextPtr) const
+    const ActionsDAGPtr &, ContextPtr) const
 {
     throw Exception(ErrorCodes::LOGICAL_ERROR, "Not supported");
 }

@@ -70,7 +70,7 @@ public:
     MergeTreeIndexAggregatorPtr createIndexAggregator(const MergeTreeWriterSettings & settings) const override;
 
     MergeTreeIndexConditionPtr createIndexCondition(
-        const SelectQueryInfo & query, ContextPtr context) const override;
+        const ActionsDAGPtr & filter_actions_dag, ContextPtr context) const override;
 
     MergeTreeIndexMergedConditionPtr createIndexMergedCondition(
         const SelectQueryInfo & query_info, StorageMetadataPtr storage_metadata) const override;
@@ -184,7 +184,7 @@ void MergeTreeIndexAggregatorInverted::update(const Block & block, size_t * pos,
 }
 
 MergeTreeConditionInverted::MergeTreeConditionInverted(
-    const SelectQueryInfo & query_info,
+    const ActionsDAGPtr & filter_actions_dag,
     ContextPtr context_,
     const Block & index_sample_block,
     const GinFilterParameters & params_,

@@ -192,41 +192,20 @@ MergeTreeConditionInverted::MergeTreeConditionInverted(
     : WithContext(context_), header(index_sample_block)
     , params(params_)
     , token_extractor(token_extactor_)
-    , prepared_sets(query_info.prepared_sets)
 {
-    if (context_->getSettingsRef().allow_experimental_analyzer)
-    {
-        if (!query_info.filter_actions_dag)
-        {
-            rpn.push_back(RPNElement::FUNCTION_UNKNOWN);
-            return;
-        }
-
-        rpn = std::move(
-            RPNBuilder<RPNElement>(
-                query_info.filter_actions_dag->getOutputs().at(0), context_,
-                [&](const RPNBuilderTreeNode & node, RPNElement & out)
-                {
-                    return this->traverseAtomAST(node, out);
-                }).extractRPN());
-        return;
-    }
-
-    ASTPtr filter_node = buildFilterNode(query_info.query);
-    if (!filter_node)
+    if (!filter_actions_dag)
     {
         rpn.push_back(RPNElement::FUNCTION_UNKNOWN);
         return;
     }
 
-    auto block_with_constants = KeyCondition::getBlockWithConstants(query_info.query, query_info.syntax_analyzer_result, context_);
-    RPNBuilder<RPNElement> builder(
-        filter_node,
-        context_,
-        std::move(block_with_constants),
-        query_info.prepared_sets,
-        [&](const RPNBuilderTreeNode & node, RPNElement & out) { return traverseAtomAST(node, out); });
-    rpn = std::move(builder).extractRPN();
+    rpn = std::move(
+        RPNBuilder<RPNElement>(
+            filter_actions_dag->getOutputs().at(0), context_,
+            [&](const RPNBuilderTreeNode & node, RPNElement & out)
+            {
+                return this->traverseAtomAST(node, out);
+            }).extractRPN());
 }
 
 /// Keep in-sync with MergeTreeConditionFullText::alwaysUnknownOrTrue
@@ -721,9 +700,9 @@ MergeTreeIndexAggregatorPtr MergeTreeIndexInverted::createIndexAggregatorForPart
 }
 
 MergeTreeIndexConditionPtr MergeTreeIndexInverted::createIndexCondition(
-    const SelectQueryInfo & query, ContextPtr context) const
+    const ActionsDAGPtr & filter_actions_dag, ContextPtr context) const
 {
-    return std::make_shared<MergeTreeConditionInverted>(query, context, index.sample_block, params, token_extractor.get());
+    return std::make_shared<MergeTreeConditionInverted>(filter_actions_dag, context, index.sample_block, params, token_extractor.get());
 };
 
 MergeTreeIndexPtr invertedIndexCreator(

@@ -64,7 +64,7 @@ class MergeTreeConditionInverted final : public IMergeTreeIndexCondition, WithCo
 {
 public:
     MergeTreeConditionInverted(
-        const SelectQueryInfo & query_info,
+        const ActionsDAGPtr & filter_actions_dag,
         ContextPtr context,
         const Block & index_sample_block,
         const GinFilterParameters & params_,

@@ -169,7 +169,7 @@ public:
     MergeTreeIndexGranulePtr createIndexGranule() const override;
     MergeTreeIndexAggregatorPtr createIndexAggregator(const MergeTreeWriterSettings & settings) const override;
     MergeTreeIndexAggregatorPtr createIndexAggregatorForPart(const GinIndexStorePtr & store, const MergeTreeWriterSettings & /*settings*/) const override;
-    MergeTreeIndexConditionPtr createIndexCondition(const SelectQueryInfo & query, ContextPtr context) const override;
+    MergeTreeIndexConditionPtr createIndexCondition(const ActionsDAGPtr & filter_actions_dag, ContextPtr context) const override;
 
     GinFilterParameters params;
     /// Function for selecting next token.

@@ -156,20 +156,17 @@ void MergeTreeIndexAggregatorMinMax::update(const Block & block, size_t * pos, s
 namespace
 {
 
-KeyCondition buildCondition(const IndexDescription & index, const SelectQueryInfo & query_info, ContextPtr context)
+KeyCondition buildCondition(const IndexDescription & index, const ActionsDAGPtr & filter_actions_dag, ContextPtr context)
 {
-    if (context->getSettingsRef().allow_experimental_analyzer)
-        return KeyCondition{query_info.filter_actions_dag, context, index.column_names, index.expression};
-
-    return KeyCondition{query_info, context, index.column_names, index.expression};
+    return KeyCondition{filter_actions_dag, context, index.column_names, index.expression};
 }
 
 }
 
 MergeTreeIndexConditionMinMax::MergeTreeIndexConditionMinMax(
-    const IndexDescription & index, const SelectQueryInfo & query_info, ContextPtr context)
+    const IndexDescription & index, const ActionsDAGPtr & filter_actions_dag, ContextPtr context)
     : index_data_types(index.data_types)
-    , condition(buildCondition(index, query_info, context))
+    , condition(buildCondition(index, filter_actions_dag, context))
 {
 }
 

@@ -200,9 +197,9 @@ MergeTreeIndexAggregatorPtr MergeTreeIndexMinMax::createIndexAggregator(const Me
 }
 
 MergeTreeIndexConditionPtr MergeTreeIndexMinMax::createIndexCondition(
-    const SelectQueryInfo & query, ContextPtr context) const
+    const ActionsDAGPtr & filter_actions_dag, ContextPtr context) const
 {
-    return std::make_shared<MergeTreeIndexConditionMinMax>(index, query, context);
+    return std::make_shared<MergeTreeIndexConditionMinMax>(index, filter_actions_dag, context);
 }
 
 MergeTreeIndexFormat MergeTreeIndexMinMax::getDeserializedFormat(const IDataPartStorage & data_part_storage, const std::string & relative_path_prefix) const
@@ -52,7 +52,7 @@ class MergeTreeIndexConditionMinMax final : public IMergeTreeIndexCondition
 public:
     MergeTreeIndexConditionMinMax(
         const IndexDescription & index,
-        const SelectQueryInfo & query_info,
+        const ActionsDAGPtr & filter_actions_dag,
         ContextPtr context);
 
     bool alwaysUnknownOrTrue() const override;

@@ -79,7 +79,7 @@ public:
     MergeTreeIndexAggregatorPtr createIndexAggregator(const MergeTreeWriterSettings & settings) const override;
 
     MergeTreeIndexConditionPtr createIndexCondition(
-        const SelectQueryInfo & query, ContextPtr context) const override;
+        const ActionsDAGPtr & filter_actions_dag, ContextPtr context) const override;
 
     const char* getSerializedFileExtension() const override { return ".idx2"; }
     MergeTreeIndexFormat getDeserializedFormat(const IDataPartStorage & data_part_storage, const std::string & path_prefix) const override; /// NOLINT
@@ -247,7 +247,7 @@ MergeTreeIndexConditionSet::MergeTreeIndexConditionSet(
     const String & index_name_,
     const Block & index_sample_block,
     size_t max_rows_,
-    const SelectQueryInfo & query_info,
+    const ActionsDAGPtr & filter_dag,
     ContextPtr context)
     : index_name(index_name_)
     , max_rows(max_rows_)

@@ -256,42 +256,20 @@ MergeTreeIndexConditionSet::MergeTreeIndexConditionSet(
         if (!key_columns.contains(name))
             key_columns.insert(name);
 
-    if (context->getSettingsRef().allow_experimental_analyzer)
-    {
-        if (!query_info.filter_actions_dag)
-            return;
+    if (!filter_dag)
+        return;
 
-        if (checkDAGUseless(*query_info.filter_actions_dag->getOutputs().at(0), context))
-            return;
+    if (checkDAGUseless(*filter_dag->getOutputs().at(0), context))
+        return;
 
-        const auto * filter_node = query_info.filter_actions_dag->getOutputs().at(0);
-        auto filter_actions_dag = ActionsDAG::buildFilterActionsDAG({filter_node}, {}, context);
-        const auto * filter_actions_dag_node = filter_actions_dag->getOutputs().at(0);
+    auto filter_actions_dag = filter_dag->clone();
+    const auto * filter_actions_dag_node = filter_actions_dag->getOutputs().at(0);
 
-        std::unordered_map<const ActionsDAG::Node *, const ActionsDAG::Node *> node_to_result_node;
-        filter_actions_dag->getOutputs()[0] = &traverseDAG(*filter_actions_dag_node, filter_actions_dag, context, node_to_result_node);
+    std::unordered_map<const ActionsDAG::Node *, const ActionsDAG::Node *> node_to_result_node;
+    filter_actions_dag->getOutputs()[0] = &traverseDAG(*filter_actions_dag_node, filter_actions_dag, context, node_to_result_node);
 
-        filter_actions_dag->removeUnusedActions();
-        actions = std::make_shared<ExpressionActions>(filter_actions_dag);
-    }
-    else
-    {
-        ASTPtr ast_filter_node = buildFilterNode(query_info.query);
-        if (!ast_filter_node)
-            return;
-
-        if (checkASTUseless(ast_filter_node))
-            return;
-
-        auto expression_ast = ast_filter_node->clone();
-
-        /// Replace logical functions with bit functions.
-        /// Working with UInt8: last bit = can be true, previous = can be false (Like src/Storages/MergeTree/BoolMask.h).
-        traverseAST(expression_ast);
-
-        auto syntax_analyzer_result = TreeRewriter(context).analyze(expression_ast, index_sample_block.getNamesAndTypesList());
-        actions = ExpressionAnalyzer(expression_ast, syntax_analyzer_result, context).getActions(true);
-    }
+    filter_actions_dag->removeUnusedActions();
+    actions = std::make_shared<ExpressionActions>(filter_actions_dag);
 }
 
 bool MergeTreeIndexConditionSet::alwaysUnknownOrTrue() const
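Two details stand out in the set-index hunk: the whole AST branch disappears because the caller always supplies a DAG now, and the incoming DAG is cloned before traverseDAG rewrites its output (logical functions become the bit-mask functions the index evaluates), since the same DAG is still read by the rest of query planning. A toy illustration of why the clone matters (assumed names, not the real ActionsDAG API):

    #include <memory>
    #include <vector>

    struct ToyDAG
    {
        std::vector<int> outputs;                 // stand-in for output nodes
        std::shared_ptr<ToyDAG> clone() const { return std::make_shared<ToyDAG>(*this); }
    };

    void buildSetIndexActions(const std::shared_ptr<ToyDAG> & shared_filter)
    {
        // Rewrite a private copy; the planner's DAG must stay untouched
        // because other indexes and the main filter step still consume it.
        auto own = shared_filter->clone();
        own->outputs[0] = -own->outputs[0];       // stand-in for the traverseDAG rewrite
    }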
@@ -704,9 +682,9 @@ MergeTreeIndexAggregatorPtr MergeTreeIndexSet::createIndexAggregator(const Merge
 }
 
 MergeTreeIndexConditionPtr MergeTreeIndexSet::createIndexCondition(
-    const SelectQueryInfo & query, ContextPtr context) const
+    const ActionsDAGPtr & filter_actions_dag, ContextPtr context) const
 {
-    return std::make_shared<MergeTreeIndexConditionSet>(index.name, index.sample_block, max_rows, query, context);
+    return std::make_shared<MergeTreeIndexConditionSet>(index.name, index.sample_block, max_rows, filter_actions_dag, context);
 }
 
 MergeTreeIndexPtr setIndexCreator(const IndexDescription & index)

@@ -87,7 +87,7 @@ public:
         const String & index_name_,
         const Block & index_sample_block,
         size_t max_rows_,
-        const SelectQueryInfo & query_info,
+        const ActionsDAGPtr & filter_dag,
         ContextPtr context);
 
     bool alwaysUnknownOrTrue() const override;

@@ -149,7 +149,7 @@ public:
     MergeTreeIndexAggregatorPtr createIndexAggregator(const MergeTreeWriterSettings & settings) const override;
 
     MergeTreeIndexConditionPtr createIndexCondition(
-        const SelectQueryInfo & query, ContextPtr context) const override;
+        const ActionsDAGPtr & filter_actions_dag, ContextPtr context) const override;
 
     size_t max_rows = 0;
 };

@@ -36,6 +36,7 @@ namespace ErrorCodes
     extern const int INCORRECT_NUMBER_OF_COLUMNS;
     extern const int INCORRECT_QUERY;
     extern const int LOGICAL_ERROR;
+    extern const int NOT_IMPLEMENTED;
 }
 
 namespace
@@ -366,6 +367,11 @@ MergeTreeIndexConditionPtr MergeTreeIndexUSearch::createIndexCondition(const Sel
     return std::make_shared<MergeTreeIndexConditionUSearch>(index, query, distance_function, context);
 };
 
+MergeTreeIndexConditionPtr MergeTreeIndexUSearch::createIndexCondition(const ActionsDAGPtr &, ContextPtr) const
+{
+    throw Exception(ErrorCodes::NOT_IMPLEMENTED, "MergeTreeIndexAnnoy cannot be created with ActionsDAG");
+}
+
 MergeTreeIndexPtr usearchIndexCreator(const IndexDescription & index)
 {
     static constexpr auto default_distance_function = DISTANCE_FUNCTION_L2;

@@ -100,7 +100,9 @@ public:
 
     MergeTreeIndexGranulePtr createIndexGranule() const override;
     MergeTreeIndexAggregatorPtr createIndexAggregator(const MergeTreeWriterSettings & settings) const override;
-    MergeTreeIndexConditionPtr createIndexCondition(const SelectQueryInfo & query, ContextPtr context) const override;
+    MergeTreeIndexConditionPtr createIndexCondition(const SelectQueryInfo & query, ContextPtr context) const;
+    MergeTreeIndexConditionPtr createIndexCondition(const ActionsDAGPtr &, ContextPtr) const override;
+    bool isVectorSearch() const override { return true; }
 
 private:
     const String distance_function;

@@ -170,7 +170,9 @@ struct IMergeTreeIndex
     }
 
     virtual MergeTreeIndexConditionPtr createIndexCondition(
-        const SelectQueryInfo & query_info, ContextPtr context) const = 0;
+        const ActionsDAGPtr & filter_actions_dag, ContextPtr context) const = 0;
+
+    virtual bool isVectorSearch() const { return false; }
 
     virtual MergeTreeIndexMergedConditionPtr createIndexMergedCondition(
         const SelectQueryInfo & /*query_info*/, StorageMetadataPtr /*storage_metadata*/) const
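Vector-similarity indexes are the one exception to the migration: their condition is driven by the ORDER BY ... LIMIT shape of the query rather than by a WHERE filter, so they keep the SelectQueryInfo-based overload, make the DAG overload throw NOT_IMPLEMENTED, and advertise themselves through the new isVectorSearch() flag. A toy model of the resulting dispatch (illustrative names only):

    #include <stdexcept>

    struct QueryInfo {};                          // stand-in for SelectQueryInfo
    struct FilterDAG {};                          // stand-in for ActionsDAGPtr

    struct IToyIndex
    {
        virtual ~IToyIndex() = default;
        virtual bool isVectorSearch() const { return false; }
        virtual void conditionFromDAG(const FilterDAG &) const = 0;
        virtual void conditionFromQuery(const QueryInfo &) const
        {
            throw std::logic_error("only vector-search indexes use this path");
        }
    };

    void makeCondition(const IToyIndex & index, const QueryInfo & q, const FilterDAG & dag)
    {
        // Mirrors the dispatch implied by the two overloads above.
        if (index.isVectorSearch())
            index.conditionFromQuery(q);
        else
            index.conditionFromDAG(dag);
    }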
@@ -22,7 +22,9 @@ namespace ErrorCodes
 }
 
 
-/// Lightweight (in terms of logic) stream for reading single part from MergeTree
+/// Lightweight (in terms of logic) stream for reading single part from
+/// MergeTree, used for merges and mutations.
+///
 /// NOTE:
 ///  It doesn't filter out rows that are deleted with lightweight deletes.
 ///  Use createMergeTreeSequentialSource filter out those rows.

@@ -30,6 +32,7 @@ class MergeTreeSequentialSource : public ISource
 {
 public:
     MergeTreeSequentialSource(
+        MergeTreeSequentialSourceType type,
         const MergeTreeData & storage_,
         const StorageSnapshotPtr & storage_snapshot_,
         MergeTreeData::DataPartPtr data_part_,

@@ -85,6 +88,7 @@ private:
 
 
 MergeTreeSequentialSource::MergeTreeSequentialSource(
+    MergeTreeSequentialSourceType type,
     const MergeTreeData & storage_,
     const StorageSnapshotPtr & storage_snapshot_,
     MergeTreeData::DataPartPtr data_part_,

@@ -144,10 +148,25 @@ MergeTreeSequentialSource::MergeTreeSequentialSource(
         columns_for_reader = data_part->getColumns().addTypes(columns_to_read);
     }
 
-    ReadSettings read_settings;
+    const auto & context = storage.getContext();
+    ReadSettings read_settings = context->getReadSettings();
+    read_settings.read_from_filesystem_cache_if_exists_otherwise_bypass_cache = true;
+    /// It does not make sense to use pthread_threadpool for background merges/mutations
+    /// And also to preserve backward compatibility
+    read_settings.local_fs_method = LocalFSReadMethod::pread;
     if (read_with_direct_io)
         read_settings.direct_io_threshold = 1;
-    read_settings.read_from_filesystem_cache_if_exists_otherwise_bypass_cache = true;
+    /// Configure throttling
+    switch (type)
+    {
+        case Mutation:
+            read_settings.local_throttler = context->getMutationsThrottler();
+            break;
+        case Merge:
+            read_settings.local_throttler = context->getMergesThrottler();
+            break;
+    }
+    read_settings.remote_throttler = read_settings.local_throttler;
 
     MergeTreeReaderSettings reader_settings =
     {
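The source now derives its ReadSettings from the server context instead of a default-constructed object, pins local reads to plain pread (a threadpool reader buys nothing for background merges and mutations), and uses the new type tag to attach either the mutations throttler or the merges throttler to both local and remote reads. A toy model of that selection (assumed names; the real throttlers come from Context):

    #include <memory>

    enum ToySourceType { Mutation, Merge };
    struct Throttler {};
    using ThrottlerPtr = std::shared_ptr<Throttler>;

    struct ToyReadSettings
    {
        ThrottlerPtr local_throttler;
        ThrottlerPtr remote_throttler;
    };

    void applyThrottling(ToyReadSettings & settings, ToySourceType type,
                         const ThrottlerPtr & mutations, const ThrottlerPtr & merges)
    {
        // Same shape as the switch above: one throttler per class of
        // background operation, shared by local and remote reads.
        settings.local_throttler = (type == Mutation) ? mutations : merges;
        settings.remote_throttler = settings.local_throttler;
    }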
@@ -242,6 +261,7 @@ MergeTreeSequentialSource::~MergeTreeSequentialSource() = default;
 
 
 Pipe createMergeTreeSequentialSource(
+    MergeTreeSequentialSourceType type,
     const MergeTreeData & storage,
     const StorageSnapshotPtr & storage_snapshot,
     MergeTreeData::DataPartPtr data_part,

@@ -262,7 +282,7 @@ Pipe createMergeTreeSequentialSource(
     if (need_to_filter_deleted_rows && !has_filter_column)
         columns_to_read.emplace_back(filter_column.name);
 
-    auto column_part_source = std::make_shared<MergeTreeSequentialSource>(
+    auto column_part_source = std::make_shared<MergeTreeSequentialSource>(type,
         storage, storage_snapshot, data_part, columns_to_read, std::move(mark_ranges),
         /*apply_deleted_mask=*/ false, read_with_direct_io, take_column_types_from_storage, quiet);
 

@@ -290,6 +310,7 @@ class ReadFromPart final : public ISourceStep
 {
 public:
     ReadFromPart(
+        MergeTreeSequentialSourceType type_,
         const MergeTreeData & storage_,
         const StorageSnapshotPtr & storage_snapshot_,
         MergeTreeData::DataPartPtr data_part_,

@@ -299,6 +320,7 @@ public:
         ContextPtr context_,
         Poco::Logger * log_)
         : ISourceStep(DataStream{.header = storage_snapshot_->getSampleBlockForColumns(columns_to_read_)})
+        , type(type_)
         , storage(storage_)
         , storage_snapshot(storage_snapshot_)
         , data_part(std::move(data_part_))

@@ -335,7 +357,7 @@ public:
             }
         }
 
-        auto source = createMergeTreeSequentialSource(
+        auto source = createMergeTreeSequentialSource(type,
            storage,
            storage_snapshot,
            data_part,

@@ -351,6 +373,7 @@ public:
     }
 
 private:
+    MergeTreeSequentialSourceType type;
     const MergeTreeData & storage;
     StorageSnapshotPtr storage_snapshot;
     MergeTreeData::DataPartPtr data_part;

@@ -362,6 +385,7 @@ private:
 };
 
 void createReadFromPartStep(
+    MergeTreeSequentialSourceType type,
     QueryPlan & plan,
     const MergeTreeData & storage,
     const StorageSnapshotPtr & storage_snapshot,

@@ -372,7 +396,7 @@ void createReadFromPartStep(
     ContextPtr context,
     Poco::Logger * log)
 {
-    auto reading = std::make_unique<ReadFromPart>(
+    auto reading = std::make_unique<ReadFromPart>(type,
         storage, storage_snapshot, std::move(data_part),
         std::move(columns_to_read), apply_deleted_mask,
         filter, std::move(context), log);
@@ -8,9 +8,16 @@
 namespace DB
 {
 
+enum MergeTreeSequentialSourceType
+{
+    Mutation,
+    Merge,
+};
+
 /// Create stream for reading single part from MergeTree.
 /// If the part has lightweight delete mask then the deleted rows are filtered out.
 Pipe createMergeTreeSequentialSource(
+    MergeTreeSequentialSourceType type,
     const MergeTreeData & storage,
     const StorageSnapshotPtr & storage_snapshot,
     MergeTreeData::DataPartPtr data_part,

@@ -25,6 +32,7 @@ Pipe createMergeTreeSequentialSource(
 class QueryPlan;
 
 void createReadFromPartStep(
+    MergeTreeSequentialSourceType type,
     QueryPlan & plan,
     const MergeTreeData & storage,
     const StorageSnapshotPtr & storage_snapshot,
@@ -9,10 +9,7 @@ namespace
 
 KeyCondition buildKeyCondition(const KeyDescription & partition_key, const SelectQueryInfo & query_info, ContextPtr context, bool strict)
 {
-    if (context->getSettingsRef().allow_experimental_analyzer)
-        return {query_info.filter_actions_dag, context, partition_key.column_names, partition_key.expression, true /* single_point */, strict};
-
-    return {query_info, context, partition_key.column_names, partition_key.expression, true /* single_point */, strict};
+    return {query_info.filter_actions_dag, context, partition_key.column_names, partition_key.expression, true /* single_point */, strict};
 }
 
 }

@@ -202,17 +202,6 @@ public:
         traverseTree(RPNBuilderTreeNode(filter_actions_dag_node, tree_context));
     }
 
-    RPNBuilder(const ASTPtr & filter_node,
-        ContextPtr query_context_,
-        Block block_with_constants_,
-        PreparedSetsPtr prepared_sets_,
-        const ExtractAtomFromTreeFunction & extract_atom_from_tree_function_)
-        : tree_context(std::move(query_context_), std::move(block_with_constants_), std::move(prepared_sets_))
-        , extract_atom_from_tree_function(extract_atom_from_tree_function_)
-    {
-        traverseTree(RPNBuilderTreeNode(filter_node.get(), tree_context));
-    }
-
     RPNElements && extractRPN() && { return std::move(rpn_elements); }
 
 private:
@@ -172,6 +172,9 @@ struct ReplicatedMergeTreeLogEntryData
     /// The quorum value (for GET_PART) is a non-zero value when the quorum write is enabled.
     size_t quorum = 0;
 
+    /// Used only in tests for permanent fault injection for particular queue entry.
+    bool fault_injected = false;
+
     /// If this MUTATE_PART entry caused by alter(modify/drop) query.
     bool isAlterMutation() const
     {

@@ -1056,11 +1056,6 @@ StorageFileSource::~StorageFileSource()
     beforeDestroy();
 }
 
-void StorageFileSource::setKeyCondition(const SelectQueryInfo & query_info_, ContextPtr context_)
-{
-    setKeyConditionImpl(query_info_, context_, block_for_format);
-}
-
 void StorageFileSource::setKeyCondition(const ActionsDAG::NodeRawConstPtrs & nodes, ContextPtr context_)
 {
     setKeyConditionImpl(nodes, context_, block_for_format);

@@ -256,8 +256,6 @@ private:
         return storage->getName();
     }
 
-    void setKeyCondition(const SelectQueryInfo & query_info_, ContextPtr context_) override;
-
     void setKeyCondition(const ActionsDAG::NodeRawConstPtrs & nodes, ContextPtr context_) override;
 
     bool tryGetCountFromCache(const struct stat & file_stat);

@@ -262,10 +262,10 @@ std::optional<UInt64> StorageMergeTree::totalRows(const Settings &) const
     return getTotalActiveSizeInRows();
 }
 
-std::optional<UInt64> StorageMergeTree::totalRowsByPartitionPredicate(const SelectQueryInfo & query_info, ContextPtr local_context) const
+std::optional<UInt64> StorageMergeTree::totalRowsByPartitionPredicate(const ActionsDAGPtr & filter_actions_dag, ContextPtr local_context) const
 {
     auto parts = getVisibleDataPartsVector(local_context);
-    return totalRowsByPartitionPredicateImpl(query_info, local_context, parts);
+    return totalRowsByPartitionPredicateImpl(filter_actions_dag, local_context, parts);
 }
 
 std::optional<UInt64> StorageMergeTree::totalBytes(const Settings &) const

@@ -66,7 +66,7 @@ public:
         size_t num_streams) override;
 
     std::optional<UInt64> totalRows(const Settings &) const override;
-    std::optional<UInt64> totalRowsByPartitionPredicate(const SelectQueryInfo &, ContextPtr) const override;
+    std::optional<UInt64> totalRowsByPartitionPredicate(const ActionsDAGPtr & filter_actions_dag, ContextPtr) const override;
     std::optional<UInt64> totalBytes(const Settings &) const override;
     std::optional<UInt64> totalBytesUncompressed(const Settings &) const override;
 
@@ -18,6 +18,7 @@
 #include <Common/thread_local_rng.h>
 #include <Common/typeid_cast.h>
 #include <Common/ThreadFuzzer.h>
+#include <Common/FailPoint.h>
 
 #include <Core/ServerUUID.h>
 

@@ -147,6 +148,12 @@ namespace CurrentMetrics
 namespace DB
 {
 
+namespace FailPoints
+{
+    extern const char replicated_queue_fail_next_entry[];
+    extern const char replicated_queue_unfail_entries[];
+}
+
 namespace ErrorCodes
 {
     extern const int CANNOT_READ_ALL_DATA;

@@ -191,6 +198,7 @@ namespace ErrorCodes
     extern const int TABLE_IS_DROPPED;
     extern const int CANNOT_BACKUP_TABLE;
     extern const int SUPPORT_IS_DISABLED;
+    extern const int FAULT_INJECTED;
 }
 
 namespace ActionLocks

@@ -1737,14 +1745,12 @@ bool StorageReplicatedMergeTree::checkPartChecksumsAndAddCommitOps(
 
         if (replica_part_header.getColumnsHash() != local_part_header.getColumnsHash())
         {
-            /// Currently there are two (known) cases when it may happen:
+            /// Currently there are only one (known) cases when it may happen:
             ///  - KILL MUTATION query had removed mutation before all replicas have executed assigned MUTATE_PART entries.
             ///    Some replicas may skip this mutation and update part version without actually applying any changes.
            ///    It leads to mismatching checksum if changes were applied on other replicas.
-            ///  - ALTER_METADATA and MERGE_PARTS were reordered on some replicas.
-            ///    It may lead to different number of columns in merged parts on these replicas.
             throw Exception(ErrorCodes::CHECKSUM_DOESNT_MATCH, "Part {} from {} has different columns hash "
-                "(it may rarely happen on race condition with KILL MUTATION or ALTER COLUMN).", part_name, replica);
+                "(it may rarely happen on race condition with KILL MUTATION).", part_name, replica);
         }
 
         replica_part_header.getChecksums().checkEqual(local_part_header.getChecksums(), true);
@@ -1931,6 +1937,17 @@ MergeTreeData::MutableDataPartPtr StorageReplicatedMergeTree::attachPartHelperFo
 
 bool StorageReplicatedMergeTree::executeLogEntry(LogEntry & entry)
 {
+    fiu_do_on(FailPoints::replicated_queue_fail_next_entry,
+    {
+        entry.fault_injected = true;
+    });
+    fiu_do_on(FailPoints::replicated_queue_unfail_entries,
+    {
+        entry.fault_injected = false;
+    });
+    if (entry.fault_injected)
+        throw Exception(ErrorCodes::FAULT_INJECTED, "Injecting fault for log entry {}", entry.getDescriptionForLogs(format_version));
+
     if (entry.type == LogEntry::DROP_RANGE || entry.type == LogEntry::DROP_PART)
     {
         executeDropRange(entry);
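The two fail points cooperate: while replicated_queue_fail_next_entry is enabled, the next executed entry is marked permanently faulty, and because the flag lives in the entry itself every retry keeps throwing FAULT_INJECTED until replicated_queue_unfail_entries clears it and lets the queue drain. A self-contained toy of this sticky-failure pattern (not the libfiu API ClickHouse actually uses):

    #include <atomic>
    #include <stdexcept>

    std::atomic<bool> fail_next{false};           // toggled by a test harness
    std::atomic<bool> unfail_all{false};

    struct ToyEntry { bool fault_injected = false; };

    void executeEntry(ToyEntry & entry)
    {
        if (fail_next.load())
            entry.fault_injected = true;          // sticky: survives retries of this entry
        if (unfail_all.load())
            entry.fault_injected = false;         // escape hatch to drain the queue
        if (entry.fault_injected)
            throw std::runtime_error("injected fault");
        // ... normal processing ...
    }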
@@ -5453,11 +5470,11 @@ std::optional<UInt64> StorageReplicatedMergeTree::totalRows(const Settings & set
     return res;
 }
 
-std::optional<UInt64> StorageReplicatedMergeTree::totalRowsByPartitionPredicate(const SelectQueryInfo & query_info, ContextPtr local_context) const
+std::optional<UInt64> StorageReplicatedMergeTree::totalRowsByPartitionPredicate(const ActionsDAGPtr & filter_actions_dag, ContextPtr local_context) const
 {
     DataPartsVector parts;
     foreachActiveParts([&](auto & part) { parts.push_back(part); }, local_context->getSettingsRef().select_sequential_consistency);
-    return totalRowsByPartitionPredicateImpl(query_info, local_context, parts);
+    return totalRowsByPartitionPredicateImpl(filter_actions_dag, local_context, parts);
 }
 
 std::optional<UInt64> StorageReplicatedMergeTree::totalBytes(const Settings & settings) const

@@ -163,7 +163,7 @@ public:
         size_t num_streams) override;
 
     std::optional<UInt64> totalRows(const Settings & settings) const override;
-    std::optional<UInt64> totalRowsByPartitionPredicate(const SelectQueryInfo & query_info, ContextPtr context) const override;
+    std::optional<UInt64> totalRowsByPartitionPredicate(const ActionsDAGPtr & filter_actions_dag, ContextPtr context) const override;
     std::optional<UInt64> totalBytes(const Settings & settings) const override;
     std::optional<UInt64> totalBytesUncompressed(const Settings & settings) const override;
 
Some files were not shown because too many files have changed in this diff.