Merge branch 'master' of https://github.com/ClickHouse/ClickHouse into KRB_CVE_Fix

MeenaRenganathan22 2023-03-13 06:54:47 -07:00
commit dc1e125b31
252 changed files with 2742 additions and 1146 deletions


@ -29,7 +29,7 @@ RUN arch=${TARGETARCH:-amd64} \
esac
ARG REPOSITORY="https://s3.amazonaws.com/clickhouse-builds/22.4/31c367d3cd3aefd316778601ff6565119fe36682/package_release"
ARG VERSION="23.2.3.17"
ARG VERSION="23.2.4.12"
ARG PACKAGES="clickhouse-keeper"
# user/group precreated explicitly with fixed uid/gid on purpose.


@ -33,7 +33,7 @@ RUN arch=${TARGETARCH:-amd64} \
# lts / testing / prestable / etc
ARG REPO_CHANNEL="stable"
ARG REPOSITORY="https://packages.clickhouse.com/tgz/${REPO_CHANNEL}"
ARG VERSION="23.2.3.17"
ARG VERSION="23.2.4.12"
ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static"
# user/group precreated explicitly with fixed uid/gid on purpose.


@ -22,7 +22,7 @@ RUN sed -i "s|http://archive.ubuntu.com|${apt_archive}|g" /etc/apt/sources.list
ARG REPO_CHANNEL="stable"
ARG REPOSITORY="deb https://packages.clickhouse.com/deb ${REPO_CHANNEL} main"
ARG VERSION="23.2.3.17"
ARG VERSION="23.2.4.12"
ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static"
# set non-empty deb_location_url url to create a docker image


@ -0,0 +1,29 @@
---
sidebar_position: 1
sidebar_label: 2023
---
# 2023 Changelog
### ClickHouse release v22.12.5.34-stable (b82d6401ca1) FIXME as compared to v22.12.4.76-stable (cb5772db805)
#### Improvement
* Backported in [#46983](https://github.com/ClickHouse/ClickHouse/issues/46983): Apply `ALTER TABLE table_name ON CLUSTER cluster MOVE PARTITION|PART partition_expr TO DISK|VOLUME 'disk_name'` to all replicas, because `ALTER TABLE t MOVE` is not replicated. [#46402](https://github.com/ClickHouse/ClickHouse/pull/46402) ([lizhuoyu5](https://github.com/lzydmxy)).
#### Bug Fix (user-visible misbehavior in official stable or prestable release)
* Backported in [#45729](https://github.com/ClickHouse/ClickHouse/issues/45729): Fix key description when encountering duplicate primary keys. This can happen in projections. See [#45590](https://github.com/ClickHouse/ClickHouse/issues/45590) for details. [#45686](https://github.com/ClickHouse/ClickHouse/pull/45686) ([Amos Bird](https://github.com/amosbird)).
* Backported in [#46398](https://github.com/ClickHouse/ClickHouse/issues/46398): Fix `SYSTEM UNFREEZE` queries failing with the exception `CANNOT_PARSE_INPUT_ASSERTION_FAILED`. [#46325](https://github.com/ClickHouse/ClickHouse/pull/46325) ([Aleksei Filatov](https://github.com/aalexfvk)).
* Backported in [#46903](https://github.com/ClickHouse/ClickHouse/issues/46903): Fix incorrect alias recursion in QueryNormalizer. [#46609](https://github.com/ClickHouse/ClickHouse/pull/46609) ([Raúl Marín](https://github.com/Algunenano)).
* Backported in [#47210](https://github.com/ClickHouse/ClickHouse/issues/47210): `INSERT` queries through native TCP protocol and HTTP protocol were not canceled correctly in some cases. It could lead to a partially applied query if a client canceled the query, or if a client died or, in rare cases, on network errors. As a result, it could lead to not working deduplication. Fixes [#27667](https://github.com/ClickHouse/ClickHouse/issues/27667) and [#45377](https://github.com/ClickHouse/ClickHouse/issues/45377). [#46681](https://github.com/ClickHouse/ClickHouse/pull/46681) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Backported in [#47157](https://github.com/ClickHouse/ClickHouse/issues/47157): Fix arithmetic operations in aggregate optimization with `min` and `max`. [#46705](https://github.com/ClickHouse/ClickHouse/pull/46705) ([Duc Canh Le](https://github.com/canhld94)).
* Backported in [#46881](https://github.com/ClickHouse/ClickHouse/issues/46881): Fix MSan report in the `maxIntersections` function. This closes [#43126](https://github.com/ClickHouse/ClickHouse/issues/43126). [#46847](https://github.com/ClickHouse/ClickHouse/pull/46847) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Backported in [#47359](https://github.com/ClickHouse/ClickHouse/issues/47359): Fix possible deadlock on distributed query cancellation. [#47161](https://github.com/ClickHouse/ClickHouse/pull/47161) ([Kruglov Pavel](https://github.com/Avogar)).
#### NOT FOR CHANGELOG / INSIGNIFICANT
* Use /etc/default/clickhouse in systemd too [#47003](https://github.com/ClickHouse/ClickHouse/pull/47003) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Update typing for a new PyGithub version [#47123](https://github.com/ClickHouse/ClickHouse/pull/47123) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Follow-up to [#46681](https://github.com/ClickHouse/ClickHouse/issues/46681) [#47284](https://github.com/ClickHouse/ClickHouse/pull/47284) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Add a manual trigger for release workflow [#47302](https://github.com/ClickHouse/ClickHouse/pull/47302) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).


@ -0,0 +1,28 @@
---
sidebar_position: 1
sidebar_label: 2023
---
# 2023 Changelog
### ClickHouse release v22.8.15.23-lts (d36fa168bbf) FIXME as compared to v22.8.14.53-lts (4ea67c40077)
#### Improvement
* Backported in [#46981](https://github.com/ClickHouse/ClickHouse/issues/46981): Apply `ALTER TABLE table_name ON CLUSTER cluster MOVE PARTITION|PART partition_expr TO DISK|VOLUME 'disk_name'` to all replicas, because `ALTER TABLE t MOVE` is not replicated. [#46402](https://github.com/ClickHouse/ClickHouse/pull/46402) ([lizhuoyu5](https://github.com/lzydmxy)).
#### Bug Fix
* Backported in [#47336](https://github.com/ClickHouse/ClickHouse/issues/47336): Fix an issue where changes to a role were sometimes not reflected in the access rights of a user who uses that role. [#46772](https://github.com/ClickHouse/ClickHouse/pull/46772) ([Vitaly Baranov](https://github.com/vitlibar)).
#### Bug Fix (user-visible misbehavior in official stable or prestable release)
* Backported in [#46901](https://github.com/ClickHouse/ClickHouse/issues/46901): Fix incorrect alias recursion in QueryNormalizer. [#46609](https://github.com/ClickHouse/ClickHouse/pull/46609) ([Raúl Marín](https://github.com/Algunenano)).
* Backported in [#47156](https://github.com/ClickHouse/ClickHouse/issues/47156): Fix arithmetic operations in aggregate optimization with `min` and `max`. [#46705](https://github.com/ClickHouse/ClickHouse/pull/46705) ([Duc Canh Le](https://github.com/canhld94)).
* Backported in [#46987](https://github.com/ClickHouse/ClickHouse/issues/46987): Fix result of LIKE predicates which translate to substring searches and contain quoted non-LIKE metacharacters. [#46875](https://github.com/ClickHouse/ClickHouse/pull/46875) ([Robert Schulze](https://github.com/rschu1ze)).
* Backported in [#47357](https://github.com/ClickHouse/ClickHouse/issues/47357): Fix possible deadlock on distributed query cancellation. [#47161](https://github.com/ClickHouse/ClickHouse/pull/47161) ([Kruglov Pavel](https://github.com/Avogar)).
#### NOT FOR CHANGELOG / INSIGNIFICANT
* Reduce updates of Mergeable Check [#46781](https://github.com/ClickHouse/ClickHouse/pull/46781) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Update typing for a new PyGithub version [#47123](https://github.com/ClickHouse/ClickHouse/pull/47123) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Add a manual trigger for release workflow [#47302](https://github.com/ClickHouse/ClickHouse/pull/47302) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).


@ -0,0 +1,28 @@
---
sidebar_position: 1
sidebar_label: 2023
---
# 2023 Changelog
### ClickHouse release v23.1.5.24-stable (0e51b53ba99) FIXME as compared to v23.1.4.58-stable (9ed562163a5)
#### Build/Testing/Packaging Improvement
* Backported in [#47060](https://github.com/ClickHouse/ClickHouse/issues/47060): Fix error during server startup on old distros (e.g. Amazon Linux 2) and on ARM that glibc 2.28 symbols are not found. [#47008](https://github.com/ClickHouse/ClickHouse/pull/47008) ([Robert Schulze](https://github.com/rschu1ze)).
#### Bug Fix (user-visible misbehavior in official stable or prestable release)
* Backported in [#46401](https://github.com/ClickHouse/ClickHouse/issues/46401): Fix `SYSTEM UNFREEZE` queries failing with the exception `CANNOT_PARSE_INPUT_ASSERTION_FAILED`. [#46325](https://github.com/ClickHouse/ClickHouse/pull/46325) ([Aleksei Filatov](https://github.com/aalexfvk)).
* Backported in [#46905](https://github.com/ClickHouse/ClickHouse/issues/46905): Fix incorrect alias recursion in QueryNormalizer. [#46609](https://github.com/ClickHouse/ClickHouse/pull/46609) ([Raúl Marín](https://github.com/Algunenano)).
* Backported in [#47211](https://github.com/ClickHouse/ClickHouse/issues/47211): `INSERT` queries through native TCP protocol and HTTP protocol were not canceled correctly in some cases. It could lead to a partially applied query if a client canceled the query, or if a client died or, in rare cases, on network errors. As a result, it could lead to not working deduplication. Fixes [#27667](https://github.com/ClickHouse/ClickHouse/issues/27667) and [#45377](https://github.com/ClickHouse/ClickHouse/issues/45377). [#46681](https://github.com/ClickHouse/ClickHouse/pull/46681) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Backported in [#47118](https://github.com/ClickHouse/ClickHouse/issues/47118): Fix arithmetic operations in aggregate optimization with `min` and `max`. [#46705](https://github.com/ClickHouse/ClickHouse/pull/46705) ([Duc Canh Le](https://github.com/canhld94)).
* Backported in [#46883](https://github.com/ClickHouse/ClickHouse/issues/46883): Fix MSan report in the `maxIntersections` function. This closes [#43126](https://github.com/ClickHouse/ClickHouse/issues/43126). [#46847](https://github.com/ClickHouse/ClickHouse/pull/46847) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Backported in [#47361](https://github.com/ClickHouse/ClickHouse/issues/47361): Fix possible deadlock on distributed query cancellation. [#47161](https://github.com/ClickHouse/ClickHouse/pull/47161) ([Kruglov Pavel](https://github.com/Avogar)).
#### NOT FOR CHANGELOG / INSIGNIFICANT
* Use /etc/default/clickhouse in systemd too [#47003](https://github.com/ClickHouse/ClickHouse/pull/47003) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Update typing for a new PyGithub version [#47123](https://github.com/ClickHouse/ClickHouse/pull/47123) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Follow-up to [#46681](https://github.com/ClickHouse/ClickHouse/issues/46681) [#47284](https://github.com/ClickHouse/ClickHouse/pull/47284) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Add a manual trigger for release workflow [#47302](https://github.com/ClickHouse/ClickHouse/pull/47302) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).


@ -0,0 +1,20 @@
---
sidebar_position: 1
sidebar_label: 2023
---
# 2023 Changelog
### ClickHouse release v23.2.4.12-stable (8fe866cb035) FIXME as compared to v23.2.3.17-stable (dec18bf7281)
#### Bug Fix (user-visible misbehavior in official stable or prestable release)
* Backported in [#47277](https://github.com/ClickHouse/ClickHouse/issues/47277): Fix IPv4/IPv6 serialization/deserialization in binary formats that was broken in https://github.com/ClickHouse/ClickHouse/pull/43221. Closes [#46522](https://github.com/ClickHouse/ClickHouse/issues/46522). [#46616](https://github.com/ClickHouse/ClickHouse/pull/46616) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#47212](https://github.com/ClickHouse/ClickHouse/issues/47212): `INSERT` queries through native TCP protocol and HTTP protocol were not canceled correctly in some cases. It could lead to a partially applied query if a client canceled the query, or if a client died or, in rare cases, on network errors. As a result, it could lead to not working deduplication. Fixes [#27667](https://github.com/ClickHouse/ClickHouse/issues/27667) and [#45377](https://github.com/ClickHouse/ClickHouse/issues/45377). [#46681](https://github.com/ClickHouse/ClickHouse/pull/46681) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Backported in [#47363](https://github.com/ClickHouse/ClickHouse/issues/47363): Fix possible deadlock on distributed query cancellation. [#47161](https://github.com/ClickHouse/ClickHouse/pull/47161) ([Kruglov Pavel](https://github.com/Avogar)).
#### NOT FOR CHANGELOG / INSIGNIFICANT
* Follow-up to [#46681](https://github.com/ClickHouse/ClickHouse/issues/46681) [#47284](https://github.com/ClickHouse/ClickHouse/pull/47284) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Add a manual trigger for release workflow [#47302](https://github.com/ClickHouse/ClickHouse/pull/47302) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).


@ -0,0 +1,123 @@
---
slug: /en/development/build-cross-s390x
sidebar_position: 69
title: How to Build, Run and Debug ClickHouse on Linux for s390x (zLinux)
sidebar_label: Build on Linux for s390x (zLinux)
---
As of this writing (2023-03-10), building for s390x is considered experimental: not all features can be enabled, some features are broken, and support is currently under active development.
## Building
As s390x does not support BoringSSL, the s390x build uses OpenSSL and has two related build options.
- By default, the s390x build will dynamically link to OpenSSL libraries. It will build OpenSSL shared objects, so it's not necessary to install OpenSSL beforehand. (This option is recommended in all cases.)
- Another option is to build OpenSSL in-tree. In this case, two build flags need to be supplied to cmake:
```bash
-DENABLE_OPENSSL_DYNAMIC=0 -DENABLE_OPENSSL=1
```
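For illustration, the in-tree OpenSSL flags can be combined with the s390x toolchain file used in the build step below; this is a minimal sketch (assuming a build directory inside the ClickHouse checkout), not the only valid invocation:
```bash
# configure with the s390x toolchain and in-tree OpenSSL, then build
cmake -DCMAKE_TOOLCHAIN_FILE=cmake/linux/toolchain-s390x.cmake \
      -DENABLE_OPENSSL_DYNAMIC=0 -DENABLE_OPENSSL=1 ..
ninja
```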
These instructions assume that the host machine is x86_64 and has all the tooling required to build natively based on the [build instructions](../development/build.md). They also assume that the host is Ubuntu 22.04, but the following instructions should also work on Ubuntu 20.04.
In addition to the tooling used to build natively, the following packages need to be installed:
```bash
apt-get install binutils-s390x-linux-gnu libc6-dev-s390x-cross gcc-s390x-linux-gnu binfmt-support qemu-user-static
```
If you wish to cross-compile Rust code, install the Rust cross-compilation target for s390x:
```bash
rustup target add s390x-unknown-linux-gnu
```
To build for s390x:
```bash
cmake -DCMAKE_TOOLCHAIN_FILE=cmake/linux/toolchain-s390x.cmake ..
ninja
```
## Running
Once built, the binary can be run with QEMU, e.g.:
```bash
qemu-s390x-static -L /usr/s390x-linux-gnu ./clickhouse
```
## Debugging
Install LLDB:
```bash
apt-get install lldb-15
```
To debug an s390x executable, run clickhouse using QEMU in debug mode:
```bash
qemu-s390x-static -g 31338 -L /usr/s390x-linux-gnu ./clickhouse
```
In another shell, run LLDB and attach. Replace `<Clickhouse Parent Directory>` and `<build directory>` with the values corresponding to your environment.
```bash
lldb-15
(lldb) target create ./clickhouse
Current executable set to '/<Clickhouse Parent Directory>/ClickHouse/<build directory>/programs/clickhouse' (s390x).
(lldb) settings set target.source-map <build directory> /<Clickhouse Parent Directory>/ClickHouse
(lldb) gdb-remote 31338
Process 1 stopped
* thread #1, stop reason = signal SIGTRAP
frame #0: 0x0000004020e74cd0
-> 0x4020e74cd0: lgr %r2, %r15
0x4020e74cd4: aghi %r15, -160
0x4020e74cd8: xc 0(8,%r15), 0(%r15)
0x4020e74cde: brasl %r14, 275429939040
(lldb) b main
Breakpoint 1: 9 locations.
(lldb) c
Process 1 resuming
Process 1 stopped
* thread #1, stop reason = breakpoint 1.1
frame #0: 0x0000004005cd9fc0 clickhouse`main(argc_=1, argv_=0x0000004020e594a8) at main.cpp:450:17
447 #if !defined(FUZZING_MODE)
448 int main(int argc_, char ** argv_)
449 {
-> 450 inside_main = true;
451 SCOPE_EXIT({ inside_main = false; });
452
453 /// PHDR cache is required for query profiler to work reliably
```
## Visual Studio Code integration
- The [CodeLLDB](https://github.com/vadimcn/vscode-lldb) extension is required for visual debugging; the [Command Variable](https://github.com/rioj7/command-variable) extension can help with dynamic launches when using [cmake variants](https://github.com/microsoft/vscode-cmake-tools/blob/main/docs/variants.md).
- Make sure to set the backend to your LLVM installation, e.g. `"lldb.library": "/usr/lib/x86_64-linux-gnu/liblldb-15.so"`.
- Launcher:
```json
{
"version": "0.2.0",
"configurations": [
{
"name": "Debug",
"type": "lldb",
"request": "custom",
"targetCreateCommands": ["target create ${command:cmake.launchTargetDirectory}/clickhouse"],
"processCreateCommands": ["settings set target.source-map ${input:targetdir} ${workspaceFolder}", "gdb-remote 31338"],
"sourceMap": { "${input:targetdir}": "${workspaceFolder}" },
}
],
"inputs": [
{
"id": "targetdir",
"type": "command",
"command": "extension.commandvariable.transform",
"args": {
"text": "${command:cmake.launchTargetDirectory}",
"find": ".*/([^/]+)/[^/]+$",
"replace": "$1"
}
}
]
}
```
- Make sure to run the clickhouse executable under QEMU in debug mode prior to launch. (It is also possible to create a `preLaunchTask` that automates this.)


@ -1,6 +1,6 @@
---
slug: /en/development/contrib
sidebar_position: 71
sidebar_position: 72
sidebar_label: Third-Party Libraries
description: A list of third-party libraries used
---


@ -1,6 +1,6 @@
---
slug: /en/development/style
sidebar_position: 69
sidebar_position: 70
sidebar_label: C++ Guide
description: A list of recommendations regarding coding style, naming convention, formatting and more
---


@ -1,6 +1,6 @@
---
slug: /en/development/tests
sidebar_position: 70
sidebar_position: 71
sidebar_label: Testing
title: ClickHouse Testing
description: Most of ClickHouse features can be tested with functional tests and they are mandatory to use for every change in ClickHouse code that can be tested that way.
@ -31,6 +31,9 @@ folder and run the following command:
PATH=$PATH:<path to clickhouse-client> tests/clickhouse-test 01428_hash_set_nan_key
```
Test results (`stderr` and `stdout`) are written to files `01428_hash_set_nan_key.[stderr|stdout]`, which
are located next to the test file itself (so for `queries/0_stateless/foo.sql` the output will be in `queries/0_stateless/foo.stdout`).
For more options, see `tests/clickhouse-test --help`. You can simply run all tests or run a subset of tests filtered by a substring in the test name: `./clickhouse-test substring`. There are also options to run tests in parallel or in randomized order.
### Adding a New Test


@ -15,6 +15,13 @@ Columns:
- `operation_name` ([String](../../sql-reference/data-types/string.md)) — The name of the operation.
- `kind` ([Enum8](../../sql-reference/data-types/enum.md)) — The [SpanKind](https://opentelemetry.io/docs/reference/specification/trace/api/#spankind) of the span.
- `INTERNAL` — Indicates that the span represents an internal operation within an application.
- `SERVER` — Indicates that the span covers server-side handling of a synchronous RPC or other remote request.
- `CLIENT` — Indicates that the span describes a request to some remote service.
- `PRODUCER` — Indicates that the span describes the initiators of an asynchronous request. This parent span will often end before the corresponding child CONSUMER span, possibly even before the child span starts.
- `CONSUMER` - Indicates that the span describes a child of an asynchronous PRODUCER request.
- `start_time_us` ([UInt64](../../sql-reference/data-types/int-uint.md)) — The start time of the `trace span` (in microseconds).
- `finish_time_us` ([UInt64](../../sql-reference/data-types/int-uint.md)) — The finish time of the `trace span` (in microseconds).
@ -42,6 +49,7 @@ trace_id: cdab0847-0d62-61d5-4d38-dd65b19a1914
span_id: 701487461015578150
parent_span_id: 2991972114672045096
operation_name: DB::Block DB::InterpreterSelectQuery::getSampleBlockImpl()
kind: INTERNAL
start_time_us: 1612374594529090
finish_time_us: 1612374594529108
finish_date: 2021-02-03


@ -112,23 +112,21 @@ See also [data_type_default_nullable](../../../operations/settings/settings.md#d
## Default Values {#default_values}
The column description can specify an expression for a default value, in one of the following ways: `DEFAULT expr`, `MATERIALIZED expr`, `ALIAS expr`.
The column description can specify a default value expression in the form of `DEFAULT expr`, `MATERIALIZED expr`, or `ALIAS expr`. Example: `URLDomain String DEFAULT domain(URL)`.
Example: `URLDomain String DEFAULT domain(URL)`.
The expression `expr` is optional. If it is omitted, the column type must be specified explicitly and the default value will be `0` for numeric columns, `''` (the empty string) for string columns, `[]` (the empty array) for array columns, `1970-01-01` for date columns, or `NULL` for nullable columns.
If an expression for the default value is not defined, the default values will be set to zeros for numbers, empty strings for strings, empty arrays for arrays, `1970-01-01` for dates or a zero Unix timestamp for DateTime, and NULL for Nullable.
The column type of a default value column can be omitted, in which case it is inferred from `expr`'s type. For example, the type of column `EventDate DEFAULT toDate(EventTime)` will be `Date`.
If the default expression is defined, the column type is optional. If there isn't an explicitly defined type, the default expression type is used. Example: for `EventDate DEFAULT toDate(EventTime)`, the `Date` type will be used for the `EventDate` column.
If both a data type and a default value expression are specified, an implicit type casting function is inserted that converts the expression to the specified type. Example: `Hits UInt32 DEFAULT 0` is internally represented as `Hits UInt32 DEFAULT toUInt32(0)`.
If the data type and default expression are defined explicitly, this expression will be cast to the specified type using type casting functions. Example: `Hits UInt32 DEFAULT 0` means the same thing as `Hits UInt32 DEFAULT toUInt32(0)`.
Default expressions may be defined as an arbitrary expression from table constants and columns. When creating and changing the table structure, it checks that expressions do not contain loops. For INSERT, it checks that expressions are resolvable, that is, that all columns they can be calculated from have been passed.
A default value expression `expr` may reference arbitrary table columns and constants. ClickHouse checks that changes of the table structure do not introduce loops in the expression calculation. For INSERT, it checks that expressions are resolvable, that is, that all columns they can be calculated from have been passed.
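A minimal sketch combining these rules; the table and column names are illustrative only and not part of the examples above:
```sql
CREATE TABLE default_values_example
(
    URL String,
    -- no type given: it is inferred from the expression, here String
    URLDomain DEFAULT domain(URL),
    -- type and expression given: the literal is implicitly cast,
    -- equivalent to Hits UInt32 DEFAULT toUInt32(0)
    Hits UInt32 DEFAULT 0
)
ENGINE = MergeTree
ORDER BY URL;
```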
### DEFAULT
`DEFAULT expr`
Normal default value. If the INSERT query does not specify the corresponding column, it will be filled in by computing the corresponding expression.
Normal default value. If the value of such a column is not specified in an INSERT query, it is computed from `expr`.
Example:
@ -154,9 +152,9 @@ SELECT * FROM test;
`MATERIALIZED expr`
Materialized expression. Such a column can't be specified for INSERT, because it is always calculated.
For an INSERT without a list of columns, these columns are not considered.
In addition, this column is not substituted when using an asterisk in a SELECT query. This is to preserve the invariant that the dump obtained using `SELECT *` can be inserted back into the table using INSERT without specifying the list of columns.
Materialized expression. Values of such columns are always calculated, they cannot be specified in INSERT queries.
Also, default value columns of this type are not included in the result of `SELECT *`. This is to preserve the invariant that the result of a `SELECT *` can always be inserted back into the table using `INSERT`. This behavior can be disabled with setting `asterisk_include_materialized_columns`.
Example:
@ -192,8 +190,9 @@ SELECT * FROM test SETTINGS asterisk_include_materialized_columns=1;
`EPHEMERAL [expr]`
Ephemeral column. Such a column isn't stored in the table and cannot be SELECTed, but can be referenced in the defaults of the CREATE statement. If `expr` is omitted, the type for the column is required.
An INSERT without a list of columns will skip such a column, so the SELECT/INSERT invariant is preserved: the dump obtained using `SELECT *` can be inserted back into the table using INSERT without specifying the list of columns.
Ephemeral column. Columns of this type are not stored in the table and it is not possible to SELECT from them. The only purpose of ephemeral columns is to build default value expressions of other columns from them.
An insert without explicitly specified columns will skip columns of this type. This is to preserve the invariant that the result of a `SELECT *` can always be inserted back into the table using `INSERT`.
Example:
@ -205,7 +204,7 @@ CREATE OR REPLACE TABLE test
hexed FixedString(4) DEFAULT unhex(unhexed)
)
ENGINE = MergeTree
ORDER BY id
ORDER BY id;
INSERT INTO test (id, unhexed) Values (1, '5a90b714');
@ -227,9 +226,9 @@ hex(hexed): 5A90B714
`ALIAS expr`
Synonym. Such a column isn't stored in the table at all.
Its values can't be inserted in a table, and it is not substituted when using an asterisk in a SELECT query.
It can be used in SELECTs if the alias is expanded during query parsing.
Calculated columns (synonym). Columns of this type are not stored in the table and it is not possible to INSERT values into them.
When SELECT queries explicitly reference columns of this type, the value is computed at query time from `expr`. By default, `SELECT *` excludes ALIAS columns. This behavior can be disabled with the setting `asterisk_include_alias_columns`.
When using the ALTER query to add new columns, old data for these columns is not written. Instead, when reading old data that does not have values for the new columns, expressions are computed on the fly by default. However, if running the expressions requires different columns that are not indicated in the query, these columns will additionally be read, but only for the blocks of data that need it.
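For illustration, a hypothetical ALIAS column added via ALTER; the table and column names here are assumptions, not taken from the examples above:
```sql
-- assuming a table `hits` that already has a `URL String` column
ALTER TABLE hits ADD COLUMN URLDomain String ALIAS domain(URL);

SELECT URLDomain FROM hits;  -- computed on the fly from domain(URL); nothing is stored
SELECT * FROM hits;          -- ALIAS columns are excluded from SELECT * by default
```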


@ -89,7 +89,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
└─────────────────────┴───────────┴──────────┴──────┘
```
Первая строка отменяет предыдущее состояние объекта (пользователя). Она должен повторять все поля из ключа сортировки для отменённого состояния за исключением `Sign`.
Первая строка отменяет предыдущее состояние объекта (пользователя). Она должна повторять все поля из ключа сортировки для отменённого состояния за исключением `Sign`.
Вторая строка содержит текущее состояние.


@ -584,7 +584,7 @@ TTL d + INTERVAL 1 MONTH GROUP BY k1, k2 SET x = max(x), y = min(y);
Данные с истекшим `TTL` удаляются, когда ClickHouse мёржит куски данных.
Когда ClickHouse видит, что некоторые данные устарели, он выполняет внеплановые мёржи. Для управление частотой подобных мёржей, можно задать настройку `merge_with_ttl_timeout`. Если её значение слишком низкое, придется выполнять много внеплановых мёржей, которые могут начать потреблять значительную долю ресурсов сервера.
Когда ClickHouse видит, что некоторые данные устарели, он выполняет внеплановые мёржи. Для управления частотой подобных мёржей, можно задать настройку `merge_with_ttl_timeout`. Если её значение слишком низкое, придется выполнять много внеплановых мёржей, которые могут начать потреблять значительную долю ресурсов сервера.
Если вы выполните запрос `SELECT` между слияниями вы можете получить устаревшие данные. Чтобы избежать этого используйте запрос [OPTIMIZE](../../../engines/table-engines/mergetree-family/mergetree.md#misc_operations-optimize) перед `SELECT`.
@ -679,7 +679,7 @@ TTL d + INTERVAL 1 MONTH GROUP BY k1, k2 SET x = max(x), y = min(y);
- `policy_name_N` — название политики. Названия политик должны быть уникальны.
- `volume_name_N` — название тома. Названия томов должны быть уникальны.
- `disk` — диск, находящийся внутри тома.
- `max_data_part_size_bytes` — максимальный размер куска данных, который может находится на любом из дисков этого тома. Если в результате слияния размер куска ожидается больше, чем max_data_part_size_bytes, то этот кусок будет записан в следующий том. В основном эта функция позволяет хранить новые / мелкие куски на горячем (SSD) томе и перемещать их на холодный (HDD) том, когда они достигают большого размера. Не используйте этот параметр, если политика имеет только один том.
- `max_data_part_size_bytes` — максимальный размер куска данных, который может находиться на любом из дисков этого тома. Если в результате слияния размер куска ожидается больше, чем max_data_part_size_bytes, то этот кусок будет записан в следующий том. В основном эта функция позволяет хранить новые / мелкие куски на горячем (SSD) томе и перемещать их на холодный (HDD) том, когда они достигают большого размера. Не используйте этот параметр, если политика имеет только один том.
- `move_factor` — доля доступного свободного места на томе, если места становится меньше, то данные начнут перемещение на следующий том, если он есть (по умолчанию 0.1). Для перемещения куски сортируются по размеру от большего к меньшему (по убыванию) и выбираются куски, совокупный размер которых достаточен для соблюдения условия `move_factor`, если совокупный размер всех партов недостаточен, будут перемещены все парты.
- `prefer_not_to_merge` — Отключает слияние кусков данных, хранящихся на данном томе. Если данная настройка включена, то слияние данных, хранящихся на данном томе, не допускается. Это позволяет контролировать работу ClickHouse с медленными дисками.
@ -730,7 +730,7 @@ TTL d + INTERVAL 1 MONTH GROUP BY k1, k2 SET x = max(x), y = min(y);
В приведенном примере, политика `hdd_in_order` реализует прицип [round-robin](https://ru.wikipedia.org/wiki/Round-robin_(%D0%B0%D0%BB%D0%B3%D0%BE%D1%80%D0%B8%D1%82%D0%BC)). Так как в политике есть всего один том (`single`), то все записи производятся на его диски по круговому циклу. Такая политика может быть полезна при наличии в системе нескольких похожих дисков, но при этом не сконфигурирован RAID. Учтите, что каждый отдельный диск ненадёжен и чтобы не потерять важные данные это необходимо скомпенсировать за счет хранения данных в трёх копиях.
Если система содержит диски различных типов, то может пригодиться политика `moving_from_ssd_to_hdd`. В томе `hot` находится один SSD-диск (`fast_ssd`), а также задается ограничение на максимальный размер куска, который может храниться на этом томе (1GB). Все куски такой таблицы больше 1GB будут записываться сразу на том `cold`, в котором содержится один HDD-диск `disk1`. Также, при заполнении диска `fast_ssd` более чем на 80% данные будут переносится на диск `disk1` фоновым процессом.
Если система содержит диски различных типов, то может пригодиться политика `moving_from_ssd_to_hdd`. В томе `hot` находится один SSD-диск (`fast_ssd`), а также задается ограничение на максимальный размер куска, который может храниться на этом томе (1GB). Все куски такой таблицы больше 1GB будут записываться сразу на том `cold`, в котором содержится один HDD-диск `disk1`. Также при заполнении диска `fast_ssd` более чем на 80% данные будут переноситься на диск `disk1` фоновым процессом.
Порядок томов в политиках хранения важен, при достижении условий на переполнение тома данные переносятся на следующий. Порядок дисков в томах так же важен, данные пишутся по очереди на каждый из них.


@ -8,6 +8,7 @@ sidebar_label: "Клиентские библиотеки от сторонни
:::danger "Disclaimer"
Яндекс не поддерживает перечисленные ниже библиотеки и не проводит тщательного тестирования для проверки их качества.
:::
- Python:
- [infi.clickhouse_orm](https://github.com/Infinidat/infi.clickhouse_orm)


@ -177,19 +177,20 @@ sidebar_label: "Визуальные интерфейсы от сторонни
### Yandex DataLens {#yandex-datalens}
[Yandex DataLens](https://cloud.yandex.ru/services/datalens) — cервис визуализации и анализа данных.
[Yandex DataLens](https://datalens.yandex.ru) — cервис визуализации и анализа данных.
Основные возможности:
- Широкий выбор инструментов визуализации, от простых столбчатых диаграмм до сложных дашбордов.
- Возможность опубликовать дашборды на широкую аудиторию.
- Поддержка множества источников данных, включая ClickHouse.
- Хранение материализованных данных в кластере ClickHouse DataLens.
Для небольших проектов DataLens [доступен бесплатно](https://cloud.yandex.ru/docs/datalens/pricing), в том числе и для коммерческого использования.
DataLens [доступен бесплатно](https://cloud.yandex.ru/docs/datalens/pricing), в том числе и для коммерческого использования.
- [Знакомство с DataLens](https://youtu.be/57ngi_6BINE).
- [Чат сообщества DataLens](https://t.me/YandexDataLens)
- [Документация DataLens](https://cloud.yandex.ru/docs/datalens/).
- [Пособие по визуализации данных из ClickHouse](https://cloud.yandex.ru/docs/solutions/datalens/data-from-ch-visualization).
- [Сценарий по визуализации данных из ClickHouse](https://cloud.yandex.ru/docs/solutions/datalens/data-from-ch-visualization).
### Holistics Software {#holistics-software}


@ -325,21 +325,21 @@ clickhouse-keeper-converter --zookeeper-logs-dir /var/lib/zookeeper/version-2 --
Например, для кластера из 3 нод, алгоритм кворума продолжает работать при отказе не более чем одной ноды.
Конфигурация кластера может быть изменена динамически с некоторыми ограничениями.
Переконфигурация также использует Raft, поэтому для добавление новой ноды кластера или исключения старой ноды из него требуется достижения кворума в рамках текущей конфигурации кластера.
Переконфигурация также использует Raft, поэтому для добавления новой ноды кластера или исключения старой ноды требуется достижение кворума в рамках текущей конфигурации кластера.
Если в вашем кластере произошел отказ большего числа нод, чем допускает Raft для вашей текущей конфигурации и у вас нет возможности восстановить их работоспособность, Raft перестанет работать и не позволит изменить конфигурацию стандартным механизмом.
Тем не менее ClickHousr Keeper имеет возможность запуститься в режиме восстановления, который позволяет переконфигурировать класте используя только одну ноду кластера.
Тем не менее ClickHouse Keeper имеет возможность запуститься в режиме восстановления, который позволяет переконфигурировать кластер используя только одну ноду кластера.
Этот механизм может использоваться только как крайняя мера, когда вы не можете восстановить существующие ноды кластера или запустить новый сервер с тем же идентификатором.
Важно:
- Удостоверьтесь, что отказавшие ноды не смогут в дальнейшем подключиться к кластеру в будущем.
- Не запускайте новые ноды, пока не завешите процедуру ниже.
- Не запускайте новые ноды, пока не завершите процедуру ниже.
После того, как выполнили действия выше выполните следующие шаги.
1. Выберете одну ноду Keeper, которая станет новым лидером. Учтите, что данные которые с этой ноды будут испольщзованы всем кластером, поэтому рекомендуется выбрать ноду с наиболее актуальным состоянием.
1. Выберете одну ноду Keeper, которая станет новым лидером. Учтите, что данные с этой ноды будут использованы всем кластером, поэтому рекомендуется выбрать ноду с наиболее актуальным состоянием.
2. Перед дальнейшими действиям сделайте резервную копию данных из директорий `log_storage_path` и `snapshot_storage_path`.
3. Измените настройки на всех нодах кластера, которые вы собираетесь использовать.
4. Отправьте команду `rcvr` на ноду, которую вы выбрали или остановите ее и запустите заново с аргументом `--force-recovery`. Это переведет ноду в режим восстановления.
4. Отправьте команду `rcvr` на ноду, которую вы выбрали, или остановите ее и запустите заново с аргументом `--force-recovery`. Это переведет ноду в режим восстановления.
5. Запускайте остальные ноды кластера по одной и проверяйте, что команда `mntr` возвращает `follower` в выводе состояния `zk_server_state` перед тем, как запустить следующую ноду.
6. Пока нода работает в режиме восстановления, лидер будет возвращать ошибку на запрос `mntr` пока кворум не будет достигнут с помощью новых нод. Любые запросы от клиентов и постедователей будут возвращать ошибку.
6. Пока нода работает в режиме восстановления, лидер будет возвращать ошибку на запрос `mntr` пока кворум не будет достигнут с помощью новых нод. Любые запросы от клиентов и последователей будут возвращать ошибку.
7. После достижения кворума лидер перейдет в нормальный режим работы и станет обрабатывать все запросы через Raft. Удостоверьтесь, что запрос `mntr` возвращает `leader` в выводе состояния `zk_server_state`.


@ -10,6 +10,7 @@ ClickHouse поддерживает [OpenTelemetry](https://opentelemetry.io/)
:::danger "Предупреждение"
Поддержка стандарта экспериментальная и будет со временем меняться.
:::
## Обеспечение поддержки контекста трассировки в ClickHouse


@ -26,6 +26,7 @@ ClickHouse перезагружает встроенные словари с з
:::danger "Внимание"
Лучше не использовать, если вы только начали работать с ClickHouse.
:::
Общий вид конфигурации:
@ -1064,6 +1065,7 @@ ClickHouse использует потоки из глобального пул
:::danger "Обратите внимание"
Завершающий слеш обязателен.
:::
**Пример**
@ -1330,6 +1332,7 @@ TCP порт для защищённого обмена данными с кли
:::danger "Обратите внимание"
Завершающий слеш обязателен.
:::
**Пример**


@ -82,7 +82,7 @@ sidebar_label: "Хранение данных на внешних дисках"
- `type` — `encrypted`. Иначе зашифрованный диск создан не будет.
- `disk` — тип диска для хранения данных.
- `key` — ключ для шифрования и расшифровки. Тип: [Uint64](../sql-reference/data-types/int-uint.md). Вы можете использовать параметр `key_hex` для шифрования в шестнадцатеричной форме.
- `key` — ключ для шифрования и расшифровки. Тип: [UInt64](../sql-reference/data-types/int-uint.md). Вы можете использовать параметр `key_hex` для шифрования в шестнадцатеричной форме.
Вы можете указать несколько ключей, используя атрибут `id` (смотрите пример выше).
Необязательные параметры:


@ -6,7 +6,7 @@ sidebar_label: AggregateFunction
# AggregateFunction {#data-type-aggregatefunction}
Агрегатные функции могут обладать определяемым реализацией промежуточным состоянием, которое может быть сериализовано в тип данных, соответствующий AggregateFunction(…), и быть записано в таблицу обычно посредством [материализованного представления] (../../sql-reference/statements/create/view.md). Чтобы получить промежуточное состояние, обычно используются агрегатные функции с суффиксом `-State`. Чтобы в дальнейшем получить агрегированные данные необходимо использовать те же агрегатные функции с суффиксом `-Merge`.
Агрегатные функции могут обладать определяемым реализацией промежуточным состоянием, которое может быть сериализовано в тип данных, соответствующий AggregateFunction(…), и быть записано в таблицу обычно посредством [материализованного представления](../../sql-reference/statements/create/view.md). Чтобы получить промежуточное состояние, обычно используются агрегатные функции с суффиксом `-State`. Чтобы в дальнейшем получить агрегированные данные необходимо использовать те же агрегатные функции с суффиксом `-Merge`.
`AggregateFunction(name, types_of_arguments…)` — параметрический тип данных.


@ -10,6 +10,7 @@ ClickHouse поддерживает типы данных для отображ
:::danger "Предупреждение"
Сейчас использование типов данных для работы с географическими структурами является экспериментальной возможностью. Чтобы использовать эти типы данных, включите настройку `allow_experimental_geo_types = 1`.
:::
**См. также**
- [Хранение географических структур данных](https://ru.wikipedia.org/wiki/GeoJSON).


@ -10,6 +10,7 @@ sidebar_label: Interval
:::danger "Внимание"
Нельзя использовать типы данных `Interval` для хранения данных в таблице.
:::
Структура:


@ -34,7 +34,7 @@ SELECT tuple(1,'a') AS x, toTypeName(x)
## Особенности работы с типами данных {#osobennosti-raboty-s-tipami-dannykh}
При создании кортежа «на лету» ClickHouse автоматически определяет тип каждого аргументов как минимальный из типов, который может сохранить значение аргумента. Если аргумент — [NULL](../../sql-reference/data-types/tuple.md#null-literal), то тип элемента кортежа — [Nullable](nullable.md).
При создании кортежа «на лету» ClickHouse автоматически определяет тип всех аргументов как минимальный из типов, который может сохранить значение аргумента. Если аргумент — [NULL](../../sql-reference/data-types/tuple.md#null-literal), то тип элемента кортежа — [Nullable](nullable.md).
Пример автоматического определения типа данных:


@ -61,7 +61,7 @@ LAYOUT(POLYGON(STORE_POLYGON_KEY_COLUMN 1))
- Мультиполигон. Представляет из себя массив полигонов. Каждый полигон задается двумерным массивом точек — первый элемент этого массива задает внешнюю границу полигона,
последующие элементы могут задавать дырки, вырезаемые из него.
Точки могут задаваться массивом или кортежем из своих координат. В текущей реализации поддерживается только двумерные точки.
Точки могут задаваться массивом или кортежем из своих координат. В текущей реализации поддерживаются только двумерные точки.
Пользователь может [загружать свои собственные данные](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md) во всех поддерживаемых ClickHouse форматах.
@ -80,7 +80,7 @@ LAYOUT(POLYGON(STORE_POLYGON_KEY_COLUMN 1))
- `POLYGON`. Синоним к `POLYGON_INDEX_CELL`.
Запросы к словарю осуществляются с помощью стандартных [функций](../../../sql-reference/functions/ext-dict-functions.md) для работы со внешними словарями.
Важным отличием является то, что здесь ключами будут являются точки, для которых хочется найти содержащий их полигон.
Важным отличием является то, что здесь ключами являются точки, для которых хочется найти содержащий их полигон.
**Пример**


@ -59,6 +59,7 @@ ClickHouse поддерживает следующие виды ключей:
:::danger "Обратите внимание"
Ключ не надо дополнительно описывать в атрибутах.
:::
### Числовой ключ {#ext_dict-numeric-key}


@ -14,7 +14,7 @@ ClickHouse:
- Периодически обновляет их и динамически подгружает отсутствующие значения.
- Позволяет создавать внешние словари с помощью xml-файлов или [DDL-запросов](../../statements/create/dictionary.md#create-dictionary-query).
Конфигурация внешних словарей может находится в одном или нескольких xml-файлах. Путь к конфигурации указывается в параметре [dictionaries_config](../../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-dictionaries_config).
Конфигурация внешних словарей может находиться в одном или нескольких xml-файлах. Путь к конфигурации указывается в параметре [dictionaries_config](../../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-dictionaries_config).
Словари могут загружаться при старте сервера или при первом использовании, в зависимости от настройки [dictionaries_lazy_load](../../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-dictionaries_lazy_load).


@ -22,7 +22,7 @@ sidebar_label: "Функции интроспекции"
ClickHouse сохраняет отчеты профилировщика в [журнал трассировки](../../operations/system-tables/trace_log.md#system_tables-trace_log) в системной таблице. Убедитесь, что таблица и профилировщик настроены правильно.
## addresssToLine {#addresstoline}
## addressToLine {#addresstoline}
Преобразует адрес виртуальной памяти внутри процесса сервера ClickHouse в имя файла и номер строки в исходном коде ClickHouse.


@ -8,7 +8,8 @@ slug: /ru/sql-reference/operators/exists
`EXISTS` может быть использован в секции [WHERE](../../sql-reference/statements/select/where.md).
:::danger "Предупреждение"
Ссылки на таблицы или столбцы основного запроса не поддерживаются в подзапросе.
:::
**Синтаксис**


@ -38,9 +38,9 @@ SELECT '1' IN (SELECT 1);
└──────────────────────┘
```
Если в качестве правой части оператора указано имя таблицы (например, `UserID IN users`), то это эквивалентно подзапросу `UserID IN (SELECT * FROM users)`. Это используется при работе с внешними данными, отправляемым вместе с запросом. Например, вместе с запросом может быть отправлено множество идентификаторов посетителей, загруженное во временную таблицу users, по которому следует выполнить фильтрацию.
Если в качестве правой части оператора указано имя таблицы (например, `UserID IN users`), то это эквивалентно подзапросу `UserID IN (SELECT * FROM users)`. Это используется при работе с внешними данными, отправляемыми вместе с запросом. Например, вместе с запросом может быть отправлено множество идентификаторов посетителей, загруженное во временную таблицу users, по которому следует выполнить фильтрацию.
Если в качестве правой части оператора, указано имя таблицы, имеющий движок Set (подготовленное множество, постоянно находящееся в оперативке), то множество не будет создаваться заново при каждом запросе.
Если в качестве правой части оператора, указано имя таблицы, имеющей движок Set (подготовленное множество, постоянно находящееся в оперативке), то множество не будет создаваться заново при каждом запросе.
В подзапросе может быть указано более одного столбца для фильтрации кортежей.
Пример:
@ -49,9 +49,9 @@ SELECT '1' IN (SELECT 1);
SELECT (CounterID, UserID) IN (SELECT CounterID, UserID FROM ...) FROM ...
```
Типы столбцов слева и справа оператора IN, должны совпадать.
Типы столбцов слева и справа оператора IN должны совпадать.
Оператор IN и подзапрос могут встречаться в любой части запроса, в том числе в агрегатных и лямбда функциях.
Оператор IN и подзапрос могут встречаться в любой части запроса, в том числе в агрегатных и лямбда-функциях.
Пример:
``` sql
@ -122,7 +122,7 @@ FROM t_null
Существует два варианта IN-ов с подзапросами (аналогично для JOIN-ов): обычный `IN` / `JOIN` и `GLOBAL IN` / `GLOBAL JOIN`. Они отличаются способом выполнения при распределённой обработке запроса.
:::note "Attention"
:::note "Внимание"
Помните, что алгоритмы, описанные ниже, могут работать иначе в зависимости от [настройки](../../operations/settings/settings.md) `distributed_product_mode`.
:::
При использовании обычного IN-а, запрос отправляется на удалённые серверы, и на каждом из них выполняются подзапросы в секциях `IN` / `JOIN`.
@ -228,7 +228,7 @@ SELECT CounterID, count() FROM distributed_table_1 WHERE UserID IN (SELECT UserI
SETTINGS max_parallel_replicas=3
```
преобразуются на каждом сервере в
преобразуется на каждом сервере в
```sql
SELECT CounterID, count() FROM local_table_1 WHERE UserID IN (SELECT UserID FROM local_table_2 WHERE CounterID < 100)


@ -263,6 +263,7 @@ SELECT toDateTime('2014-10-26 00:00:00', 'Europe/Moscow') AS time, time + 60 * 6
│ 2014-10-26 00:00:00 │ 2014-10-26 23:00:00 │ 2014-10-27 00:00:00 │
└─────────────────────┴─────────────────────┴─────────────────────┘
```
:::
**Смотрите также**


@ -6,7 +6,7 @@ sidebar_label: VIEW
# Выражение ALTER TABLE … MODIFY QUERY {#alter-modify-query}
Вы можеие изменить запрос `SELECT`, который был задан при создании [материализованного представления](../create/view.md#materialized), с помощью запроса 'ALTER TABLE … MODIFY QUERY'. Используйте его если при создании материализованного представления не использовалась секция `TO [db.]name`. Настройка `allow_experimental_alter_materialized_view_structure` должна быть включена.
Вы можете изменить запрос `SELECT`, который был задан при создании [материализованного представления](../create/view.md#materialized), с помощью запроса 'ALTER TABLE … MODIFY QUERY'. Используйте его если при создании материализованного представления не использовалась секция `TO [db.]name`. Настройка `allow_experimental_alter_materialized_view_structure` должна быть включена.
Если при создании материализованного представления использовалась конструкция `TO [db.]name`, то для изменения отсоедините представление с помощью [DETACH](../detach.md), измените таблицу с помощью [ALTER TABLE](index.md), а затем снова присоедините запрос с помощью [ATTACH](../attach.md).


@ -10,6 +10,7 @@ sidebar_label: OPTIMIZE
:::danger "Внимание"
`OPTIMIZE` не устраняет причину появления ошибки `Too many parts`.
:::
**Синтаксис**


@ -2,18 +2,21 @@
#include <Common/SipHash.h>
#include <Common/FieldVisitorToString.h>
#include <DataTypes/IDataType.h>
#include <Analyzer/ConstantNode.h>
#include <IO/WriteBufferFromString.h>
#include <IO/Operators.h>
#include <DataTypes/IDataType.h>
#include <DataTypes/DataTypeSet.h>
#include <Parsers/ASTFunction.h>
#include <Functions/IFunction.h>
#include <AggregateFunctions/IAggregateFunction.h>
#include <Analyzer/Utils.h>
#include <Analyzer/ConstantNode.h>
#include <Analyzer/IdentifierNode.h>
namespace DB
@ -44,17 +47,29 @@ const DataTypes & FunctionNode::getArgumentTypes() const
ColumnsWithTypeAndName FunctionNode::getArgumentColumns() const
{
const auto & arguments = getArguments().getNodes();
size_t arguments_size = arguments.size();
ColumnsWithTypeAndName argument_columns;
argument_columns.reserve(arguments.size());
for (const auto & arg : arguments)
for (size_t i = 0; i < arguments_size; ++i)
{
ColumnWithTypeAndName argument;
argument.type = arg->getResultType();
if (auto * constant = arg->as<ConstantNode>())
argument.column = argument.type->createColumnConst(1, constant->getValue());
argument_columns.push_back(std::move(argument));
const auto & argument = arguments[i];
ColumnWithTypeAndName argument_column;
if (isNameOfInFunction(function_name) && i == 1)
argument_column.type = std::make_shared<DataTypeSet>();
else
argument_column.type = argument->getResultType();
auto * constant = argument->as<ConstantNode>();
if (constant && !isNotCreatable(argument_column.type))
argument_column.column = argument_column.type->createColumnConst(1, constant->getValue());
argument_columns.push_back(std::move(argument_column));
}
return argument_columns;
}


@ -99,8 +99,9 @@ class InDepthQueryTreeVisitorWithContext
public:
using VisitQueryTreeNodeType = std::conditional_t<const_visitor, const QueryTreeNodePtr, QueryTreeNodePtr>;
explicit InDepthQueryTreeVisitorWithContext(ContextPtr context)
explicit InDepthQueryTreeVisitorWithContext(ContextPtr context, size_t initial_subquery_depth = 0)
: current_context(std::move(context))
, subquery_depth(initial_subquery_depth)
{}
/// Return true if visitor should traverse tree top to bottom, false otherwise
@ -125,11 +126,17 @@ public:
return current_context->getSettingsRef();
}
size_t getSubqueryDepth() const
{
return subquery_depth;
}
void visit(VisitQueryTreeNodeType & query_tree_node)
{
auto current_scope_context_ptr = current_context;
SCOPE_EXIT(
current_context = std::move(current_scope_context_ptr);
--subquery_depth;
);
if (auto * query_node = query_tree_node->template as<QueryNode>())
@ -137,6 +144,8 @@ public:
else if (auto * union_node = query_tree_node->template as<UnionNode>())
current_context = union_node->getContext();
++subquery_depth;
bool traverse_top_to_bottom = getDerived().shouldTraverseTopToBottom();
if (!traverse_top_to_bottom)
visitChildren(query_tree_node);
@ -145,7 +154,12 @@ public:
if (traverse_top_to_bottom)
visitChildren(query_tree_node);
getDerived().leaveImpl(query_tree_node);
}
void leaveImpl(VisitQueryTreeNodeType & node [[maybe_unused]])
{}
private:
Derived & getDerived()
{
@ -172,6 +186,7 @@ private:
}
ContextPtr current_context;
size_t subquery_depth = 0;
};
template <typename Derived>


@ -106,6 +106,12 @@ public:
return locality;
}
/// Set join locality
void setLocality(JoinLocality locality_value)
{
locality = locality_value;
}
/// Get join strictness
JoinStrictness getStrictness() const
{


@ -42,7 +42,7 @@ private:
return;
const auto & storage = table_node ? table_node->getStorage() : table_function_node->getStorage();
bool is_final_supported = storage && storage->supportsFinal() && !storage->isRemote();
bool is_final_supported = storage && storage->supportsFinal();
if (!is_final_supported)
return;


@ -7,8 +7,6 @@
#include <Analyzer/ConstantNode.h>
#include <Analyzer/HashUtils.h>
#include <DataTypes/DataTypeString.h>
namespace DB
{
@ -100,6 +98,9 @@ private:
}
}
if (and_operands.size() == function_node.getArguments().getNodes().size())
return;
if (and_operands.size() == 1)
{
/// AND operator can have UInt8 or bool as its type.
@ -207,6 +208,9 @@ private:
or_operands.push_back(std::move(in_function));
}
if (or_operands.size() == function_node.getArguments().getNodes().size())
return;
if (or_operands.size() == 1)
{
/// if the result type of operand is the same as the result type of OR


@ -69,8 +69,7 @@ private:
for (auto it = function_arguments.rbegin(); it != function_arguments.rend(); ++it)
candidates.push_back({ *it, is_deterministic });
// Using DFS we traverse function tree and try to find if it uses other keys as function arguments.
// TODO: Also process CONSTANT here. We can simplify GROUP BY x, x + 1 to GROUP BY x.
/// Using DFS we traverse function tree and try to find if it uses other keys as function arguments.
while (!candidates.empty())
{
auto [candidate, parents_are_only_deterministic] = candidates.back();
@ -108,6 +107,7 @@ private:
return false;
}
}
return true;
}


@ -193,13 +193,9 @@ namespace ErrorCodes
* lookup should not be continued, and exception must be thrown because if lookup continues identifier can be resolved from parent scope.
*
* TODO: Update exception messages
* TODO: JOIN TREE subquery constant columns
* TODO: Table identifiers with optional UUID.
* TODO: Lookup functions arrayReduce(sum, [1, 2, 3]);
* TODO: SELECT (compound_expression).*, (compound_expression).COLUMNS are not supported on parser level.
* TODO: SELECT a.b.c.*, a.b.c.COLUMNS. Qualified matcher where identifier size is greater than 2 are not supported on parser level.
* TODO: Support function identifier resolve from parent query scope, if lambda in parent scope does not capture any columns.
* TODO: Scalar subqueries cache.
*/
namespace
@ -701,7 +697,9 @@ struct IdentifierResolveScope
}
if (auto * union_node = scope_node->as<UnionNode>())
{
context = union_node->getContext();
}
else if (auto * query_node = scope_node->as<QueryNode>())
{
context = query_node->getContext();
@ -1336,6 +1334,9 @@ private:
/// Global resolve expression node to projection names map
std::unordered_map<QueryTreeNodePtr, ProjectionNames> resolved_expressions;
/// Global resolve expression node to tree size
std::unordered_map<QueryTreeNodePtr, size_t> node_to_tree_size;
/// Global scalar subquery to scalar value map
std::unordered_map<QueryTreeNodePtrWithHash, Block> scalar_subquery_to_scalar_value;
@ -1864,7 +1865,10 @@ void QueryAnalyzer::evaluateScalarSubqueryIfNeeded(QueryTreeNodePtr & node, Iden
Block scalar_block;
QueryTreeNodePtrWithHash node_with_hash(node);
auto node_without_alias = node->clone();
node_without_alias->removeAlias();
QueryTreeNodePtrWithHash node_with_hash(node_without_alias);
auto scalar_value_it = scalar_subquery_to_scalar_value.find(node_with_hash);
if (scalar_value_it != scalar_subquery_to_scalar_value.end())
@ -1954,21 +1958,7 @@ void QueryAnalyzer::evaluateScalarSubqueryIfNeeded(QueryTreeNodePtr & node, Iden
*
* Example: SELECT (SELECT 2 AS x, x)
*/
NameSet block_column_names;
size_t unique_column_name_counter = 1;
for (auto & column_with_type : block)
{
if (!block_column_names.contains(column_with_type.name))
{
block_column_names.insert(column_with_type.name);
continue;
}
column_with_type.name += '_';
column_with_type.name += std::to_string(unique_column_name_counter);
++unique_column_name_counter;
}
makeUniqueColumnNamesInBlock(block);
scalar_block.insert({
ColumnTuple::create(block.getColumns()),
@ -2348,7 +2338,13 @@ QueryTreeNodePtr QueryAnalyzer::tryResolveTableIdentifierFromDatabaseCatalog(con
storage_id = context->resolveStorageID(storage_id);
bool is_temporary_table = storage_id.getDatabaseName() == DatabaseCatalog::TEMPORARY_DATABASE;
auto storage = DatabaseCatalog::instance().tryGetTable(storage_id, context);
StoragePtr storage;
if (is_temporary_table)
storage = DatabaseCatalog::instance().getTable(storage_id, context);
else
storage = DatabaseCatalog::instance().tryGetTable(storage_id, context);
if (!storage)
return {};
@ -2914,7 +2910,10 @@ QueryTreeNodePtr QueryAnalyzer::tryResolveIdentifierFromTableExpression(const Id
break;
IdentifierLookup column_identifier_lookup = {qualified_identifier_with_removed_part, IdentifierLookupContext::EXPRESSION};
if (tryBindIdentifierToAliases(column_identifier_lookup, scope) ||
if (tryBindIdentifierToAliases(column_identifier_lookup, scope))
break;
if (table_expression_data.should_qualify_columns &&
tryBindIdentifierToTableExpressions(column_identifier_lookup, table_expression_node, scope))
break;
@ -3018,11 +3017,39 @@ QueryTreeNodePtr QueryAnalyzer::tryResolveIdentifierFromJoin(const IdentifierLoo
resolved_identifier = std::move(result_column_node);
}
else if (scope.joins_count == 1 && scope.context->getSettingsRef().single_join_prefer_left_table)
else if (left_resolved_identifier->isEqual(*right_resolved_identifier, IQueryTreeNode::CompareOptions{.compare_aliases = false}))
{
const auto & identifier_path_part = identifier_lookup.identifier.front();
auto * left_resolved_identifier_column = left_resolved_identifier->as<ColumnNode>();
auto * right_resolved_identifier_column = right_resolved_identifier->as<ColumnNode>();
if (left_resolved_identifier_column && right_resolved_identifier_column)
{
const auto & left_column_source_alias = left_resolved_identifier_column->getColumnSource()->getAlias();
const auto & right_column_source_alias = right_resolved_identifier_column->getColumnSource()->getAlias();
/** If the column from the right table was resolved using an alias, we prefer the column from the right table.
*
* Example: SELECT dummy FROM system.one JOIN system.one AS A ON A.dummy = system.one.dummy;
*
* If an alias is specified for the left table, no alias is specified for the right table, and the identifier was resolved
* without using the left table alias, we prefer the column from the right table.
*
* Example: SELECT dummy FROM system.one AS A JOIN system.one ON A.dummy = system.one.dummy;
*
* Otherwise we prefer the column from the left table.
*/
if (identifier_path_part == right_column_source_alias)
return right_resolved_identifier;
else if (!left_column_source_alias.empty() &&
right_column_source_alias.empty() &&
identifier_path_part != left_column_source_alias)
return right_resolved_identifier;
}
return left_resolved_identifier;
}
else if (left_resolved_identifier->isEqual(*right_resolved_identifier, IQueryTreeNode::CompareOptions{.compare_aliases = false}))
else if (scope.joins_count == 1 && scope.context->getSettingsRef().single_join_prefer_left_table)
{
return left_resolved_identifier;
}
@ -4466,6 +4493,7 @@ ProjectionNames QueryAnalyzer::resolveFunction(QueryTreeNodePtr & node, Identifi
bool is_special_function_dict_get = false;
bool is_special_function_join_get = false;
bool is_special_function_exists = false;
bool is_special_function_if = false;
if (!lambda_expression_untyped)
{
@ -4473,6 +4501,7 @@ ProjectionNames QueryAnalyzer::resolveFunction(QueryTreeNodePtr & node, Identifi
is_special_function_dict_get = functionIsDictGet(function_name);
is_special_function_join_get = functionIsJoinGet(function_name);
is_special_function_exists = function_name == "exists";
is_special_function_if = function_name == "if";
auto function_name_lowercase = Poco::toLower(function_name);
@ -4571,6 +4600,60 @@ ProjectionNames QueryAnalyzer::resolveFunction(QueryTreeNodePtr & node, Identifi
is_special_function_in = true;
}
if (is_special_function_if && !function_node_ptr->getArguments().getNodes().empty())
{
/** Handle special case with constant If function, even if some of the arguments are invalid.
*
* SELECT if(hasColumnInTable('system', 'numbers', 'not_existing_column'), not_existing_column, 5) FROM system.numbers;
*/
auto & if_function_arguments = function_node_ptr->getArguments().getNodes();
auto if_function_condition = if_function_arguments[0];
resolveExpressionNode(if_function_condition, scope, false /*allow_lambda_expression*/, false /*allow_table_expression*/);
auto constant_condition = tryExtractConstantFromConditionNode(if_function_condition);
if (constant_condition.has_value() && if_function_arguments.size() == 3)
{
QueryTreeNodePtr constant_if_result_node;
QueryTreeNodePtr possibly_invalid_argument_node;
if (*constant_condition)
{
possibly_invalid_argument_node = if_function_arguments[2];
constant_if_result_node = if_function_arguments[1];
}
else
{
possibly_invalid_argument_node = if_function_arguments[1];
constant_if_result_node = if_function_arguments[2];
}
bool apply_constant_if_optimization = false;
try
{
resolveExpressionNode(possibly_invalid_argument_node,
scope,
false /*allow_lambda_expression*/,
false /*allow_table_expression*/);
}
catch (...)
{
apply_constant_if_optimization = true;
}
if (apply_constant_if_optimization)
{
auto result_projection_names = resolveExpressionNode(constant_if_result_node,
scope,
false /*allow_lambda_expression*/,
false /*allow_table_expression*/);
node = std::move(constant_if_result_node);
return result_projection_names;
}
}
}
/// Resolve function arguments
bool allow_table_expressions = is_special_function_in;
@ -5059,7 +5142,7 @@ ProjectionNames QueryAnalyzer::resolveFunction(QueryTreeNodePtr & node, Identifi
/// Do not constant fold get scalar functions
bool disable_constant_folding = function_name == "__getScalar" || function_name == "shardNum" ||
function_name == "shardCount";
function_name == "shardCount" || function_name == "hostName";
/** If function is suitable for constant folding try to convert it to constant.
* Example: SELECT plus(1, 1);
@ -5085,7 +5168,8 @@ ProjectionNames QueryAnalyzer::resolveFunction(QueryTreeNodePtr & node, Identifi
/** Do not perform constant folding if there are aggregate or arrayJoin functions inside function.
* Example: SELECT toTypeName(sum(number)) FROM numbers(10);
*/
if (column && isColumnConst(*column) && (!hasAggregateFunctionNodes(node) && !hasFunctionNode(node, "arrayJoin")))
if (column && isColumnConst(*column) && !typeid_cast<const ColumnConst *>(column.get())->getDataColumn().isDummy() &&
(!hasAggregateFunctionNodes(node) && !hasFunctionNode(node, "arrayJoin")))
{
/// Replace function node with result constant node
Field column_constant_value;
@ -5433,9 +5517,9 @@ ProjectionNames QueryAnalyzer::resolveExpressionNode(QueryTreeNodePtr & node, Id
}
}
if (node
&& scope.nullable_group_by_keys.contains(node)
&& !scope.expressions_in_resolve_process_stack.hasAggregateFunction())
validateTreeSize(node, scope.context->getSettingsRef().max_expanded_ast_elements, node_to_tree_size);
if (scope.nullable_group_by_keys.contains(node) && !scope.expressions_in_resolve_process_stack.hasAggregateFunction())
{
node = node->clone();
node->convertToNullable();
@ -6592,6 +6676,17 @@ void QueryAnalyzer::resolveQuery(const QueryTreeNodePtr & query_node, Identifier
/// Resolve query node sections.
NamesAndTypes projection_columns;
if (!scope.group_by_use_nulls)
{
projection_columns = resolveProjectionExpressionNodeList(query_node_typed.getProjectionNode(), scope);
if (query_node_typed.getProjection().getNodes().empty())
throw Exception(ErrorCodes::EMPTY_LIST_OF_COLUMNS_QUERIED,
"Empty list of columns in projection. In scope {}",
scope.scope_node->formatASTForErrorMessage());
}
if (query_node_typed.hasWith())
resolveExpressionNodeList(query_node_typed.getWithNode(), scope, true /*allow_lambda_expression*/, false /*allow_table_expression*/);
@ -6686,11 +6781,14 @@ void QueryAnalyzer::resolveQuery(const QueryTreeNodePtr & query_node, Identifier
convertLimitOffsetExpression(query_node_typed.getOffset(), "OFFSET", scope);
}
auto projection_columns = resolveProjectionExpressionNodeList(query_node_typed.getProjectionNode(), scope);
if (query_node_typed.getProjection().getNodes().empty())
throw Exception(ErrorCodes::EMPTY_LIST_OF_COLUMNS_QUERIED,
"Empty list of columns in projection. In scope {}",
scope.scope_node->formatASTForErrorMessage());
if (scope.group_by_use_nulls)
{
projection_columns = resolveProjectionExpressionNodeList(query_node_typed.getProjectionNode(), scope);
if (query_node_typed.getProjection().getNodes().empty())
throw Exception(ErrorCodes::EMPTY_LIST_OF_COLUMNS_QUERIED,
"Empty list of columns in projection. In scope {}",
scope.scope_node->formatASTForErrorMessage());
}
/** Resolve nodes with duplicate aliases.
* Table expressions cannot have duplicate aliases.
@ -6757,6 +6855,15 @@ void QueryAnalyzer::resolveQuery(const QueryTreeNodePtr & query_node, Identifier
validateAggregates(query_node, { .group_by_use_nulls = scope.group_by_use_nulls });
for (const auto & column : projection_columns)
{
if (isNotCreatable(column.type))
throw Exception(ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT,
"Invalid projection column with type {}. In scope {}",
column.type->getName(),
scope.scope_node->formatASTForErrorMessage());
}
/** The WITH section can be safely removed, because the WITH section can only provide aliases to query expressions
* and CTEs for other sections to use.
*
View File
@ -61,12 +61,17 @@ bool TableNode::isEqualImpl(const IQueryTreeNode & rhs) const
void TableNode::updateTreeHashImpl(HashState & state) const
{
auto full_name = storage_id.getFullNameNotQuoted();
state.update(full_name.size());
state.update(full_name);
state.update(temporary_table_name.size());
state.update(temporary_table_name);
if (!temporary_table_name.empty())
{
state.update(temporary_table_name.size());
state.update(temporary_table_name);
}
else
{
auto full_name = storage_id.getFullNameNotQuoted();
state.update(full_name.size());
state.update(full_name);
}
if (table_expression_modifiers)
table_expression_modifiers->updateTreeHash(state);
View File
@ -8,6 +8,7 @@
#include <DataTypes/DataTypeString.h>
#include <DataTypes/DataTypeTuple.h>
#include <DataTypes/DataTypeArray.h>
#include <DataTypes/DataTypeLowCardinality.h>
#include <Functions/FunctionHelpers.h>
#include <Functions/FunctionFactory.h>
@ -32,6 +33,7 @@ namespace DB
namespace ErrorCodes
{
extern const int LOGICAL_ERROR;
extern const int BAD_ARGUMENTS;
}
bool isNodePartOfTree(const IQueryTreeNode * node, const IQueryTreeNode * root)
@ -79,6 +81,75 @@ bool isNameOfInFunction(const std::string & function_name)
return is_special_function_in;
}
bool isNameOfLocalInFunction(const std::string & function_name)
{
bool is_special_function_in = function_name == "in" ||
function_name == "notIn" ||
function_name == "nullIn" ||
function_name == "notNullIn" ||
function_name == "inIgnoreSet" ||
function_name == "notInIgnoreSet" ||
function_name == "nullInIgnoreSet" ||
function_name == "notNullInIgnoreSet";
return is_special_function_in;
}
bool isNameOfGlobalInFunction(const std::string & function_name)
{
bool is_special_function_in = function_name == "globalIn" ||
function_name == "globalNotIn" ||
function_name == "globalNullIn" ||
function_name == "globalNotNullIn" ||
function_name == "globalInIgnoreSet" ||
function_name == "globalNotInIgnoreSet" ||
function_name == "globalNullInIgnoreSet" ||
function_name == "globalNotNullInIgnoreSet";
return is_special_function_in;
}
std::string getGlobalInFunctionNameForLocalInFunctionName(const std::string & function_name)
{
if (function_name == "in")
return "globalIn";
else if (function_name == "notIn")
return "globalNotIn";
else if (function_name == "nullIn")
return "globalNullIn";
else if (function_name == "notNullIn")
return "globalNotNullIn";
else if (function_name == "inIgnoreSet")
return "globalInIgnoreSet";
else if (function_name == "notInIgnoreSet")
return "globalNotInIgnoreSet";
else if (function_name == "nullInIgnoreSet")
return "globalNullInIgnoreSet";
else if (function_name == "notNullInIgnoreSet")
return "globalNotNullInIgnoreSet";
throw Exception(ErrorCodes::BAD_ARGUMENTS, "Invalid local IN function name {}", function_name);
}
void makeUniqueColumnNamesInBlock(Block & block)
{
NameSet block_column_names;
size_t unique_column_name_counter = 1;
for (auto & column_with_type : block)
{
if (!block_column_names.contains(column_with_type.name))
{
block_column_names.insert(column_with_type.name);
continue;
}
column_with_type.name += '_';
column_with_type.name += std::to_string(unique_column_name_counter);
++unique_column_name_counter;
}
}
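For reference, a minimal standalone sketch of the same renaming scheme (plain std::string instead of the Block API, with the shared numeric counter used above); this is an illustration, not code from the commit:

#include <iostream>
#include <string>
#include <unordered_set>
#include <vector>

int main()
{
    std::vector<std::string> names{"x", "x", "y", "x"};
    std::unordered_set<std::string> seen;
    size_t counter = 1;
    for (auto & name : names)
    {
        if (seen.insert(name).second)
            continue;                       /// the first occurrence keeps its name
        name += '_' + std::to_string(counter);
        ++counter;                          /// later duplicates become x_1, x_2, ...
    }
    for (const auto & name : names)
        std::cout << name << '\n';          /// prints x, x_1, y, x_2
}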
QueryTreeNodePtr buildCastFunction(const QueryTreeNodePtr & expression,
const DataTypePtr & type,
const ContextPtr & context,
@ -102,6 +173,27 @@ QueryTreeNodePtr buildCastFunction(const QueryTreeNodePtr & expression,
return cast_function_node;
}
std::optional<bool> tryExtractConstantFromConditionNode(const QueryTreeNodePtr & condition_node)
{
const auto * constant_node = condition_node->as<ConstantNode>();
if (!constant_node)
return {};
const auto & value = constant_node->getValue();
auto constant_type = constant_node->getResultType();
constant_type = removeNullable(removeLowCardinality(constant_type));
auto which_constant_type = WhichDataType(constant_type);
if (!which_constant_type.isUInt8() && !which_constant_type.isNothing())
return {};
if (value.isNull())
return false;
UInt8 predicate_value = value.safeGet<UInt8>();
return predicate_value > 0;
}
static ASTPtr convertIntoTableExpressionAST(const QueryTreeNodePtr & table_expression_node)
{
ASTPtr table_expression_node_ast;
View File
@ -13,6 +13,18 @@ bool isNodePartOfTree(const IQueryTreeNode * node, const IQueryTreeNode * root);
/// Returns true if function name is name of IN function or its variations, false otherwise
bool isNameOfInFunction(const std::string & function_name);
/// Returns true if function name is name of local IN function or its variations, false otherwise
bool isNameOfLocalInFunction(const std::string & function_name);
/// Returns true if function name is name of global IN function or its variations, false otherwise
bool isNameOfGlobalInFunction(const std::string & function_name);
/// Returns global IN function name for local IN function name
std::string getGlobalInFunctionNameForLocalInFunctionName(const std::string & function_name);
/// Add unique suffix to names of duplicate columns in block
void makeUniqueColumnNamesInBlock(Block & block);
/** Build cast function that cast expression into type.
* If resolve = true, then result cast function is resolved during build, otherwise
* result cast function is not resolved during build.
@ -22,6 +34,9 @@ QueryTreeNodePtr buildCastFunction(const QueryTreeNodePtr & expression,
const ContextPtr & context,
bool resolve = true);
/// Try extract boolean constant from condition node
std::optional<bool> tryExtractConstantFromConditionNode(const QueryTreeNodePtr & condition_node);
/** Add table expression in tables in select query children.
* If table expression node is not of identifier node, table node, query node, table function node, join node or array join node type throws logical error exception.
*/
View File
@ -16,6 +16,7 @@ namespace ErrorCodes
{
extern const int NOT_AN_AGGREGATE;
extern const int NOT_IMPLEMENTED;
extern const int BAD_ARGUMENTS;
}
class ValidateGroupByColumnsVisitor : public ConstInDepthQueryTreeVisitor<ValidateGroupByColumnsVisitor>
@ -283,4 +284,52 @@ void assertNoFunctionNodes(const QueryTreeNodePtr & node,
visitor.visit(node);
}
void validateTreeSize(const QueryTreeNodePtr & node,
size_t max_size,
std::unordered_map<QueryTreeNodePtr, size_t> & node_to_tree_size)
{
size_t tree_size = 0;
std::vector<std::pair<QueryTreeNodePtr, bool>> nodes_to_process;
nodes_to_process.emplace_back(node, false);
while (!nodes_to_process.empty())
{
const auto [node_to_process, processed_children] = nodes_to_process.back();
nodes_to_process.pop_back();
if (processed_children)
{
++tree_size;
node_to_tree_size.emplace(node_to_process, tree_size);
continue;
}
auto node_to_size_it = node_to_tree_size.find(node_to_process);
if (node_to_size_it != node_to_tree_size.end())
{
tree_size += node_to_size_it->second;
continue;
}
nodes_to_process.emplace_back(node_to_process, true);
for (const auto & node_to_process_child : node_to_process->getChildren())
{
if (!node_to_process_child)
continue;
nodes_to_process.emplace_back(node_to_process_child, false);
}
auto * constant_node = node_to_process->as<ConstantNode>();
if (constant_node && constant_node->hasSourceExpression())
nodes_to_process.emplace_back(constant_node->getSourceExpression(), false);
}
if (tree_size > max_size)
throw Exception(ErrorCodes::BAD_ARGUMENTS,
"Query tree is too big. Maximum: {}",
max_size);
}
}
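As an aside, a self-contained sketch of the explicit-stack traversal used above (the node_to_tree_size cache is omitted here): every node is pushed twice and counted only on its second visit, after all of its children have been processed. Simplified Node type, not the actual query tree classes.

#include <cstddef>
#include <iostream>
#include <utility>
#include <vector>

struct Node { std::vector<Node *> children; };

size_t treeSize(Node * root)
{
    size_t size = 0;
    std::vector<std::pair<Node *, bool>> stack{{root, false}};
    while (!stack.empty())
    {
        auto [node, processed_children] = stack.back();
        stack.pop_back();
        if (processed_children)
        {
            ++size;                         /// count the node after its children
            continue;
        }
        stack.emplace_back(node, true);     /// revisit the node once children are done
        for (auto * child : node->children)
            stack.emplace_back(child, false);
    }
    return size;
}

int main()
{
    Node leaf1, leaf2, root{{&leaf1, &leaf2}};
    std::cout << treeSize(&root) << '\n';   /// prints 3
}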
View File
@ -7,7 +7,7 @@ namespace DB
struct ValidationParams
{
bool group_by_use_nulls;
bool group_by_use_nulls = false;
};
/** Validate aggregates in query node.
@ -31,4 +31,11 @@ void assertNoFunctionNodes(const QueryTreeNodePtr & node,
std::string_view exception_function_name,
std::string_view exception_place_message);
/** Validate tree size. If the size of the tree is greater than the max size, an exception is thrown.
* Additionally, for each node in the tree, the node to tree size map is updated.
*/
void validateTreeSize(const QueryTreeNodePtr & node,
size_t max_size,
std::unordered_map<QueryTreeNodePtr, size_t> & node_to_tree_size);
}
View File
@ -113,11 +113,17 @@ ASTPtr WindowNode::toASTImpl() const
window_definition->parent_window_name = parent_window_name;
window_definition->children.push_back(getPartitionByNode()->toAST());
window_definition->partition_by = window_definition->children.back();
if (hasPartitionBy())
{
window_definition->children.push_back(getPartitionByNode()->toAST());
window_definition->partition_by = window_definition->children.back();
}
window_definition->children.push_back(getOrderByNode()->toAST());
window_definition->order_by = window_definition->children.back();
if (hasOrderBy())
{
window_definition->children.push_back(getOrderByNode()->toAST());
window_definition->order_by = window_definition->children.back();
}
window_definition->frame_is_default = window_frame.is_default;
window_definition->frame_type = window_frame.type;
View File
@ -506,7 +506,7 @@ void Connection::sendQuery(
bool with_pending_data,
std::function<void(const Progress &)>)
{
OpenTelemetry::SpanHolder span("Connection::sendQuery()");
OpenTelemetry::SpanHolder span("Connection::sendQuery()", OpenTelemetry::CLIENT);
span.addAttribute("clickhouse.query_id", query_id_);
span.addAttribute("clickhouse.query", query);
span.addAttribute("target", [this] () { return this->getHost() + ":" + std::to_string(this->getPort()); });
View File
@ -110,23 +110,4 @@ ThreadGroupStatusPtr CurrentThread::getGroup()
return current_thread->getThreadGroup();
}
MemoryTracker * CurrentThread::getUserMemoryTracker()
{
if (unlikely(!current_thread))
return nullptr;
auto * tracker = current_thread->memory_tracker.getParent();
while (tracker && tracker->level != VariableContext::User)
tracker = tracker->getParent();
return tracker;
}
void CurrentThread::flushUntrackedMemory()
{
if (unlikely(!current_thread))
return;
current_thread->flushUntrackedMemory();
}
}
View File
@ -40,12 +40,6 @@ public:
/// Group to which belongs current thread
static ThreadGroupStatusPtr getGroup();
/// MemoryTracker for user that owns current thread if any
static MemoryTracker * getUserMemoryTracker();
/// Adjust counters in MemoryTracker hierarchy if untracked_memory is not 0.
static void flushUntrackedMemory();
/// A logs queue used by TCPHandler to pass logs to a client
static void attachInternalTextLogsQueue(const std::shared_ptr<InternalTextLogsQueue> & logs_queue,
LogsLevel client_logs_level);
View File
@ -92,7 +92,7 @@ bool Span::addAttributeImpl(std::string_view name, std::string_view value) noexc
return true;
}
SpanHolder::SpanHolder(std::string_view _operation_name)
SpanHolder::SpanHolder(std::string_view _operation_name, SpanKind _kind)
{
if (!current_thread_trace_context.isTraceEnabled())
{
@ -106,6 +106,7 @@ SpanHolder::SpanHolder(std::string_view _operation_name)
this->parent_span_id = current_thread_trace_context.span_id;
this->span_id = thread_local_rng(); // create a new id for this span
this->operation_name = _operation_name;
this->kind = _kind;
this->start_time_us
= std::chrono::duration_cast<std::chrono::microseconds>(std::chrono::system_clock::now().time_since_epoch()).count();
View File
@ -13,6 +13,29 @@ class ReadBuffer;
namespace OpenTelemetry
{
/// See https://opentelemetry.io/docs/reference/specification/trace/api/#spankind
enum SpanKind
{
/// Default value. Indicates that the span represents an internal operation within an application,
/// as opposed to operations with remote parents or children.
INTERNAL = 0,
/// Indicates that the span covers server-side handling of a synchronous RPC or other remote request.
/// This span is often the child of a remote CLIENT span that was expected to wait for a response.
SERVER = 1,
/// Indicates that the span describes a request to some remote service.
/// This span is usually the parent of a remote SERVER span and does not end until the response is received.
CLIENT = 2,
/// Indicates that the span describes the initiators of an asynchronous request. This parent span will often end before the corresponding child CONSUMER span, possibly even before the child span starts.
/// In messaging scenarios with batching, tracing individual messages requires a new PRODUCER span per message to be created.
PRODUCER = 3,
/// Indicates that the span describes a child of an asynchronous PRODUCER request
CONSUMER = 4
};
struct Span
{
UUID trace_id{};
@ -21,6 +44,7 @@ struct Span
String operation_name;
UInt64 start_time_us = 0;
UInt64 finish_time_us = 0;
SpanKind kind = INTERNAL;
Map attributes;
/// Following methods are declared as noexcept to make sure they're exception safe.
@ -155,7 +179,7 @@ using TracingContextHolderPtr = std::unique_ptr<TracingContextHolder>;
/// Once it's created or destructed, it automatically maintains the tracing context on the thread that it lives on.
struct SpanHolder : public Span
{
SpanHolder(std::string_view);
SpanHolder(std::string_view, SpanKind _kind = INTERNAL);
~SpanHolder();
/// Finish a span explicitly if needed.
View File
@ -803,7 +803,6 @@ class IColumn;
M(Bool, input_format_tsv_detect_header, true, "Automatically detect header with names and types in TSV format", 0) \
M(Bool, input_format_custom_detect_header, true, "Automatically detect header with names and types in CustomSeparated format", 0) \
M(Bool, input_format_parquet_skip_columns_with_unsupported_types_in_schema_inference, false, "Skip columns with unsupported types while schema inference for format Parquet", 0) \
M(UInt64, input_format_parquet_max_block_size, 8192, "Max block size for parquet reader.", 0) \
M(Bool, input_format_protobuf_skip_fields_with_unsupported_types_in_schema_inference, false, "Skip fields with unsupported types while schema inference for format Protobuf", 0) \
M(Bool, input_format_capn_proto_skip_fields_with_unsupported_types_in_schema_inference, false, "Skip columns with unsupported types while schema inference for format CapnProto", 0) \
M(Bool, input_format_orc_skip_columns_with_unsupported_types_in_schema_inference, false, "Skip columns with unsupported types while schema inference for format ORC", 0) \
View File
@ -117,7 +117,6 @@ FormatSettings getFormatSettings(ContextPtr context, const Settings & settings)
format_settings.parquet.skip_columns_with_unsupported_types_in_schema_inference = settings.input_format_parquet_skip_columns_with_unsupported_types_in_schema_inference;
format_settings.parquet.output_string_as_string = settings.output_format_parquet_string_as_string;
format_settings.parquet.output_fixed_string_as_fixed_byte_array = settings.output_format_parquet_fixed_string_as_fixed_byte_array;
format_settings.parquet.max_block_size = settings.input_format_parquet_max_block_size;
format_settings.parquet.output_compression_method = settings.output_format_parquet_compression_method;
format_settings.pretty.charset = settings.output_format_pretty_grid_charset.toString() == "ASCII" ? FormatSettings::Pretty::Charset::ASCII : FormatSettings::Pretty::Charset::UTF8;
format_settings.pretty.color = settings.output_format_pretty_color;
View File
@ -211,7 +211,6 @@ struct FormatSettings
std::unordered_set<int> skip_row_groups = {};
bool output_string_as_string = false;
bool output_fixed_string_as_fixed_byte_array = true;
UInt64 max_block_size = 8192;
ParquetVersion output_version;
ParquetCompression output_compression_method = ParquetCompression::SNAPPY;
} parquet;
View File
@ -38,6 +38,8 @@ public:
String getName() const override { return name; }
size_t getNumberOfArguments() const override { return 0; }
bool isSuitableForShortCircuitArgumentsExecution(const DataTypesWithConstInfo & /*arguments*/) const override { return true; }
bool isDeterministic() const override { return false; }
bool isDeterministicInScopeOfQuery() const override { return false; }
DataTypePtr getReturnTypeImpl(const ColumnsWithTypeAndName & arguments) const override
{
View File
@ -14,28 +14,33 @@ namespace ErrorCodes
static size_t encodeURL(const char * __restrict src, size_t src_size, char * __restrict dst, bool space_as_plus)
{
char * dst_pos = dst;
for (size_t i = 0; i < src_size; i++)
for (size_t i = 0; i < src_size; ++i)
{
if ((src[i] >= '0' && src[i] <= '9') || (src[i] >= 'a' && src[i] <= 'z') || (src[i] >= 'A' && src[i] <= 'Z')
|| src[i] == '-' || src[i] == '_' || src[i] == '.' || src[i] == '~')
{
*dst_pos++ = src[i];
*dst_pos = src[i];
++dst_pos;
}
else if (src[i] == ' ' && space_as_plus)
{
*dst_pos++ = '+';
*dst_pos = '+';
++dst_pos;
}
else
{
*dst_pos++ = '%';
*dst_pos++ = hexDigitUppercase(src[i] >> 4);
*dst_pos++ = hexDigitUppercase(src[i] & 0xf);
dst_pos[0] = '%';
++dst_pos;
writeHexByteUppercase(src[i], dst_pos);
dst_pos += 2;
}
}
*dst_pos++ = src[src_size];
*dst_pos = 0;
++dst_pos;
return dst_pos - dst;
}
/// We assume that size of the dst buf isn't less than src_size.
static size_t decodeURL(const char * __restrict src, size_t src_size, char * __restrict dst, bool plus_as_space)
{
@ -120,10 +125,14 @@ struct CodeURLComponentImpl
ColumnString::Chars & res_data, ColumnString::Offsets & res_offsets)
{
if (code_strategy == encode)
//the destination(res_data) string is at most three times the length of the source string
{
/// the destination(res_data) string is at most three times the length of the source string
res_data.resize(data.size() * 3);
}
else
{
res_data.resize(data.size());
}
size_t size = offsets.size();
res_offsets.resize(size);
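For illustration only, a standalone sketch of the same encoding rules as encodeURL above (unreserved characters copied as-is, space optionally turned into '+', every other byte into %XX with uppercase hex), written against std::string rather than the ColumnString buffers:

#include <iostream>
#include <string>
#include <string_view>

std::string encodeUrlLike(std::string_view src, bool space_as_plus)
{
    static constexpr char hex[] = "0123456789ABCDEF";
    std::string dst;
    dst.reserve(src.size() * 3);            /// worst case: every byte becomes %XX
    for (unsigned char c : src)
    {
        if ((c >= '0' && c <= '9') || (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z')
            || c == '-' || c == '_' || c == '.' || c == '~')
            dst += static_cast<char>(c);
        else if (c == ' ' && space_as_plus)
            dst += '+';
        else
        {
            dst += '%';
            dst += hex[c >> 4];
            dst += hex[c & 0xF];
        }
    }
    return dst;
}

int main()
{
    std::cout << encodeUrlLike("a b/c", true) << '\n';  /// prints a+b%2Fc
    std::cout << encodeUrlLike("a b/c", false) << '\n'; /// prints a%20b%2Fc
}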
View File
@ -259,12 +259,12 @@ namespace detail
{
try
{
callWithRedirects<true>(response, Poco::Net::HTTPRequest::HTTP_HEAD);
callWithRedirects<true>(response, Poco::Net::HTTPRequest::HTTP_HEAD, true);
break;
}
catch (const Poco::Exception & e)
{
if (i == settings.http_max_tries - 1)
if (i == settings.http_max_tries - 1 || !isRetriableError(response.getStatus()))
throw;
LOG_ERROR(log, "Failed to make HTTP_HEAD request to {}. Error: {}", uri.toString(), e.displayText());
@ -353,11 +353,12 @@ namespace detail
static bool isRetriableError(const Poco::Net::HTTPResponse::HTTPStatus http_status) noexcept
{
constexpr std::array non_retriable_errors{
static constexpr std::array non_retriable_errors{
Poco::Net::HTTPResponse::HTTPStatus::HTTP_BAD_REQUEST,
Poco::Net::HTTPResponse::HTTPStatus::HTTP_UNAUTHORIZED,
Poco::Net::HTTPResponse::HTTPStatus::HTTP_NOT_FOUND,
Poco::Net::HTTPResponse::HTTPStatus::HTTP_FORBIDDEN,
Poco::Net::HTTPResponse::HTTPStatus::HTTP_NOT_IMPLEMENTED,
Poco::Net::HTTPResponse::HTTPStatus::HTTP_METHOD_NOT_ALLOWED};
return std::all_of(
View File
@ -18,7 +18,6 @@
#include <Parsers/ASTInsertQuery.h>
#include <Parsers/queryToString.h>
#include <Storages/IStorage.h>
#include <Common/CurrentThread.h>
#include <Common/SipHash.h>
#include <Common/FieldVisitorHash.h>
#include <Common/DateLUT.h>
@ -104,10 +103,9 @@ bool AsynchronousInsertQueue::InsertQuery::operator==(const InsertQuery & other)
return query_str == other.query_str && settings == other.settings;
}
AsynchronousInsertQueue::InsertData::Entry::Entry(String && bytes_, String && query_id_, MemoryTracker * user_memory_tracker_)
AsynchronousInsertQueue::InsertData::Entry::Entry(String && bytes_, String && query_id_)
: bytes(std::move(bytes_))
, query_id(std::move(query_id_))
, user_memory_tracker(user_memory_tracker_)
, create_time(std::chrono::system_clock::now())
{
}
@ -236,7 +234,7 @@ AsynchronousInsertQueue::push(ASTPtr query, ContextPtr query_context)
if (auto quota = query_context->getQuota())
quota->used(QuotaType::WRITTEN_BYTES, bytes.size());
auto entry = std::make_shared<InsertData::Entry>(std::move(bytes), query_context->getCurrentQueryId(), CurrentThread::getUserMemoryTracker());
auto entry = std::make_shared<InsertData::Entry>(std::move(bytes), query_context->getCurrentQueryId());
InsertQuery key{query, settings};
InsertDataPtr data_to_process;
View File
@ -1,7 +1,6 @@
#pragma once
#include <Parsers/IAST_fwd.h>
#include <Common/CurrentThread.h>
#include <Common/ThreadPool.h>
#include <Core/Settings.h>
#include <Poco/Logger.h>
@ -60,31 +59,6 @@ private:
UInt128 calculateHash() const;
};
struct UserMemoryTrackerSwitcher
{
explicit UserMemoryTrackerSwitcher(MemoryTracker * new_tracker)
{
auto * thread_tracker = CurrentThread::getMemoryTracker();
prev_untracked_memory = current_thread->untracked_memory;
prev_memory_tracker_parent = thread_tracker->getParent();
current_thread->untracked_memory = 0;
thread_tracker->setParent(new_tracker);
}
~UserMemoryTrackerSwitcher()
{
CurrentThread::flushUntrackedMemory();
auto * thread_tracker = CurrentThread::getMemoryTracker();
current_thread->untracked_memory = prev_untracked_memory;
thread_tracker->setParent(prev_memory_tracker_parent);
}
MemoryTracker * prev_memory_tracker_parent;
Int64 prev_untracked_memory;
};
struct InsertData
{
struct Entry
@ -92,10 +66,9 @@ private:
public:
const String bytes;
const String query_id;
MemoryTracker * const user_memory_tracker;
const std::chrono::time_point<std::chrono::system_clock> create_time;
Entry(String && bytes_, String && query_id_, MemoryTracker * user_memory_tracker_);
Entry(String && bytes_, String && query_id_);
void finish(std::exception_ptr exception_ = nullptr);
std::future<void> getFuture() { return promise.get_future(); }
@ -106,19 +79,6 @@ private:
std::atomic_bool finished = false;
};
~InsertData()
{
auto it = entries.begin();
// Entries must be destroyed in context of user who runs async insert.
// Each entry in the list may correspond to a different user,
// so we need to switch current thread's MemoryTracker parent on each iteration.
while (it != entries.end())
{
UserMemoryTrackerSwitcher switcher((*it)->user_memory_tracker);
it = entries.erase(it);
}
}
using EntryPtr = std::shared_ptr<Entry>;
std::list<EntryPtr> entries;
View File
@ -542,6 +542,7 @@ void DDLWorker::processTask(DDLTaskBase & task, const ZooKeeperPtr & zookeeper)
OpenTelemetry::TracingContextHolder tracing_ctx_holder(__PRETTY_FUNCTION__ ,
task.entry.tracing_context,
this->context->getOpenTelemetrySpanLog());
tracing_ctx_holder.root_span.kind = OpenTelemetry::CONSUMER;
String active_node_path = task.getActiveNodePath();
String finished_node_path = task.getFinishedNodePath();
View File
@ -44,6 +44,10 @@ public:
const auto & on_expr = table_join->getOnlyClause();
bool support_conditions = !on_expr.on_filter_condition_left && !on_expr.on_filter_condition_right;
if (!on_expr.analyzer_left_filter_condition_column_name.empty() ||
!on_expr.analyzer_right_filter_condition_column_name.empty())
support_conditions = false;
/// A key column can change nullability, and this is not handled at the type conversion stage, so the algorithm should be aware of it
bool support_using_and_nulls = !table_join->hasUsing() || !table_join->joinUseNulls();
View File
@ -226,6 +226,12 @@ BlockIO InterpreterSelectQueryAnalyzer::execute()
return result;
}
QueryPlan & InterpreterSelectQueryAnalyzer::getQueryPlan()
{
planner.buildQueryPlanIfNeeded();
return planner.getQueryPlan();
}
QueryPlan && InterpreterSelectQueryAnalyzer::extractQueryPlan() &&
{
planner.buildQueryPlanIfNeeded();
View File
@ -51,6 +51,8 @@ public:
BlockIO execute() override;
QueryPlan & getQueryPlan();
QueryPlan && extractQueryPlan() &&;
QueryPipelineBuilder buildQueryPipeline();
View File
@ -8,6 +8,7 @@
#include <DataTypes/DataTypeString.h>
#include <DataTypes/DataTypeMap.h>
#include <DataTypes/DataTypeUUID.h>
#include <DataTypes/DataTypeEnum.h>
#include <Interpreters/Context.h>
#include <base/hex.h>
@ -20,11 +21,23 @@ namespace DB
NamesAndTypesList OpenTelemetrySpanLogElement::getNamesAndTypes()
{
auto span_kind_type = std::make_shared<DataTypeEnum8>(
DataTypeEnum8::Values
{
{"INTERNAL", static_cast<Int8>(OpenTelemetry::INTERNAL)},
{"SERVER", static_cast<Int8>(OpenTelemetry::SERVER)},
{"CLIENT", static_cast<Int8>(OpenTelemetry::CLIENT)},
{"PRODUCER", static_cast<Int8>(OpenTelemetry::PRODUCER)},
{"CONSUMER", static_cast<Int8>(OpenTelemetry::CONSUMER)}
}
);
return {
{"trace_id", std::make_shared<DataTypeUUID>()},
{"span_id", std::make_shared<DataTypeUInt64>()},
{"parent_span_id", std::make_shared<DataTypeUInt64>()},
{"operation_name", std::make_shared<DataTypeString>()},
{"kind", std::move(span_kind_type)},
// DateTime64 is really unwieldy -- there is no "normal" way to convert
// it to an UInt64 count of microseconds, except:
// 1) reinterpretAsUInt64(reinterpretAsFixedString(date)), which just
@ -59,6 +72,7 @@ void OpenTelemetrySpanLogElement::appendToBlock(MutableColumns & columns) const
columns[i++]->insert(span_id);
columns[i++]->insert(parent_span_id);
columns[i++]->insert(operation_name);
columns[i++]->insert(kind);
columns[i++]->insert(start_time_us);
columns[i++]->insert(finish_time_us);
columns[i++]->insert(DateLUT::instance().toDayNum(finish_time_us / 1000000).toUnderType());
View File
@ -50,7 +50,16 @@ void ReplaceQueryParameterVisitor::visit(ASTPtr & ast)
void ReplaceQueryParameterVisitor::visitChildren(ASTPtr & ast)
{
for (auto & child : ast->children)
{
void * old_ptr = child.get();
visit(child);
void * new_ptr = child.get();
/// Some AST classes have naked pointers to children elements as members.
/// We have to replace them if the child was replaced.
if (new_ptr != old_ptr)
ast->updatePointerToChild(old_ptr, new_ptr);
}
}
const String & ReplaceQueryParameterVisitor::getParamValue(const String & name)
@ -89,6 +98,7 @@ void ReplaceQueryParameterVisitor::visitQueryParameter(ASTPtr & ast)
literal = value;
else
literal = temp_column[0];
ast = addTypeConversionToAST(std::make_shared<ASTLiteral>(literal), type_name);
/// Keep the original alias.
View File
@ -55,7 +55,7 @@ bool isSupportedAlterType(int type)
BlockIO executeDDLQueryOnCluster(const ASTPtr & query_ptr_, ContextPtr context, const DDLQueryOnClusterParams & params)
{
OpenTelemetry::SpanHolder span(__FUNCTION__);
OpenTelemetry::SpanHolder span(__FUNCTION__, OpenTelemetry::PRODUCER);
if (context->getCurrentTransaction() && context->getSettingsRef().throw_on_unsupported_query_inside_transaction)
throw Exception(ErrorCodes::NOT_IMPLEMENTED, "ON CLUSTER queries inside transactions are not supported");
View File
@ -256,6 +256,11 @@ protected:
void formatQueryImpl(const FormatSettings & settings, FormatState & state, FormatStateStacked frame) const override;
bool isOneCommandTypeOnly(const ASTAlterCommand::Type & type) const;
void forEachPointerToChild(std::function<void(void**)> f) override
{
f(reinterpret_cast<void **>(&command_list));
}
};
}
View File
@ -94,5 +94,12 @@ public:
void formatImpl(const FormatSettings & format, FormatState &, FormatStateStacked) const override;
ASTPtr getRewrittenASTWithoutOnCluster(const WithoutOnClusterASTRewriteParams &) const override;
QueryKind getQueryKind() const override;
void forEachPointerToChild(std::function<void(void**)> f) override
{
f(reinterpret_cast<void **>(&backup_name));
f(reinterpret_cast<void **>(&base_backup_name));
}
};
}
View File
@ -25,5 +25,11 @@ public:
ASTPtr clone() const override;
void formatImpl(const FormatSettings & s, FormatState & state, FormatStateStacked frame) const override;
void forEachPointerToChild(std::function<void(void**)> f) override
{
f(reinterpret_cast<void **>(&expr));
}
};
}
View File
@ -91,6 +91,11 @@ public:
ASTPtr clone() const override;
void formatImpl(const FormatSettings & s, FormatState & state, FormatStateStacked frame) const override;
void forEachPointerToChild(std::function<void(void**)> f) override
{
f(reinterpret_cast<void **>(&elem));
}
};
ASTPtr ASTColumnsElement::clone() const
View File
@ -32,6 +32,17 @@ public:
void formatImpl(const FormatSettings & s, FormatState & state, FormatStateStacked frame) const override;
bool isExtendedStorageDefinition() const;
void forEachPointerToChild(std::function<void(void**)> f) override
{
f(reinterpret_cast<void **>(&engine));
f(reinterpret_cast<void **>(&partition_by));
f(reinterpret_cast<void **>(&primary_key));
f(reinterpret_cast<void **>(&order_by));
f(reinterpret_cast<void **>(&sample_by));
f(reinterpret_cast<void **>(&ttl_table));
f(reinterpret_cast<void **>(&settings));
}
};
@ -57,6 +68,16 @@ public:
return (!columns || columns->children.empty()) && (!indices || indices->children.empty()) && (!constraints || constraints->children.empty())
&& (!projections || projections->children.empty());
}
void forEachPointerToChild(std::function<void(void**)> f) override
{
f(reinterpret_cast<void **>(&columns));
f(reinterpret_cast<void **>(&indices));
f(reinterpret_cast<void **>(&primary_key));
f(reinterpret_cast<void **>(&constraints));
f(reinterpret_cast<void **>(&projections));
f(reinterpret_cast<void **>(&primary_key));
}
};
@ -126,6 +147,19 @@ public:
protected:
void formatQueryImpl(const FormatSettings & settings, FormatState & state, FormatStateStacked frame) const override;
void forEachPointerToChild(std::function<void(void**)> f) override
{
f(reinterpret_cast<void **>(&columns_list));
f(reinterpret_cast<void **>(&inner_storage));
f(reinterpret_cast<void **>(&storage));
f(reinterpret_cast<void **>(&as_table_function));
f(reinterpret_cast<void **>(&select));
f(reinterpret_cast<void **>(&comment));
f(reinterpret_cast<void **>(&table_overrides));
f(reinterpret_cast<void **>(&dictionary_attributes_list));
f(reinterpret_cast<void **>(&dictionary));
}
};
}
View File
@ -47,6 +47,11 @@ public:
ASTPtr clone() const override;
void formatImpl(const FormatSettings & settings, FormatState & state, FormatStateStacked frame) const override;
void forEachPointerToChild(std::function<void(void**)> f) override
{
f(reinterpret_cast<void **>(&parameters));
}
};
View File
@ -41,6 +41,11 @@ public:
}
QueryKind getQueryKind() const override { return QueryKind::ExternalDDL; }
void forEachPointerToChild(std::function<void(void**)> f) override
{
f(reinterpret_cast<void **>(&from));
}
};
}
View File
@ -33,6 +33,11 @@ public:
bool hasSecretParts() const override;
void updateTreeHashImpl(SipHash & hash_state) const override;
void forEachPointerToChild(std::function<void(void**)> f) override
{
f(reinterpret_cast<void **>(&second));
}
};
View File
@ -23,6 +23,12 @@ public:
ASTPtr clone() const override;
void formatImpl(const FormatSettings & s, FormatState & state, FormatStateStacked frame) const override;
void forEachPointerToChild(std::function<void(void**)> f) override
{
f(reinterpret_cast<void **>(&expr));
f(reinterpret_cast<void **>(&type));
}
};
}
View File
@ -18,6 +18,11 @@ public:
ASTPtr clone() const override;
void formatImpl(const FormatSettings & s, FormatState & state, FormatStateStacked frame) const override;
void forEachPointerToChild(std::function<void(void**)> f) override
{
f(reinterpret_cast<void **>(&query));
}
};
}
View File
@ -27,6 +27,12 @@ public:
String getID(char) const override { return "TableOverride " + table_name; }
ASTPtr clone() const override;
void formatImpl(const FormatSettings & settings, FormatState & state, FormatStateStacked frame) const override;
void forEachPointerToChild(std::function<void(void**)> f) override
{
f(reinterpret_cast<void **>(&columns));
f(reinterpret_cast<void **>(&storage));
}
};
/// List of table overrides, for example:
View File
@ -175,6 +175,16 @@ public:
field = nullptr;
}
/// After changing one of `children` elements, update the corresponding member pointer if needed.
void updatePointerToChild(void * old_ptr, void * new_ptr)
{
forEachPointerToChild([old_ptr, new_ptr](void ** ptr) mutable
{
if (*ptr == old_ptr)
*ptr = new_ptr;
});
}
/// Convert to a string.
/// Format settings.
@ -295,6 +305,10 @@ public:
protected:
bool childrenHaveSecretParts() const;
/// Some AST classes have naked pointers to children elements as members.
/// This method allows to iterate over them.
virtual void forEachPointerToChild(std::function<void(void**)>) {}
private:
size_t checkDepthImpl(size_t max_depth) const;
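A self-contained sketch of how these two hooks work together (simplified Node/Parent types standing in for IAST subclasses, not the actual ClickHouse classes): the subclass exposes its naked child pointer through forEachPointerToChild(), and updatePointerToChild() re-points it after the owning children slot is replaced, which is what ReplaceQueryParameterVisitor relies on above.

#include <functional>
#include <iostream>
#include <memory>
#include <vector>

struct Node
{
    std::vector<std::shared_ptr<Node>> children;
    int value = 0;

    virtual ~Node() = default;
    virtual void forEachPointerToChild(std::function<void(void **)>) {}

    void updatePointerToChild(void * old_ptr, void * new_ptr)
    {
        forEachPointerToChild([old_ptr, new_ptr](void ** ptr)
        {
            if (*ptr == old_ptr)
                *ptr = new_ptr;
        });
    }
};

struct Parent : Node
{
    Node * named_child = nullptr; /// naked pointer duplicating children[0]

    void forEachPointerToChild(std::function<void(void **)> f) override
    {
        f(reinterpret_cast<void **>(&named_child));
    }
};

int main()
{
    Parent parent;
    parent.children.push_back(std::make_shared<Node>());
    parent.named_child = parent.children[0].get();

    auto old_child = parent.children[0];
    parent.children[0] = std::make_shared<Node>();
    parent.children[0]->value = 42;
    parent.updatePointerToChild(old_child.get(), parent.children[0].get()); /// fix the naked member

    std::cout << parent.named_child->value << '\n'; /// prints 42
}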
View File
@ -80,6 +80,15 @@ protected:
{
throw Exception(ErrorCodes::NOT_IMPLEMENTED, "Method formatImpl is not supported by MySQLParser::ASTAlterCommand.");
}
void forEachPointerToChild(std::function<void(void**)> f) override
{
f(reinterpret_cast<void **>(&index_decl));
f(reinterpret_cast<void **>(&default_expression));
f(reinterpret_cast<void **>(&additional_columns));
f(reinterpret_cast<void **>(&order_by_columns));
f(reinterpret_cast<void **>(&properties));
}
};
class ParserAlterCommand : public IParserBase
View File
@ -31,6 +31,13 @@ protected:
{
throw Exception(ErrorCodes::NOT_IMPLEMENTED, "Method formatImpl is not supported by MySQLParser::ASTCreateDefines.");
}
void forEachPointerToChild(std::function<void(void**)> f) override
{
f(reinterpret_cast<void **>(&columns));
f(reinterpret_cast<void **>(&indices));
f(reinterpret_cast<void **>(&constraints));
}
};
class ParserCreateDefines : public IParserBase
@ -44,4 +51,3 @@ protected:
}
}
View File
@ -214,9 +214,14 @@ public:
{
/// Constness of limit is validated during query analysis stage
limit_length = query_node.getLimit()->as<ConstantNode &>().getValue().safeGet<UInt64>();
}
if (query_node.hasOffset())
if (query_node.hasOffset() && limit_length)
{
/// Constness of offset is validated during query analysis stage
limit_offset = query_node.getOffset()->as<ConstantNode &>().getValue().safeGet<UInt64>();
}
}
else if (query_node.hasOffset())
{
/// Constness of offset is validated during query analysis stage
limit_offset = query_node.getOffset()->as<ConstantNode &>().getValue().safeGet<UInt64>();
View File
@ -45,7 +45,7 @@ bool GlobalPlannerContext::hasColumnIdentifier(const ColumnIdentifier & column_i
return column_identifiers.contains(column_identifier);
}
PlannerContext::PlannerContext(ContextPtr query_context_, GlobalPlannerContextPtr global_planner_context_)
PlannerContext::PlannerContext(ContextMutablePtr query_context_, GlobalPlannerContextPtr global_planner_context_)
: query_context(std::move(query_context_))
, global_planner_context(std::move(global_planner_context_))
{}
View File
@ -88,16 +88,22 @@ class PlannerContext
{
public:
/// Create planner context with query context and global planner context
PlannerContext(ContextPtr query_context_, GlobalPlannerContextPtr global_planner_context_);
PlannerContext(ContextMutablePtr query_context_, GlobalPlannerContextPtr global_planner_context_);
/// Get planner context query context
const ContextPtr & getQueryContext() const
ContextPtr getQueryContext() const
{
return query_context;
}
/// Get planner context query context
ContextPtr & getQueryContext()
/// Get planner context mutable query context
const ContextMutablePtr & getMutableQueryContext() const
{
return query_context;
}
/// Get planner context mutable query context
ContextMutablePtr & getMutableQueryContext()
{
return query_context;
}
@ -137,12 +143,18 @@ public:
*/
TableExpressionData * getTableExpressionDataOrNull(const QueryTreeNodePtr & table_expression_node);
/// Get table expression node to data read only map
/// Get table expression node to data map
const std::unordered_map<QueryTreeNodePtr, TableExpressionData> & getTableExpressionNodeToData() const
{
return table_expression_node_to_data;
}
/// Get table expression node to data map
std::unordered_map<QueryTreeNodePtr, TableExpressionData> & getTableExpressionNodeToData()
{
return table_expression_node_to_data;
}
/** Get column node identifier.
* For column node source check if table expression data is registered.
* If table expression data is not registered exception is thrown.
@ -184,7 +196,7 @@ public:
private:
/// Query context
ContextPtr query_context;
ContextMutablePtr query_context;
/// Global planner context
GlobalPlannerContextPtr global_planner_context;
View File
@ -34,15 +34,13 @@ namespace
* It is the client's responsibility to update the filter analysis result if the filter column must be removed after the chain is finalized.
*/
FilterAnalysisResult analyzeFilter(const QueryTreeNodePtr & filter_expression_node,
const ColumnsWithTypeAndName & current_output_columns,
const ColumnsWithTypeAndName & input_columns,
const PlannerContextPtr & planner_context,
ActionsChain & actions_chain)
{
const auto & filter_input = current_output_columns;
FilterAnalysisResult result;
result.filter_actions = buildActionsDAGFromExpressionNode(filter_expression_node, filter_input, planner_context);
result.filter_actions = buildActionsDAGFromExpressionNode(filter_expression_node, input_columns, planner_context);
result.filter_column_name = result.filter_actions->getOutputs().at(0)->result_name;
actions_chain.addStep(std::make_unique<ActionsChainStep>(result.filter_actions));
@ -52,8 +50,8 @@ FilterAnalysisResult analyzeFilter(const QueryTreeNodePtr & filter_expression_no
/** Construct aggregation analysis result if query tree has GROUP BY or aggregates.
* Actions before aggregation are added into actions chain, if result is not null optional.
*/
std::pair<std::optional<AggregationAnalysisResult>, std::optional<ColumnsWithTypeAndName>> analyzeAggregation(const QueryTreeNodePtr & query_tree,
const ColumnsWithTypeAndName & current_output_columns,
std::optional<AggregationAnalysisResult> analyzeAggregation(const QueryTreeNodePtr & query_tree,
const ColumnsWithTypeAndName & input_columns,
const PlannerContextPtr & planner_context,
ActionsChain & actions_chain)
{
@ -69,9 +67,7 @@ std::pair<std::optional<AggregationAnalysisResult>, std::optional<ColumnsWithTyp
Names aggregation_keys;
const auto & group_by_input = current_output_columns;
ActionsDAGPtr before_aggregation_actions = std::make_shared<ActionsDAG>(group_by_input);
ActionsDAGPtr before_aggregation_actions = std::make_shared<ActionsDAG>(input_columns);
before_aggregation_actions->getOutputs().clear();
std::unordered_set<std::string_view> before_aggregation_actions_output_node_names;
@ -203,14 +199,14 @@ std::pair<std::optional<AggregationAnalysisResult>, std::optional<ColumnsWithTyp
aggregation_analysis_result.grouping_sets_parameters_list = std::move(grouping_sets_parameters_list);
aggregation_analysis_result.group_by_with_constant_keys = group_by_with_constant_keys;
return { aggregation_analysis_result, available_columns_after_aggregation };
return aggregation_analysis_result;
}
/** Construct window analysis result if query tree has window functions.
* Actions before window functions are added into actions chain, if result is not null optional.
*/
std::optional<WindowAnalysisResult> analyzeWindow(const QueryTreeNodePtr & query_tree,
const ColumnsWithTypeAndName & current_output_columns,
const ColumnsWithTypeAndName & input_columns,
const PlannerContextPtr & planner_context,
ActionsChain & actions_chain)
{
@ -220,11 +216,9 @@ std::optional<WindowAnalysisResult> analyzeWindow(const QueryTreeNodePtr & query
auto window_descriptions = extractWindowDescriptions(window_function_nodes, *planner_context);
const auto & window_input = current_output_columns;
PlannerActionsVisitor actions_visitor(planner_context);
ActionsDAGPtr before_window_actions = std::make_shared<ActionsDAG>(window_input);
ActionsDAGPtr before_window_actions = std::make_shared<ActionsDAG>(input_columns);
before_window_actions->getOutputs().clear();
std::unordered_set<std::string_view> before_window_actions_output_node_names;
@ -299,12 +293,11 @@ std::optional<WindowAnalysisResult> analyzeWindow(const QueryTreeNodePtr & query
* It is the client's responsibility to update the projection analysis result with the project names actions after the chain is finalized.
*/
ProjectionAnalysisResult analyzeProjection(const QueryNode & query_node,
const ColumnsWithTypeAndName & current_output_columns,
const ColumnsWithTypeAndName & input_columns,
const PlannerContextPtr & planner_context,
ActionsChain & actions_chain)
{
const auto & projection_input = current_output_columns;
auto projection_actions = buildActionsDAGFromExpressionNode(query_node.getProjectionNode(), projection_input, planner_context);
auto projection_actions = buildActionsDAGFromExpressionNode(query_node.getProjectionNode(), input_columns, planner_context);
auto projection_columns = query_node.getProjectionColumns();
size_t projection_columns_size = projection_columns.size();
@ -347,13 +340,11 @@ ProjectionAnalysisResult analyzeProjection(const QueryNode & query_node,
* Actions before sort are added into actions chain.
*/
SortAnalysisResult analyzeSort(const QueryNode & query_node,
const ColumnsWithTypeAndName & current_output_columns,
const ColumnsWithTypeAndName & input_columns,
const PlannerContextPtr & planner_context,
ActionsChain & actions_chain)
{
const auto & order_by_input = current_output_columns;
ActionsDAGPtr before_sort_actions = std::make_shared<ActionsDAG>(order_by_input);
ActionsDAGPtr before_sort_actions = std::make_shared<ActionsDAG>(input_columns);
auto & before_sort_actions_outputs = before_sort_actions->getOutputs();
before_sort_actions_outputs.clear();
@ -436,13 +427,12 @@ SortAnalysisResult analyzeSort(const QueryNode & query_node,
* Actions before limit by are added into actions chain.
*/
LimitByAnalysisResult analyzeLimitBy(const QueryNode & query_node,
const ColumnsWithTypeAndName & current_output_columns,
const ColumnsWithTypeAndName & input_columns,
const PlannerContextPtr & planner_context,
const NameSet & required_output_nodes_names,
ActionsChain & actions_chain)
{
const auto & limit_by_input = current_output_columns;
auto before_limit_by_actions = buildActionsDAGFromExpressionNode(query_node.getLimitByNode(), limit_by_input, planner_context);
auto before_limit_by_actions = buildActionsDAGFromExpressionNode(query_node.getLimitByNode(), input_columns, planner_context);
NameSet limit_by_column_names_set;
Names limit_by_column_names;
@ -480,8 +470,7 @@ PlannerExpressionsAnalysisResult buildExpressionAnalysisResult(const QueryTreeNo
std::optional<FilterAnalysisResult> where_analysis_result_optional;
std::optional<size_t> where_action_step_index_optional;
const auto * input_columns = actions_chain.getLastStepAvailableOutputColumnsOrNull();
ColumnsWithTypeAndName current_output_columns = input_columns ? *input_columns : join_tree_input_columns;
ColumnsWithTypeAndName current_output_columns = join_tree_input_columns;
if (query_node.hasWhere())
{
@ -490,9 +479,9 @@ PlannerExpressionsAnalysisResult buildExpressionAnalysisResult(const QueryTreeNo
current_output_columns = actions_chain.getLastStepAvailableOutputColumns();
}
auto [aggregation_analysis_result_optional, aggregated_columns_optional] = analyzeAggregation(query_tree, current_output_columns, planner_context, actions_chain);
if (aggregated_columns_optional)
current_output_columns = std::move(*aggregated_columns_optional);
auto aggregation_analysis_result_optional = analyzeAggregation(query_tree, current_output_columns, planner_context, actions_chain);
if (aggregation_analysis_result_optional)
current_output_columns = actions_chain.getLastStepAvailableOutputColumns();
std::optional<FilterAnalysisResult> having_analysis_result_optional;
std::optional<size_t> having_action_step_index_optional;
View File
@ -246,17 +246,87 @@ bool applyTrivialCountIfPossible(
return true;
}
JoinTreeQueryPlan buildQueryPlanForTableExpression(const QueryTreeNodePtr & table_expression,
const SelectQueryInfo & select_query_info,
const SelectQueryOptions & select_query_options,
PlannerContextPtr & planner_context,
bool is_single_table_expression)
void prepareBuildQueryPlanForTableExpression(const QueryTreeNodePtr & table_expression, PlannerContextPtr & planner_context)
{
const auto & query_context = planner_context->getQueryContext();
const auto & settings = query_context->getSettingsRef();
auto & table_expression_data = planner_context->getTableExpressionDataOrThrow(table_expression);
auto columns_names = table_expression_data.getColumnNames();
auto * table_node = table_expression->as<TableNode>();
auto * table_function_node = table_expression->as<TableFunctionNode>();
auto * query_node = table_expression->as<QueryNode>();
auto * union_node = table_expression->as<UnionNode>();
/** The current user must have the SELECT privilege.
* We do not check access rights for table functions because they have already been checked in ITableFunction::execute().
*/
if (table_node)
{
auto column_names_with_aliases = columns_names;
const auto & alias_columns_names = table_expression_data.getAliasColumnsNames();
column_names_with_aliases.insert(column_names_with_aliases.end(), alias_columns_names.begin(), alias_columns_names.end());
checkAccessRights(*table_node, column_names_with_aliases, query_context);
}
if (columns_names.empty())
{
NameAndTypePair additional_column_to_read;
if (table_node || table_function_node)
{
const auto & storage = table_node ? table_node->getStorage() : table_function_node->getStorage();
const auto & storage_snapshot = table_node ? table_node->getStorageSnapshot() : table_function_node->getStorageSnapshot();
additional_column_to_read = chooseSmallestColumnToReadFromStorage(storage, storage_snapshot);
}
else if (query_node || union_node)
{
const auto & projection_columns = query_node ? query_node->getProjectionColumns() : union_node->computeProjectionColumns();
NamesAndTypesList projection_columns_list(projection_columns.begin(), projection_columns.end());
additional_column_to_read = ExpressionActions::getSmallestColumn(projection_columns_list);
}
else
{
throw Exception(ErrorCodes::LOGICAL_ERROR, "Expected table, table function, query or union. Actual {}",
table_expression->formatASTForErrorMessage());
}
auto & global_planner_context = planner_context->getGlobalPlannerContext();
const auto & column_identifier = global_planner_context->createColumnIdentifier(additional_column_to_read, table_expression);
columns_names.push_back(additional_column_to_read.name);
table_expression_data.addColumn(additional_column_to_read, column_identifier);
}
/// Limitation on the number of columns to read
if (settings.max_columns_to_read && columns_names.size() > settings.max_columns_to_read)
throw Exception(ErrorCodes::TOO_MANY_COLUMNS,
"Limit for number of columns to read exceeded. Requested: {}, maximum: {}",
columns_names.size(),
settings.max_columns_to_read);
}
JoinTreeQueryPlan buildQueryPlanForTableExpression(QueryTreeNodePtr table_expression,
const SelectQueryInfo & select_query_info,
const SelectQueryOptions & select_query_options,
PlannerContextPtr & planner_context,
bool is_single_table_expression,
bool wrap_read_columns_in_subquery)
{
const auto & query_context = planner_context->getQueryContext();
const auto & settings = query_context->getSettingsRef();
auto & table_expression_data = planner_context->getTableExpressionDataOrThrow(table_expression);
QueryProcessingStage::Enum from_stage = QueryProcessingStage::Enum::FetchColumns;
if (wrap_read_columns_in_subquery)
{
auto columns = table_expression_data.getColumns();
table_expression = buildSubqueryToReadColumnsFromTableExpression(columns, table_expression, query_context);
}
auto * table_node = table_expression->as<TableNode>();
auto * table_function_node = table_expression->as<TableFunctionNode>();
auto * query_node = table_expression->as<QueryNode>();
@ -264,8 +334,6 @@ JoinTreeQueryPlan buildQueryPlanForTableExpression(const QueryTreeNodePtr & tabl
QueryPlan query_plan;
auto & table_expression_data = planner_context->getTableExpressionDataOrThrow(table_expression);
if (table_node || table_function_node)
{
const auto & storage = table_node ? table_node->getStorage() : table_function_node->getStorage();
@ -362,32 +430,6 @@ JoinTreeQueryPlan buildQueryPlanForTableExpression(const QueryTreeNodePtr & tabl
auto columns_names = table_expression_data.getColumnNames();
/** The current user must have the SELECT privilege.
* We do not check access rights for table functions because they have been already checked in ITableFunction::execute().
*/
if (table_node)
{
auto column_names_with_aliases = columns_names;
const auto & alias_columns_names = table_expression_data.getAliasColumnsNames();
column_names_with_aliases.insert(column_names_with_aliases.end(), alias_columns_names.begin(), alias_columns_names.end());
checkAccessRights(*table_node, column_names_with_aliases, planner_context->getQueryContext());
}
/// Limitation on the number of columns to read
if (settings.max_columns_to_read && columns_names.size() > settings.max_columns_to_read)
throw Exception(ErrorCodes::TOO_MANY_COLUMNS,
"Limit for number of columns to read exceeded. Requested: {}, maximum: {}",
columns_names.size(),
settings.max_columns_to_read);
if (columns_names.empty())
{
auto additional_column_to_read = chooseSmallestColumnToReadFromStorage(storage, storage_snapshot);
const auto & column_identifier = planner_context->getGlobalPlannerContext()->createColumnIdentifier(additional_column_to_read, table_expression);
columns_names.push_back(additional_column_to_read.name);
table_expression_data.addColumn(additional_column_to_read, column_identifier);
}
bool need_rewrite_query_with_final = storage->needRewriteQueryWithFinal(columns_names);
if (need_rewrite_query_with_final)
{
@ -423,6 +465,17 @@ JoinTreeQueryPlan buildQueryPlanForTableExpression(const QueryTreeNodePtr & tabl
{
from_stage = storage->getQueryProcessingStage(query_context, select_query_options.to_stage, storage_snapshot, table_expression_query_info);
storage->read(query_plan, columns_names, storage_snapshot, table_expression_query_info, query_context, from_stage, max_block_size, max_streams);
if (query_context->hasQueryContext() && !select_query_options.is_internal)
{
auto local_storage_id = storage->getStorageID();
query_context->getQueryContext()->addQueryAccessInfo(
backQuoteIfNeed(local_storage_id.getDatabaseName()),
local_storage_id.getFullTableName(),
columns_names,
{},
{});
}
}
if (query_plan.isInitialized())
@ -464,16 +517,6 @@ JoinTreeQueryPlan buildQueryPlanForTableExpression(const QueryTreeNodePtr & tabl
}
else
{
if (table_expression_data.getColumnNames().empty())
{
const auto & projection_columns = query_node ? query_node->getProjectionColumns() : union_node->computeProjectionColumns();
NamesAndTypesList projection_columns_list(projection_columns.begin(), projection_columns.end());
auto additional_column_to_read = ExpressionActions::getSmallestColumn(projection_columns_list);
const auto & column_identifier = planner_context->getGlobalPlannerContext()->createColumnIdentifier(additional_column_to_read, table_expression);
table_expression_data.addColumn(additional_column_to_read, column_identifier);
}
auto subquery_options = select_query_options.subquery();
Planner subquery_planner(table_expression, subquery_options, planner_context->getGlobalPlannerContext());
/// Propagate storage limits to subquery
@ -516,10 +559,11 @@ JoinTreeQueryPlan buildQueryPlanForTableExpression(const QueryTreeNodePtr & tabl
planner.buildQueryPlanIfNeeded();
auto expected_header = planner.getQueryPlan().getCurrentDataStream().header;
materializeBlockInplace(expected_header);
if (!blocksHaveEqualStructure(query_plan.getCurrentDataStream().header, expected_header))
{
materializeBlockInplace(expected_header);
auto rename_actions_dag = ActionsDAG::makeConvertingActions(
query_plan.getCurrentDataStream().header.getColumnsWithTypeAndName(),
expected_header.getColumnsWithTypeAndName(),
@ -1059,14 +1103,40 @@ JoinTreeQueryPlan buildJoinTreeQueryPlan(const QueryTreeNodePtr & query_node,
const ColumnIdentifierSet & outer_scope_columns,
PlannerContextPtr & planner_context)
{
const auto & query_node_typed = query_node->as<QueryNode &>();
auto table_expressions_stack = buildTableExpressionsStack(query_node_typed.getJoinTree());
auto table_expressions_stack = buildTableExpressionsStack(query_node->as<QueryNode &>().getJoinTree());
size_t table_expressions_stack_size = table_expressions_stack.size();
bool is_single_table_expression = table_expressions_stack_size == 1;
std::vector<ColumnIdentifierSet> table_expressions_outer_scope_columns(table_expressions_stack_size);
ColumnIdentifierSet current_outer_scope_columns = outer_scope_columns;
/// For each table, table function, query or union table expression, run preparation before building the query plan
for (size_t i = 0; i < table_expressions_stack_size; ++i)
{
const auto & table_expression = table_expressions_stack[i];
auto table_expression_type = table_expression->getNodeType();
if (table_expression_type == QueryTreeNodeType::JOIN ||
table_expression_type == QueryTreeNodeType::ARRAY_JOIN)
continue;
prepareBuildQueryPlanForTableExpression(table_expression, planner_context);
}
/** If the leftmost table expression's query plan is built to a stage other than FetchColumns,
  * then the leftmost table expression is responsible for providing the valid JOIN TREE part of the final query plan.
  *
  * Examples: Distributed, LiveView, Merge storages.
  */
auto left_table_expression = table_expressions_stack.front();
auto left_table_expression_query_plan = buildQueryPlanForTableExpression(left_table_expression,
select_query_info,
select_query_options,
planner_context,
is_single_table_expression,
false /*wrap_read_columns_in_subquery*/);
if (left_table_expression_query_plan.from_stage != QueryProcessingStage::FetchColumns)
return left_table_expression_query_plan;
for (Int64 i = static_cast<Int64>(table_expressions_stack_size) - 1; i >= 0; --i)
{
table_expressions_outer_scope_columns[i] = current_outer_scope_columns;
@ -1120,19 +1190,23 @@ JoinTreeQueryPlan buildJoinTreeQueryPlan(const QueryTreeNodePtr & query_node,
}
else
{
const auto & table_expression_data = planner_context->getTableExpressionDataOrThrow(table_expression);
if (table_expression_data.isRemote() && i != 0)
throw Exception(ErrorCodes::UNSUPPORTED_METHOD,
"JOIN with multiple remote storages is unsupported");
if (table_expression == left_table_expression)
{
query_plans_stack.push_back(std::move(left_table_expression_query_plan)); /// NOLINT
left_table_expression = {};
continue;
}
/** If a table expression is remote and it is not the leftmost table expression, we wrap the columns read from it
  * in a subquery.
  */
bool is_remote = planner_context->getTableExpressionDataOrThrow(table_expression).isRemote();
query_plans_stack.push_back(buildQueryPlanForTableExpression(table_expression,
select_query_info,
select_query_options,
planner_context,
is_single_table_expression));
if (query_plans_stack.back().from_stage != QueryProcessingStage::FetchColumns)
break;
is_single_table_expression,
is_remote /*wrap_read_columns_in_subquery*/));
}
}

View File

@ -18,6 +18,7 @@
#include <Functions/IFunction.h>
#include <Functions/FunctionFactory.h>
#include <Analyzer/Utils.h>
#include <Analyzer/FunctionNode.h>
#include <Analyzer/ConstantNode.h>
#include <Analyzer/TableNode.h>
@ -61,6 +62,8 @@ void JoinClause::dump(WriteBuffer & buffer) const
for (const auto & dag_node : dag_nodes)
{
dag_nodes_dump += dag_node->result_name;
dag_nodes_dump += " ";
dag_nodes_dump += dag_node->result_type->getName();
dag_nodes_dump += ", ";
}

View File

@ -101,6 +101,17 @@ public:
return column_names;
}
NamesAndTypes getColumns() const
{
NamesAndTypes result;
result.reserve(column_names.size());
for (const auto & column_name : column_names)
result.push_back(column_name_to_column.at(column_name));
return result;
}
ColumnIdentifiers getColumnIdentifiers() const
{
ColumnIdentifiers result;

View File

@ -4,6 +4,7 @@
#include <Parsers/ASTSelectQuery.h>
#include <Parsers/ASTSubquery.h>
#include <DataTypes/DataTypesNumber.h>
#include <DataTypes/DataTypeLowCardinality.h>
#include <DataTypes/DataTypeNullable.h>
@ -19,6 +20,7 @@
#include <Analyzer/Utils.h>
#include <Analyzer/ConstantNode.h>
#include <Analyzer/ColumnNode.h>
#include <Analyzer/FunctionNode.h>
#include <Analyzer/QueryNode.h>
#include <Analyzer/UnionNode.h>
@ -341,27 +343,6 @@ QueryTreeNodePtr mergeConditionNodes(const QueryTreeNodes & condition_nodes, con
return function_node;
}
std::optional<bool> tryExtractConstantFromConditionNode(const QueryTreeNodePtr & condition_node)
{
const auto * constant_node = condition_node->as<ConstantNode>();
if (!constant_node)
return {};
const auto & value = constant_node->getValue();
auto constant_type = constant_node->getResultType();
constant_type = removeNullable(removeLowCardinality(constant_type));
auto which_constant_type = WhichDataType(constant_type);
if (!which_constant_type.isUInt8() && !which_constant_type.isNothing())
return {};
if (value.isNull())
return false;
UInt8 predicate_value = value.safeGet<UInt8>();
return predicate_value > 0;
}
QueryTreeNodePtr replaceTablesAndTableFunctionsWithDummyTables(const QueryTreeNodePtr & query_node,
const ContextPtr & context,
ResultReplacementMap * result_replacement_map)
@ -391,4 +372,36 @@ QueryTreeNodePtr replaceTablesAndTableFunctionsWithDummyTables(const QueryTreeNo
return query_node->cloneAndReplace(replacement_map);
}
QueryTreeNodePtr buildSubqueryToReadColumnsFromTableExpression(const NamesAndTypes & columns,
const QueryTreeNodePtr & table_expression,
const ContextPtr & context)
{
auto projection_columns = columns;
QueryTreeNodes subquery_projection_nodes;
subquery_projection_nodes.reserve(projection_columns.size());
for (const auto & column : projection_columns)
subquery_projection_nodes.push_back(std::make_shared<ColumnNode>(column, table_expression));
if (subquery_projection_nodes.empty())
{
auto constant_data_type = std::make_shared<DataTypeUInt64>();
subquery_projection_nodes.push_back(std::make_shared<ConstantNode>(1UL, constant_data_type));
projection_columns.push_back({"1", std::move(constant_data_type)});
}
auto context_copy = Context::createCopy(context);
updateContextForSubqueryExecution(context_copy);
auto query_node = std::make_shared<QueryNode>(std::move(context_copy));
query_node->resolveProjectionColumns(projection_columns);
query_node->getProjection().getNodes() = std::move(subquery_projection_nodes);
query_node->getJoinTree() = table_expression;
query_node->setIsSubquery(true);
return query_node;
}
}
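As an aside, a minimal hedged sketch of how the planner change earlier in this diff is expected to use this helper; the names (`table_expression_data`, `table_expression`, `query_context`) are taken from the call site in buildQueryPlanForTableExpression shown above, not from any new API:

    /// Collect the columns registered for this table expression and wrap the expression
    /// in a subquery that reads exactly those columns (or a constant `1` if there are none).
    auto columns = table_expression_data.getColumns();
    table_expression = buildSubqueryToReadColumnsFromTableExpression(columns, table_expression, query_context);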

View File

@ -63,13 +63,15 @@ bool queryHasWithTotalsInAnySubqueryInJoinTree(const QueryTreeNodePtr & query_no
/// Returns `and` function node that has condition nodes as its arguments
QueryTreeNodePtr mergeConditionNodes(const QueryTreeNodes & condition_nodes, const ContextPtr & context);
/// Try extract boolean constant from condition node
std::optional<bool> tryExtractConstantFromConditionNode(const QueryTreeNodePtr & condition_node);
/// Replace tables nodes and table function nodes with dummy table nodes
using ResultReplacementMap = std::unordered_map<QueryTreeNodePtr, QueryTreeNodePtr>;
QueryTreeNodePtr replaceTablesAndTableFunctionsWithDummyTables(const QueryTreeNodePtr & query_node,
const ContextPtr & context,
ResultReplacementMap * result_replacement_map = nullptr);
/// Build subquery to read specified columns from table expression
QueryTreeNodePtr buildSubqueryToReadColumnsFromTableExpression(const NamesAndTypes & columns,
const QueryTreeNodePtr & table_expression,
const ContextPtr & context);
}

View File

@ -45,44 +45,38 @@ Chunk ParquetBlockInputFormat::generate()
block_missing_values.clear();
if (!file_reader)
{
prepareReader();
file_reader->set_batch_size(format_settings.parquet.max_block_size);
std::vector<int> row_group_indices;
for (int i = 0; i < row_group_total; ++i)
{
if (!skip_row_groups.contains(i))
row_group_indices.emplace_back(i);
}
auto read_status = file_reader->GetRecordBatchReader(row_group_indices, column_indices, &current_record_batch_reader);
if (!read_status.ok())
throw DB::ParsingException(ErrorCodes::CANNOT_READ_ALL_DATA, "Error while reading Parquet data: {}", read_status.ToString());
}
if (is_stopped)
return {};
auto batch = current_record_batch_reader->Next();
if (!batch.ok())
{
throw ParsingException(ErrorCodes::CANNOT_READ_ALL_DATA, "Error while reading Parquet data: {}",
batch.status().ToString());
}
if (*batch)
{
auto tmp_table = arrow::Table::FromRecordBatches({*batch});
/// If defaults_for_omitted_fields is true, calculate the default values from the default expressions for omitted fields.
/// Otherwise fill the missing columns with zero values of their types.
BlockMissingValues * block_missing_values_ptr = format_settings.defaults_for_omitted_fields ? &block_missing_values : nullptr;
arrow_column_to_ch_column->arrowTableToCHChunk(res, *tmp_table, (*tmp_table)->num_rows(), block_missing_values_ptr);
}
else
{
current_record_batch_reader.reset();
file_reader.reset();
return {};
}
while (row_group_current < row_group_total && skip_row_groups.contains(row_group_current))
++row_group_current;
if (row_group_current >= row_group_total)
return res;
std::shared_ptr<arrow::Table> table;
std::unique_ptr<::arrow::RecordBatchReader> rbr;
std::vector<int> row_group_indices { row_group_current };
arrow::Status get_batch_reader_status = file_reader->GetRecordBatchReader(row_group_indices, column_indices, &rbr);
if (!get_batch_reader_status.ok())
throw ParsingException(ErrorCodes::CANNOT_READ_ALL_DATA, "Error while reading Parquet data: {}",
get_batch_reader_status.ToString());
arrow::Status read_status = rbr->ReadAll(&table);
if (!read_status.ok())
throw ParsingException(ErrorCodes::CANNOT_READ_ALL_DATA, "Error while reading Parquet data: {}", read_status.ToString());
++row_group_current;
/// If defaults_for_omitted_fields is true, calculate the default values from the default expressions for omitted fields.
/// Otherwise fill the missing columns with zero values of their types.
BlockMissingValues * block_missing_values_ptr = format_settings.defaults_for_omitted_fields ? &block_missing_values : nullptr;
arrow_column_to_ch_column->arrowTableToCHChunk(res, table, table->num_rows(), block_missing_values_ptr);
return res;
}
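For readability, a condensed sketch of the control flow this change introduces: each generate() call now reads a single non-skipped row group and converts it to a Chunk, instead of streaming batches for the whole file. Error handling is collapsed here and the names follow the diff:

    /// Advance past row groups the caller asked to skip; stop once all groups are consumed.
    while (row_group_current < row_group_total && skip_row_groups.contains(row_group_current))
        ++row_group_current;
    if (row_group_current >= row_group_total)
        return res;
    /// Read exactly one row group into an arrow::Table and convert it into the result Chunk.
    std::vector<int> row_group_indices{row_group_current};
    std::unique_ptr<::arrow::RecordBatchReader> rbr;
    std::shared_ptr<arrow::Table> table;
    if (!file_reader->GetRecordBatchReader(row_group_indices, column_indices, &rbr).ok() || !rbr->ReadAll(&table).ok())
        throw ParsingException(ErrorCodes::CANNOT_READ_ALL_DATA, "Error while reading Parquet data");
    ++row_group_current;
    arrow_column_to_ch_column->arrowTableToCHChunk(res, table, table->num_rows(), block_missing_values_ptr);
    return res;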

View File

@ -12,10 +12,19 @@ ISourceStep::ISourceStep(DataStream output_stream_)
QueryPipelineBuilderPtr ISourceStep::updatePipeline(QueryPipelineBuilders, const BuildQueryPipelineSettings & settings)
{
auto pipeline = std::make_unique<QueryPipelineBuilder>();
QueryPipelineProcessorsCollector collector(*pipeline, this);
/// A `Source` step does not add new Processors to `pipeline->pipe` in `initializePipeline`;
/// instead it assigns a newly created Pipe, and the Processors for the step are added there.
/// So we do not need `QueryPipelineProcessorsCollector` to collect the Processors.
initializePipeline(*pipeline, settings);
auto added_processors = collector.detachProcessors();
processors.insert(processors.end(), added_processors.begin(), added_processors.end());
/// But we do need to set the QueryPlanStep for these Processors manually; it
/// will be used in `EXPLAIN PIPELINE`.
for (auto & processor : processors)
{
processor->setQueryPlanStep(this);
}
return pipeline;
}

View File

@ -519,8 +519,9 @@ AggregationInputOrder buildInputOrderInfo(
enreachFixedColumns(sorting_key_dag, fixed_key_columns);
for (auto it = matches.cbegin(); it != matches.cend(); ++it)
for (const auto * output : dag->getOutputs())
{
auto it = matches.find(output);
const MatchedTrees::Match * match = &it->second;
if (match->node)
{

View File

@ -10,7 +10,6 @@ namespace DB
* You can render it with:
* dot -T png < pipeline.dot > pipeline.png
*/
template <typename Processors, typename Statuses>
void printPipeline(const Processors & processors, const Statuses & statuses, WriteBuffer & out)
{
@ -70,5 +69,4 @@ void printPipeline(const Processors & processors, WriteBuffer & out)
/// If QueryPlanStep wasn't set for a processor, the representation may not be correct.
/// If with_header is set, prints block header for each edge.
void printPipelineCompact(const Processors & processors, WriteBuffer & out, bool with_header);
}
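Purely to illustrate the rendering hint in the comment above, a hedged usage sketch; the `processors` variable and the step of writing the buffer out to a file are assumptions of this sketch, while the overload without statuses is the one declared in this header:

    /// Assumed context: `processors` holds the Processors of an already built pipeline.
    WriteBufferFromOwnString out;
    printPipeline(processors, out);
    /// Save `out.str()` to pipeline.dot and render it:
    ///     dot -T png < pipeline.dot > pipeline.png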

View File

@ -843,6 +843,7 @@ namespace
query_context->getClientInfo().client_trace_context,
query_context->getSettingsRef(),
query_context->getOpenTelemetrySpanLog());
thread_trace_context->root_span.kind = OpenTelemetry::SERVER;
/// Prepare for sending exceptions and logs.
const Settings & settings = query_context->getSettingsRef();

View File

@ -1001,6 +1001,7 @@ void HTTPHandler::handleRequest(HTTPServerRequest & request, HTTPServerResponse
client_info.client_trace_context,
context->getSettingsRef(),
context->getOpenTelemetrySpanLog());
thread_trace_context->root_span.kind = OpenTelemetry::SERVER;
thread_trace_context->root_span.addAttribute("clickhouse.uri", request.getURI());
response.setContentType("text/plain; charset=UTF-8");

View File

@ -277,6 +277,7 @@ void TCPHandler::runImpl()
query_context->getClientInfo().client_trace_context,
query_context->getSettingsRef(),
query_context->getOpenTelemetrySpanLog());
thread_trace_context->root_span.kind = OpenTelemetry::SERVER;
query_scope.emplace(query_context, /* fatal_error_callback */ [this]
{

View File

@ -13,6 +13,7 @@
#include <Interpreters/getHeaderForProcessingStage.h>
#include <Interpreters/SelectQueryOptions.h>
#include <Interpreters/InterpreterSelectQuery.h>
#include <Interpreters/InterpreterSelectQueryAnalyzer.h>
#include <QueryPipeline/narrowPipe.h>
#include <QueryPipeline/Pipe.h>
#include <QueryPipeline/RemoteQueryExecutor.h>
@ -83,8 +84,12 @@ Pipe StorageHDFSCluster::read(
auto extension = getTaskIteratorExtension(query_info.query, context);
/// Calculate the header. This is significant because some columns could be thrown away in some cases, e.g. in a query with count(*)
Block header =
InterpreterSelectQuery(query_info.query, context, SelectQueryOptions(processed_stage).analyze()).getSampleBlock();
Block header;
if (context->getSettingsRef().allow_experimental_analyzer)
header = InterpreterSelectQueryAnalyzer::getSampleBlock(query_info.query, context, SelectQueryOptions(processed_stage).analyze());
else
header = InterpreterSelectQuery(query_info.query, context, SelectQueryOptions(processed_stage).analyze()).getSampleBlock();
const Scalars & scalars = context->hasQueryContext() ? context->getQueryContext()->getScalars() : Scalars{};

View File

@ -4121,9 +4121,9 @@ void MergeTreeData::delayInsertOrThrowIfNeeded(Poco::Event * until, const Contex
ProfileEvents::increment(ProfileEvents::RejectedInserts);
throw Exception(
ErrorCodes::TOO_MANY_PARTS,
"Too many parts ({}) in all partitions in total. This indicates wrong choice of partition key. The threshold can be modified "
"Too many parts ({}) in all partitions in total in table '{}'. This indicates wrong choice of partition key. The threshold can be modified "
"with 'max_parts_in_total' setting in <merge_tree> element in config.xml or with per-table setting.",
parts_count_in_total);
parts_count_in_total, getLogName());
}
size_t outdated_parts_over_threshold = 0;
@ -4137,8 +4137,8 @@ void MergeTreeData::delayInsertOrThrowIfNeeded(Poco::Event * until, const Contex
ProfileEvents::increment(ProfileEvents::RejectedInserts);
throw Exception(
ErrorCodes::TOO_MANY_PARTS,
"Too many inactive parts ({}). Parts cleaning are processing significantly slower than inserts",
outdated_parts_count_in_partition);
"Too many inactive parts ({}) in table '{}'. Parts cleaning are processing significantly slower than inserts",
outdated_parts_count_in_partition, getLogName());
}
if (settings->inactive_parts_to_delay_insert > 0 && outdated_parts_count_in_partition >= settings->inactive_parts_to_delay_insert)
outdated_parts_over_threshold = outdated_parts_count_in_partition - settings->inactive_parts_to_delay_insert + 1;
@ -4151,6 +4151,7 @@ void MergeTreeData::delayInsertOrThrowIfNeeded(Poco::Event * until, const Contex
const auto active_parts_to_throw_insert
= query_settings.parts_to_throw_insert ? query_settings.parts_to_throw_insert : settings->parts_to_throw_insert;
size_t active_parts_over_threshold = 0;
{
bool parts_are_large_enough_in_average
= settings->max_avg_part_size_for_too_many_parts && average_part_size > settings->max_avg_part_size_for_too_many_parts;
@ -4160,9 +4161,10 @@ void MergeTreeData::delayInsertOrThrowIfNeeded(Poco::Event * until, const Contex
ProfileEvents::increment(ProfileEvents::RejectedInserts);
throw Exception(
ErrorCodes::TOO_MANY_PARTS,
"Too many parts ({} with average size of {}). Merges are processing significantly slower than inserts",
"Too many parts ({} with average size of {}) in table '{}'. Merges are processing significantly slower than inserts",
parts_count_in_partition,
ReadableSize(average_part_size));
ReadableSize(average_part_size),
getLogName());
}
if (active_parts_to_delay_insert > 0 && parts_count_in_partition >= active_parts_to_delay_insert
&& !parts_are_large_enough_in_average)

View File

@ -201,6 +201,7 @@ MergeTreeConditionInverted::MergeTreeConditionInverted(
rpn.push_back(RPNElement::FUNCTION_UNKNOWN);
return;
}
rpn = std::move(
RPNBuilder<RPNElement>(
query_info.filter_actions_dag->getOutputs().at(0), context_,
@ -208,10 +209,10 @@ MergeTreeConditionInverted::MergeTreeConditionInverted(
{
return this->traverseAtomAST(node, out);
}).extractRPN());
return;
}
ASTPtr filter_node = buildFilterNode(query_info.query);
if (!filter_node)
{
rpn.push_back(RPNElement::FUNCTION_UNKNOWN);
@ -226,7 +227,6 @@ MergeTreeConditionInverted::MergeTreeConditionInverted(
query_info.prepared_sets,
[&](const RPNBuilderTreeNode & node, RPNElement & out) { return traverseAtomAST(node, out); });
rpn = std::move(builder).extractRPN();
}
/// Keep in-sync with MergeTreeConditionFullText::alwaysUnknownOrTrue

Some files were not shown because too many files have changed in this diff