Merge branch 'master' of github.com:ClickHouse/ClickHouse into fix-variant-as-common-type

This commit is contained in:
avogar 2024-08-02 17:27:32 +00:00
commit 12b6ea0232
80 changed files with 1867 additions and 696 deletions

View File

@@ -0,0 +1,39 @@
---
sidebar_position: 1
sidebar_label: 2024
---
# 2024 Changelog
### ClickHouse release v24.3.6.48-lts (b2d33c3c45d) FIXME as compared to v24.3.5.46-lts (fe54cead6b6)
#### Critical Bug Fix (crash, LOGICAL_ERROR, data loss, RBAC)
* Backported in [#66889](https://github.com/ClickHouse/ClickHouse/issues/66889): Fix unexpected size of low cardinality column in function calls. [#65298](https://github.com/ClickHouse/ClickHouse/pull/65298) ([Raúl Marín](https://github.com/Algunenano)).
* Backported in [#66687](https://github.com/ClickHouse/ClickHouse/issues/66687): Fix the `VALID UNTIL` clause in the user definition resetting after a restart (see the sketch after this list). Closes [#66405](https://github.com/ClickHouse/ClickHouse/issues/66405). [#66409](https://github.com/ClickHouse/ClickHouse/pull/66409) ([Nikolay Degterinsky](https://github.com/evillique)).
* Backported in [#67497](https://github.com/ClickHouse/ClickHouse/issues/67497): Fix crash in DistributedAsyncInsert when connection is empty. [#67219](https://github.com/ClickHouse/ClickHouse/pull/67219) ([Pablo Marcos](https://github.com/pamarcos)).
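For reference, a minimal sketch of the `VALID UNTIL` clause touched by the fix above; the user name, password, and date are placeholders:

``` sql
-- Hypothetical user definition; after the fix, the expiration survives a server restart.
CREATE USER app_user IDENTIFIED WITH sha256_password BY 'secret' VALID UNTIL '2025-01-01 00:00:00';
```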
#### Bug Fix (user-visible misbehavior in an official stable release)
* Backported in [#66324](https://github.com/ClickHouse/ClickHouse/issues/66324): Add missing settings `input_format_csv_skip_first_lines/input_format_tsv_skip_first_lines/input_format_csv_try_infer_numbers_from_strings/input_format_csv_try_infer_strings_from_quoted_tuples` to the schema inference cache because they can change the resulting schema. This prevents incorrect schema inference results when these settings are changed. [#65980](https://github.com/ClickHouse/ClickHouse/pull/65980) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#66151](https://github.com/ClickHouse/ClickHouse/issues/66151): Fixed buffer overflow bug in `unbin`/`unhex` implementation. [#66106](https://github.com/ClickHouse/ClickHouse/pull/66106) ([Nikita Taranov](https://github.com/nickitat)).
* Backported in [#66451](https://github.com/ClickHouse/ClickHouse/issues/66451): Fixed a bug in ZooKeeper client: a session could get stuck in unusable state after receiving a hardware error from ZooKeeper. For example, this might happen due to "soft memory limit" in ClickHouse Keeper. [#66140](https://github.com/ClickHouse/ClickHouse/pull/66140) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Backported in [#66222](https://github.com/ClickHouse/ClickHouse/issues/66222): Fix issue in SumIfToCountIfVisitor and signed integers. [#66146](https://github.com/ClickHouse/ClickHouse/pull/66146) ([Raúl Marín](https://github.com/Algunenano)).
* Backported in [#66676](https://github.com/ClickHouse/ClickHouse/issues/66676): Fix handling of the limit for `system.numbers_mt` when no index can be used. [#66231](https://github.com/ClickHouse/ClickHouse/pull/66231) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Backported in [#66602](https://github.com/ClickHouse/ClickHouse/issues/66602): Fixed how the ClickHouse server detects the maximum number of usable CPU cores as specified by cgroups v2 if the server runs in a container such as Docker. In more detail, containers often run their process in the root cgroup which has an empty name. In that case, ClickHouse ignored the CPU limits set by cgroups v2. [#66237](https://github.com/ClickHouse/ClickHouse/pull/66237) ([filimonov](https://github.com/filimonov)).
* Backported in [#66356](https://github.com/ClickHouse/ClickHouse/issues/66356): Fix the `Not-ready set` error when a subquery with `IN` is used in the constraint. [#66261](https://github.com/ClickHouse/ClickHouse/pull/66261) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#66970](https://github.com/ClickHouse/ClickHouse/issues/66970): Fix `Column identifier is already registered` error with `group_by_use_nulls=true` and new analyzer. [#66400](https://github.com/ClickHouse/ClickHouse/pull/66400) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#66967](https://github.com/ClickHouse/ClickHouse/issues/66967): Fix `Cannot find column` error for queries with constant expression in `GROUP BY` key and new analyzer enabled. [#66433](https://github.com/ClickHouse/ClickHouse/pull/66433) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#66718](https://github.com/ClickHouse/ClickHouse/issues/66718): Correctly track memory for `Allocator::realloc`. [#66548](https://github.com/ClickHouse/ClickHouse/pull/66548) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#66949](https://github.com/ClickHouse/ClickHouse/issues/66949): Fix an invalid result for queries with `WINDOW`. This could happen when `PARTITION` columns have sparse serialization and window functions are executed in parallel. [#66579](https://github.com/ClickHouse/ClickHouse/pull/66579) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#66946](https://github.com/ClickHouse/ClickHouse/issues/66946): Fix `Method getResultType is not supported for QUERY query node` error when scalar subquery was used as the first argument of IN (with new analyzer). [#66655](https://github.com/ClickHouse/ClickHouse/pull/66655) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#67629](https://github.com/ClickHouse/ClickHouse/issues/67629): Fix for occasional deadlock in Context::getDDLWorker. [#66843](https://github.com/ClickHouse/ClickHouse/pull/66843) ([Alexander Gololobov](https://github.com/davenger)).
* Backported in [#67193](https://github.com/ClickHouse/ClickHouse/issues/67193): TRUNCATE DATABASE used to stop replication as if it were a DROP DATABASE query; this is now fixed. [#67129](https://github.com/ClickHouse/ClickHouse/pull/67129) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Backported in [#67375](https://github.com/ClickHouse/ClickHouse/issues/67375): Fix error `Cannot convert column because it is non constant in source stream but must be constant in result.` for a query that reads from the `Merge` table over the `Distributed` table with one shard. [#67146](https://github.com/ClickHouse/ClickHouse/pull/67146) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#67572](https://github.com/ClickHouse/ClickHouse/issues/67572): Fix execution of nested short-circuit functions. [#67520](https://github.com/ClickHouse/ClickHouse/pull/67520) ([Kruglov Pavel](https://github.com/Avogar)).
#### NOT FOR CHANGELOG / INSIGNIFICANT
* Backported in [#66422](https://github.com/ClickHouse/ClickHouse/issues/66422): Ignore subquery for IN in DDLLoadingDependencyVisitor. [#66395](https://github.com/ClickHouse/ClickHouse/pull/66395) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#66855](https://github.com/ClickHouse/ClickHouse/issues/66855): Fix data race in S3::ClientCache. [#66644](https://github.com/ClickHouse/ClickHouse/pull/66644) ([Konstantin Morozov](https://github.com/k-morozov)).
* Backported in [#67055](https://github.com/ClickHouse/ClickHouse/issues/67055): Increase asio pool size in case the server is tiny. [#66761](https://github.com/ClickHouse/ClickHouse/pull/66761) ([alesapin](https://github.com/alesapin)).
* Backported in [#66943](https://github.com/ClickHouse/ClickHouse/issues/66943): Small fix in realloc memory tracking. [#66820](https://github.com/ClickHouse/ClickHouse/pull/66820) ([Antonio Andelic](https://github.com/antonio2368)).

View File

@@ -0,0 +1,73 @@
---
sidebar_position: 1
sidebar_label: 2024
---
# 2024 Changelog
### ClickHouse release v24.4.4.113-stable (d63a54957bd) FIXME as compared to v24.4.3.25-stable (a915dd4eda4)
#### Improvement
* Backported in [#65884](https://github.com/ClickHouse/ClickHouse/issues/65884): Always start Keeper with a sufficient number of threads in the global thread pool. [#64444](https://github.com/ClickHouse/ClickHouse/pull/64444) ([Duc Canh Le](https://github.com/canhld94)).
* Backported in [#65303](https://github.com/ClickHouse/ClickHouse/issues/65303): Restored the previous behaviour of how ClickHouse interprets Tuples in CSV format. This change effectively reverts https://github.com/ClickHouse/ClickHouse/pull/60994 and makes it available only under a few settings: `output_format_csv_serialize_tuple_into_separate_columns`, `input_format_csv_deserialize_separate_columns_into_tuple` and `input_format_csv_try_infer_strings_from_quoted_tuples`. [#65170](https://github.com/ClickHouse/ClickHouse/pull/65170) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Backported in [#65894](https://github.com/ClickHouse/ClickHouse/issues/65894): Respect cgroup CPU limit in Keeper. [#65819](https://github.com/ClickHouse/ClickHouse/pull/65819) ([Antonio Andelic](https://github.com/antonio2368)).
#### Critical Bug Fix (crash, LOGICAL_ERROR, data loss, RBAC)
* Backported in [#65372](https://github.com/ClickHouse/ClickHouse/issues/65372): Fix a bug in ClickHouse Keeper that causes digest mismatch during closing session. [#65198](https://github.com/ClickHouse/ClickHouse/pull/65198) ([Aleksei Filatov](https://github.com/aalexfvk)).
* Backported in [#66883](https://github.com/ClickHouse/ClickHouse/issues/66883): Fix unexpected size of low cardinality column in function calls. [#65298](https://github.com/ClickHouse/ClickHouse/pull/65298) ([Raúl Marín](https://github.com/Algunenano)).
* Backported in [#65435](https://github.com/ClickHouse/ClickHouse/issues/65435): Forbid the `QUALIFY` clause in the old analyzer. The old analyzer ignored `QUALIFY`, so it could lead to unexpected data removal in mutations (a sketch of the clause follows this list). [#65356](https://github.com/ClickHouse/ClickHouse/pull/65356) ([Dmitry Novik](https://github.com/novikd)).
* Backported in [#65448](https://github.com/ClickHouse/ClickHouse/issues/65448): Use correct memory alignment for Distinct combinator. Previously, crash could happen because of invalid memory allocation when the combinator was used. [#65379](https://github.com/ClickHouse/ClickHouse/pull/65379) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#65710](https://github.com/ClickHouse/ClickHouse/issues/65710): Fix crash in maxIntersections. [#65689](https://github.com/ClickHouse/ClickHouse/pull/65689) ([Raúl Marín](https://github.com/Algunenano)).
* Backported in [#66689](https://github.com/ClickHouse/ClickHouse/issues/66689): Fix the VALID UNTIL clause in the user definition resetting after a restart. Closes [#66405](https://github.com/ClickHouse/ClickHouse/issues/66405). [#66409](https://github.com/ClickHouse/ClickHouse/pull/66409) ([Nikolay Degterinsky](https://github.com/evillique)).
* Backported in [#67499](https://github.com/ClickHouse/ClickHouse/issues/67499): Fix crash in DistributedAsyncInsert when connection is empty. [#67219](https://github.com/ClickHouse/ClickHouse/pull/67219) ([Pablo Marcos](https://github.com/pamarcos)).
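For context, a sketch of the kind of `QUALIFY` clause affected by the fix above; the table and column names are hypothetical:

``` sql
-- Keep only the most recent row per id. QUALIFY filters on the window function result;
-- the old analyzer used to silently ignore this clause.
SELECT id, ts, value
FROM events
QUALIFY row_number() OVER (PARTITION BY id ORDER BY ts DESC) = 1;
```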
#### Bug Fix (user-visible misbehavior in an official stable release)
* Backported in [#65353](https://github.com/ClickHouse/ClickHouse/issues/65353): Fix possible abort on uncaught exception in ~WriteBufferFromFileDescriptor in StatusFile. [#64206](https://github.com/ClickHouse/ClickHouse/pull/64206) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#65060](https://github.com/ClickHouse/ClickHouse/issues/65060): Fix the `Expression nodes list expected 1 projection names` and `Unknown expression or identifier` errors for queries with aliases to `GLOBAL IN`. [#64517](https://github.com/ClickHouse/ClickHouse/pull/64517) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#65329](https://github.com/ClickHouse/ClickHouse/issues/65329): Fix the crash loop when restoring from backup is blocked by creating an MV with a definer that hasn't been restored yet. [#64595](https://github.com/ClickHouse/ClickHouse/pull/64595) ([pufit](https://github.com/pufit)).
* Backported in [#64833](https://github.com/ClickHouse/ClickHouse/issues/64833): Fix bug which could lead to non-working TTLs with expressions. [#64694](https://github.com/ClickHouse/ClickHouse/pull/64694) ([alesapin](https://github.com/alesapin)).
* Backported in [#65086](https://github.com/ClickHouse/ClickHouse/issues/65086): Fix removing the `WHERE` and `PREWHERE` expressions, which are always true (for the new analyzer). [#64695](https://github.com/ClickHouse/ClickHouse/pull/64695) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#65540](https://github.com/ClickHouse/ClickHouse/issues/65540): Fix crash for `ALTER TABLE ... ON CLUSTER ... MODIFY SQL SECURITY`. [#64957](https://github.com/ClickHouse/ClickHouse/pull/64957) ([pufit](https://github.com/pufit)).
* Backported in [#65578](https://github.com/ClickHouse/ClickHouse/issues/65578): Fix crash on destroying AccessControl: add explicit shutdown. [#64993](https://github.com/ClickHouse/ClickHouse/pull/64993) ([Vitaly Baranov](https://github.com/vitlibar)).
* Backported in [#65161](https://github.com/ClickHouse/ClickHouse/issues/65161): Fix pushing arithmetic operations out of aggregation. In the new analyzer, the optimization was applied only once. [#65104](https://github.com/ClickHouse/ClickHouse/pull/65104) ([Dmitry Novik](https://github.com/novikd)).
* Backported in [#65616](https://github.com/ClickHouse/ClickHouse/issues/65616): Fix aggregate function name rewriting in the new analyzer. [#65110](https://github.com/ClickHouse/ClickHouse/pull/65110) ([Dmitry Novik](https://github.com/novikd)).
* Backported in [#65730](https://github.com/ClickHouse/ClickHouse/issues/65730): Eliminate injective function in argument of functions `uniq*` recursively. This used to work correctly but was broken in the new analyzer. [#65140](https://github.com/ClickHouse/ClickHouse/pull/65140) ([Duc Canh Le](https://github.com/canhld94)).
* Backported in [#65668](https://github.com/ClickHouse/ClickHouse/issues/65668): Disable the `non-intersecting-parts` optimization for queries with `FINAL` when the `read-in-order` optimization is enabled. This could lead to an incorrect query result. As a workaround, disable `do_not_merge_across_partitions_select_final` and `split_parts_ranges_into_intersecting_and_non_intersecting_final` before this fix is merged. [#65505](https://github.com/ClickHouse/ClickHouse/pull/65505) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#65786](https://github.com/ClickHouse/ClickHouse/issues/65786): Fixed a bug in MergeJoin: a column in sparse serialization might have been treated as a column of its nested type even though the required conversion was not performed. [#65632](https://github.com/ClickHouse/ClickHouse/pull/65632) ([Nikita Taranov](https://github.com/nickitat)).
* Backported in [#65810](https://github.com/ClickHouse/ClickHouse/issues/65810): Fix invalid exceptions in function `parseDateTime` with `%F` and `%D` placeholders. [#65768](https://github.com/ClickHouse/ClickHouse/pull/65768) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#65931](https://github.com/ClickHouse/ClickHouse/issues/65931): For queries that read from `PostgreSQL`, cancel the internal `PostgreSQL` query if the ClickHouse query is finished. Otherwise, the ClickHouse query cannot be canceled until the internal `PostgreSQL` query finishes. [#65771](https://github.com/ClickHouse/ClickHouse/pull/65771) ([Maksim Kita](https://github.com/kitaisreal)).
* Backported in [#65826](https://github.com/ClickHouse/ClickHouse/issues/65826): Fix a bug in short-circuit logic when the old analyzer and `dictGetOrDefault` are used. [#65802](https://github.com/ClickHouse/ClickHouse/pull/65802) ([jsc0218](https://github.com/jsc0218)).
* Backported in [#66299](https://github.com/ClickHouse/ClickHouse/issues/66299): Better handling of join conditions involving `IS NULL` checks (for example `ON (a = b AND (a IS NOT NULL) AND (b IS NOT NULL) ) OR ( (a IS NULL) AND (b IS NULL) )` is rewritten to `ON a <=> b`); fix incorrect optimization when conditions other than `IS NULL` are present. [#65835](https://github.com/ClickHouse/ClickHouse/pull/65835) ([vdimir](https://github.com/vdimir)).
* Backported in [#66326](https://github.com/ClickHouse/ClickHouse/issues/66326): Add missing settings `input_format_csv_skip_first_lines/input_format_tsv_skip_first_lines/input_format_csv_try_infer_numbers_from_strings/input_format_csv_try_infer_strings_from_quoted_tuples` to the schema inference cache because they can change the resulting schema. This prevents incorrect schema inference results when these settings are changed. [#65980](https://github.com/ClickHouse/ClickHouse/pull/65980) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#66153](https://github.com/ClickHouse/ClickHouse/issues/66153): Fixed buffer overflow bug in `unbin`/`unhex` implementation. [#66106](https://github.com/ClickHouse/ClickHouse/pull/66106) ([Nikita Taranov](https://github.com/nickitat)).
* Backported in [#66459](https://github.com/ClickHouse/ClickHouse/issues/66459): Fixed a bug in ZooKeeper client: a session could get stuck in unusable state after receiving a hardware error from ZooKeeper. For example, this might happen due to "soft memory limit" in ClickHouse Keeper. [#66140](https://github.com/ClickHouse/ClickHouse/pull/66140) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Backported in [#66224](https://github.com/ClickHouse/ClickHouse/issues/66224): Fix issue in SumIfToCountIfVisitor and signed integers. [#66146](https://github.com/ClickHouse/ClickHouse/pull/66146) ([Raúl Marín](https://github.com/Algunenano)).
* Backported in [#66267](https://github.com/ClickHouse/ClickHouse/issues/66267): Don't throw `TIMEOUT_EXCEEDED` for `none_only_active` mode of `distributed_ddl_output_mode`. [#66218](https://github.com/ClickHouse/ClickHouse/pull/66218) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Backported in [#66678](https://github.com/ClickHouse/ClickHouse/issues/66678): Fix handling of the limit for `system.numbers_mt` when no index can be used. [#66231](https://github.com/ClickHouse/ClickHouse/pull/66231) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Backported in [#66603](https://github.com/ClickHouse/ClickHouse/issues/66603): Fixed how the ClickHouse server detects the maximum number of usable CPU cores as specified by cgroups v2 if the server runs in a container such as Docker. In more detail, containers often run their process in the root cgroup which has an empty name. In that case, ClickHouse ignored the CPU limits set by cgroups v2. [#66237](https://github.com/ClickHouse/ClickHouse/pull/66237) ([filimonov](https://github.com/filimonov)).
* Backported in [#66358](https://github.com/ClickHouse/ClickHouse/issues/66358): Fix the `Not-ready set` error when a subquery with `IN` is used in the constraint. [#66261](https://github.com/ClickHouse/ClickHouse/pull/66261) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#66971](https://github.com/ClickHouse/ClickHouse/issues/66971): Fix `Column identifier is already registered` error with `group_by_use_nulls=true` and new analyzer. [#66400](https://github.com/ClickHouse/ClickHouse/pull/66400) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#66968](https://github.com/ClickHouse/ClickHouse/issues/66968): Fix `Cannot find column` error for queries with constant expression in `GROUP BY` key and new analyzer enabled. [#66433](https://github.com/ClickHouse/ClickHouse/pull/66433) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#66719](https://github.com/ClickHouse/ClickHouse/issues/66719): Correctly track memory for `Allocator::realloc`. [#66548](https://github.com/ClickHouse/ClickHouse/pull/66548) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#66950](https://github.com/ClickHouse/ClickHouse/issues/66950): Fix an invalid result for queries with `WINDOW`. This could happen when `PARTITION` columns have sparse serialization and window functions are executed in parallel. [#66579](https://github.com/ClickHouse/ClickHouse/pull/66579) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#66947](https://github.com/ClickHouse/ClickHouse/issues/66947): Fix `Method getResultType is not supported for QUERY query node` error when scalar subquery was used as the first argument of IN (with new analyzer). [#66655](https://github.com/ClickHouse/ClickHouse/pull/66655) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#67631](https://github.com/ClickHouse/ClickHouse/issues/67631): Fix for occasional deadlock in Context::getDDLWorker. [#66843](https://github.com/ClickHouse/ClickHouse/pull/66843) ([Alexander Gololobov](https://github.com/davenger)).
* Backported in [#67195](https://github.com/ClickHouse/ClickHouse/issues/67195): TRUNCATE DATABASE used to stop replication as if it were a DROP DATABASE query; this is now fixed. [#67129](https://github.com/ClickHouse/ClickHouse/pull/67129) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Backported in [#67377](https://github.com/ClickHouse/ClickHouse/issues/67377): Fix error `Cannot convert column because it is non constant in source stream but must be constant in result.` for a query that reads from the `Merge` table over the `Distributed` table with one shard. [#67146](https://github.com/ClickHouse/ClickHouse/pull/67146) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#67240](https://github.com/ClickHouse/ClickHouse/issues/67240): This closes [#67156](https://github.com/ClickHouse/ClickHouse/issues/67156). This closes [#66447](https://github.com/ClickHouse/ClickHouse/issues/66447). The bug was introduced in https://github.com/ClickHouse/ClickHouse/pull/62907. [#67178](https://github.com/ClickHouse/ClickHouse/pull/67178) ([Maksim Kita](https://github.com/kitaisreal)).
* Backported in [#67574](https://github.com/ClickHouse/ClickHouse/issues/67574): Fix execution of nested short-circuit functions. [#67520](https://github.com/ClickHouse/ClickHouse/pull/67520) ([Kruglov Pavel](https://github.com/Avogar)).
#### NOT FOR CHANGELOG / INSIGNIFICANT
* Backported in [#65410](https://github.com/ClickHouse/ClickHouse/issues/65410): Re-enable OpenSSL session caching. [#65111](https://github.com/ClickHouse/ClickHouse/pull/65111) ([Robert Schulze](https://github.com/rschu1ze)).
* Backported in [#65903](https://github.com/ClickHouse/ClickHouse/issues/65903): Fix bug with session closing in Keeper. [#65735](https://github.com/ClickHouse/ClickHouse/pull/65735) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#66385](https://github.com/ClickHouse/ClickHouse/issues/66385): Disable broken cases from 02911_join_on_nullsafe_optimization. [#66310](https://github.com/ClickHouse/ClickHouse/pull/66310) ([vdimir](https://github.com/vdimir)).
* Backported in [#66424](https://github.com/ClickHouse/ClickHouse/issues/66424): Ignore subquery for IN in DDLLoadingDependencyVisitor. [#66395](https://github.com/ClickHouse/ClickHouse/pull/66395) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#66542](https://github.com/ClickHouse/ClickHouse/issues/66542): Add additional log masking in CI. [#66523](https://github.com/ClickHouse/ClickHouse/pull/66523) ([Raúl Marín](https://github.com/Algunenano)).
* Backported in [#66857](https://github.com/ClickHouse/ClickHouse/issues/66857): Fix data race in S3::ClientCache. [#66644](https://github.com/ClickHouse/ClickHouse/pull/66644) ([Konstantin Morozov](https://github.com/k-morozov)).
* Backported in [#66873](https://github.com/ClickHouse/ClickHouse/issues/66873): Support one more case in JOIN ON ... IS NULL. [#66725](https://github.com/ClickHouse/ClickHouse/pull/66725) ([vdimir](https://github.com/vdimir)).
* Backported in [#67057](https://github.com/ClickHouse/ClickHouse/issues/67057): Increase asio pool size in case the server is tiny. [#66761](https://github.com/ClickHouse/ClickHouse/pull/66761) ([alesapin](https://github.com/alesapin)).
* Backported in [#66944](https://github.com/ClickHouse/ClickHouse/issues/66944): Small fix in realloc memory tracking. [#66820](https://github.com/ClickHouse/ClickHouse/pull/66820) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#67250](https://github.com/ClickHouse/ClickHouse/issues/67250): Followup [#66725](https://github.com/ClickHouse/ClickHouse/issues/66725). [#66869](https://github.com/ClickHouse/ClickHouse/pull/66869) ([vdimir](https://github.com/vdimir)).
* Backported in [#67410](https://github.com/ClickHouse/ClickHouse/issues/67410): CI: Fix build results for release branches. [#67402](https://github.com/ClickHouse/ClickHouse/pull/67402) ([Max K.](https://github.com/maxknv)).

View File

@@ -0,0 +1,90 @@
---
slug: /en/sql-reference/aggregate-functions/reference/groupconcat
sidebar_position: 363
sidebar_label: groupConcat
title: groupConcat
---
Calculates a concatenated string from a group of strings, optionally separated by a delimiter, and optionally limited by a maximum number of elements.
**Syntax**
``` sql
groupConcat(expression [, delimiter] [, limit]);
```
**Arguments**
- `expression` — The expression or column name that outputs strings to be concatenated.
- `delimiter` — A [string](../../../sql-reference/data-types/string.md) that will be used to separate concatenated values. This parameter is optional and defaults to an empty string if not specified.
- `limit` — A positive [integer](../../../sql-reference/data-types/int-uint.md) specifying the maximum number of elements to concatenate. If more elements are present, excess elements are ignored. This parameter is optional.
:::note
If `delimiter` is specified without `limit`, it must be the first parameter following the expression. If both `delimiter` and `limit` are specified, `delimiter` must precede `limit`.
:::
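For instance, using the `Employees` table from the examples below, both of the following call shapes are valid:

``` sql
SELECT groupConcat(Name, ', ') FROM Employees;    -- delimiter only
SELECT groupConcat(Name, ', ', 2) FROM Employees; -- delimiter followed by limit
```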
**Returned value**
- Returns a [string](../../../sql-reference/data-types/string.md) consisting of the concatenated values of the column or expression. If the group is empty or contains only null values, the result is a nullable string with a `NULL` value.
**Examples**
Input table:
``` text
┌─id─┬─Name─┐
│  1 │ John │
│  2 │ Jane │
│  3 │ Bob  │
└────┴──────┘
```
1. Basic usage without a delimiter:
Query:
``` sql
SELECT groupConcat(Name) FROM Employees;
```
Result:
``` text
JohnJaneBob
```
This concatenates all names into one continuous string without any separator.
2. Using comma as a delimiter:
Query:
``` sql
SELECT groupConcat(Name, ', ') FROM Employees;
```
Result:
``` text
John, Jane, Bob
```
This output shows the names separated by a comma followed by a space.
3. Limiting the number of concatenated elements:
Query:
``` sql
SELECT groupConcat(Name, ', ', 2) FROM Employees;
```
Result:
``` text
John, Jane
```
This query limits the output to the first two names, even though there are more names in the table.
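4. Combining with `GROUP BY` (a usage sketch; the `department` column is hypothetical and not part of the input table above):

Query:

``` sql
SELECT department, groupConcat(Name, ', ') AS names
FROM Employees
GROUP BY department;
```

Each resulting row contains the comma-separated names of one department.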

View File

@@ -24,7 +24,7 @@ void InterpolateNode::dumpTreeImpl(WriteBuffer & buffer, FormatState & format_st
{
buffer << std::string(indent, ' ') << "INTERPOLATE id: " << format_state.getNodeId(this);
buffer << '\n' << std::string(indent + 2, ' ') << "EXPRESSION\n";
buffer << '\n' << std::string(indent + 2, ' ') << "EXPRESSION " << expression_name << " \n";
getExpression()->dumpTreeImpl(buffer, format_state, indent + 4);
buffer << '\n' << std::string(indent + 2, ' ') << "INTERPOLATE_EXPRESSION\n";

View File

@@ -50,6 +50,8 @@ public:
return QueryTreeNodeType::INTERPOLATE;
}
const std::string & getExpressionName() const { return expression_name; }
void dumpTreeImpl(WriteBuffer & buffer, FormatState & format_state, size_t indent) const override;
protected:

View File

@@ -64,6 +64,8 @@
#include <Analyzer/Resolve/TableExpressionsAliasVisitor.h>
#include <Analyzer/Resolve/ReplaceColumnsVisitor.h>
#include <Planner/PlannerActionsVisitor.h>
#include <Core/Settings.h>
namespace ProfileEvents
@@ -4122,11 +4124,7 @@ void QueryAnalyzer::resolveInterpolateColumnsNodeList(QueryTreeNodePtr & interpo
{
auto & interpolate_node_typed = interpolate_node->as<InterpolateNode &>();
auto * column_to_interpolate = interpolate_node_typed.getExpression()->as<IdentifierNode>();
if (!column_to_interpolate)
throw Exception(ErrorCodes::LOGICAL_ERROR, "INTERPOLATE can work only for indentifiers, but {} is found",
interpolate_node_typed.getExpression()->formatASTForErrorMessage());
auto column_to_interpolate_name = column_to_interpolate->getIdentifier().getFullName();
auto column_to_interpolate_name = interpolate_node_typed.getExpressionName();
resolveExpressionNode(interpolate_node_typed.getExpression(), scope, false /*allow_lambda_expression*/, false /*allow_table_expression*/);
@@ -4135,14 +4133,11 @@ void QueryAnalyzer::resolveInterpolateColumnsNodeList(QueryTreeNodePtr & interpo
auto & interpolation_to_resolve = interpolate_node_typed.getInterpolateExpression();
IdentifierResolveScope interpolate_scope(interpolation_to_resolve, &scope /*parent_scope*/);
auto fake_column_node = std::make_shared<ColumnNode>(NameAndTypePair(column_to_interpolate_name, interpolate_node_typed.getExpression()->getResultType()), interpolate_node_typed.getExpression());
auto fake_column_node = std::make_shared<ColumnNode>(NameAndTypePair(column_to_interpolate_name, interpolate_node_typed.getExpression()->getResultType()), interpolate_node);
if (is_column_constant)
interpolate_scope.expression_argument_name_to_node.emplace(column_to_interpolate_name, fake_column_node);
resolveExpressionNode(interpolation_to_resolve, interpolate_scope, false /*allow_lambda_expression*/, false /*allow_table_expression*/);
if (is_column_constant)
interpolation_to_resolve = interpolation_to_resolve->cloneAndReplace(fake_column_node, interpolate_node_typed.getExpression());
}
}
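For context, a query of the following shape exercises the `INTERPOLATE` resolution path changed above; the table and column names are illustrative only:

``` sql
-- Fill the gaps in n from 0 to 10 and compute the interpolated value of `source`
-- for each filled row as the previous row's value plus one.
SELECT n, source
FROM (SELECT toUInt32(number * 2) AS n, number AS source FROM numbers(5))
ORDER BY n WITH FILL FROM 0 TO 10
INTERPOLATE (source AS source + 1);
```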

View File

@@ -14,7 +14,10 @@ public:
, re_gen(key_template)
{
}
DB::ObjectStorageKey generate(const String &, bool) const override { return DB::ObjectStorageKey::createAsAbsolute(re_gen.generate()); }
DB::ObjectStorageKey generate(const String &, bool /* is_directory */, const std::optional<String> & /* key_prefix */) const override
{
return DB::ObjectStorageKey::createAsAbsolute(re_gen.generate());
}
private:
String key_template;
@@ -29,7 +32,7 @@ public:
: key_prefix(std::move(key_prefix_))
{}
DB::ObjectStorageKey generate(const String &, bool) const override
DB::ObjectStorageKey generate(const String &, bool /* is_directory */, const std::optional<String> & /* key_prefix */) const override
{
/// Path to store the new S3 object.
@@ -60,7 +63,8 @@ public:
: key_prefix(std::move(key_prefix_))
{}
DB::ObjectStorageKey generate(const String & path, bool) const override
DB::ObjectStorageKey
generate(const String & path, bool /* is_directory */, const std::optional<String> & /* key_prefix */) const override
{
return DB::ObjectStorageKey::createAsRelative(key_prefix, path);
}

View File

@@ -1,6 +1,7 @@
#pragma once
#include <memory>
#include <optional>
#include "ObjectStorageKey.h"
namespace DB
@@ -11,7 +12,11 @@ class IObjectStorageKeysGenerator
public:
virtual ~IObjectStorageKeysGenerator() = default;
virtual ObjectStorageKey generate(const String & path, bool is_directory) const = 0;
/// Generates an object storage key based on a path in the virtual filesystem.
/// @param path - Path in the virtual filesystem.
/// @param is_directory - If the path in the virtual filesystem corresponds to a directory.
/// @param key_prefix - Optional key prefix for the generated object storage key. If provided, this prefix will be added to the beginning of the generated key.
virtual ObjectStorageKey generate(const String & path, bool is_directory, const std::optional<String> & key_prefix) const = 0;
};
using ObjectStorageKeysGeneratorPtr = std::shared_ptr<IObjectStorageKeysGenerator>;

View File

@@ -57,265 +57,446 @@ String ClickHouseVersion::toString() const
/// Note: please check if the key already exists to prevent duplicate entries.
static std::initializer_list<std::pair<ClickHouseVersion, SettingsChangesHistory::SettingsChanges>> settings_changes_history_initializer =
{
{"24.7", {{"output_format_parquet_write_page_index", false, true, "Add a possibility to write page index into parquet files."},
{"output_format_binary_encode_types_in_binary_format", false, false, "Added new setting to allow to write type names in binary format in RowBinaryWithNamesAndTypes output format"},
{"input_format_binary_decode_types_in_binary_format", false, false, "Added new setting to allow to read type names in binary format in RowBinaryWithNamesAndTypes input format"},
{"output_format_native_encode_types_in_binary_format", false, false, "Added new setting to allow to write type names in binary format in Native output format"},
{"input_format_native_decode_types_in_binary_format", false, false, "Added new setting to allow to read type names in binary format in Native output format"},
{"read_in_order_use_buffering", false, true, "Use buffering before merging while reading in order of primary key"},
{"enable_named_columns_in_function_tuple", false, true, "Generate named tuples in function tuple() when all names are unique and can be treated as unquoted identifiers."},
{"input_format_json_case_insensitive_column_matching", false, false, "Ignore case when matching JSON keys with CH columns."},
{"optimize_trivial_insert_select", true, false, "The optimization does not make sense in many cases."},
{"dictionary_validate_primary_key_type", false, false, "Validate primary key type for dictionaries. By default id type for simple layouts will be implicitly converted to UInt64."},
{"collect_hash_table_stats_during_joins", false, true, "New setting."},
{"max_size_to_preallocate_for_joins", 0, 100'000'000, "New setting."},
{"input_format_orc_reader_time_zone_name", "GMT", "GMT", "The time zone name for ORC row reader, the default ORC row reader's time zone is GMT."},
{"lightweight_mutation_projection_mode", "throw", "throw", "When lightweight delete happens on a table with projection(s), the possible operations include throw the exception as projection exists, or drop all projection related to this table then do lightweight delete."},
{"database_replicated_allow_heavy_create", true, false, "Long-running DDL queries (CREATE AS SELECT and POPULATE) for Replicated database engine was forbidden"},
{"query_plan_merge_filters", false, false, "Allow to merge filters in the query plan"},
{"azure_sdk_max_retries", 10, 10, "Maximum number of retries in azure sdk"},
{"azure_sdk_retry_initial_backoff_ms", 10, 10, "Minimal backoff between retries in azure sdk"},
{"azure_sdk_retry_max_backoff_ms", 1000, 1000, "Maximal backoff between retries in azure sdk"},
{"merge_tree_min_bytes_per_task_for_remote_reading", 4194304, 2097152, "Value is unified with `filesystem_prefetch_min_bytes_for_single_read_task`"},
{"ignore_on_cluster_for_replicated_named_collections_queries", false, false, "Ignore ON CLUSTER clause for replicated named collections management queries."},
{"backup_restore_s3_retry_attempts", 1000,1000, "Setting for Aws::Client::RetryStrategy, Aws::Client does retries itself, 0 means no retries. It takes place only for backup/restore."},
{"postgresql_connection_attempt_timeout", 2, 2, "Allow to control 'connect_timeout' parameter of PostgreSQL connection."},
{"postgresql_connection_pool_retries", 2, 2, "Allow to control the number of retries in PostgreSQL connection pool."}
}},
{"24.6", {{"materialize_skip_indexes_on_insert", true, true, "Added new setting to allow to disable materialization of skip indexes on insert"},
{"materialize_statistics_on_insert", true, true, "Added new setting to allow to disable materialization of statistics on insert"},
{"input_format_parquet_use_native_reader", false, false, "When reading Parquet files, to use native reader instead of arrow reader."},
{"hdfs_throw_on_zero_files_match", false, false, "Allow to throw an error when ListObjects request cannot match any files in HDFS engine instead of empty query result"},
{"azure_throw_on_zero_files_match", false, false, "Allow to throw an error when ListObjects request cannot match any files in AzureBlobStorage engine instead of empty query result"},
{"s3_validate_request_settings", true, true, "Allow to disable S3 request settings validation"},
{"allow_experimental_full_text_index", false, false, "Enable experimental full-text index"},
{"azure_skip_empty_files", false, false, "Allow to skip empty files in azure table engine"},
{"hdfs_ignore_file_doesnt_exist", false, false, "Allow to return 0 rows when the requested files don't exist instead of throwing an exception in HDFS table engine"},
{"azure_ignore_file_doesnt_exist", false, false, "Allow to return 0 rows when the requested files don't exist instead of throwing an exception in AzureBlobStorage table engine"},
{"s3_ignore_file_doesnt_exist", false, false, "Allow to return 0 rows when the requested files don't exist instead of throwing an exception in S3 table engine"},
{"s3_max_part_number", 10000, 10000, "Maximum part number number for s3 upload part"},
{"s3_max_single_operation_copy_size", 32 * 1024 * 1024, 32 * 1024 * 1024, "Maximum size for a single copy operation in s3"},
{"input_format_parquet_max_block_size", 8192, DEFAULT_BLOCK_SIZE, "Increase block size for parquet reader."},
{"input_format_parquet_prefer_block_bytes", 0, DEFAULT_BLOCK_SIZE * 256, "Average block bytes output by parquet reader."},
{"enable_blob_storage_log", true, true, "Write information about blob storage operations to system.blob_storage_log table"},
{"allow_deprecated_snowflake_conversion_functions", true, false, "Disabled deprecated functions snowflakeToDateTime[64] and dateTime[64]ToSnowflake."},
{"allow_statistic_optimize", false, false, "Old setting which popped up here being renamed."},
{"allow_experimental_statistic", false, false, "Old setting which popped up here being renamed."},
{"allow_statistics_optimize", false, false, "The setting was renamed. The previous name is `allow_statistic_optimize`."},
{"allow_experimental_statistics", false, false, "The setting was renamed. The previous name is `allow_experimental_statistic`."},
{"enable_vertical_final", false, true, "Enable vertical final by default again after fixing bug"},
{"parallel_replicas_custom_key_range_lower", 0, 0, "Add settings to control the range filter when using parallel replicas with dynamic shards"},
{"parallel_replicas_custom_key_range_upper", 0, 0, "Add settings to control the range filter when using parallel replicas with dynamic shards. A value of 0 disables the upper limit"},
{"output_format_pretty_display_footer_column_names", 0, 1, "Add a setting to display column names in the footer if there are many rows. Threshold value is controlled by output_format_pretty_display_footer_column_names_min_rows."},
{"output_format_pretty_display_footer_column_names_min_rows", 0, 50, "Add a setting to control the threshold value for setting output_format_pretty_display_footer_column_names_min_rows. Default 50."},
{"output_format_csv_serialize_tuple_into_separate_columns", true, true, "A new way of how interpret tuples in CSV format was added."},
{"input_format_csv_deserialize_separate_columns_into_tuple", true, true, "A new way of how interpret tuples in CSV format was added."},
{"input_format_csv_try_infer_strings_from_quoted_tuples", true, true, "A new way of how interpret tuples in CSV format was added."},
}},
{"24.5", {{"allow_deprecated_error_prone_window_functions", true, false, "Allow usage of deprecated error prone window functions (neighbor, runningAccumulate, runningDifferenceStartingWithFirstValue, runningDifference)"},
{"allow_experimental_join_condition", false, false, "Support join with inequal conditions which involve columns from both left and right table. e.g. t1.y < t2.y."},
{"input_format_tsv_crlf_end_of_line", false, false, "Enables reading of CRLF line endings with TSV formats"},
{"output_format_parquet_use_custom_encoder", false, true, "Enable custom Parquet encoder."},
{"cross_join_min_rows_to_compress", 0, 10000000, "Minimal count of rows to compress block in CROSS JOIN. Zero value means - disable this threshold. This block is compressed when any of the two thresholds (by rows or by bytes) are reached."},
{"cross_join_min_bytes_to_compress", 0, 1_GiB, "Minimal size of block to compress in CROSS JOIN. Zero value means - disable this threshold. This block is compressed when any of the two thresholds (by rows or by bytes) are reached."},
{"http_max_chunk_size", 0, 0, "Internal limitation"},
{"prefer_external_sort_block_bytes", 0, DEFAULT_BLOCK_SIZE * 256, "Prefer maximum block bytes for external sort, reduce the memory usage during merging."},
{"input_format_force_null_for_omitted_fields", false, false, "Disable type-defaults for omitted fields when needed"},
{"cast_string_to_dynamic_use_inference", false, false, "Add setting to allow converting String to Dynamic through parsing"},
{"allow_experimental_dynamic_type", false, false, "Add new experimental Dynamic type"},
{"azure_max_blocks_in_multipart_upload", 50000, 50000, "Maximum number of blocks in multipart upload for Azure."},
}},
{"24.4", {{"input_format_json_throw_on_bad_escape_sequence", true, true, "Allow to save JSON strings with bad escape sequences"},
{"max_parsing_threads", 0, 0, "Add a separate setting to control number of threads in parallel parsing from files"},
{"ignore_drop_queries_probability", 0, 0, "Allow to ignore drop queries in server with specified probability for testing purposes"},
{"lightweight_deletes_sync", 2, 2, "The same as 'mutation_sync', but controls only execution of lightweight deletes"},
{"query_cache_system_table_handling", "save", "throw", "The query cache no longer caches results of queries against system tables"},
{"input_format_json_ignore_unnecessary_fields", false, true, "Ignore unnecessary fields and not parse them. Enabling this may not throw exceptions on json strings of invalid format or with duplicated fields"},
{"input_format_hive_text_allow_variable_number_of_columns", false, true, "Ignore extra columns in Hive Text input (if file has more columns than expected) and treat missing fields in Hive Text input as default values."},
{"allow_experimental_database_replicated", false, true, "Database engine Replicated is now in Beta stage"},
{"temporary_data_in_cache_reserve_space_wait_lock_timeout_milliseconds", (10 * 60 * 1000), (10 * 60 * 1000), "Wait time to lock cache for sapce reservation in temporary data in filesystem cache"},
{"optimize_rewrite_sum_if_to_count_if", false, true, "Only available for the analyzer, where it works correctly"},
{"azure_allow_parallel_part_upload", "true", "true", "Use multiple threads for azure multipart upload."},
{"max_recursive_cte_evaluation_depth", DBMS_RECURSIVE_CTE_MAX_EVALUATION_DEPTH, DBMS_RECURSIVE_CTE_MAX_EVALUATION_DEPTH, "Maximum limit on recursive CTE evaluation depth"},
{"query_plan_convert_outer_join_to_inner_join", false, true, "Allow to convert OUTER JOIN to INNER JOIN if filter after JOIN always filters default values"},
}},
{"24.3", {{"s3_connect_timeout_ms", 1000, 1000, "Introduce new dedicated setting for s3 connection timeout"},
{"allow_experimental_shared_merge_tree", false, true, "The setting is obsolete"},
{"use_page_cache_for_disks_without_file_cache", false, false, "Added userspace page cache"},
{"read_from_page_cache_if_exists_otherwise_bypass_cache", false, false, "Added userspace page cache"},
{"page_cache_inject_eviction", false, false, "Added userspace page cache"},
{"default_table_engine", "None", "MergeTree", "Set default table engine to MergeTree for better usability"},
{"input_format_json_use_string_type_for_ambiguous_paths_in_named_tuples_inference_from_objects", false, false, "Allow to use String type for ambiguous paths during named tuple inference from JSON objects"},
{"traverse_shadow_remote_data_paths", false, false, "Traverse shadow directory when query system.remote_data_paths."},
{"throw_if_deduplication_in_dependent_materialized_views_enabled_with_async_insert", false, true, "Deduplication in dependent materialized view cannot work together with async inserts."},
{"parallel_replicas_allow_in_with_subquery", false, true, "If true, subquery for IN will be executed on every follower replica"},
{"log_processors_profiles", false, true, "Enable by default"},
{"function_locate_has_mysql_compatible_argument_order", false, true, "Increase compatibility with MySQL's locate function."},
{"allow_suspicious_primary_key", true, false, "Forbid suspicious PRIMARY KEY/ORDER BY for MergeTree (i.e. SimpleAggregateFunction)"},
{"filesystem_cache_reserve_space_wait_lock_timeout_milliseconds", 1000, 1000, "Wait time to lock cache for sapce reservation in filesystem cache"},
{"max_parser_backtracks", 0, 1000000, "Limiting the complexity of parsing"},
{"analyzer_compatibility_join_using_top_level_identifier", false, false, "Force to resolve identifier in JOIN USING from projection"},
{"distributed_insert_skip_read_only_replicas", false, false, "If true, INSERT into Distributed will skip read-only replicas"},
{"keeper_max_retries", 10, 10, "Max retries for general keeper operations"},
{"keeper_retry_initial_backoff_ms", 100, 100, "Initial backoff timeout for general keeper operations"},
{"keeper_retry_max_backoff_ms", 5000, 5000, "Max backoff timeout for general keeper operations"},
{"s3queue_allow_experimental_sharded_mode", false, false, "Enable experimental sharded mode of S3Queue table engine. It is experimental because it will be rewritten"},
{"allow_experimental_analyzer", false, true, "Enable analyzer and planner by default."},
{"merge_tree_read_split_ranges_into_intersecting_and_non_intersecting_injection_probability", 0.0, 0.0, "For testing of `PartsSplitter` - split read ranges into intersecting and non intersecting every time you read from MergeTree with the specified probability."},
{"allow_get_client_http_header", false, false, "Introduced a new function."},
{"output_format_pretty_row_numbers", false, true, "It is better for usability."},
{"output_format_pretty_max_value_width_apply_for_single_value", true, false, "Single values in Pretty formats won't be cut."},
{"output_format_parquet_string_as_string", false, true, "ClickHouse allows arbitrary binary data in the String data type, which is typically UTF-8. Parquet/ORC/Arrow Strings only support UTF-8. That's why you can choose which Arrow's data type to use for the ClickHouse String data type - String or Binary. While Binary would be more correct and compatible, using String by default will correspond to user expectations in most cases."},
{"output_format_orc_string_as_string", false, true, "ClickHouse allows arbitrary binary data in the String data type, which is typically UTF-8. Parquet/ORC/Arrow Strings only support UTF-8. That's why you can choose which Arrow's data type to use for the ClickHouse String data type - String or Binary. While Binary would be more correct and compatible, using String by default will correspond to user expectations in most cases."},
{"output_format_arrow_string_as_string", false, true, "ClickHouse allows arbitrary binary data in the String data type, which is typically UTF-8. Parquet/ORC/Arrow Strings only support UTF-8. That's why you can choose which Arrow's data type to use for the ClickHouse String data type - String or Binary. While Binary would be more correct and compatible, using String by default will correspond to user expectations in most cases."},
{"output_format_parquet_compression_method", "lz4", "zstd", "Parquet/ORC/Arrow support many compression methods, including lz4 and zstd. ClickHouse supports each and every compression method. Some inferior tools, such as 'duckdb', lack support for the faster `lz4` compression method, that's why we set zstd by default."},
{"output_format_orc_compression_method", "lz4", "zstd", "Parquet/ORC/Arrow support many compression methods, including lz4 and zstd. ClickHouse supports each and every compression method. Some inferior tools, such as 'duckdb', lack support for the faster `lz4` compression method, that's why we set zstd by default."},
{"output_format_pretty_highlight_digit_groups", false, true, "If enabled and if output is a terminal, highlight every digit corresponding to the number of thousands, millions, etc. with underline."},
{"geo_distance_returns_float64_on_float64_arguments", false, true, "Increase the default precision."},
{"azure_max_inflight_parts_for_one_file", 20, 20, "The maximum number of a concurrent loaded parts in multipart upload request. 0 means unlimited."},
{"azure_strict_upload_part_size", 0, 0, "The exact size of part to upload during multipart upload to Azure blob storage."},
{"azure_min_upload_part_size", 16*1024*1024, 16*1024*1024, "The minimum size of part to upload during multipart upload to Azure blob storage."},
{"azure_max_upload_part_size", 5ull*1024*1024*1024, 5ull*1024*1024*1024, "The maximum size of part to upload during multipart upload to Azure blob storage."},
{"azure_upload_part_size_multiply_factor", 2, 2, "Multiply azure_min_upload_part_size by this factor each time azure_multiply_parts_count_threshold parts were uploaded from a single write to Azure blob storage."},
{"azure_upload_part_size_multiply_parts_count_threshold", 500, 500, "Each time this number of parts was uploaded to Azure blob storage, azure_min_upload_part_size is multiplied by azure_upload_part_size_multiply_factor."},
{"output_format_csv_serialize_tuple_into_separate_columns", true, true, "A new way of how interpret tuples in CSV format was added."},
{"input_format_csv_deserialize_separate_columns_into_tuple", true, true, "A new way of how interpret tuples in CSV format was added."},
{"input_format_csv_try_infer_strings_from_quoted_tuples", true, true, "A new way of how interpret tuples in CSV format was added."},
}},
{"24.2", {{"allow_suspicious_variant_types", true, false, "Don't allow creating Variant type with suspicious variants by default"},
{"validate_experimental_and_suspicious_types_inside_nested_types", false, true, "Validate usage of experimental and suspicious types inside nested types"},
{"output_format_values_escape_quote_with_quote", false, false, "If true escape ' with '', otherwise quoted with \\'"},
{"output_format_pretty_single_large_number_tip_threshold", 0, 1'000'000, "Print a readable number tip on the right side of the table if the block consists of a single number which exceeds this value (except 0)"},
{"input_format_try_infer_exponent_floats", true, false, "Don't infer floats in exponential notation by default"},
{"query_plan_optimize_prewhere", true, true, "Allow to push down filter to PREWHERE expression for supported storages"},
{"async_insert_max_data_size", 1000000, 10485760, "The previous value appeared to be too small."},
{"async_insert_poll_timeout_ms", 10, 10, "Timeout in milliseconds for polling data from asynchronous insert queue"},
{"async_insert_use_adaptive_busy_timeout", false, true, "Use adaptive asynchronous insert timeout"},
{"async_insert_busy_timeout_min_ms", 50, 50, "The minimum value of the asynchronous insert timeout in milliseconds; it also serves as the initial value, which may be increased later by the adaptive algorithm"},
{"async_insert_busy_timeout_max_ms", 200, 200, "The minimum value of the asynchronous insert timeout in milliseconds; async_insert_busy_timeout_ms is aliased to async_insert_busy_timeout_max_ms"},
{"async_insert_busy_timeout_increase_rate", 0.2, 0.2, "The exponential growth rate at which the adaptive asynchronous insert timeout increases"},
{"async_insert_busy_timeout_decrease_rate", 0.2, 0.2, "The exponential growth rate at which the adaptive asynchronous insert timeout decreases"},
{"format_template_row_format", "", "", "Template row format string can be set directly in query"},
{"format_template_resultset_format", "", "", "Template result set format string can be set in query"},
{"split_parts_ranges_into_intersecting_and_non_intersecting_final", true, true, "Allow to split parts ranges into intersecting and non intersecting during FINAL optimization"},
{"split_intersecting_parts_ranges_into_layers_final", true, true, "Allow to split intersecting parts ranges into layers during FINAL optimization"},
{"azure_max_single_part_copy_size", 256*1024*1024, 256*1024*1024, "The maximum size of object to copy using single part copy to Azure blob storage."},
{"min_external_table_block_size_rows", DEFAULT_INSERT_BLOCK_SIZE, DEFAULT_INSERT_BLOCK_SIZE, "Squash blocks passed to external table to specified size in rows, if blocks are not big enough"},
{"min_external_table_block_size_bytes", DEFAULT_INSERT_BLOCK_SIZE * 256, DEFAULT_INSERT_BLOCK_SIZE * 256, "Squash blocks passed to external table to specified size in bytes, if blocks are not big enough."},
{"parallel_replicas_prefer_local_join", true, true, "If true, and JOIN can be executed with parallel replicas algorithm, and all storages of right JOIN part are *MergeTree, local JOIN will be used instead of GLOBAL JOIN."},
{"optimize_time_filter_with_preimage", true, true, "Optimize Date and DateTime predicates by converting functions into equivalent comparisons without conversions (e.g. toYear(col) = 2023 -> col >= '2023-01-01' AND col <= '2023-12-31')"},
{"extract_key_value_pairs_max_pairs_per_row", 0, 0, "Max number of pairs that can be produced by the `extractKeyValuePairs` function. Used as a safeguard against consuming too much memory."},
{"default_view_definer", "CURRENT_USER", "CURRENT_USER", "Allows to set default `DEFINER` option while creating a view"},
{"default_materialized_view_sql_security", "DEFINER", "DEFINER", "Allows to set a default value for SQL SECURITY option when creating a materialized view"},
{"default_normal_view_sql_security", "INVOKER", "INVOKER", "Allows to set default `SQL SECURITY` option while creating a normal view"},
{"mysql_map_string_to_text_in_show_columns", false, true, "Reduce the configuration effort to connect ClickHouse with BI tools."},
{"mysql_map_fixed_string_to_text_in_show_columns", false, true, "Reduce the configuration effort to connect ClickHouse with BI tools."},
}},
{"24.1", {{"print_pretty_type_names", false, true, "Better user experience."},
{"input_format_json_read_bools_as_strings", false, true, "Allow to read bools as strings in JSON formats by default"},
{"output_format_arrow_use_signed_indexes_for_dictionary", false, true, "Use signed indexes type for Arrow dictionaries by default as it's recommended"},
{"allow_experimental_variant_type", false, false, "Add new experimental Variant type"},
{"use_variant_as_common_type", false, false, "Allow to use Variant in if/multiIf if there is no common type"},
{"output_format_arrow_use_64_bit_indexes_for_dictionary", false, false, "Allow to use 64 bit indexes type in Arrow dictionaries"},
{"parallel_replicas_mark_segment_size", 128, 128, "Add new setting to control segment size in new parallel replicas coordinator implementation"},
{"ignore_materialized_views_with_dropped_target_table", false, false, "Add new setting to allow to ignore materialized views with dropped target table"},
{"output_format_compression_level", 3, 3, "Allow to change compression level in the query output"},
{"output_format_compression_zstd_window_log", 0, 0, "Allow to change zstd window log in the query output when zstd compression is used"},
{"enable_zstd_qat_codec", false, false, "Add new ZSTD_QAT codec"},
{"enable_vertical_final", false, true, "Use vertical final by default"},
{"output_format_arrow_use_64_bit_indexes_for_dictionary", false, false, "Allow to use 64 bit indexes type in Arrow dictionaries"},
{"max_rows_in_set_to_optimize_join", 100000, 0, "Disable join optimization as it prevents from read in order optimization"},
{"output_format_pretty_color", true, "auto", "Setting is changed to allow also for auto value, disabling ANSI escapes if output is not a tty"},
{"function_visible_width_behavior", 0, 1, "We changed the default behavior of `visibleWidth` to be more precise"},
{"max_estimated_execution_time", 0, 0, "Separate max_execution_time and max_estimated_execution_time"},
{"iceberg_engine_ignore_schema_evolution", false, false, "Allow to ignore schema evolution in Iceberg table engine"},
{"optimize_injective_functions_in_group_by", false, true, "Replace injective functions by it's arguments in GROUP BY section in analyzer"},
{"update_insert_deduplication_token_in_dependent_materialized_views", false, false, "Allow to update insert deduplication token with table identifier during insert in dependent materialized views"},
{"azure_max_unexpected_write_error_retries", 4, 4, "The maximum number of retries in case of unexpected errors during Azure blob storage write"},
{"split_parts_ranges_into_intersecting_and_non_intersecting_final", false, true, "Allow to split parts ranges into intersecting and non intersecting during FINAL optimization"},
{"split_intersecting_parts_ranges_into_layers_final", true, true, "Allow to split intersecting parts ranges into layers during FINAL optimization"}}},
{"23.12", {{"allow_suspicious_ttl_expressions", true, false, "It is a new setting, and in previous versions the behavior was equivalent to allowing."},
{"input_format_parquet_allow_missing_columns", false, true, "Allow missing columns in Parquet files by default"},
{"input_format_orc_allow_missing_columns", false, true, "Allow missing columns in ORC files by default"},
{"input_format_arrow_allow_missing_columns", false, true, "Allow missing columns in Arrow files by default"}}},
{"23.11", {{"parsedatetime_parse_without_leading_zeros", false, true, "Improved compatibility with MySQL DATE_FORMAT/STR_TO_DATE"}}},
{"23.9", {{"optimize_group_by_constant_keys", false, true, "Optimize group by constant keys by default"},
{"input_format_json_try_infer_named_tuples_from_objects", false, true, "Try to infer named Tuples from JSON objects by default"},
{"input_format_json_read_numbers_as_strings", false, true, "Allow to read numbers as strings in JSON formats by default"},
{"input_format_json_read_arrays_as_strings", false, true, "Allow to read arrays as strings in JSON formats by default"},
{"input_format_json_infer_incomplete_types_as_strings", false, true, "Allow to infer incomplete types as Strings in JSON formats by default"},
{"input_format_json_try_infer_numbers_from_strings", true, false, "Don't infer numbers from strings in JSON formats by default to prevent possible parsing errors"},
{"http_write_exception_in_output_format", false, true, "Output valid JSON/XML on exception in HTTP streaming."}}},
{"23.8", {{"rewrite_count_distinct_if_with_count_distinct_implementation", false, true, "Rewrite countDistinctIf with count_distinct_implementation configuration"}}},
{"23.7", {{"function_sleep_max_microseconds_per_block", 0, 3000000, "In previous versions, the maximum sleep time of 3 seconds was applied only for `sleep`, but not for `sleepEachRow` function. In the new version, we introduce this setting. If you set compatibility with the previous versions, we will disable the limit altogether."}}},
{"23.6", {{"http_send_timeout", 180, 30, "3 minutes seems crazy long. Note that this is timeout for a single network write call, not for the whole upload operation."},
{"http_receive_timeout", 180, 30, "See http_send_timeout."}}},
{"23.5", {{"input_format_parquet_preserve_order", true, false, "Allow Parquet reader to reorder rows for better parallelism."},
{"parallelize_output_from_storages", false, true, "Allow parallelism when executing queries that read from file/url/s3/etc. This may reorder rows."},
{"use_with_fill_by_sorting_prefix", false, true, "Columns preceding WITH FILL columns in ORDER BY clause form sorting prefix. Rows with different values in sorting prefix are filled independently"},
{"output_format_parquet_compliant_nested_types", false, true, "Change an internal field name in output Parquet file schema."}}},
{"23.4", {{"allow_suspicious_indices", true, false, "If true, index can defined with identical expressions"},
{"allow_nonconst_timezone_arguments", true, false, "Allow non-const timezone arguments in certain time-related functions like toTimeZone(), fromUnixTimestamp*(), snowflakeToDateTime*()."},
{"connect_timeout_with_failover_ms", 50, 1000, "Increase default connect timeout because of async connect"},
{"connect_timeout_with_failover_secure_ms", 100, 1000, "Increase default secure connect timeout because of async connect"},
{"hedged_connection_timeout_ms", 100, 50, "Start new connection in hedged requests after 50 ms instead of 100 to correspond with previous connect timeout"},
{"formatdatetime_f_prints_single_zero", true, false, "Improved compatibility with MySQL DATE_FORMAT()/STR_TO_DATE()"},
{"formatdatetime_parsedatetime_m_is_month_name", false, true, "Improved compatibility with MySQL DATE_FORMAT/STR_TO_DATE"}}},
{"23.3", {{"output_format_parquet_version", "1.0", "2.latest", "Use latest Parquet format version for output format"},
{"input_format_json_ignore_unknown_keys_in_named_tuple", false, true, "Improve parsing JSON objects as named tuples"},
{"input_format_native_allow_types_conversion", false, true, "Allow types conversion in Native input forma"},
{"output_format_arrow_compression_method", "none", "lz4_frame", "Use lz4 compression in Arrow output format by default"},
{"output_format_parquet_compression_method", "snappy", "lz4", "Use lz4 compression in Parquet output format by default"},
{"output_format_orc_compression_method", "none", "lz4_frame", "Use lz4 compression in ORC output format by default"},
{"async_query_sending_for_remote", false, true, "Create connections and send query async across shards"}}},
{"23.2", {{"output_format_parquet_fixed_string_as_fixed_byte_array", false, true, "Use Parquet FIXED_LENGTH_BYTE_ARRAY type for FixedString by default"},
{"output_format_arrow_fixed_string_as_fixed_byte_array", false, true, "Use Arrow FIXED_SIZE_BINARY type for FixedString by default"},
{"query_plan_remove_redundant_distinct", false, true, "Remove redundant Distinct step in query plan"},
{"optimize_duplicate_order_by_and_distinct", true, false, "Remove duplicate ORDER BY and DISTINCT if it's possible"},
{"insert_keeper_max_retries", 0, 20, "Enable reconnections to Keeper on INSERT, improve reliability"}}},
{"23.1", {{"input_format_json_read_objects_as_strings", 0, 1, "Enable reading nested json objects as strings while object type is experimental"},
{"input_format_json_defaults_for_missing_elements_in_named_tuple", false, true, "Allow missing elements in JSON objects while reading named tuples by default"},
{"input_format_csv_detect_header", false, true, "Detect header in CSV format by default"},
{"input_format_tsv_detect_header", false, true, "Detect header in TSV format by default"},
{"input_format_custom_detect_header", false, true, "Detect header in CustomSeparated format by default"},
{"query_plan_remove_redundant_sorting", false, true, "Remove redundant sorting in query plan. For example, sorting steps related to ORDER BY clauses in subqueries"}}},
{"22.12", {{"max_size_to_preallocate_for_aggregation", 10'000'000, 100'000'000, "This optimizes performance"},
{"query_plan_aggregation_in_order", 0, 1, "Enable some refactoring around query plan"},
{"format_binary_max_string_size", 0, 1_GiB, "Prevent allocating large amount of memory"}}},
{"22.11", {{"use_structure_from_insertion_table_in_table_functions", 0, 2, "Improve using structure from insertion table in table functions"}}},
{"22.9", {{"force_grouping_standard_compatibility", false, true, "Make GROUPING function output the same as in SQL standard and other DBMS"}}},
{"22.7", {{"cross_to_inner_join_rewrite", 1, 2, "Force rewrite comma join to inner"},
{"enable_positional_arguments", false, true, "Enable positional arguments feature by default"},
{"format_csv_allow_single_quotes", true, false, "Most tools don't treat single quote in CSV specially, don't do it by default too"}}},
{"22.6", {{"output_format_json_named_tuples_as_objects", false, true, "Allow to serialize named tuples as JSON objects in JSON formats by default"},
{"input_format_skip_unknown_fields", false, true, "Optimize reading subset of columns for some input formats"}}},
{"22.5", {{"memory_overcommit_ratio_denominator", 0, 1073741824, "Enable memory overcommit feature by default"},
{"memory_overcommit_ratio_denominator_for_user", 0, 1073741824, "Enable memory overcommit feature by default"}}},
{"22.4", {{"allow_settings_after_format_in_insert", true, false, "Do not allow SETTINGS after FORMAT for INSERT queries because ClickHouse interpret SETTINGS as some values, which is misleading"}}},
{"22.3", {{"cast_ipv4_ipv6_default_on_conversion_error", true, false, "Make functions cast(value, 'IPv4') and cast(value, 'IPv6') behave same as toIPv4 and toIPv6 functions"}}},
{"21.12", {{"stream_like_engine_allow_direct_select", true, false, "Do not allow direct select for Kafka/RabbitMQ/FileLog by default"}}},
{"21.9", {{"output_format_decimal_trailing_zeros", true, false, "Do not output trailing zeros in text representation of Decimal types by default for better looking output"},
{"use_hedged_requests", false, true, "Enable Hedged Requests feature by default"}}},
{"21.7", {{"legacy_column_name_of_tuple_literal", true, false, "Add this setting only for compatibility reasons. It makes sense to set to 'true', while doing rolling update of cluster from version lower than 21.7 to higher"}}},
{"21.5", {{"async_socket_for_remote", false, true, "Fix all problems and turn on asynchronous reads from socket for remote queries by default again"}}},
{"21.3", {{"async_socket_for_remote", true, false, "Turn off asynchronous reads from socket for remote queries because of some problems"},
{"optimize_normalize_count_variants", false, true, "Rewrite aggregate functions that semantically equals to count() as count() by default"},
{"normalize_function_names", false, true, "Normalize function names to their canonical names, this was needed for projection query routing"}}},
{"21.2", {{"enable_global_with_statement", false, true, "Propagate WITH statements to UNION queries and all subqueries by default"}}},
{"21.1", {{"insert_quorum_parallel", false, true, "Use parallel quorum inserts by default. It is significantly more convenient to use than sequential quorum inserts"},
{"input_format_null_as_default", false, true, "Allow to insert NULL as default for input formats by default"},
{"optimize_on_insert", false, true, "Enable data optimization on INSERT by default for better user experience"},
{"use_compact_format_in_distributed_parts_names", false, true, "Use compact format for async INSERT into Distributed tables by default"}}},
{"20.10", {{"format_regexp_escaping_rule", "Escaped", "Raw", "Use Raw as default escaping rule for Regexp format to male the behaviour more like to what users expect"}}},
{"20.7", {{"show_table_uuid_in_table_create_query_if_not_nil", true, false, "Stop showing UID of the table in its CREATE query for Engine=Atomic"}}},
{"20.5", {{"input_format_with_names_use_header", false, true, "Enable using header with names for formats with WithNames/WithNamesAndTypes suffixes"},
{"allow_suspicious_codecs", true, false, "Don't allow to specify meaningless compression codecs"}}},
{"20.4", {{"validate_polygons", false, true, "Throw exception if polygon is invalid in function pointInPolygon by default instead of returning possibly wrong results"}}},
{"19.18", {{"enable_scalar_subquery_optimization", false, true, "Prevent scalar subqueries from (de)serializing large scalar values and possibly avoid running the same subquery more than once"}}},
{"19.14", {{"any_join_distinct_right_table_keys", true, false, "Disable ANY RIGHT and ANY FULL JOINs by default to avoid inconsistency"}}},
{"19.12", {{"input_format_defaults_for_omitted_fields", false, true, "Enable calculation of complex default expressions for omitted fields for some input formats, because it should be the expected behaviour"}}},
{"19.5", {{"max_partitions_per_insert_block", 0, 100, "Add a limit for the number of partitions in one block"}}},
{"18.12.17", {{"enable_optimize_predicate_expression", 0, 1, "Optimize predicates to subqueries by default"}}},
{"24.12",
{
}
},
{"24.11",
{
}
},
{"24.10",
{
}
},
{"24.9",
{
}
},
{"24.8",
{
{"merge_tree_min_bytes_per_task_for_remote_reading", 4194304, 2097152, "Value is unified with `filesystem_prefetch_min_bytes_for_single_read_task`"},
}
},
{"24.7",
{
{"output_format_parquet_write_page_index", false, true, "Add a possibility to write page index into parquet files."},
{"output_format_binary_encode_types_in_binary_format", false, false, "Added new setting to allow to write type names in binary format in RowBinaryWithNamesAndTypes output format"},
{"input_format_binary_decode_types_in_binary_format", false, false, "Added new setting to allow to read type names in binary format in RowBinaryWithNamesAndTypes input format"},
{"output_format_native_encode_types_in_binary_format", false, false, "Added new setting to allow to write type names in binary format in Native output format"},
{"input_format_native_decode_types_in_binary_format", false, false, "Added new setting to allow to read type names in binary format in Native output format"},
{"read_in_order_use_buffering", false, true, "Use buffering before merging while reading in order of primary key"},
{"enable_named_columns_in_function_tuple", false, true, "Generate named tuples in function tuple() when all names are unique and can be treated as unquoted identifiers."},
{"input_format_json_case_insensitive_column_matching", false, false, "Ignore case when matching JSON keys with CH columns."},
{"optimize_trivial_insert_select", true, false, "The optimization does not make sense in many cases."},
{"dictionary_validate_primary_key_type", false, false, "Validate primary key type for dictionaries. By default id type for simple layouts will be implicitly converted to UInt64."},
{"collect_hash_table_stats_during_joins", false, true, "New setting."},
{"max_size_to_preallocate_for_joins", 0, 100'000'000, "New setting."},
{"input_format_orc_reader_time_zone_name", "GMT", "GMT", "The time zone name for ORC row reader, the default ORC row reader's time zone is GMT."}, {"lightweight_mutation_projection_mode", "throw", "throw", "When lightweight delete happens on a table with projection(s), the possible operations include throw the exception as projection exists, or drop all projection related to this table then do lightweight delete."},
{"database_replicated_allow_heavy_create", true, false, "Long-running DDL queries (CREATE AS SELECT and POPULATE) for Replicated database engine was forbidden"},
{"query_plan_merge_filters", false, false, "Allow to merge filters in the query plan"},
{"azure_sdk_max_retries", 10, 10, "Maximum number of retries in azure sdk"},
{"azure_sdk_retry_initial_backoff_ms", 10, 10, "Minimal backoff between retries in azure sdk"},
{"azure_sdk_retry_max_backoff_ms", 1000, 1000, "Maximal backoff between retries in azure sdk"},
{"ignore_on_cluster_for_replicated_named_collections_queries", false, false, "Ignore ON CLUSTER clause for replicated named collections management queries."},
{"backup_restore_s3_retry_attempts", 1000,1000, "Setting for Aws::Client::RetryStrategy, Aws::Client does retries itself, 0 means no retries. It takes place only for backup/restore."},
{"postgresql_connection_attempt_timeout", 2, 2, "Allow to control 'connect_timeout' parameter of PostgreSQL connection."},
{"postgresql_connection_pool_retries", 2, 2, "Allow to control the number of retries in PostgreSQL connection pool."}
}
},
{"24.6",
{
{"materialize_skip_indexes_on_insert", true, true, "Added new setting to allow to disable materialization of skip indexes on insert"},
{"materialize_statistics_on_insert", true, true, "Added new setting to allow to disable materialization of statistics on insert"},
{"input_format_parquet_use_native_reader", false, false, "When reading Parquet files, to use native reader instead of arrow reader."},
{"hdfs_throw_on_zero_files_match", false, false, "Allow to throw an error when ListObjects request cannot match any files in HDFS engine instead of empty query result"},
{"azure_throw_on_zero_files_match", false, false, "Allow to throw an error when ListObjects request cannot match any files in AzureBlobStorage engine instead of empty query result"},
{"s3_validate_request_settings", true, true, "Allow to disable S3 request settings validation"},
{"allow_experimental_full_text_index", false, false, "Enable experimental full-text index"},
{"azure_skip_empty_files", false, false, "Allow to skip empty files in azure table engine"},
{"hdfs_ignore_file_doesnt_exist", false, false, "Allow to return 0 rows when the requested files don't exist instead of throwing an exception in HDFS table engine"},
{"azure_ignore_file_doesnt_exist", false, false, "Allow to return 0 rows when the requested files don't exist instead of throwing an exception in AzureBlobStorage table engine"},
{"s3_ignore_file_doesnt_exist", false, false, "Allow to return 0 rows when the requested files don't exist instead of throwing an exception in S3 table engine"},
{"s3_max_part_number", 10000, 10000, "Maximum part number number for s3 upload part"},
{"s3_max_single_operation_copy_size", 32 * 1024 * 1024, 32 * 1024 * 1024, "Maximum size for a single copy operation in s3"},
{"input_format_parquet_max_block_size", 8192, DEFAULT_BLOCK_SIZE, "Increase block size for parquet reader."},
{"input_format_parquet_prefer_block_bytes", 0, DEFAULT_BLOCK_SIZE * 256, "Average block bytes output by parquet reader."},
{"enable_blob_storage_log", true, true, "Write information about blob storage operations to system.blob_storage_log table"},
{"allow_deprecated_snowflake_conversion_functions", true, false, "Disabled deprecated functions snowflakeToDateTime[64] and dateTime[64]ToSnowflake."},
{"allow_statistic_optimize", false, false, "Old setting which popped up here being renamed."},
{"allow_experimental_statistic", false, false, "Old setting which popped up here being renamed."},
{"allow_statistics_optimize", false, false, "The setting was renamed. The previous name is `allow_statistic_optimize`."},
{"allow_experimental_statistics", false, false, "The setting was renamed. The previous name is `allow_experimental_statistic`."},
{"enable_vertical_final", false, true, "Enable vertical final by default again after fixing bug"},
{"parallel_replicas_custom_key_range_lower", 0, 0, "Add settings to control the range filter when using parallel replicas with dynamic shards"},
{"parallel_replicas_custom_key_range_upper", 0, 0, "Add settings to control the range filter when using parallel replicas with dynamic shards. A value of 0 disables the upper limit"},
{"output_format_pretty_display_footer_column_names", 0, 1, "Add a setting to display column names in the footer if there are many rows. Threshold value is controlled by output_format_pretty_display_footer_column_names_min_rows."},
{"output_format_pretty_display_footer_column_names_min_rows", 0, 50, "Add a setting to control the threshold value for setting output_format_pretty_display_footer_column_names_min_rows. Default 50."},
{"output_format_csv_serialize_tuple_into_separate_columns", true, true, "A new way of how interpret tuples in CSV format was added."},
{"input_format_csv_deserialize_separate_columns_into_tuple", true, true, "A new way of how interpret tuples in CSV format was added."},
{"input_format_csv_try_infer_strings_from_quoted_tuples", true, true, "A new way of how interpret tuples in CSV format was added."},
}
},
{"24.5",
{
{"allow_deprecated_error_prone_window_functions", true, false, "Allow usage of deprecated error prone window functions (neighbor, runningAccumulate, runningDifferenceStartingWithFirstValue, runningDifference)"},
{"allow_experimental_join_condition", false, false, "Support join with inequal conditions which involve columns from both left and right table. e.g. t1.y < t2.y."},
{"input_format_tsv_crlf_end_of_line", false, false, "Enables reading of CRLF line endings with TSV formats"},
{"output_format_parquet_use_custom_encoder", false, true, "Enable custom Parquet encoder."},
{"cross_join_min_rows_to_compress", 0, 10000000, "Minimal count of rows to compress block in CROSS JOIN. Zero value means - disable this threshold. This block is compressed when any of the two thresholds (by rows or by bytes) are reached."},
{"cross_join_min_bytes_to_compress", 0, 1_GiB, "Minimal size of block to compress in CROSS JOIN. Zero value means - disable this threshold. This block is compressed when any of the two thresholds (by rows or by bytes) are reached."},
{"http_max_chunk_size", 0, 0, "Internal limitation"},
{"prefer_external_sort_block_bytes", 0, DEFAULT_BLOCK_SIZE * 256, "Prefer maximum block bytes for external sort, reduce the memory usage during merging."},
{"input_format_force_null_for_omitted_fields", false, false, "Disable type-defaults for omitted fields when needed"},
{"cast_string_to_dynamic_use_inference", false, false, "Add setting to allow converting String to Dynamic through parsing"},
{"allow_experimental_dynamic_type", false, false, "Add new experimental Dynamic type"},
{"azure_max_blocks_in_multipart_upload", 50000, 50000, "Maximum number of blocks in multipart upload for Azure."},
}
},
{"24.4",
{
{"input_format_json_throw_on_bad_escape_sequence", true, true, "Allow to save JSON strings with bad escape sequences"},
{"max_parsing_threads", 0, 0, "Add a separate setting to control number of threads in parallel parsing from files"},
{"ignore_drop_queries_probability", 0, 0, "Allow to ignore drop queries in server with specified probability for testing purposes"},
{"lightweight_deletes_sync", 2, 2, "The same as 'mutation_sync', but controls only execution of lightweight deletes"},
{"query_cache_system_table_handling", "save", "throw", "The query cache no longer caches results of queries against system tables"},
{"input_format_json_ignore_unnecessary_fields", false, true, "Ignore unnecessary fields and not parse them. Enabling this may not throw exceptions on json strings of invalid format or with duplicated fields"},
{"input_format_hive_text_allow_variable_number_of_columns", false, true, "Ignore extra columns in Hive Text input (if file has more columns than expected) and treat missing fields in Hive Text input as default values."},
{"allow_experimental_database_replicated", false, true, "Database engine Replicated is now in Beta stage"},
{"temporary_data_in_cache_reserve_space_wait_lock_timeout_milliseconds", (10 * 60 * 1000), (10 * 60 * 1000), "Wait time to lock cache for sapce reservation in temporary data in filesystem cache"},
{"optimize_rewrite_sum_if_to_count_if", false, true, "Only available for the analyzer, where it works correctly"},
{"azure_allow_parallel_part_upload", "true", "true", "Use multiple threads for azure multipart upload."},
{"max_recursive_cte_evaluation_depth", DBMS_RECURSIVE_CTE_MAX_EVALUATION_DEPTH, DBMS_RECURSIVE_CTE_MAX_EVALUATION_DEPTH, "Maximum limit on recursive CTE evaluation depth"},
{"query_plan_convert_outer_join_to_inner_join", false, true, "Allow to convert OUTER JOIN to INNER JOIN if filter after JOIN always filters default values"},
}
},
{"24.3",
{
{"s3_connect_timeout_ms", 1000, 1000, "Introduce new dedicated setting for s3 connection timeout"},
{"allow_experimental_shared_merge_tree", false, true, "The setting is obsolete"},
{"use_page_cache_for_disks_without_file_cache", false, false, "Added userspace page cache"},
{"read_from_page_cache_if_exists_otherwise_bypass_cache", false, false, "Added userspace page cache"},
{"page_cache_inject_eviction", false, false, "Added userspace page cache"},
{"default_table_engine", "None", "MergeTree", "Set default table engine to MergeTree for better usability"},
{"input_format_json_use_string_type_for_ambiguous_paths_in_named_tuples_inference_from_objects", false, false, "Allow to use String type for ambiguous paths during named tuple inference from JSON objects"},
{"traverse_shadow_remote_data_paths", false, false, "Traverse shadow directory when query system.remote_data_paths."},
{"throw_if_deduplication_in_dependent_materialized_views_enabled_with_async_insert", false, true, "Deduplication in dependent materialized view cannot work together with async inserts."},
{"parallel_replicas_allow_in_with_subquery", false, true, "If true, subquery for IN will be executed on every follower replica"},
{"log_processors_profiles", false, true, "Enable by default"},
{"function_locate_has_mysql_compatible_argument_order", false, true, "Increase compatibility with MySQL's locate function."},
{"allow_suspicious_primary_key", true, false, "Forbid suspicious PRIMARY KEY/ORDER BY for MergeTree (i.e. SimpleAggregateFunction)"},
{"filesystem_cache_reserve_space_wait_lock_timeout_milliseconds", 1000, 1000, "Wait time to lock cache for sapce reservation in filesystem cache"},
{"max_parser_backtracks", 0, 1000000, "Limiting the complexity of parsing"},
{"analyzer_compatibility_join_using_top_level_identifier", false, false, "Force to resolve identifier in JOIN USING from projection"},
{"distributed_insert_skip_read_only_replicas", false, false, "If true, INSERT into Distributed will skip read-only replicas"},
{"keeper_max_retries", 10, 10, "Max retries for general keeper operations"},
{"keeper_retry_initial_backoff_ms", 100, 100, "Initial backoff timeout for general keeper operations"},
{"keeper_retry_max_backoff_ms", 5000, 5000, "Max backoff timeout for general keeper operations"},
{"s3queue_allow_experimental_sharded_mode", false, false, "Enable experimental sharded mode of S3Queue table engine. It is experimental because it will be rewritten"},
{"allow_experimental_analyzer", false, true, "Enable analyzer and planner by default."},
{"merge_tree_read_split_ranges_into_intersecting_and_non_intersecting_injection_probability", 0.0, 0.0, "For testing of `PartsSplitter` - split read ranges into intersecting and non intersecting every time you read from MergeTree with the specified probability."},
{"allow_get_client_http_header", false, false, "Introduced a new function."},
{"output_format_pretty_row_numbers", false, true, "It is better for usability."},
{"output_format_pretty_max_value_width_apply_for_single_value", true, false, "Single values in Pretty formats won't be cut."},
{"output_format_parquet_string_as_string", false, true, "ClickHouse allows arbitrary binary data in the String data type, which is typically UTF-8. Parquet/ORC/Arrow Strings only support UTF-8. That's why you can choose which Arrow's data type to use for the ClickHouse String data type - String or Binary. While Binary would be more correct and compatible, using String by default will correspond to user expectations in most cases."},
{"output_format_orc_string_as_string", false, true, "ClickHouse allows arbitrary binary data in the String data type, which is typically UTF-8. Parquet/ORC/Arrow Strings only support UTF-8. That's why you can choose which Arrow's data type to use for the ClickHouse String data type - String or Binary. While Binary would be more correct and compatible, using String by default will correspond to user expectations in most cases."},
{"output_format_arrow_string_as_string", false, true, "ClickHouse allows arbitrary binary data in the String data type, which is typically UTF-8. Parquet/ORC/Arrow Strings only support UTF-8. That's why you can choose which Arrow's data type to use for the ClickHouse String data type - String or Binary. While Binary would be more correct and compatible, using String by default will correspond to user expectations in most cases."},
{"output_format_parquet_compression_method", "lz4", "zstd", "Parquet/ORC/Arrow support many compression methods, including lz4 and zstd. ClickHouse supports each and every compression method. Some inferior tools, such as 'duckdb', lack support for the faster `lz4` compression method, that's why we set zstd by default."},
{"output_format_orc_compression_method", "lz4", "zstd", "Parquet/ORC/Arrow support many compression methods, including lz4 and zstd. ClickHouse supports each and every compression method. Some inferior tools, such as 'duckdb', lack support for the faster `lz4` compression method, that's why we set zstd by default."},
{"output_format_pretty_highlight_digit_groups", false, true, "If enabled and if output is a terminal, highlight every digit corresponding to the number of thousands, millions, etc. with underline."},
{"geo_distance_returns_float64_on_float64_arguments", false, true, "Increase the default precision."},
{"azure_max_inflight_parts_for_one_file", 20, 20, "The maximum number of a concurrent loaded parts in multipart upload request. 0 means unlimited."},
{"azure_strict_upload_part_size", 0, 0, "The exact size of part to upload during multipart upload to Azure blob storage."},
{"azure_min_upload_part_size", 16*1024*1024, 16*1024*1024, "The minimum size of part to upload during multipart upload to Azure blob storage."},
{"azure_max_upload_part_size", 5ull*1024*1024*1024, 5ull*1024*1024*1024, "The maximum size of part to upload during multipart upload to Azure blob storage."},
{"azure_upload_part_size_multiply_factor", 2, 2, "Multiply azure_min_upload_part_size by this factor each time azure_multiply_parts_count_threshold parts were uploaded from a single write to Azure blob storage."},
{"azure_upload_part_size_multiply_parts_count_threshold", 500, 500, "Each time this number of parts was uploaded to Azure blob storage, azure_min_upload_part_size is multiplied by azure_upload_part_size_multiply_factor."},
{"output_format_csv_serialize_tuple_into_separate_columns", true, true, "A new way of how interpret tuples in CSV format was added."},
{"input_format_csv_deserialize_separate_columns_into_tuple", true, true, "A new way of how interpret tuples in CSV format was added."},
{"input_format_csv_try_infer_strings_from_quoted_tuples", true, true, "A new way of how interpret tuples in CSV format was added."},
}
},
{"24.2",
{
{"allow_suspicious_variant_types", true, false, "Don't allow creating Variant type with suspicious variants by default"},
{"validate_experimental_and_suspicious_types_inside_nested_types", false, true, "Validate usage of experimental and suspicious types inside nested types"},
{"output_format_values_escape_quote_with_quote", false, false, "If true escape ' with '', otherwise quoted with \\'"},
{"output_format_pretty_single_large_number_tip_threshold", 0, 1'000'000, "Print a readable number tip on the right side of the table if the block consists of a single number which exceeds this value (except 0)"},
{"input_format_try_infer_exponent_floats", true, false, "Don't infer floats in exponential notation by default"},
{"query_plan_optimize_prewhere", true, true, "Allow to push down filter to PREWHERE expression for supported storages"},
{"async_insert_max_data_size", 1000000, 10485760, "The previous value appeared to be too small."},
{"async_insert_poll_timeout_ms", 10, 10, "Timeout in milliseconds for polling data from asynchronous insert queue"},
{"async_insert_use_adaptive_busy_timeout", false, true, "Use adaptive asynchronous insert timeout"},
{"async_insert_busy_timeout_min_ms", 50, 50, "The minimum value of the asynchronous insert timeout in milliseconds; it also serves as the initial value, which may be increased later by the adaptive algorithm"},
{"async_insert_busy_timeout_max_ms", 200, 200, "The minimum value of the asynchronous insert timeout in milliseconds; async_insert_busy_timeout_ms is aliased to async_insert_busy_timeout_max_ms"},
{"async_insert_busy_timeout_increase_rate", 0.2, 0.2, "The exponential growth rate at which the adaptive asynchronous insert timeout increases"},
{"async_insert_busy_timeout_decrease_rate", 0.2, 0.2, "The exponential growth rate at which the adaptive asynchronous insert timeout decreases"},
{"format_template_row_format", "", "", "Template row format string can be set directly in query"},
{"format_template_resultset_format", "", "", "Template result set format string can be set in query"},
{"split_parts_ranges_into_intersecting_and_non_intersecting_final", true, true, "Allow to split parts ranges into intersecting and non intersecting during FINAL optimization"},
{"split_intersecting_parts_ranges_into_layers_final", true, true, "Allow to split intersecting parts ranges into layers during FINAL optimization"},
{"azure_max_single_part_copy_size", 256*1024*1024, 256*1024*1024, "The maximum size of object to copy using single part copy to Azure blob storage."},
{"min_external_table_block_size_rows", DEFAULT_INSERT_BLOCK_SIZE, DEFAULT_INSERT_BLOCK_SIZE, "Squash blocks passed to external table to specified size in rows, if blocks are not big enough"},
{"min_external_table_block_size_bytes", DEFAULT_INSERT_BLOCK_SIZE * 256, DEFAULT_INSERT_BLOCK_SIZE * 256, "Squash blocks passed to external table to specified size in bytes, if blocks are not big enough."},
{"parallel_replicas_prefer_local_join", true, true, "If true, and JOIN can be executed with parallel replicas algorithm, and all storages of right JOIN part are *MergeTree, local JOIN will be used instead of GLOBAL JOIN."},
{"optimize_time_filter_with_preimage", true, true, "Optimize Date and DateTime predicates by converting functions into equivalent comparisons without conversions (e.g. toYear(col) = 2023 -> col >= '2023-01-01' AND col <= '2023-12-31')"},
{"extract_key_value_pairs_max_pairs_per_row", 0, 0, "Max number of pairs that can be produced by the `extractKeyValuePairs` function. Used as a safeguard against consuming too much memory."},
{"default_view_definer", "CURRENT_USER", "CURRENT_USER", "Allows to set default `DEFINER` option while creating a view"},
{"default_materialized_view_sql_security", "DEFINER", "DEFINER", "Allows to set a default value for SQL SECURITY option when creating a materialized view"},
{"default_normal_view_sql_security", "INVOKER", "INVOKER", "Allows to set default `SQL SECURITY` option while creating a normal view"},
{"mysql_map_string_to_text_in_show_columns", false, true, "Reduce the configuration effort to connect ClickHouse with BI tools."},
{"mysql_map_fixed_string_to_text_in_show_columns", false, true, "Reduce the configuration effort to connect ClickHouse with BI tools."},
}
},
{"24.1",
{
{"print_pretty_type_names", false, true, "Better user experience."},
{"input_format_json_read_bools_as_strings", false, true, "Allow to read bools as strings in JSON formats by default"},
{"output_format_arrow_use_signed_indexes_for_dictionary", false, true, "Use signed indexes type for Arrow dictionaries by default as it's recommended"},
{"allow_experimental_variant_type", false, false, "Add new experimental Variant type"},
{"use_variant_as_common_type", false, false, "Allow to use Variant in if/multiIf if there is no common type"},
{"output_format_arrow_use_64_bit_indexes_for_dictionary", false, false, "Allow to use 64 bit indexes type in Arrow dictionaries"},
{"parallel_replicas_mark_segment_size", 128, 128, "Add new setting to control segment size in new parallel replicas coordinator implementation"},
{"ignore_materialized_views_with_dropped_target_table", false, false, "Add new setting to allow to ignore materialized views with dropped target table"},
{"output_format_compression_level", 3, 3, "Allow to change compression level in the query output"},
{"output_format_compression_zstd_window_log", 0, 0, "Allow to change zstd window log in the query output when zstd compression is used"},
{"enable_zstd_qat_codec", false, false, "Add new ZSTD_QAT codec"},
{"enable_vertical_final", false, true, "Use vertical final by default"},
{"output_format_arrow_use_64_bit_indexes_for_dictionary", false, false, "Allow to use 64 bit indexes type in Arrow dictionaries"},
{"max_rows_in_set_to_optimize_join", 100000, 0, "Disable join optimization as it prevents from read in order optimization"},
{"output_format_pretty_color", true, "auto", "Setting is changed to allow also for auto value, disabling ANSI escapes if output is not a tty"},
{"function_visible_width_behavior", 0, 1, "We changed the default behavior of `visibleWidth` to be more precise"},
{"max_estimated_execution_time", 0, 0, "Separate max_execution_time and max_estimated_execution_time"},
{"iceberg_engine_ignore_schema_evolution", false, false, "Allow to ignore schema evolution in Iceberg table engine"},
{"optimize_injective_functions_in_group_by", false, true, "Replace injective functions by it's arguments in GROUP BY section in analyzer"},
{"update_insert_deduplication_token_in_dependent_materialized_views", false, false, "Allow to update insert deduplication token with table identifier during insert in dependent materialized views"},
{"azure_max_unexpected_write_error_retries", 4, 4, "The maximum number of retries in case of unexpected errors during Azure blob storage write"},
{"split_parts_ranges_into_intersecting_and_non_intersecting_final", false, true, "Allow to split parts ranges into intersecting and non intersecting during FINAL optimization"},
{"split_intersecting_parts_ranges_into_layers_final", true, true, "Allow to split intersecting parts ranges into layers during FINAL optimization"}
}
},
{"23.12",
{
{"allow_suspicious_ttl_expressions", true, false, "It is a new setting, and in previous versions the behavior was equivalent to allowing."},
{"input_format_parquet_allow_missing_columns", false, true, "Allow missing columns in Parquet files by default"},
{"input_format_orc_allow_missing_columns", false, true, "Allow missing columns in ORC files by default"},
{"input_format_arrow_allow_missing_columns", false, true, "Allow missing columns in Arrow files by default"}
}
},
{"23.11",
{
{"parsedatetime_parse_without_leading_zeros", false, true, "Improved compatibility with MySQL DATE_FORMAT/STR_TO_DATE"}
}
},
{"23.9",
{
{"optimize_group_by_constant_keys", false, true, "Optimize group by constant keys by default"},
{"input_format_json_try_infer_named_tuples_from_objects", false, true, "Try to infer named Tuples from JSON objects by default"},
{"input_format_json_read_numbers_as_strings", false, true, "Allow to read numbers as strings in JSON formats by default"},
{"input_format_json_read_arrays_as_strings", false, true, "Allow to read arrays as strings in JSON formats by default"},
{"input_format_json_infer_incomplete_types_as_strings", false, true, "Allow to infer incomplete types as Strings in JSON formats by default"},
{"input_format_json_try_infer_numbers_from_strings", true, false, "Don't infer numbers from strings in JSON formats by default to prevent possible parsing errors"},
{"http_write_exception_in_output_format", false, true, "Output valid JSON/XML on exception in HTTP streaming."}
}
},
{"23.8",
{
{"rewrite_count_distinct_if_with_count_distinct_implementation", false, true, "Rewrite countDistinctIf with count_distinct_implementation configuration"}
}
},
{"23.7",
{
{"function_sleep_max_microseconds_per_block", 0, 3000000, "In previous versions, the maximum sleep time of 3 seconds was applied only for `sleep`, but not for `sleepEachRow` function. In the new version, we introduce this setting. If you set compatibility with the previous versions, we will disable the limit altogether."}
}
},
{"23.6",
{
{"http_send_timeout", 180, 30, "3 minutes seems crazy long. Note that this is timeout for a single network write call, not for the whole upload operation."},
{"http_receive_timeout", 180, 30, "See http_send_timeout."}
}
},
{"23.5",
{
{"input_format_parquet_preserve_order", true, false, "Allow Parquet reader to reorder rows for better parallelism."},
{"parallelize_output_from_storages", false, true, "Allow parallelism when executing queries that read from file/url/s3/etc. This may reorder rows."},
{"use_with_fill_by_sorting_prefix", false, true, "Columns preceding WITH FILL columns in ORDER BY clause form sorting prefix. Rows with different values in sorting prefix are filled independently"},
{"output_format_parquet_compliant_nested_types", false, true, "Change an internal field name in output Parquet file schema."}
}
},
{"23.4",
{
{"allow_suspicious_indices", true, false, "If true, index can defined with identical expressions"},
{"allow_nonconst_timezone_arguments", true, false, "Allow non-const timezone arguments in certain time-related functions like toTimeZone(), fromUnixTimestamp*(), snowflakeToDateTime*()."},
{"connect_timeout_with_failover_ms", 50, 1000, "Increase default connect timeout because of async connect"},
{"connect_timeout_with_failover_secure_ms", 100, 1000, "Increase default secure connect timeout because of async connect"},
{"hedged_connection_timeout_ms", 100, 50, "Start new connection in hedged requests after 50 ms instead of 100 to correspond with previous connect timeout"},
{"formatdatetime_f_prints_single_zero", true, false, "Improved compatibility with MySQL DATE_FORMAT()/STR_TO_DATE()"},
{"formatdatetime_parsedatetime_m_is_month_name", false, true, "Improved compatibility with MySQL DATE_FORMAT/STR_TO_DATE"}
}
},
{"23.3",
{
{"output_format_parquet_version", "1.0", "2.latest", "Use latest Parquet format version for output format"},
{"input_format_json_ignore_unknown_keys_in_named_tuple", false, true, "Improve parsing JSON objects as named tuples"},
{"input_format_native_allow_types_conversion", false, true, "Allow types conversion in Native input forma"},
{"output_format_arrow_compression_method", "none", "lz4_frame", "Use lz4 compression in Arrow output format by default"},
{"output_format_parquet_compression_method", "snappy", "lz4", "Use lz4 compression in Parquet output format by default"},
{"output_format_orc_compression_method", "none", "lz4_frame", "Use lz4 compression in ORC output format by default"},
{"async_query_sending_for_remote", false, true, "Create connections and send query async across shards"}
}
},
{"23.2",
{
{"output_format_parquet_fixed_string_as_fixed_byte_array", false, true, "Use Parquet FIXED_LENGTH_BYTE_ARRAY type for FixedString by default"},
{"output_format_arrow_fixed_string_as_fixed_byte_array", false, true, "Use Arrow FIXED_SIZE_BINARY type for FixedString by default"},
{"query_plan_remove_redundant_distinct", false, true, "Remove redundant Distinct step in query plan"},
{"optimize_duplicate_order_by_and_distinct", true, false, "Remove duplicate ORDER BY and DISTINCT if it's possible"},
{"insert_keeper_max_retries", 0, 20, "Enable reconnections to Keeper on INSERT, improve reliability"}
}
},
{"23.1",
{
{"input_format_json_read_objects_as_strings", 0, 1, "Enable reading nested json objects as strings while object type is experimental"},
{"input_format_json_defaults_for_missing_elements_in_named_tuple", false, true, "Allow missing elements in JSON objects while reading named tuples by default"},
{"input_format_csv_detect_header", false, true, "Detect header in CSV format by default"},
{"input_format_tsv_detect_header", false, true, "Detect header in TSV format by default"},
{"input_format_custom_detect_header", false, true, "Detect header in CustomSeparated format by default"},
{"query_plan_remove_redundant_sorting", false, true, "Remove redundant sorting in query plan. For example, sorting steps related to ORDER BY clauses in subqueries"}
}
},
{"22.12",
{
{"max_size_to_preallocate_for_aggregation", 10'000'000, 100'000'000, "This optimizes performance"},
{"query_plan_aggregation_in_order", 0, 1, "Enable some refactoring around query plan"},
{"format_binary_max_string_size", 0, 1_GiB, "Prevent allocating large amount of memory"}
}
},
{"22.11",
{
{"use_structure_from_insertion_table_in_table_functions", 0, 2, "Improve using structure from insertion table in table functions"}
}
},
{"22.9",
{
{"force_grouping_standard_compatibility", false, true, "Make GROUPING function output the same as in SQL standard and other DBMS"}
}
},
{"22.7",
{
{"cross_to_inner_join_rewrite", 1, 2, "Force rewrite comma join to inner"},
{"enable_positional_arguments", false, true, "Enable positional arguments feature by default"},
{"format_csv_allow_single_quotes", true, false, "Most tools don't treat single quote in CSV specially, don't do it by default too"}
}
},
{"22.6",
{
{"output_format_json_named_tuples_as_objects", false, true, "Allow to serialize named tuples as JSON objects in JSON formats by default"},
{"input_format_skip_unknown_fields", false, true, "Optimize reading subset of columns for some input formats"}
}
},
{"22.5",
{
{"memory_overcommit_ratio_denominator", 0, 1073741824, "Enable memory overcommit feature by default"},
{"memory_overcommit_ratio_denominator_for_user", 0, 1073741824, "Enable memory overcommit feature by default"}
}
},
{"22.4",
{
{"allow_settings_after_format_in_insert", true, false, "Do not allow SETTINGS after FORMAT for INSERT queries because ClickHouse interpret SETTINGS as some values, which is misleading"}
}
},
{"22.3",
{
{"cast_ipv4_ipv6_default_on_conversion_error", true, false, "Make functions cast(value, 'IPv4') and cast(value, 'IPv6') behave same as toIPv4 and toIPv6 functions"}
}
},
{"21.12",
{
{"stream_like_engine_allow_direct_select", true, false, "Do not allow direct select for Kafka/RabbitMQ/FileLog by default"}
}
},
{"21.9",
{
{"output_format_decimal_trailing_zeros", true, false, "Do not output trailing zeros in text representation of Decimal types by default for better looking output"},
{"use_hedged_requests", false, true, "Enable Hedged Requests feature by default"}
}
},
{"21.7",
{
{"legacy_column_name_of_tuple_literal", true, false, "Add this setting only for compatibility reasons. It makes sense to set to 'true', while doing rolling update of cluster from version lower than 21.7 to higher"}
}
},
{"21.5",
{
{"async_socket_for_remote", false, true, "Fix all problems and turn on asynchronous reads from socket for remote queries by default again"}
}
},
{"21.3",
{
{"async_socket_for_remote", true, false, "Turn off asynchronous reads from socket for remote queries because of some problems"},
{"optimize_normalize_count_variants", false, true, "Rewrite aggregate functions that semantically equals to count() as count() by default"},
{"normalize_function_names", false, true, "Normalize function names to their canonical names, this was needed for projection query routing"}
}
},
{"21.2",
{
{"enable_global_with_statement", false, true, "Propagate WITH statements to UNION queries and all subqueries by default"}
}
},
{"21.1",
{
{"insert_quorum_parallel", false, true, "Use parallel quorum inserts by default. It is significantly more convenient to use than sequential quorum inserts"},
{"input_format_null_as_default", false, true, "Allow to insert NULL as default for input formats by default"},
{"optimize_on_insert", false, true, "Enable data optimization on INSERT by default for better user experience"},
{"use_compact_format_in_distributed_parts_names", false, true, "Use compact format for async INSERT into Distributed tables by default"}
}
},
{"20.10",
{
{"format_regexp_escaping_rule", "Escaped", "Raw", "Use Raw as default escaping rule for Regexp format to male the behaviour more like to what users expect"}
}
},
{"20.7",
{
{"show_table_uuid_in_table_create_query_if_not_nil", true, false, "Stop showing UID of the table in its CREATE query for Engine=Atomic"}
}
},
{"20.5",
{
{"input_format_with_names_use_header", false, true, "Enable using header with names for formats with WithNames/WithNamesAndTypes suffixes"},
{"allow_suspicious_codecs", true, false, "Don't allow to specify meaningless compression codecs"}
}
},
{"20.4",
{
{"validate_polygons", false, true, "Throw exception if polygon is invalid in function pointInPolygon by default instead of returning possibly wrong results"}
}
},
{"19.18",
{
{"enable_scalar_subquery_optimization", false, true, "Prevent scalar subqueries from (de)serializing large scalar values and possibly avoid running the same subquery more than once"}
}
},
{"19.14",
{
{"any_join_distinct_right_table_keys", true, false, "Disable ANY RIGHT and ANY FULL JOINs by default to avoid inconsistency"}
}
},
{"19.12",
{
{"input_format_defaults_for_omitted_fields", false, true, "Enable calculation of complex default expressions for omitted fields for some input formats, because it should be the expected behaviour"}
}
},
{"19.5",
{
{"max_partitions_per_insert_block", 0, 100, "Add a limit for the number of partitions in one block"}
}
},
{"18.12.17",
{
{"enable_optimize_predicate_expression", 0, 1, "Optimize predicates to subqueries by default"}
}
},
};
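For context on how a table like this is typically used, here is a minimal sketch, not the actual ClickHouse implementation: when a client asks for compatibility with an older release, every change recorded for a newer release has to be rolled back to its previous value, and the entry from the oldest release above the target wins. The names `SettingChange`, `ChangesHistory` and `defaultsForCompatibility` below are assumptions for illustration only.

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

// Illustrative stand-ins: the real history stores typed Field values and the
// version keys are compared numerically, not as strings.
struct SettingChange
{
    std::string name;
    std::string previous_value;
    std::string new_value;
    std::string comment;
};

// One entry per release, ordered from newest to oldest, mirroring the table above.
using ChangesHistory = std::vector<std::pair<std::string, std::vector<SettingChange>>>;

// Collect the defaults to fall back to when compatibility with `target_version`
// is requested: for every setting changed in a release newer than the target,
// the previous_value from the oldest such release wins.
std::map<std::string, std::string> defaultsForCompatibility(
    const ChangesHistory & history,
    bool (*is_newer_than_target)(const std::string & version, const std::string & target),
    const std::string & target_version)
{
    std::map<std::string, std::string> old_defaults;
    for (const auto & [version, changes] : history)      // newest -> oldest
    {
        if (!is_newer_than_target(version, target_version))
            break;
        for (const auto & change : changes)
            old_defaults[change.name] = change.previous_value;   // older releases overwrite newer ones
    }
    return old_defaults;
}
```

A real implementation also has to skip settings the user changed explicitly and parse version strings numerically rather than comparing them as text.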

View File

@ -12,6 +12,7 @@
#include <Common/ZooKeeper/KeeperException.h>
#include <Common/ZooKeeper/Types.h>
#include <Common/ZooKeeper/ZooKeeper.h>
#include <Common/ZooKeeper/IKeeper.h>
#include <Common/PoolId.h>
#include <Core/ServerSettings.h>
#include <Core/Settings.h>
@ -338,9 +339,12 @@ ClusterPtr DatabaseReplicated::getClusterImpl(bool all_groups) const
return std::make_shared<Cluster>(getContext()->getSettingsRef(), shards, params);
}
std::vector<UInt8> DatabaseReplicated::tryGetAreReplicasActive(const ClusterPtr & cluster_) const
ReplicasInfo DatabaseReplicated::tryGetReplicasInfo(const ClusterPtr & cluster_) const
{
Strings paths;
Strings paths_get, paths_exists;
paths_get.emplace_back(fs::path(zookeeper_path) / "max_log_ptr");
const auto & addresses_with_failover = cluster_->getShardsAddresses();
const auto & shards_info = cluster_->getShardsInfo();
for (size_t shard_index = 0; shard_index < shards_info.size(); ++shard_index)
@ -348,32 +352,59 @@ std::vector<UInt8> DatabaseReplicated::tryGetAreReplicasActive(const ClusterPtr
for (const auto & replica : addresses_with_failover[shard_index])
{
String full_name = getFullReplicaName(replica.database_shard_name, replica.database_replica_name);
paths.emplace_back(fs::path(zookeeper_path) / "replicas" / full_name / "active");
paths_exists.emplace_back(fs::path(zookeeper_path) / "replicas" / full_name / "active");
paths_get.emplace_back(fs::path(zookeeper_path) / "replicas" / full_name / "log_ptr");
}
}
try
{
auto current_zookeeper = getZooKeeper();
auto res = current_zookeeper->exists(paths);
auto get_res = current_zookeeper->get(paths_get);
auto exist_res = current_zookeeper->exists(paths_exists);
chassert(get_res.size() == exist_res.size() + 1);
std::vector<UInt8> statuses;
statuses.resize(paths.size());
auto max_log_ptr_zk = get_res[0];
if (max_log_ptr_zk.error != Coordination::Error::ZOK)
throw Coordination::Exception(max_log_ptr_zk.error);
for (size_t i = 0; i < res.size(); ++i)
if (res[i].error == Coordination::Error::ZOK)
statuses[i] = 1;
UInt32 max_log_ptr = parse<UInt32>(max_log_ptr_zk.data);
return statuses;
}
catch (...)
ReplicasInfo replicas_info;
replicas_info.resize(exist_res.size());
size_t global_replica_index = 0;
for (size_t shard_index = 0; shard_index < shards_info.size(); ++shard_index)
{
for (const auto & replica : addresses_with_failover[shard_index])
{
auto replica_active = exist_res[global_replica_index];
auto replica_log_ptr = get_res[global_replica_index + 1];
if (replica_active.error != Coordination::Error::ZOK && replica_active.error != Coordination::Error::ZNONODE)
throw Coordination::Exception(replica_active.error);
if (replica_log_ptr.error != Coordination::Error::ZOK)
throw Coordination::Exception(replica_log_ptr.error);
replicas_info[global_replica_index] = ReplicaInfo{
.is_active = replica_active.error == Coordination::Error::ZOK,
.replication_lag = max_log_ptr - parse<UInt32>(replica_log_ptr.data),
.recovery_time = replica.is_local ? ddl_worker->getCurrentInitializationDurationMs() : 0,
};
++global_replica_index;
}
}
return replicas_info;
}
catch (...)
{
tryLogCurrentException(log);
return {};
}
}
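The rewritten `tryGetReplicasInfo` above batches one `get` (for `max_log_ptr` plus each replica's `log_ptr`) with one `exists` (for each replica's `active` node) and returns an empty vector if the ZooKeeper round trip fails. A minimal, hypothetical consumer therefore needs to distinguish "no information" from "replica is inactive"; the names below are illustrative, not ClickHouse API.

```cpp
#include <cstdint>
#include <optional>
#include <vector>

struct ReplicaInfoSketch
{
    bool is_active = false;
    uint32_t replication_lag = 0;  // max_log_ptr minus the replica's log_ptr
    uint64_t recovery_time = 0;    // ms of the current initialization, 0 if finished
};

std::optional<ReplicaInfoSketch> infoForReplica(
    const std::vector<ReplicaInfoSketch> & all, size_t global_replica_index)
{
    // An empty vector means the ZooKeeper request failed and was already logged;
    // report "unknown" instead of defaulting to inactive / zero lag.
    if (all.empty() || global_replica_index >= all.size())
        return std::nullopt;
    return all[global_replica_index];
}
```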
void DatabaseReplicated::fillClusterAuthInfo(String collection_name, const Poco::Util::AbstractConfiguration & config_ref)
{
const auto & config_prefix = fmt::format("named_collections.{}", collection_name);

View File

@ -17,6 +17,14 @@ using ZooKeeperPtr = std::shared_ptr<zkutil::ZooKeeper>;
class Cluster;
using ClusterPtr = std::shared_ptr<Cluster>;
struct ReplicaInfo
{
bool is_active;
UInt32 replication_lag;
UInt64 recovery_time;
};
using ReplicasInfo = std::vector<ReplicaInfo>;
class DatabaseReplicated : public DatabaseAtomic
{
public:
@ -84,7 +92,7 @@ public:
static void dropReplica(DatabaseReplicated * database, const String & database_zookeeper_path, const String & shard, const String & replica, bool throw_if_noop);
std::vector<UInt8> tryGetAreReplicasActive(const ClusterPtr & cluster_) const;
ReplicasInfo tryGetReplicasInfo(const ClusterPtr & cluster_) const;
void renameDatabase(ContextPtr query_context, const String & new_name) override;

View File

@ -32,6 +32,12 @@ DatabaseReplicatedDDLWorker::DatabaseReplicatedDDLWorker(DatabaseReplicated * db
bool DatabaseReplicatedDDLWorker::initializeMainThread()
{
{
std::lock_guard lock(initialization_duration_timer_mutex);
initialization_duration_timer.emplace();
initialization_duration_timer->start();
}
while (!stop_flag)
{
try
@ -69,6 +75,10 @@ bool DatabaseReplicatedDDLWorker::initializeMainThread()
initializeReplication();
initialized = true;
{
std::lock_guard lock(initialization_duration_timer_mutex);
initialization_duration_timer.reset();
}
return true;
}
catch (...)
@ -78,6 +88,11 @@ bool DatabaseReplicatedDDLWorker::initializeMainThread()
}
}
{
std::lock_guard lock(initialization_duration_timer_mutex);
initialization_duration_timer.reset();
}
return false;
}
@ -459,4 +474,10 @@ UInt32 DatabaseReplicatedDDLWorker::getLogPointer() const
return max_id.load();
}
UInt64 DatabaseReplicatedDDLWorker::getCurrentInitializationDurationMs() const
{
std::lock_guard lock(initialization_duration_timer_mutex);
return initialization_duration_timer ? initialization_duration_timer->elapsedMilliseconds() : 0;
}
}
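The hunks above wrap an optional stopwatch in a mutex so another thread can ask how long the current initialization has been running; `getCurrentInitializationDurationMs()` returns 0 once initialization finishes. Below is a generic sketch of the same pattern using `std::chrono` (ClickHouse uses its own `Stopwatch`; `InitTimer` is an assumed name, not part of the codebase).

```cpp
#include <chrono>
#include <cstdint>
#include <mutex>
#include <optional>

class InitTimer
{
public:
    void start()
    {
        std::lock_guard lock(mutex);
        started_at = std::chrono::steady_clock::now();
    }

    void stop()
    {
        std::lock_guard lock(mutex);
        started_at.reset();
    }

    // Returns 0 when no initialization is in progress, mirroring
    // getCurrentInitializationDurationMs() in the diff above.
    uint64_t elapsedMs() const
    {
        std::lock_guard lock(mutex);
        if (!started_at)
            return 0;
        const auto now = std::chrono::steady_clock::now();
        return std::chrono::duration_cast<std::chrono::milliseconds>(now - *started_at).count();
    }

private:
    mutable std::mutex mutex;
    std::optional<std::chrono::steady_clock::time_point> started_at;
};
```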

View File

@ -36,6 +36,8 @@ public:
DatabaseReplicated * const database, bool committed = false); /// NOLINT
UInt32 getLogPointer() const;
UInt64 getCurrentInitializationDurationMs() const;
private:
bool initializeMainThread() override;
void initializeReplication();
@ -56,6 +58,9 @@ private:
ZooKeeperPtr active_node_holder_zookeeper;
/// It will remove "active" node when database is detached
zkutil::EphemeralNodeHolderPtr active_node_holder;
std::optional<Stopwatch> initialization_duration_timer;
mutable std::mutex initialization_duration_timer_mutex;
};
}

View File

@ -1,3 +1,4 @@
#include <optional>
#include <Disks/ObjectStorages/AzureBlobStorage/AzureObjectStorage.h>
#include "Common/Exception.h"
@ -117,7 +118,8 @@ AzureObjectStorage::AzureObjectStorage(
{
}
ObjectStorageKey AzureObjectStorage::generateObjectKeyForPath(const std::string & /* path */) const
ObjectStorageKey
AzureObjectStorage::generateObjectKeyForPath(const std::string & /* path */, const std::optional<std::string> & /* key_prefix */) const
{
return ObjectStorageKey::createAsRelative(getRandomASCIIString(32));
}
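The new `key_prefix` argument lets callers ask for generated keys under a caller-chosen prefix; the Azure implementation above deliberately ignores both arguments and returns a random 32-character relative key. As a rough illustration only, here is a toy, stand-alone generator that does honor the prefix; the function names are assumptions and not ClickHouse API.

```cpp
#include <optional>
#include <random>
#include <string>

// Produce a random lowercase alphanumeric suffix of the requested length.
std::string randomSuffix(size_t len)
{
    static const char alphabet[] = "abcdefghijklmnopqrstuvwxyz0123456789";
    std::mt19937_64 gen{std::random_device{}()};
    std::uniform_int_distribution<size_t> dist(0, sizeof(alphabet) - 2);
    std::string s(len, ' ');
    for (auto & c : s)
        c = alphabet[dist(gen)];
    return s;
}

// Toy counterpart of generateObjectKeyForPath(path, key_prefix): place the
// random key under the optional prefix when one is supplied.
std::string generateKey(const std::string & /*path*/, const std::optional<std::string> & key_prefix)
{
    const std::string suffix = randomSuffix(32);
    return key_prefix ? *key_prefix + "/" + suffix : suffix;
}
```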

View File

@ -101,7 +101,7 @@ public:
const std::string & config_prefix,
ContextPtr context) override;
ObjectStorageKey generateObjectKeyForPath(const std::string & path) const override;
ObjectStorageKey generateObjectKeyForPath(const std::string & path, const std::optional<std::string> & key_prefix) const override;
bool isRemote() const override { return true; }

View File

@ -34,14 +34,16 @@ FileCache::Key CachedObjectStorage::getCacheKey(const std::string & path) const
return cache->createKeyForPath(path);
}
ObjectStorageKey CachedObjectStorage::generateObjectKeyForPath(const std::string & path) const
ObjectStorageKey
CachedObjectStorage::generateObjectKeyForPath(const std::string & path, const std::optional<std::string> & key_prefix) const
{
return object_storage->generateObjectKeyForPath(path);
return object_storage->generateObjectKeyForPath(path, key_prefix);
}
ObjectStorageKey CachedObjectStorage::generateObjectKeyPrefixForDirectoryPath(const std::string & path) const
ObjectStorageKey
CachedObjectStorage::generateObjectKeyPrefixForDirectoryPath(const std::string & path, const std::optional<std::string> & key_prefix) const
{
return object_storage->generateObjectKeyPrefixForDirectoryPath(path);
return object_storage->generateObjectKeyPrefixForDirectoryPath(path, key_prefix);
}
ReadSettings CachedObjectStorage::patchSettings(const ReadSettings & read_settings) const

View File

@ -98,9 +98,10 @@ public:
const std::string & getCacheName() const override { return cache_config_name; }
ObjectStorageKey generateObjectKeyForPath(const std::string & path) const override;
ObjectStorageKey generateObjectKeyForPath(const std::string & path, const std::optional<std::string> & key_prefix) const override;
ObjectStorageKey generateObjectKeyPrefixForDirectoryPath(const std::string & path) const override;
ObjectStorageKey
generateObjectKeyPrefixForDirectoryPath(const std::string & path, const std::optional<std::string> & key_prefix) const override;
void setKeysGenerator(ObjectStorageKeysGeneratorPtr gen) override { object_storage->setKeysGenerator(gen); }

View File

@ -1,5 +1,7 @@
#include "CommonPathPrefixKeyGenerator.h"
#include <Disks/ObjectStorages/CommonPathPrefixKeyGenerator.h>
#include <Disks/ObjectStorages/InMemoryPathMap.h>
#include <Common/SharedLockGuard.h>
#include <Common/getRandomASCIIString.h>
#include <deque>
@ -9,21 +11,22 @@
namespace DB
{
CommonPathPrefixKeyGenerator::CommonPathPrefixKeyGenerator(
String key_prefix_, SharedMutex & shared_mutex_, std::weak_ptr<PathMap> path_map_)
: storage_key_prefix(key_prefix_), shared_mutex(shared_mutex_), path_map(std::move(path_map_))
CommonPathPrefixKeyGenerator::CommonPathPrefixKeyGenerator(String key_prefix_, std::weak_ptr<InMemoryPathMap> path_map_)
: storage_key_prefix(key_prefix_), path_map(std::move(path_map_))
{
}
ObjectStorageKey CommonPathPrefixKeyGenerator::generate(const String & path, bool is_directory) const
ObjectStorageKey
CommonPathPrefixKeyGenerator::generate(const String & path, bool is_directory, const std::optional<String> & key_prefix) const
{
const auto & [object_key_prefix, suffix_parts] = getLongestObjectKeyPrefix(path);
const auto & [object_key_prefix, suffix_parts]
= getLongestObjectKeyPrefix(is_directory ? std::filesystem::path(path).parent_path().string() : path);
auto key = std::filesystem::path(object_key_prefix.empty() ? storage_key_prefix : object_key_prefix);
auto key = std::filesystem::path(object_key_prefix);
/// The longest prefix is the same as path, meaning that the path is already mapped.
if (suffix_parts.empty())
return ObjectStorageKey::createAsRelative(std::move(key));
return ObjectStorageKey::createAsRelative(key_prefix.has_value() ? *key_prefix : storage_key_prefix, std::move(key));
/// File and top-level directory paths are mapped as is.
if (!is_directory || object_key_prefix.empty())
@ -39,7 +42,7 @@ ObjectStorageKey CommonPathPrefixKeyGenerator::generate(const String & path, boo
key /= getRandomASCIIString(part_size);
}
return ObjectStorageKey::createAsRelative(key);
return ObjectStorageKey::createAsRelative(key_prefix.has_value() ? *key_prefix : storage_key_prefix, key);
}
std::tuple<std::string, std::vector<std::string>> CommonPathPrefixKeyGenerator::getLongestObjectKeyPrefix(const std::string & path) const
@ -47,14 +50,13 @@ std::tuple<std::string, std::vector<std::string>> CommonPathPrefixKeyGenerator::
std::filesystem::path p(path);
std::deque<std::string> dq;
std::shared_lock lock(shared_mutex);
auto ptr = path_map.lock();
const auto ptr = path_map.lock();
SharedLockGuard lock(ptr->mutex);
while (p != p.root_path())
{
auto it = ptr->find(p / "");
if (it != ptr->end())
auto it = ptr->map.find(p);
if (it != ptr->map.end())
{
std::vector<std::string> vec(std::make_move_iterator(dq.begin()), std::make_move_iterator(dq.end()));
return std::make_tuple(it->second, std::move(vec));
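
To make the prefix lookup above easier to follow, here is a standalone sketch of the same idea under simplified assumptions: a plain std::map stands in for InMemoryPathMap, and the random suffix components that generate() would append are only reported, not created.

#include <deque>
#include <filesystem>
#include <iostream>
#include <map>
#include <string>
#include <tuple>
#include <vector>

namespace fs = std::filesystem;

/// Simplified stand-in for InMemoryPathMap: local directory -> remote key prefix.
using PathMap = std::map<fs::path, std::string>;

/// Walk the path upwards until a mapped prefix is found; collect the unresolved parts.
std::tuple<std::string, std::vector<std::string>> longestMappedPrefix(const PathMap & map, fs::path p)
{
    std::deque<std::string> unresolved;
    while (!p.empty() && p != p.root_path())
    {
        auto it = map.find(p);
        if (it != map.end())
            return {it->second, std::vector<std::string>(unresolved.begin(), unresolved.end())};
        unresolved.push_front(p.filename().string());
        p = p.parent_path();
    }
    return {"", std::vector<std::string>(unresolved.begin(), unresolved.end())};
}

int main()
{
    PathMap map{{"data/db", "abc"}};
    auto [prefix, parts] = longestMappedPrefix(map, "data/db/table/all_1_1_0");
    std::cout << prefix << '\n';        /// "abc" - the longest mapped prefix
    for (const auto & part : parts)     /// "table", "all_1_1_0" - in the real generator these
        std::cout << part << '\n';      /// become random components for directory paths
}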

View File

@ -1,14 +1,15 @@
#pragma once
#include <Common/ObjectStorageKeyGenerator.h>
#include <Common/SharedMutex.h>
#include <filesystem>
#include <map>
#include <optional>
namespace DB
{
/// Deprecated. Used for backward compatibility with plain rewritable disks without a separate metadata layout.
/// Object storage key generator used specifically with the
/// MetadataStorageFromPlainObjectStorage if multiple writes are allowed.
@ -18,15 +19,16 @@ namespace DB
///
/// The key generator ensures that the original directory hierarchy is
/// preserved, which is required for the MergeTree family.
struct InMemoryPathMap;
class CommonPathPrefixKeyGenerator : public IObjectStorageKeysGenerator
{
public:
/// Local to remote path map. Leverages filesystem::path comparator for paths.
using PathMap = std::map<std::filesystem::path, std::string>;
explicit CommonPathPrefixKeyGenerator(String key_prefix_, SharedMutex & shared_mutex_, std::weak_ptr<PathMap> path_map_);
explicit CommonPathPrefixKeyGenerator(String key_prefix_, std::weak_ptr<InMemoryPathMap> path_map_);
ObjectStorageKey generate(const String & path, bool is_directory) const override;
ObjectStorageKey generate(const String & path, bool is_directory, const std::optional<String> & key_prefix) const override;
private:
/// Longest key prefix and unresolved parts of the source path.
@ -34,8 +36,7 @@ private:
const String storage_key_prefix;
SharedMutex & shared_mutex;
std::weak_ptr<PathMap> path_map;
std::weak_ptr<InMemoryPathMap> path_map;
};
}

View File

@ -537,7 +537,7 @@ struct CopyFileObjectStorageOperation final : public IDiskObjectStorageOperation
for (const auto & object_from : source_blobs)
{
auto object_key = destination_object_storage.generateObjectKeyForPath(to_path);
auto object_key = destination_object_storage.generateObjectKeyForPath(to_path, std::nullopt /* key_prefix */);
auto object_to = StoredObject(object_key.serialize());
object_storage.copyObjectToAnotherObjectStorage(object_from, object_to, read_settings, write_settings, destination_object_storage);
@ -738,7 +738,7 @@ std::unique_ptr<WriteBufferFromFileBase> DiskObjectStorageTransaction::writeFile
const WriteSettings & settings,
bool autocommit)
{
auto object_key = object_storage.generateObjectKeyForPath(path);
auto object_key = object_storage.generateObjectKeyForPath(path, std::nullopt /* key_prefix */);
std::optional<ObjectAttributes> object_attributes;
if (metadata_helper)
@ -835,7 +835,7 @@ void DiskObjectStorageTransaction::writeFileUsingBlobWritingFunction(
const String & path, WriteMode mode, WriteBlobFunction && write_blob_function)
{
/// This function is a simplified and adapted version of DiskObjectStorageTransaction::writeFile().
auto object_key = object_storage.generateObjectKeyForPath(path);
auto object_key = object_storage.generateObjectKeyForPath(path, std::nullopt /* key_prefix */);
std::optional<ObjectAttributes> object_attributes;
if (metadata_helper)

View File

@ -0,0 +1,51 @@
#include "FlatDirectoryStructureKeyGenerator.h"
#include <Disks/ObjectStorages/InMemoryPathMap.h>
#include "Common/ObjectStorageKey.h"
#include <Common/SharedLockGuard.h>
#include <Common/SharedMutex.h>
#include <Common/getRandomASCIIString.h>
#include <optional>
#include <shared_mutex>
#include <string>
namespace DB
{
FlatDirectoryStructureKeyGenerator::FlatDirectoryStructureKeyGenerator(String storage_key_prefix_, std::weak_ptr<InMemoryPathMap> path_map_)
: storage_key_prefix(storage_key_prefix_), path_map(std::move(path_map_))
{
}
ObjectStorageKey FlatDirectoryStructureKeyGenerator::generate(const String & path, bool is_directory, const std::optional<String> & key_prefix) const
{
if (is_directory)
chassert(path.ends_with('/'));
const auto p = std::filesystem::path(path);
auto directory = p.parent_path();
std::optional<std::filesystem::path> remote_path;
{
const auto ptr = path_map.lock();
SharedLockGuard lock(ptr->mutex);
auto it = ptr->map.find(p);
if (it != ptr->map.end())
return ObjectStorageKey::createAsRelative(key_prefix.has_value() ? *key_prefix : storage_key_prefix, it->second);
it = ptr->map.find(directory);
if (it != ptr->map.end())
remote_path = it->second;
}
constexpr size_t part_size = 32;
std::filesystem::path key = remote_path.has_value() ? *remote_path
: is_directory ? std::filesystem::path(getRandomASCIIString(part_size))
: directory;
if (!is_directory)
key /= p.filename();
return ObjectStorageKey::createAsRelative(key_prefix.has_value() ? *key_prefix : storage_key_prefix, key);
}
}
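
A condensed, standalone sketch of the decision above (hypothetical names, a plain std::map instead of InMemoryPathMap, and a placeholder instead of a real random string): if the path itself is mapped, reuse its remote name; otherwise place a file under its parent's flat directory, or fall back to a new random directory name.

#include <filesystem>
#include <iostream>
#include <map>
#include <string>

namespace fs = std::filesystem;

/// Simplified stand-in for the flat layout: local directories mapped to flat remote directory names.
using PathMap = std::map<fs::path, std::string>;

std::string generateKey(const PathMap & map, const fs::path & local, bool is_directory, const std::string & prefix)
{
    if (auto it = map.find(local); it != map.end())
        return prefix + "/" + it->second;                        /// the path is already mapped

    fs::path remote;
    if (auto it = map.find(local.parent_path()); it != map.end())
        remote = it->second;                                     /// reuse the parent's flat directory
    else
        remote = is_directory ? fs::path("random-32-chars")      /// a fresh random name in the real generator
                              : local.parent_path();

    if (!is_directory)
        remote /= local.filename();                              /// keep the file name, flatten only directories
    return prefix + "/" + remote.string();
}

int main()
{
    PathMap map{{"db/table", "yzab"}};
    std::cout << generateKey(map, "db/table/part.bin", false, "root") << '\n';   /// root/yzab/part.bin
}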

View File

@ -0,0 +1,23 @@
#pragma once
#include <Common/ObjectStorageKeyGenerator.h>
#include <memory>
namespace DB
{
struct InMemoryPathMap;
class FlatDirectoryStructureKeyGenerator : public IObjectStorageKeysGenerator
{
public:
explicit FlatDirectoryStructureKeyGenerator(String storage_key_prefix_, std::weak_ptr<InMemoryPathMap> path_map_);
ObjectStorageKey generate(const String & path, bool is_directory, const std::optional<String> & key_prefix) const override;
private:
const String storage_key_prefix;
std::weak_ptr<InMemoryPathMap> path_map;
};
}

View File

@ -4,8 +4,8 @@
#include <Storages/ObjectStorage/HDFS/WriteBufferFromHDFS.h>
#include <Storages/ObjectStorage/HDFS/HDFSCommon.h>
#include <Storages/ObjectStorage/HDFS/ReadBufferFromHDFS.h>
#include <Disks/IO/ReadBufferFromRemoteFSGather.h>
#include <Storages/ObjectStorage/HDFS/ReadBufferFromHDFS.h>
#include <Common/getRandomASCIIString.h>
#include <Common/logger_useful.h>
@ -53,7 +53,8 @@ std::string HDFSObjectStorage::extractObjectKeyFromURL(const StoredObject & obje
return path;
}
ObjectStorageKey HDFSObjectStorage::generateObjectKeyForPath(const std::string & /* path */) const
ObjectStorageKey
HDFSObjectStorage::generateObjectKeyForPath(const std::string & /* path */, const std::optional<std::string> & /* key_prefix */) const
{
initializeHDFSFS();
/// Whatever the data_source_description.description value is, treat the key as a relative key

View File

@ -111,7 +111,7 @@ public:
const std::string & config_prefix,
ContextPtr context) override;
ObjectStorageKey generateObjectKeyForPath(const std::string & path) const override;
ObjectStorageKey generateObjectKeyForPath(const std::string & path, const std::optional<std::string> & key_prefix) const override;
bool isRemote() const override { return true; }

View File

@ -232,10 +232,11 @@ public:
/// Generate a blob name for the passed absolute local path.
/// The key can be generated either independently of `path` or based on it.
virtual ObjectStorageKey generateObjectKeyForPath(const std::string & path) const = 0;
virtual ObjectStorageKey generateObjectKeyForPath(const std::string & path, const std::optional<std::string> & key_prefix) const = 0;
/// Object key prefix for local paths in the directory 'path'.
virtual ObjectStorageKey generateObjectKeyPrefixForDirectoryPath(const std::string & /* path */) const
virtual ObjectStorageKey
generateObjectKeyPrefixForDirectoryPath(const std::string & /* path */, const std::optional<std::string> & /* key_prefix */) const
{
throw Exception(ErrorCodes::NOT_IMPLEMENTED, "Method 'generateObjectKeyPrefixForDirectoryPath' is not implemented");
}

View File

@ -0,0 +1,37 @@
#pragma once
#include <filesystem>
#include <map>
#include <base/defines.h>
#include <Common/SharedMutex.h>
namespace DB
{
struct InMemoryPathMap
{
struct PathComparator
{
bool operator()(const std::filesystem::path & path1, const std::filesystem::path & path2) const
{
auto d1 = std::distance(path1.begin(), path1.end());
auto d2 = std::distance(path2.begin(), path2.end());
if (d1 != d2)
return d1 < d2;
return path1 < path2;
}
};
/// Local -> Remote path.
using Map = std::map<std::filesystem::path, std::string, PathComparator>;
mutable SharedMutex mutex;
#ifdef OS_LINUX
Map TSA_GUARDED_BY(mutex) map;
/// std::shared_mutex may not be annotated with the 'capability' attribute in libcxx.
#else
Map map;
#endif
};
}
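
The comparator above orders paths by component count first and lexicographically second, so a map iterates parents before their children. A small standalone illustration of that ordering (standard library only):

#include <filesystem>
#include <iostream>
#include <iterator>
#include <map>
#include <string>

namespace fs = std::filesystem;

/// Same ordering idea as InMemoryPathMap::PathComparator: paths with fewer
/// components come first, ties are broken lexicographically.
struct PathComparator
{
    bool operator()(const fs::path & a, const fs::path & b) const
    {
        auto da = std::distance(a.begin(), a.end());
        auto db = std::distance(b.begin(), b.end());
        if (da != db)
            return da < db;
        return a < b;
    }
};

int main()
{
    std::map<fs::path, std::string, PathComparator> map;
    map["db/table/part"] = "k3";
    map["db"] = "k1";
    map["db/table"] = "k2";

    /// Iteration visits "db", then "db/table", then "db/table/part".
    for (const auto & [local, remote] : map)
        std::cout << local.string() << " -> " << remote << '\n';
}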

View File

@ -1,15 +1,15 @@
#include <Disks/ObjectStorages/Local/LocalObjectStorage.h>
#include <Interpreters/Context.h>
#include <Common/filesystemHelpers.h>
#include <Common/logger_useful.h>
#include <filesystem>
#include <Disks/IO/AsynchronousBoundedReadBuffer.h>
#include <Disks/IO/ReadBufferFromRemoteFSGather.h>
#include <Disks/IO/createReadBufferFromFileBase.h>
#include <Disks/IO/AsynchronousBoundedReadBuffer.h>
#include <IO/WriteBufferFromFile.h>
#include <IO/copyData.h>
#include <Interpreters/Context.h>
#include <Common/filesystemHelpers.h>
#include <Common/getRandomASCIIString.h>
#include <filesystem>
#include <Common/logger_useful.h>
namespace fs = std::filesystem;
@ -222,7 +222,8 @@ std::unique_ptr<IObjectStorage> LocalObjectStorage::cloneObjectStorage(
throw Exception(ErrorCodes::NOT_IMPLEMENTED, "cloneObjectStorage() is not implemented for LocalObjectStorage");
}
ObjectStorageKey LocalObjectStorage::generateObjectKeyForPath(const std::string & /* path */) const
ObjectStorageKey
LocalObjectStorage::generateObjectKeyForPath(const std::string & /* path */, const std::optional<std::string> & /* key_prefix */) const
{
constexpr size_t key_name_total_size = 32;
return ObjectStorageKey::createAsRelative(key_prefix, getRandomASCIIString(key_name_total_size));

View File

@ -81,7 +81,7 @@ public:
const std::string & config_prefix,
ContextPtr context) override;
ObjectStorageKey generateObjectKeyForPath(const std::string & path) const override;
ObjectStorageKey generateObjectKeyForPath(const std::string & path, const std::optional<std::string> & key_prefix) const override;
bool isRemote() const override { return false; }

View File

@ -1,5 +1,6 @@
#include "MetadataStorageFromPlainObjectStorage.h"
#include <Disks/IDisk.h>
#include <Disks/ObjectStorages/InMemoryPathMap.h>
#include <Disks/ObjectStorages/MetadataStorageFromPlainObjectStorageOperations.h>
#include <Disks/ObjectStorages/StaticDirectoryIterator.h>
@ -7,6 +8,7 @@
#include <filesystem>
#include <tuple>
#include <unordered_set>
namespace DB
{
@ -41,7 +43,7 @@ bool MetadataStorageFromPlainObjectStorage::exists(const std::string & path) con
{
/// NOTE: exists() cannot be used here since it works only for an existing
/// key and does not work for intermediate paths.
auto object_key = object_storage->generateObjectKeyForPath(path);
auto object_key = object_storage->generateObjectKeyForPath(path, std::nullopt /* key_prefix */);
return object_storage->existsOrHasAnyChild(object_key.serialize());
}
@ -53,7 +55,7 @@ bool MetadataStorageFromPlainObjectStorage::isFile(const std::string & path) con
bool MetadataStorageFromPlainObjectStorage::isDirectory(const std::string & path) const
{
auto key_prefix = object_storage->generateObjectKeyForPath(path).serialize();
auto key_prefix = object_storage->generateObjectKeyForPath(path, std::nullopt /* key_prefix */).serialize();
auto directory = std::filesystem::path(std::move(key_prefix)) / "";
return object_storage->existsOrHasAnyChild(directory);
@ -61,7 +63,7 @@ bool MetadataStorageFromPlainObjectStorage::isDirectory(const std::string & path
uint64_t MetadataStorageFromPlainObjectStorage::getFileSize(const String & path) const
{
auto object_key = object_storage->generateObjectKeyForPath(path);
auto object_key = object_storage->generateObjectKeyForPath(path, std::nullopt /* key_prefix */);
auto metadata = object_storage->tryGetObjectMetadata(object_key.serialize());
if (metadata)
return metadata->size_bytes;
@ -70,7 +72,7 @@ uint64_t MetadataStorageFromPlainObjectStorage::getFileSize(const String & path)
std::vector<std::string> MetadataStorageFromPlainObjectStorage::listDirectory(const std::string & path) const
{
auto key_prefix = object_storage->generateObjectKeyForPath(path).serialize();
auto key_prefix = object_storage->generateObjectKeyForPath(path, std::nullopt /* key_prefix */).serialize();
RelativePathsWithMetadata files;
std::string abs_key = key_prefix;
@ -79,14 +81,27 @@ std::vector<std::string> MetadataStorageFromPlainObjectStorage::listDirectory(co
object_storage->listObjects(abs_key, files, 0);
return getDirectChildrenOnDisk(abs_key, files, path);
std::unordered_set<std::string> result;
for (const auto & elem : files)
{
const auto & p = elem->relative_path;
chassert(p.find(abs_key) == 0);
const auto child_pos = abs_key.size();
/// string::npos is ok.
const auto slash_pos = p.find('/', child_pos);
if (slash_pos == std::string::npos)
result.emplace(p.substr(child_pos));
else
result.emplace(p.substr(child_pos, slash_pos - child_pos));
}
return std::vector<std::string>(std::make_move_iterator(result.begin()), std::make_move_iterator(result.end()));
}
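
The listing loop above relies only on string positions: every returned key starts with the listing prefix, and the segment up to the next '/' is the direct child name (a file or a subdirectory), with a set deduplicating repeated subdirectories. A standalone sketch under those assumptions:

#include <iostream>
#include <string>
#include <unordered_set>
#include <vector>

/// Keep only the first path component after the listing prefix; assumes every key starts with the prefix.
std::unordered_set<std::string> directChildren(const std::string & prefix, const std::vector<std::string> & keys)
{
    std::unordered_set<std::string> result;
    for (const auto & key : keys)
    {
        const auto child_pos = prefix.size();
        const auto slash_pos = key.find('/', child_pos);
        if (slash_pos == std::string::npos)
            result.emplace(key.substr(child_pos));                           /// a file directly under the prefix
        else
            result.emplace(key.substr(child_pos, slash_pos - child_pos));    /// a subdirectory, deduplicated by the set
    }
    return result;
}

int main()
{
    for (const auto & child : directChildren("data/tbl/", {"data/tbl/a.bin", "data/tbl/part_1/checksums.txt", "data/tbl/part_1/columns.txt"}))
        std::cout << child << '\n';   /// a.bin, part_1 (order unspecified)
}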
DirectoryIteratorPtr MetadataStorageFromPlainObjectStorage::iterateDirectory(const std::string & path) const
{
/// Required for MergeTree
auto paths = listDirectory(path);
// Prepend path, since iterateDirectory() includes path, unlike listDirectory()
/// Prepend path, since iterateDirectory() includes path, unlike listDirectory()
std::for_each(paths.begin(), paths.end(), [&](auto & child) { child = fs::path(path) / child; });
std::vector<std::filesystem::path> fs_paths(paths.begin(), paths.end());
return std::make_unique<StaticDirectoryIterator>(std::move(fs_paths));
@ -95,29 +110,10 @@ DirectoryIteratorPtr MetadataStorageFromPlainObjectStorage::iterateDirectory(con
StoredObjects MetadataStorageFromPlainObjectStorage::getStorageObjects(const std::string & path) const
{
size_t object_size = getFileSize(path);
auto object_key = object_storage->generateObjectKeyForPath(path);
auto object_key = object_storage->generateObjectKeyForPath(path, std::nullopt /* key_prefix */);
return {StoredObject(object_key.serialize(), path, object_size)};
}
std::vector<std::string> MetadataStorageFromPlainObjectStorage::getDirectChildrenOnDisk(
const std::string & storage_key, const RelativePathsWithMetadata & remote_paths, const std::string & /* local_path */) const
{
std::unordered_set<std::string> duplicates_filter;
for (const auto & elem : remote_paths)
{
const auto & path = elem->relative_path;
chassert(path.find(storage_key) == 0);
const auto child_pos = storage_key.size();
/// string::npos is ok.
const auto slash_pos = path.find('/', child_pos);
if (slash_pos == std::string::npos)
duplicates_filter.emplace(path.substr(child_pos));
else
duplicates_filter.emplace(path.substr(child_pos, slash_pos - child_pos));
}
return std::vector<std::string>(std::make_move_iterator(duplicates_filter.begin()), std::make_move_iterator(duplicates_filter.end()));
}
const IMetadataStorage & MetadataStorageFromPlainObjectStorageTransaction::getStorageForNonTransactionalReads() const
{
return metadata_storage;
@ -125,7 +121,7 @@ const IMetadataStorage & MetadataStorageFromPlainObjectStorageTransaction::getSt
void MetadataStorageFromPlainObjectStorageTransaction::unlinkFile(const std::string & path)
{
auto object_key = metadata_storage.object_storage->generateObjectKeyForPath(path);
auto object_key = metadata_storage.object_storage->generateObjectKeyForPath(path, std::nullopt /* key_prefix */);
auto object = StoredObject(object_key.serialize());
metadata_storage.object_storage->removeObject(object);
}
@ -140,7 +136,7 @@ void MetadataStorageFromPlainObjectStorageTransaction::removeDirectory(const std
else
{
addOperation(std::make_unique<MetadataStorageFromPlainObjectStorageRemoveDirectoryOperation>(
normalizeDirectoryPath(path), *metadata_storage.getPathMap(), object_storage));
normalizeDirectoryPath(path), *metadata_storage.getPathMap(), object_storage, metadata_storage.getMetadataKeyPrefix()));
}
}
@ -150,9 +146,11 @@ void MetadataStorageFromPlainObjectStorageTransaction::createDirectory(const std
return;
auto normalized_path = normalizeDirectoryPath(path);
auto key_prefix = object_storage->generateObjectKeyPrefixForDirectoryPath(normalized_path).serialize();
auto op = std::make_unique<MetadataStorageFromPlainObjectStorageCreateDirectoryOperation>(
std::move(normalized_path), std::move(key_prefix), *metadata_storage.getPathMap(), object_storage);
std::move(normalized_path),
*metadata_storage.getPathMap(),
object_storage,
metadata_storage.getMetadataKeyPrefix());
addOperation(std::move(op));
}
@ -167,7 +165,11 @@ void MetadataStorageFromPlainObjectStorageTransaction::moveDirectory(const std::
throwNotImplemented();
addOperation(std::make_unique<MetadataStorageFromPlainObjectStorageMoveDirectoryOperation>(
normalizeDirectoryPath(path_from), normalizeDirectoryPath(path_to), *metadata_storage.getPathMap(), object_storage));
normalizeDirectoryPath(path_from),
normalizeDirectoryPath(path_to),
*metadata_storage.getPathMap(),
object_storage,
metadata_storage.getMetadataKeyPrefix()));
}
void MetadataStorageFromPlainObjectStorageTransaction::addBlobToMetadata(

View File

@ -2,14 +2,18 @@
#include <Disks/IDisk.h>
#include <Disks/ObjectStorages/IMetadataStorage.h>
#include <Disks/ObjectStorages/InMemoryPathMap.h>
#include <Disks/ObjectStorages/MetadataOperationsHolder.h>
#include <Disks/ObjectStorages/MetadataStorageTransactionState.h>
#include <map>
#include <string>
#include <unordered_set>
namespace DB
{
struct InMemoryPathMap;
struct UnlinkMetadataFileOperationOutcome;
using UnlinkMetadataFileOperationOutcomePtr = std::shared_ptr<UnlinkMetadataFileOperationOutcome>;
@ -25,10 +29,6 @@ using UnlinkMetadataFileOperationOutcomePtr = std::shared_ptr<UnlinkMetadataFile
/// structure as on disk MergeTree, and does not require metadata from a local disk to restore.
class MetadataStorageFromPlainObjectStorage : public IMetadataStorage
{
public:
/// Local path prefixes mapped to storage key prefixes.
using PathMap = std::map<std::filesystem::path, std::string>;
private:
friend class MetadataStorageFromPlainObjectStorageTransaction;
@ -78,10 +78,11 @@ public:
bool supportsStat() const override { return false; }
protected:
virtual std::shared_ptr<PathMap> getPathMap() const { throwNotImplemented(); }
/// Get the object storage prefix for storing metadata files.
virtual std::string getMetadataKeyPrefix() const { return object_storage->getCommonKeyPrefix(); }
virtual std::vector<std::string> getDirectChildrenOnDisk(
const std::string & storage_key, const RelativePathsWithMetadata & remote_paths, const std::string & local_path) const;
/// Returns a map of virtual filesystem paths to paths in the object storage.
virtual std::shared_ptr<InMemoryPathMap> getPathMap() const { throwNotImplemented(); }
};
class MetadataStorageFromPlainObjectStorageTransaction final : public IMetadataTransaction, private MetadataOperationsHolder

View File

@ -1,8 +1,10 @@
#include "MetadataStorageFromPlainObjectStorageOperations.h"
#include <Disks/ObjectStorages/InMemoryPathMap.h>
#include <IO/ReadHelpers.h>
#include <IO/WriteHelpers.h>
#include <Common/Exception.h>
#include <Common/SharedLockGuard.h>
#include <Common/logger_useful.h>
namespace DB
@ -20,29 +22,45 @@ namespace
constexpr auto PREFIX_PATH_FILE_NAME = "prefix.path";
ObjectStorageKey createMetadataObjectKey(const std::string & object_key_prefix, const std::string & metadata_key_prefix)
{
auto prefix = std::filesystem::path(metadata_key_prefix) / object_key_prefix;
return ObjectStorageKey::createAsRelative(prefix.string(), PREFIX_PATH_FILE_NAME);
}
}
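
createMetadataObjectKey() above only concatenates pieces: the directory's generated object key prefix is nested under the metadata prefix, and the key points at that directory's prefix.path marker file. A trivial standalone sketch with hypothetical values:

#include <filesystem>
#include <iostream>

int main()
{
    namespace fs = std::filesystem;

    /// Hypothetical values: "__meta" is the metadata prefix chosen by
    /// getMetadataKeyPrefix(), "yzab" is a generated directory key prefix.
    const fs::path metadata_key_prefix = "table_root/__meta";
    const fs::path object_key_prefix = "yzab";

    const auto metadata_object_key = metadata_key_prefix / object_key_prefix / "prefix.path";
    std::cout << metadata_object_key.string() << '\n';   /// table_root/__meta/yzab/prefix.path
}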
MetadataStorageFromPlainObjectStorageCreateDirectoryOperation::MetadataStorageFromPlainObjectStorageCreateDirectoryOperation(
std::filesystem::path && path_,
std::string && key_prefix_,
MetadataStorageFromPlainObjectStorage::PathMap & path_map_,
ObjectStoragePtr object_storage_)
: path(std::move(path_)), key_prefix(key_prefix_), path_map(path_map_), object_storage(object_storage_)
std::filesystem::path && path_, InMemoryPathMap & path_map_, ObjectStoragePtr object_storage_, const std::string & metadata_key_prefix_)
: path(std::move(path_))
, path_map(path_map_)
, object_storage(object_storage_)
, metadata_key_prefix(metadata_key_prefix_)
, object_key_prefix(object_storage->generateObjectKeyPrefixForDirectoryPath(path, "" /* object_key_prefix */).serialize())
{
chassert(path.string().ends_with('/'));
}
void MetadataStorageFromPlainObjectStorageCreateDirectoryOperation::execute(std::unique_lock<SharedMutex> &)
{
if (path_map.contains(path))
return;
/// parent_path() removes the trailing '/'
const auto base_path = path.parent_path();
{
SharedLockGuard lock(path_map.mutex);
if (path_map.map.contains(base_path))
return;
}
LOG_TRACE(getLogger("MetadataStorageFromPlainObjectStorageCreateDirectoryOperation"), "Creating metadata for directory '{}'", path);
auto metadata_object_key = createMetadataObjectKey(object_key_prefix, metadata_key_prefix);
auto object_key = ObjectStorageKey::createAsRelative(key_prefix, PREFIX_PATH_FILE_NAME);
LOG_TRACE(
getLogger("MetadataStorageFromPlainObjectStorageCreateDirectoryOperation"),
"Creating metadata for directory '{}' with remote path='{}'",
path,
metadata_object_key.serialize());
auto object = StoredObject(object_key.serialize(), path / PREFIX_PATH_FILE_NAME);
auto metadata_object = StoredObject(/*remote_path*/ metadata_object_key.serialize(), /*local_path*/ path / PREFIX_PATH_FILE_NAME);
auto buf = object_storage->writeObject(
object,
metadata_object,
WriteMode::Rewrite,
/* object_attributes */ std::nullopt,
/* buf_size */ DBMS_DEFAULT_BUFFER_SIZE,
@ -50,8 +68,12 @@ void MetadataStorageFromPlainObjectStorageCreateDirectoryOperation::execute(std:
write_created = true;
[[maybe_unused]] auto result = path_map.emplace(path, std::move(key_prefix));
chassert(result.second);
{
std::lock_guard lock(path_map.mutex);
auto & map = path_map.map;
[[maybe_unused]] auto result = map.emplace(base_path, object_key_prefix);
chassert(result.second);
}
auto metric = object_storage->getMetadataStorageMetrics().directory_map_size;
CurrentMetrics::add(metric, 1);
@ -66,58 +88,81 @@ void MetadataStorageFromPlainObjectStorageCreateDirectoryOperation::execute(std:
void MetadataStorageFromPlainObjectStorageCreateDirectoryOperation::undo(std::unique_lock<SharedMutex> &)
{
auto object_key = ObjectStorageKey::createAsRelative(key_prefix, PREFIX_PATH_FILE_NAME);
auto metadata_object_key = createMetadataObjectKey(object_key_prefix, metadata_key_prefix);
if (write_finalized)
{
path_map.erase(path);
const auto base_path = path.parent_path();
{
std::lock_guard lock(path_map.mutex);
path_map.map.erase(base_path);
}
auto metric = object_storage->getMetadataStorageMetrics().directory_map_size;
CurrentMetrics::sub(metric, 1);
object_storage->removeObject(StoredObject(object_key.serialize(), path / PREFIX_PATH_FILE_NAME));
object_storage->removeObject(StoredObject(metadata_object_key.serialize(), path / PREFIX_PATH_FILE_NAME));
}
else if (write_created)
object_storage->removeObjectIfExists(StoredObject(object_key.serialize(), path / PREFIX_PATH_FILE_NAME));
object_storage->removeObjectIfExists(StoredObject(metadata_object_key.serialize(), path / PREFIX_PATH_FILE_NAME));
}
MetadataStorageFromPlainObjectStorageMoveDirectoryOperation::MetadataStorageFromPlainObjectStorageMoveDirectoryOperation(
std::filesystem::path && path_from_,
std::filesystem::path && path_to_,
MetadataStorageFromPlainObjectStorage::PathMap & path_map_,
ObjectStoragePtr object_storage_)
: path_from(std::move(path_from_)), path_to(std::move(path_to_)), path_map(path_map_), object_storage(object_storage_)
InMemoryPathMap & path_map_,
ObjectStoragePtr object_storage_,
const std::string & metadata_key_prefix_)
: path_from(std::move(path_from_))
, path_to(std::move(path_to_))
, path_map(path_map_)
, object_storage(object_storage_)
, metadata_key_prefix(metadata_key_prefix_)
{
chassert(path_from.string().ends_with('/'));
chassert(path_to.string().ends_with('/'));
}
std::unique_ptr<WriteBufferFromFileBase> MetadataStorageFromPlainObjectStorageMoveDirectoryOperation::createWriteBuf(
const std::filesystem::path & expected_path, const std::filesystem::path & new_path, bool validate_content)
{
auto expected_it = path_map.find(expected_path);
if (expected_it == path_map.end())
throw Exception(ErrorCodes::FILE_DOESNT_EXIST, "Metadata object for the expected (source) path '{}' does not exist", expected_path);
std::filesystem::path remote_path;
{
SharedLockGuard lock(path_map.mutex);
auto & map = path_map.map;
/// parent_path() removes the trailing '/'.
auto expected_it = map.find(expected_path.parent_path());
if (expected_it == map.end())
throw Exception(
ErrorCodes::FILE_DOESNT_EXIST, "Metadata object for the expected (source) path '{}' does not exist", expected_path);
if (path_map.contains(new_path))
throw Exception(ErrorCodes::FILE_ALREADY_EXISTS, "Metadata object for the new (destination) path '{}' already exists", new_path);
if (map.contains(new_path.parent_path()))
throw Exception(
ErrorCodes::FILE_ALREADY_EXISTS, "Metadata object for the new (destination) path '{}' already exists", new_path);
auto object_key = ObjectStorageKey::createAsRelative(expected_it->second, PREFIX_PATH_FILE_NAME);
remote_path = expected_it->second;
}
auto object = StoredObject(object_key.serialize(), expected_path / PREFIX_PATH_FILE_NAME);
auto metadata_object_key = createMetadataObjectKey(remote_path, metadata_key_prefix);
auto metadata_object
= StoredObject(/*remote_path*/ metadata_object_key.serialize(), /*local_path*/ expected_path / PREFIX_PATH_FILE_NAME);
if (validate_content)
{
std::string data;
auto read_buf = object_storage->readObject(object);
auto read_buf = object_storage->readObject(metadata_object);
readStringUntilEOF(data, *read_buf);
if (data != path_from)
throw Exception(
ErrorCodes::INCORRECT_DATA,
"Incorrect data for object key {}, expected {}, got {}",
object_key.serialize(),
metadata_object_key.serialize(),
expected_path,
data);
}
auto write_buf = object_storage->writeObject(
object,
metadata_object,
WriteMode::Rewrite,
/* object_attributes */ std::nullopt,
/*buf_size*/ DBMS_DEFAULT_BUFFER_SIZE,
@ -136,8 +181,16 @@ void MetadataStorageFromPlainObjectStorageMoveDirectoryOperation::execute(std::u
writeString(path_to.string(), *write_buf);
write_buf->finalize();
[[maybe_unused]] auto result = path_map.emplace(path_to, path_map.extract(path_from).mapped());
chassert(result.second);
/// parent_path() removes the trailing '/'.
auto base_path_to = path_to.parent_path();
auto base_path_from = path_from.parent_path();
{
std::lock_guard lock(path_map.mutex);
auto & map = path_map.map;
[[maybe_unused]] auto result = map.emplace(base_path_to, map.extract(base_path_from).mapped());
chassert(result.second);
}
write_finalized = true;
}
@ -145,7 +198,11 @@ void MetadataStorageFromPlainObjectStorageMoveDirectoryOperation::execute(std::u
void MetadataStorageFromPlainObjectStorageMoveDirectoryOperation::undo(std::unique_lock<SharedMutex> &)
{
if (write_finalized)
path_map.emplace(path_from, path_map.extract(path_to).mapped());
{
std::lock_guard lock(path_map.mutex);
auto & map = path_map.map;
map.emplace(path_from.parent_path(), map.extract(path_to.parent_path()).mapped());
}
if (write_created)
{
@ -156,25 +213,37 @@ void MetadataStorageFromPlainObjectStorageMoveDirectoryOperation::undo(std::uniq
}
MetadataStorageFromPlainObjectStorageRemoveDirectoryOperation::MetadataStorageFromPlainObjectStorageRemoveDirectoryOperation(
std::filesystem::path && path_, MetadataStorageFromPlainObjectStorage::PathMap & path_map_, ObjectStoragePtr object_storage_)
: path(std::move(path_)), path_map(path_map_), object_storage(object_storage_)
std::filesystem::path && path_, InMemoryPathMap & path_map_, ObjectStoragePtr object_storage_, const std::string & metadata_key_prefix_)
: path(std::move(path_)), path_map(path_map_), object_storage(object_storage_), metadata_key_prefix(metadata_key_prefix_)
{
chassert(path.string().ends_with('/'));
}
void MetadataStorageFromPlainObjectStorageRemoveDirectoryOperation::execute(std::unique_lock<SharedMutex> & /* metadata_lock */)
{
auto path_it = path_map.find(path);
if (path_it == path_map.end())
return;
/// parent_path() removes the trailing '/'
const auto base_path = path.parent_path();
{
SharedLockGuard lock(path_map.mutex);
auto & map = path_map.map;
auto path_it = map.find(base_path);
if (path_it == map.end())
return;
key_prefix = path_it->second;
}
LOG_TRACE(getLogger("MetadataStorageFromPlainObjectStorageRemoveDirectoryOperation"), "Removing directory '{}'", path);
key_prefix = path_it->second;
auto object_key = ObjectStorageKey::createAsRelative(key_prefix, PREFIX_PATH_FILE_NAME);
auto object = StoredObject(object_key.serialize(), path / PREFIX_PATH_FILE_NAME);
object_storage->removeObject(object);
auto metadata_object_key = createMetadataObjectKey(key_prefix, metadata_key_prefix);
auto metadata_object = StoredObject(/*remote_path*/ metadata_object_key.serialize(), /*local_path*/ path / PREFIX_PATH_FILE_NAME);
object_storage->removeObject(metadata_object);
{
std::lock_guard lock(path_map.mutex);
auto & map = path_map.map;
map.erase(base_path);
}
path_map.erase(path_it);
auto metric = object_storage->getMetadataStorageMetrics().directory_map_size;
CurrentMetrics::sub(metric, 1);
@ -189,10 +258,10 @@ void MetadataStorageFromPlainObjectStorageRemoveDirectoryOperation::undo(std::un
if (!removed)
return;
auto object_key = ObjectStorageKey::createAsRelative(key_prefix, PREFIX_PATH_FILE_NAME);
auto object = StoredObject(object_key.serialize(), path / PREFIX_PATH_FILE_NAME);
auto metadata_object_key = createMetadataObjectKey(key_prefix, metadata_key_prefix);
auto metadata_object = StoredObject(metadata_object_key.serialize(), path / PREFIX_PATH_FILE_NAME);
auto buf = object_storage->writeObject(
object,
metadata_object,
WriteMode::Rewrite,
/* object_attributes */ std::nullopt,
/* buf_size */ DBMS_DEFAULT_BUFFER_SIZE,
@ -200,7 +269,11 @@ void MetadataStorageFromPlainObjectStorageRemoveDirectoryOperation::undo(std::un
writeString(path.string(), *buf);
buf->finalize();
path_map.emplace(path, std::move(key_prefix));
{
std::lock_guard lock(path_map.mutex);
auto & map = path_map.map;
map.emplace(path.parent_path(), std::move(key_prefix));
}
auto metric = object_storage->getMetadataStorageMetrics().directory_map_size;
CurrentMetrics::add(metric, 1);
}

View File

@ -1,6 +1,7 @@
#pragma once
#include <Disks/ObjectStorages/IMetadataOperation.h>
#include <Disks/ObjectStorages/InMemoryPathMap.h>
#include <Disks/ObjectStorages/MetadataStorageFromPlainObjectStorage.h>
#include <filesystem>
@ -13,20 +14,21 @@ class MetadataStorageFromPlainObjectStorageCreateDirectoryOperation final : publ
{
private:
std::filesystem::path path;
std::string key_prefix;
MetadataStorageFromPlainObjectStorage::PathMap & path_map;
InMemoryPathMap & path_map;
ObjectStoragePtr object_storage;
const std::string metadata_key_prefix;
const std::string object_key_prefix;
bool write_created = false;
bool write_finalized = false;
public:
// Assuming that paths are normalized.
MetadataStorageFromPlainObjectStorageCreateDirectoryOperation(
/// path_ must end with a trailing '/'.
std::filesystem::path && path_,
std::string && key_prefix_,
MetadataStorageFromPlainObjectStorage::PathMap & path_map_,
ObjectStoragePtr object_storage_);
InMemoryPathMap & path_map_,
ObjectStoragePtr object_storage_,
const std::string & metadata_key_prefix_);
void execute(std::unique_lock<SharedMutex> & metadata_lock) override;
void undo(std::unique_lock<SharedMutex> & metadata_lock) override;
@ -37,8 +39,9 @@ class MetadataStorageFromPlainObjectStorageMoveDirectoryOperation final : public
private:
std::filesystem::path path_from;
std::filesystem::path path_to;
MetadataStorageFromPlainObjectStorage::PathMap & path_map;
InMemoryPathMap & path_map;
ObjectStoragePtr object_storage;
const std::string metadata_key_prefix;
bool write_created = false;
bool write_finalized = false;
@ -48,10 +51,12 @@ private:
public:
MetadataStorageFromPlainObjectStorageMoveDirectoryOperation(
/// Both path_from_ and path_to_ must end with a trailing '/'.
std::filesystem::path && path_from_,
std::filesystem::path && path_to_,
MetadataStorageFromPlainObjectStorage::PathMap & path_map_,
ObjectStoragePtr object_storage_);
InMemoryPathMap & path_map_,
ObjectStoragePtr object_storage_,
const std::string & metadata_key_prefix_);
void execute(std::unique_lock<SharedMutex> & metadata_lock) override;
@ -63,15 +68,20 @@ class MetadataStorageFromPlainObjectStorageRemoveDirectoryOperation final : publ
private:
std::filesystem::path path;
MetadataStorageFromPlainObjectStorage::PathMap & path_map;
InMemoryPathMap & path_map;
ObjectStoragePtr object_storage;
const std::string metadata_key_prefix;
std::string key_prefix;
bool removed = false;
public:
MetadataStorageFromPlainObjectStorageRemoveDirectoryOperation(
std::filesystem::path && path_, MetadataStorageFromPlainObjectStorage::PathMap & path_map_, ObjectStoragePtr object_storage_);
/// path_ must end with a trailing '/'.
std::filesystem::path && path_,
InMemoryPathMap & path_map_,
ObjectStoragePtr object_storage_,
const std::string & metadata_key_prefix_);
void execute(std::unique_lock<SharedMutex> & metadata_lock) override;
void undo(std::unique_lock<SharedMutex> & metadata_lock) override;

View File

@ -1,9 +1,14 @@
#include <Disks/ObjectStorages/FlatDirectoryStructureKeyGenerator.h>
#include <Disks/ObjectStorages/InMemoryPathMap.h>
#include <Disks/ObjectStorages/MetadataStorageFromPlainRewritableObjectStorage.h>
#include <Disks/ObjectStorages/ObjectStorageIterator.h>
#include <unordered_set>
#include <IO/ReadHelpers.h>
#include <IO/SharedThreadPools.h>
#include <IO/S3Common.h>
#include <IO/SharedThreadPools.h>
#include "Common/SharedLockGuard.h"
#include "Common/SharedMutex.h"
#include <Common/ErrorCodes.h>
#include <Common/logger_useful.h>
#include "CommonPathPrefixKeyGenerator.h"
@ -21,14 +26,28 @@ namespace
{
constexpr auto PREFIX_PATH_FILE_NAME = "prefix.path";
constexpr auto METADATA_PATH_TOKEN = "__meta/";
MetadataStorageFromPlainObjectStorage::PathMap loadPathPrefixMap(const std::string & root, ObjectStoragePtr object_storage)
/// Use a separate layout for metadata if:
/// 1. The disk endpoint does not contain any objects yet (empty), OR
/// 2. The metadata is already stored behind a separate endpoint.
/// Otherwise, store metadata along with regular data for backward compatibility.
std::string getMetadataKeyPrefix(ObjectStoragePtr object_storage)
{
MetadataStorageFromPlainObjectStorage::PathMap result;
const auto common_key_prefix = std::filesystem::path(object_storage->getCommonKeyPrefix());
const auto metadata_key_prefix = std::filesystem::path(common_key_prefix) / METADATA_PATH_TOKEN;
return !object_storage->existsOrHasAnyChild(metadata_key_prefix / "") && object_storage->existsOrHasAnyChild(common_key_prefix / "")
? common_key_prefix
: metadata_key_prefix;
}
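
The ternary above is easy to misread, so here is a standalone sketch of the same decision with the two existence checks reduced to booleans (hypothetical helper, not the ClickHouse API): the legacy common prefix is kept only when the disk already holds data but has no separate metadata yet.

#include <iostream>
#include <string>

/// Stand-in for getMetadataKeyPrefix(): fall back to the common prefix (legacy layout)
/// only when the disk already has data but no "__meta/" objects yet.
std::string chooseMetadataPrefix(bool has_metadata_objects, bool has_data_objects, const std::string & common_prefix)
{
    const std::string metadata_prefix = common_prefix + "__meta/";
    if (!has_metadata_objects && has_data_objects)
        return common_prefix;       /// existing disk without separate metadata: keep the old layout
    return metadata_prefix;         /// empty disk, or metadata already separated: use __meta/
}

int main()
{
    std::cout << chooseMetadataPrefix(false, false, "root/") << '\n';   /// root/__meta/  (fresh disk)
    std::cout << chooseMetadataPrefix(false, true, "root/") << '\n';    /// root/         (legacy data present)
    std::cout << chooseMetadataPrefix(true, true, "root/") << '\n';     /// root/__meta/  (already migrated)
}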
std::shared_ptr<InMemoryPathMap> loadPathPrefixMap(const std::string & metadata_key_prefix, ObjectStoragePtr object_storage)
{
auto result = std::make_shared<InMemoryPathMap>();
using Map = InMemoryPathMap::Map;
ThreadPool & pool = getIOThreadPool().get();
ThreadPoolCallbackRunnerLocal<void> runner(pool, "PlainRWMetaLoad");
std::mutex mutex;
LoggerPtr log = getLogger("MetadataStorageFromPlainObjectStorage");
@ -39,102 +58,107 @@ MetadataStorageFromPlainObjectStorage::PathMap loadPathPrefixMap(const std::stri
LOG_DEBUG(log, "Loading metadata");
size_t num_files = 0;
for (auto iterator = object_storage->iterate(root, 0); iterator->isValid(); iterator->next())
for (auto iterator = object_storage->iterate(metadata_key_prefix, 0); iterator->isValid(); iterator->next())
{
++num_files;
auto file = iterator->current();
String path = file->getPath();
auto remote_path = std::filesystem::path(path);
if (remote_path.filename() != PREFIX_PATH_FILE_NAME)
auto remote_metadata_path = std::filesystem::path(path);
if (remote_metadata_path.filename() != PREFIX_PATH_FILE_NAME)
continue;
runner([remote_path, path, &object_storage, &result, &mutex, &log, &settings]
{
setThreadName("PlainRWMetaLoad");
StoredObject object{path};
String local_path;
try
runner(
[remote_metadata_path, path, &object_storage, &result, &log, &settings, &metadata_key_prefix]
{
auto read_buf = object_storage->readObject(object, settings);
readStringUntilEOF(local_path, *read_buf);
}
setThreadName("PlainRWMetaLoad");
StoredObject object{path};
String local_path;
try
{
auto read_buf = object_storage->readObject(object, settings);
readStringUntilEOF(local_path, *read_buf);
}
#if USE_AWS_S3
catch (const S3Exception & e)
{
/// It is ok if a directory was removed just now.
/// We support attaching a filesystem that is concurrently modified by someone else.
if (e.getS3ErrorCode() == Aws::S3::S3Errors::NO_SUCH_KEY)
return;
throw;
}
catch (const S3Exception & e)
{
/// It is ok if a directory was removed just now.
/// We support attaching a filesystem that is concurrently modified by someone else.
if (e.getS3ErrorCode() == Aws::S3::S3Errors::NO_SUCH_KEY)
return;
throw;
}
#endif
catch (...)
{
throw;
}
catch (...)
{
throw;
}
chassert(remote_path.has_parent_path());
std::pair<MetadataStorageFromPlainObjectStorage::PathMap::iterator, bool> res;
{
std::lock_guard lock(mutex);
res = result.emplace(local_path, remote_path.parent_path());
}
chassert(remote_metadata_path.has_parent_path());
chassert(remote_metadata_path.string().starts_with(metadata_key_prefix));
auto suffix = remote_metadata_path.string().substr(metadata_key_prefix.size());
auto remote_path = std::filesystem::path(std::move(suffix));
std::pair<Map::iterator, bool> res;
{
std::lock_guard lock(result->mutex);
res = result->map.emplace(std::filesystem::path(local_path).parent_path(), remote_path.parent_path());
}
/// This can happen if table replication is enabled, then the same local path is written
/// in `prefix.path` of each replica.
/// TODO: should replicated tables (e.g., RMT) be explicitly disallowed?
if (!res.second)
LOG_WARNING(
log,
"The local path '{}' is already mapped to a remote path '{}', ignoring: '{}'",
local_path,
res.first->second,
remote_path.parent_path().string());
});
/// This can happen if table replication is enabled, then the same local path is written
/// in `prefix.path` of each replica.
/// TODO: should replicated tables (e.g., RMT) be explicitly disallowed?
if (!res.second)
LOG_WARNING(
log,
"The local path '{}' is already mapped to a remote path '{}', ignoring: '{}'",
local_path,
res.first->second,
remote_path.parent_path().string());
});
}
runner.waitForAllToFinishAndRethrowFirstError();
LOG_DEBUG(log, "Loaded metadata for {} files, found {} directories", num_files, result.size());
{
SharedLockGuard lock(result->mutex);
LOG_DEBUG(log, "Loaded metadata for {} files, found {} directories", num_files, result->map.size());
auto metric = object_storage->getMetadataStorageMetrics().directory_map_size;
CurrentMetrics::add(metric, result.size());
auto metric = object_storage->getMetadataStorageMetrics().directory_map_size;
CurrentMetrics::add(metric, result->map.size());
}
return result;
}
std::vector<std::string> getDirectChildrenOnRewritableDisk(
void getDirectChildrenOnDiskImpl(
const std::string & storage_key,
const RelativePathsWithMetadata & remote_paths,
const std::string & local_path,
const MetadataStorageFromPlainObjectStorage::PathMap & local_path_prefixes,
SharedMutex & shared_mutex)
const InMemoryPathMap & path_map,
std::unordered_set<std::string> & result)
{
using PathMap = MetadataStorageFromPlainObjectStorage::PathMap;
std::unordered_set<std::string> duplicates_filter;
/// Map remote paths into local subdirectories.
std::unordered_map<PathMap::mapped_type, PathMap::key_type> remote_to_local_subdir;
/// Directories are retrieved from the in-memory path map.
{
std::shared_lock lock(shared_mutex);
auto end_it = local_path_prefixes.end();
SharedLockGuard lock(path_map.mutex);
const auto & local_path_prefixes = path_map.map;
const auto end_it = local_path_prefixes.end();
for (auto it = local_path_prefixes.lower_bound(local_path); it != end_it; ++it)
{
const auto & [k, v] = std::make_tuple(it->first.string(), it->second);
const auto & [k, _] = std::make_tuple(it->first.string(), it->second);
if (!k.starts_with(local_path))
break;
auto slash_num = count(k.begin() + local_path.size(), k.end(), '/');
if (slash_num != 1)
continue;
/// The local_path_prefixes comparator ensures that the paths with the smallest number of
/// hops from the local_path are iterated first. The paths do not end with '/', hence
/// break the loop if the number of slashes is greater than 0.
if (slash_num != 0)
break;
chassert(k.back() == '/');
remote_to_local_subdir.emplace(v, std::string(k.begin() + local_path.size(), k.end() - 1));
result.emplace(std::string(k.begin() + local_path.size(), k.end()) + "/");
}
}
/// Files.
auto skip_list = std::set<std::string>{PREFIX_PATH_FILE_NAME};
for (const auto & elem : remote_paths)
{
@ -149,22 +173,9 @@ std::vector<std::string> getDirectChildrenOnRewritableDisk(
/// File names.
auto filename = path.substr(child_pos);
if (!skip_list.contains(filename))
duplicates_filter.emplace(std::move(filename));
}
else
{
/// Subdirectories.
auto it = remote_to_local_subdir.find(path.substr(0, slash_pos));
/// Mapped subdirectories.
if (it != remote_to_local_subdir.end())
duplicates_filter.emplace(it->second);
/// The remote subdirectory name is the same as the local subdirectory.
else
duplicates_filter.emplace(path.substr(child_pos, slash_pos - child_pos));
result.emplace(std::move(filename));
}
}
return std::vector<std::string>(std::make_move_iterator(duplicates_filter.begin()), std::make_move_iterator(duplicates_filter.end()));
}
}
@ -172,7 +183,8 @@ std::vector<std::string> getDirectChildrenOnRewritableDisk(
MetadataStorageFromPlainRewritableObjectStorage::MetadataStorageFromPlainRewritableObjectStorage(
ObjectStoragePtr object_storage_, String storage_path_prefix_)
: MetadataStorageFromPlainObjectStorage(object_storage_, storage_path_prefix_)
, path_map(std::make_shared<PathMap>(loadPathPrefixMap(object_storage->getCommonKeyPrefix(), object_storage)))
, metadata_key_prefix(DB::getMetadataKeyPrefix(object_storage))
, path_map(loadPathPrefixMap(metadata_key_prefix, object_storage))
{
if (object_storage->isWriteOnce())
throw Exception(
@ -180,20 +192,85 @@ MetadataStorageFromPlainRewritableObjectStorage::MetadataStorageFromPlainRewrita
"MetadataStorageFromPlainRewritableObjectStorage is not compatible with write-once storage '{}'",
object_storage->getName());
auto keys_gen = std::make_shared<CommonPathPrefixKeyGenerator>(object_storage->getCommonKeyPrefix(), metadata_mutex, path_map);
object_storage->setKeysGenerator(keys_gen);
if (useSeparateLayoutForMetadata())
{
/// Use flat directory structure if the metadata is stored separately from the table data.
auto keys_gen = std::make_shared<FlatDirectoryStructureKeyGenerator>(object_storage->getCommonKeyPrefix(), path_map);
object_storage->setKeysGenerator(keys_gen);
}
else
{
auto keys_gen = std::make_shared<CommonPathPrefixKeyGenerator>(object_storage->getCommonKeyPrefix(), path_map);
object_storage->setKeysGenerator(keys_gen);
}
}
MetadataStorageFromPlainRewritableObjectStorage::~MetadataStorageFromPlainRewritableObjectStorage()
{
auto metric = object_storage->getMetadataStorageMetrics().directory_map_size;
CurrentMetrics::sub(metric, path_map->size());
CurrentMetrics::sub(metric, path_map->map.size());
}
std::vector<std::string> MetadataStorageFromPlainRewritableObjectStorage::getDirectChildrenOnDisk(
const std::string & storage_key, const RelativePathsWithMetadata & remote_paths, const std::string & local_path) const
bool MetadataStorageFromPlainRewritableObjectStorage::exists(const std::string & path) const
{
return getDirectChildrenOnRewritableDisk(storage_key, remote_paths, local_path, *getPathMap(), metadata_mutex);
if (MetadataStorageFromPlainObjectStorage::exists(path))
return true;
if (useSeparateLayoutForMetadata())
{
auto key_prefix = object_storage->generateObjectKeyForPath(path, getMetadataKeyPrefix()).serialize();
return object_storage->existsOrHasAnyChild(key_prefix);
}
return false;
}
bool MetadataStorageFromPlainRewritableObjectStorage::isDirectory(const std::string & path) const
{
if (useSeparateLayoutForMetadata())
{
auto directory = std::filesystem::path(object_storage->generateObjectKeyForPath(path, getMetadataKeyPrefix()).serialize()) / "";
return object_storage->existsOrHasAnyChild(directory);
}
else
return MetadataStorageFromPlainObjectStorage::isDirectory(path);
}
std::vector<std::string> MetadataStorageFromPlainRewritableObjectStorage::listDirectory(const std::string & path) const
{
auto key_prefix = object_storage->generateObjectKeyForPath(path, "" /* key_prefix */).serialize();
RelativePathsWithMetadata files;
auto abs_key = std::filesystem::path(object_storage->getCommonKeyPrefix()) / key_prefix / "";
object_storage->listObjects(abs_key, files, 0);
std::unordered_set<std::string> directories;
getDirectChildrenOnDisk(abs_key, files, std::filesystem::path(path) / "", directories);
/// List empty directories, which are identified by their `prefix.path` metadata files. This is required, e.g., to remove
/// metadata along with regular files.
if (useSeparateLayoutForMetadata())
{
auto metadata_key = std::filesystem::path(getMetadataKeyPrefix()) / key_prefix / "";
RelativePathsWithMetadata metadata_files;
object_storage->listObjects(metadata_key, metadata_files, 0);
getDirectChildrenOnDisk(metadata_key, metadata_files, std::filesystem::path(path) / "", directories);
}
return std::vector<std::string>(std::make_move_iterator(directories.begin()), std::make_move_iterator(directories.end()));
}
void MetadataStorageFromPlainRewritableObjectStorage::getDirectChildrenOnDisk(
const std::string & storage_key,
const RelativePathsWithMetadata & remote_paths,
const std::string & local_path,
std::unordered_set<std::string> & result) const
{
getDirectChildrenOnDiskImpl(storage_key, remote_paths, local_path, *getPathMap(), result);
}
bool MetadataStorageFromPlainRewritableObjectStorage::useSeparateLayoutForMetadata() const
{
return getMetadataKeyPrefix() != object_storage->getCommonKeyPrefix();
}
}

View File

@ -3,6 +3,7 @@
#include <Disks/ObjectStorages/MetadataStorageFromPlainObjectStorage.h>
#include <memory>
#include <unordered_set>
namespace DB
@ -11,18 +12,29 @@ namespace DB
class MetadataStorageFromPlainRewritableObjectStorage final : public MetadataStorageFromPlainObjectStorage
{
private:
std::shared_ptr<PathMap> path_map;
const std::string metadata_key_prefix;
std::shared_ptr<InMemoryPathMap> path_map;
public:
MetadataStorageFromPlainRewritableObjectStorage(ObjectStoragePtr object_storage_, String storage_path_prefix_);
~MetadataStorageFromPlainRewritableObjectStorage() override;
MetadataStorageType getType() const override { return MetadataStorageType::PlainRewritable; }
bool exists(const std::string & path) const override;
bool isDirectory(const std::string & path) const override;
std::vector<std::string> listDirectory(const std::string & path) const override;
protected:
std::shared_ptr<PathMap> getPathMap() const override { return path_map; }
std::vector<std::string> getDirectChildrenOnDisk(
const std::string & storage_key, const RelativePathsWithMetadata & remote_paths, const std::string & local_path) const override;
std::string getMetadataKeyPrefix() const override { return metadata_key_prefix; }
std::shared_ptr<InMemoryPathMap> getPathMap() const override { return path_map; }
void getDirectChildrenOnDisk(
const std::string & storage_key,
const RelativePathsWithMetadata & remote_paths,
const std::string & local_path,
std::unordered_set<std::string> & result) const;
private:
bool useSeparateLayoutForMetadata() const;
};
}

View File

@ -26,7 +26,7 @@ public:
bool isPlain() const override { return true; }
ObjectStorageKey generateObjectKeyForPath(const std::string & path) const override
ObjectStorageKey generateObjectKeyForPath(const std::string & path, const std::optional<std::string> & /* key_prefix */) const override
{
return ObjectStorageKey::createAsRelative(BaseObjectStorage::getCommonKeyPrefix(), path);
}

View File

@ -1,5 +1,7 @@
#pragma once
#include <optional>
#include <string>
#include <Disks/ObjectStorages/IObjectStorage.h>
#include <Common/ObjectStorageKeyGenerator.h>
#include "CommonPathPrefixKeyGenerator.h"
@ -33,9 +35,10 @@ public:
bool isPlain() const override { return true; }
ObjectStorageKey generateObjectKeyForPath(const std::string & path) const override;
ObjectStorageKey generateObjectKeyForPath(const std::string & path, const std::optional<std::string> & key_prefix) const override;
ObjectStorageKey generateObjectKeyPrefixForDirectoryPath(const std::string & path) const override;
ObjectStorageKey
generateObjectKeyPrefixForDirectoryPath(const std::string & path, const std::optional<std::string> & key_prefix) const override;
void setKeysGenerator(ObjectStorageKeysGeneratorPtr gen) override { key_generator = gen; }
@ -46,20 +49,22 @@ private:
template <typename BaseObjectStorage>
ObjectStorageKey PlainRewritableObjectStorage<BaseObjectStorage>::generateObjectKeyForPath(const std::string & path) const
ObjectStorageKey PlainRewritableObjectStorage<BaseObjectStorage>::generateObjectKeyForPath(
const std::string & path, const std::optional<std::string> & key_prefix) const
{
if (!key_generator)
throw Exception(ErrorCodes::LOGICAL_ERROR, "Key generator is not set");
return key_generator->generate(path, /* is_directory */ false);
return key_generator->generate(path, /* is_directory */ false, key_prefix);
}
template <typename BaseObjectStorage>
ObjectStorageKey PlainRewritableObjectStorage<BaseObjectStorage>::generateObjectKeyPrefixForDirectoryPath(const std::string & path) const
ObjectStorageKey PlainRewritableObjectStorage<BaseObjectStorage>::generateObjectKeyPrefixForDirectoryPath(
const std::string & path, const std::optional<std::string> & key_prefix) const
{
if (!key_generator)
throw Exception(ErrorCodes::LOGICAL_ERROR, "Key generator is not set");
return key_generator->generate(path, /* is_directory */ true);
return key_generator->generate(path, /* is_directory */ true, key_prefix);
}
}

View File

@ -79,7 +79,7 @@ bool checkBatchRemove(S3ObjectStorage & storage)
/// We are using generateObjectKeyForPath(), which returns a random object key.
/// That generated key is placed in the right directory, where we should have write access.
const String path = fmt::format("clickhouse_remove_objects_capability_{}", getServerUUID());
const auto key = storage.generateObjectKeyForPath(path);
const auto key = storage.generateObjectKeyForPath(path, {} /* key_prefix */);
StoredObject object(key.serialize(), path);
try
{

View File

@ -624,12 +624,12 @@ std::unique_ptr<IObjectStorage> S3ObjectStorage::cloneObjectStorage(
std::move(new_client), std::move(new_s3_settings), new_uri, s3_capabilities, key_generator, disk_name);
}
ObjectStorageKey S3ObjectStorage::generateObjectKeyForPath(const std::string & path) const
ObjectStorageKey S3ObjectStorage::generateObjectKeyForPath(const std::string & path, const std::optional<std::string> & key_prefix) const
{
if (!key_generator)
throw Exception(ErrorCodes::LOGICAL_ERROR, "Key generator is not set");
return key_generator->generate(path, /* is_directory */ false);
return key_generator->generate(path, /* is_directory */ false, key_prefix);
}
std::shared_ptr<const S3::Client> S3ObjectStorage::getS3StorageClient()

View File

@ -164,7 +164,7 @@ public:
bool supportParallelWrite() const override { return true; }
ObjectStorageKey generateObjectKeyForPath(const std::string & path) const override;
ObjectStorageKey generateObjectKeyForPath(const std::string & path, const std::optional<std::string> & key_prefix) const override;
bool isReadOnly() const override { return s3_settings.get()->read_only; }

View File

@ -82,7 +82,7 @@ public:
const std::string & config_prefix,
ContextPtr context) override;
ObjectStorageKey generateObjectKeyForPath(const std::string & path) const override
ObjectStorageKey generateObjectKeyForPath(const std::string & path, const std::optional<std::string> & /* key_prefix */) const override
{
return ObjectStorageKey::createAsRelative(path);
}

View File

@ -46,7 +46,7 @@ public:
auto column_source_node = column_node->getColumnSource();
auto column_source_node_type = column_source_node->getNodeType();
if (column_source_node_type == QueryTreeNodeType::LAMBDA)
if (column_source_node_type == QueryTreeNodeType::LAMBDA || column_source_node_type == QueryTreeNodeType::INTERPOLATE)
return;
/// JOIN using expression

View File

@ -744,6 +744,8 @@ void addWithFillStepIfNeeded(QueryPlan & query_plan,
}
else
{
ActionsDAG rename_dag;
for (auto & interpolate_node : interpolate_list_nodes)
{
auto & interpolate_node_typed = interpolate_node->as<InterpolateNode &>();
@ -772,8 +774,28 @@ void addWithFillStepIfNeeded(QueryPlan & query_plan,
const auto * alias_node = &interpolate_actions_dag.addAlias(*interpolate_expression, expression_to_interpolate_name);
interpolate_actions_dag.getOutputs().push_back(alias_node);
/// Here we fix INTERPOLATE by a constant expression.
/// Example from 02336_sort_optimization_with_fill:
///
/// SELECT 5 AS x, 'Hello' AS s ORDER BY x WITH FILL FROM 1 TO 10 INTERPOLATE (s AS s||'A')
///
/// For this query, INTERPOLATE_EXPRESSION would be: s AS concat(s, 'A'),
/// so that interpolate_actions_dag would have INPUT `s`.
///
/// However, INPUT `s` does not exist. Instead, we have a constant with execution name 'Hello'_String.
/// To fix this, we prepend a rename: 'Hello'_String -> s
if (const auto * constant_node = interpolate_node_typed.getExpression()->as<const ConstantNode>())
{
const auto * node = &rename_dag.addInput(alias_node->result_name, alias_node->result_type);
node = &rename_dag.addAlias(*node, interpolate_node_typed.getExpressionName());
rename_dag.getOutputs().push_back(node);
}
}
if (!rename_dag.getOutputs().empty())
interpolate_actions_dag = ActionsDAG::merge(std::move(rename_dag), std::move(interpolate_actions_dag));
interpolate_actions_dag.removeUnusedActions();
}

View File

@ -491,7 +491,16 @@ public:
{
auto it = node_name_to_node.find(node_name);
if (it != node_name_to_node.end())
return it->second;
{
/// It is possible that the ActionsDAG already has an input with the same name as the constant.
/// In this case, prefer the constant to the input.
/// Constants affect the function return type, which should be consistent with the QueryTree.
/// Query example:
/// SELECT materialize(toLowCardinality('b')) || 'a' FROM remote('127.0.0.{1,2}', system, one) GROUP BY 'a'
bool materialized_input = it->second->type == ActionsDAG::ActionType::INPUT && !it->second->column;
if (!materialized_input)
return it->second;
}
const auto * node = &actions_dag.addColumn(column);
node_name_to_node[node->result_name] = node;

View File

@ -462,6 +462,9 @@ SortAnalysisResult analyzeSort(const QueryNode & query_node,
for (auto & interpolate_node : interpolate_list_node.getNodes())
{
auto & interpolate_node_typed = interpolate_node->as<InterpolateNode &>();
if (interpolate_node_typed.getExpression()->getNodeType() == QueryTreeNodeType::CONSTANT)
continue;
interpolate_actions_visitor.visit(interpolate_actions_dag, interpolate_node_typed.getInterpolateExpression());
}

View File

@ -2,6 +2,7 @@
#include <Processors/Transforms/FillingTransform.h>
#include <QueryPipeline/QueryPipelineBuilder.h>
#include <IO/Operators.h>
#include <Interpreters/ExpressionActions.h>
#include <Common/JSONBuilder.h>
namespace DB
@ -58,14 +59,25 @@ void FillingStep::transformPipeline(QueryPipelineBuilder & pipeline, const Build
void FillingStep::describeActions(FormatSettings & settings) const
{
settings.out << String(settings.offset, ' ');
String prefix(settings.offset, settings.indent_char);
settings.out << prefix;
dumpSortDescription(sort_description, settings.out);
settings.out << '\n';
if (interpolate_description)
{
auto expression = std::make_shared<ExpressionActions>(interpolate_description->actions.clone());
expression->describeActions(settings.out, prefix);
}
}
void FillingStep::describeActions(JSONBuilder::JSONMap & map) const
{
map.add("Sort Description", explainSortDescription(sort_description));
if (interpolate_description)
{
auto expression = std::make_shared<ExpressionActions>(interpolate_description->actions.clone());
map.add("Expression", expression->toTree());
}
}
void FillingStep::updateOutputStream()
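The effect of this change shows up in EXPLAIN output; a hedged illustration (the query is made up for demonstration, and the exact text of the printed actions depends on the build):

```sql
-- With an INTERPOLATE clause present, the Filling step in the plan now prints the
-- interpolation expression actions in addition to the sort description.
EXPLAIN PLAN actions = 1
SELECT number AS n, toString(number) AS s
FROM numbers(5)
ORDER BY n WITH FILL FROM 0 TO 10
INTERPOLATE (s AS s || '.');
```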


@ -31,6 +31,8 @@ ColumnsDescription StorageSystemClusters::getColumnsDescription()
{"database_shard_name", std::make_shared<DataTypeString>(), "The name of the `Replicated` database shard (for clusters that belong to a `Replicated` database)."},
{"database_replica_name", std::make_shared<DataTypeString>(), "The name of the `Replicated` database replica (for clusters that belong to a `Replicated` database)."},
{"is_active", std::make_shared<DataTypeNullable>(std::make_shared<DataTypeUInt8>()), "The status of the Replicated database replica (for clusters that belong to a Replicated database): 1 means 'replica is online', 0 means 'replica is offline', NULL means 'unknown'."},
{"replication_lag", std::make_shared<DataTypeNullable>(std::make_shared<DataTypeUInt32>()), "The replication lag of the `Replicated` database replica (for clusters that belong to a Replicated database)."},
{"recovery_time", std::make_shared<DataTypeNullable>(std::make_shared<DataTypeUInt64>()), "The recovery time of the `Replicated` database replica (for clusters that belong to a Replicated database), in milliseconds."},
};
description.setAliases({
@ -46,31 +48,30 @@ void StorageSystemClusters::fillData(MutableColumns & res_columns, ContextPtr co
writeCluster(res_columns, name_and_cluster, {});
const auto databases = DatabaseCatalog::instance().getDatabases();
for (const auto & name_and_database : databases)
for (const auto & [database_name, database] : databases)
{
if (const auto * replicated = typeid_cast<const DatabaseReplicated *>(name_and_database.second.get()))
if (const auto * replicated = typeid_cast<const DatabaseReplicated *>(database.get()))
{
if (auto database_cluster = replicated->tryGetCluster())
writeCluster(res_columns, {name_and_database.first, database_cluster},
replicated->tryGetAreReplicasActive(database_cluster));
writeCluster(res_columns, {database_name, database_cluster},
replicated->tryGetReplicasInfo(database_cluster));
if (auto database_cluster = replicated->tryGetAllGroupsCluster())
writeCluster(res_columns, {DatabaseReplicated::ALL_GROUPS_CLUSTER_PREFIX + name_and_database.first, database_cluster},
replicated->tryGetAreReplicasActive(database_cluster));
writeCluster(res_columns, {DatabaseReplicated::ALL_GROUPS_CLUSTER_PREFIX + database_name, database_cluster},
replicated->tryGetReplicasInfo(database_cluster));
}
}
}
void StorageSystemClusters::writeCluster(MutableColumns & res_columns, const NameAndCluster & name_and_cluster,
const std::vector<UInt8> & is_active)
const ReplicasInfo & replicas_info)
{
const String & cluster_name = name_and_cluster.first;
const ClusterPtr & cluster = name_and_cluster.second;
const auto & shards_info = cluster->getShardsInfo();
const auto & addresses_with_failover = cluster->getShardsAddresses();
size_t replica_idx = 0;
size_t global_replica_idx = 0;
for (size_t shard_index = 0; shard_index < shards_info.size(); ++shard_index)
{
const auto & shard_info = shards_info[shard_index];
@ -99,10 +100,24 @@ void StorageSystemClusters::writeCluster(MutableColumns & res_columns, const Nam
res_columns[i++]->insert(pool_status[replica_index].estimated_recovery_time.count());
res_columns[i++]->insert(address.database_shard_name);
res_columns[i++]->insert(address.database_replica_name);
if (is_active.empty())
if (replicas_info.empty())
{
res_columns[i++]->insertDefault();
res_columns[i++]->insertDefault();
res_columns[i++]->insertDefault();
}
else
res_columns[i++]->insert(is_active[replica_idx++]);
{
const auto & replica_info = replicas_info[global_replica_idx];
res_columns[i++]->insert(replica_info.is_active);
res_columns[i++]->insert(replica_info.replication_lag);
if (replica_info.recovery_time != 0)
res_columns[i++]->insert(replica_info.recovery_time);
else
res_columns[i++]->insertDefault();
}
++global_replica_idx;
}
}
}


@ -1,10 +1,10 @@
#pragma once
#include <Databases/DatabaseReplicated.h>
#include <DataTypes/DataTypeString.h>
#include <DataTypes/DataTypesNumber.h>
#include <Storages/System/IStorageSystemOneBlock.h>
namespace DB
{
@ -27,7 +27,7 @@ protected:
using NameAndCluster = std::pair<String, std::shared_ptr<Cluster>>;
void fillData(MutableColumns & res_columns, ContextPtr context, const ActionsDAG::Node *, std::vector<UInt8>) const override;
static void writeCluster(MutableColumns & res_columns, const NameAndCluster & name_and_cluster, const std::vector<UInt8> & is_active);
static void writeCluster(MutableColumns & res_columns, const NameAndCluster & name_and_cluster, const ReplicasInfo & replicas_info);
};
}


@ -1588,6 +1588,7 @@
"groupBitmapXorResample"
"groupBitmapXorSimpleState"
"groupBitmapXorState"
"groupConcat"
"groupUniqArray"
"groupUniqArrayArgMax"
"groupUniqArrayArgMin"


@ -188,6 +188,14 @@ docker build -t clickhouse/integration-test .
```
The helper container used by the `runner` script is in `docker/test/integration/runner/Dockerfile`.
It can be rebuilt with
```
cd docker/test/integration/runner
docker build -t clickhouse/integration-test-runner .
```
If your Docker configuration doesn't allow access to the public internet with the `docker build` command, you may also need to add the option `--network=host` when rebuilding the image for local integration testing.
### Adding new tests


@ -0,0 +1,17 @@
<clickhouse>
<!-- Native interface with TLS.
You have to configure certificate to enable this interface.
See the openSSL section below.
-->
<tcp_port_secure>9440</tcp_port_secure>
<!-- Used with https_port and tcp_port_secure. Full ssl options list: https://github.com/ClickHouse-Extras/poco/blob/master/NetSSL_OpenSSL/include/Poco/Net/SSLManager.h#L71 -->
<openSSL replace="replace">
<server> <!-- Used for https server AND secure tcp port -->
<certificateFile>/etc/clickhouse-server/config.d/self-cert.pem</certificateFile>
<privateKeyFile>/etc/clickhouse-server/config.d/self-key.pem</privateKeyFile>
<caConfig>/etc/clickhouse-server/config.d/ca-cert.pem</caConfig>
<verificationMode>strict</verificationMode>
</server>
</openSSL>
</clickhouse>


@ -17,6 +17,19 @@ instance = cluster.add_instance(
"certs/self-cert.pem",
"certs/ca-cert.pem",
],
with_zookeeper=False,
)
node1 = cluster.add_instance(
"node1",
main_configs=[
"configs/ssl_config_strict.xml",
"certs/self-key.pem",
"certs/self-cert.pem",
"certs/ca-cert.pem",
],
with_zookeeper=False,
)
@ -90,3 +103,25 @@ def test_connection_accept():
)
== "1\n"
)
def test_strict_reject():
with pytest.raises(Exception) as err:
execute_query_native(node1, "SELECT 1", "<clickhouse></clickhouse>")
assert "certificate verify failed" in str(err.value)
def test_strict_reject_with_config():
with pytest.raises(Exception) as err:
execute_query_native(node1, "SELECT 1", config_accept)
assert "alert certificate required" in str(err.value)
def test_strict_connection_reject():
with pytest.raises(Exception) as err:
execute_query_native(
node1,
"SELECT 1",
config_connection_accept.format(ip_address=f"{instance.ip_address}"),
)
assert "certificate verify failed" in str(err.value)


@ -0,0 +1,41 @@
<clickhouse>
<tcp_port>9000</tcp_port>
<profiles>
<default>
</default>
</profiles>
<users>
<default>
<profile>default</profile>
<no_password></no_password>
</default>
</users>
<keeper_server>
<tcp_port>2181</tcp_port>
<server_id>1</server_id>
<log_storage_path>/var/lib/clickhouse/coordination/log</log_storage_path>
<snapshot_storage_path>/var/lib/clickhouse/coordination/snapshots</snapshot_storage_path>
<coordination_settings>
<session_timeout_ms>20000</session_timeout_ms>
</coordination_settings>
<raft_configuration>
<server>
<id>1</id>
<hostname>localhost</hostname>
<port>9444</port>
</server>
</raft_configuration>
</keeper_server>
<zookeeper>
<node index="1">
<host>localhost</host>
<port>2181</port>
</node>
<session_timeout_ms>20000</session_timeout_ms>
</zookeeper>
</clickhouse>


@ -0,0 +1,61 @@
import pytest
from helpers.cluster import ClickHouseCluster
cluster = ClickHouseCluster(__file__)
node = cluster.add_instance(
"node",
main_configs=["configs/config.xml"],
stay_alive=True,
)
@pytest.fixture(scope="module")
def start_cluster():
try:
cluster.start()
yield cluster
finally:
cluster.shutdown()
def test_recovery_time_metric(start_cluster):
node.query(
"""
DROP DATABASE IF EXISTS rdb;
CREATE DATABASE rdb
ENGINE = Replicated('/test/test_recovery_time_metric', 'shard1', 'replica1')
"""
)
node.query(
"""
DROP TABLE IF EXISTS rdb.t;
CREATE TABLE rdb.t
(
`x` UInt32
)
ENGINE = MergeTree
ORDER BY x
"""
)
node.exec_in_container(["bash", "-c", "rm /var/lib/clickhouse/metadata/rdb/t.sql"])
node.restart_clickhouse()
ret = int(
node.query(
"""
SELECT recovery_time
FROM system.clusters
WHERE cluster = 'rdb'
"""
).strip()
)
assert ret > 0
node.query(
"""
DROP DATABASE rdb
"""
)


@ -1,6 +1,6 @@
<clickhouse>
<background_schedule_pool_size>1</background_schedule_pool_size>
<merge_tree>
<initialization_retry_period>5</initialization_retry_period>
<initialization_retry_period>3</initialization_retry_period>
</merge_tree>
</clickhouse>


@ -80,4 +80,8 @@ def test_startup_with_small_bg_pool_partitioned(started_cluster):
assert_values()
# check that we activate it in the end
node.query_with_retry("INSERT INTO replicated_table_partitioned VALUES(20, 30)")
node.query_with_retry(
"INSERT INTO replicated_table_partitioned VALUES(20, 30)",
retry_count=20,
sleep_time=3,
)


@ -139,6 +139,19 @@ def test(storage_policy):
== insert_values_arr[i]
)
metadata_it = cluster.minio_client.list_objects(
cluster.minio_bucket, "data/", recursive=True
)
metadata_count = 0
for obj in list(metadata_it):
if "/__meta/" in obj.object_name:
assert obj.object_name.endswith("/prefix.path")
metadata_count += 1
else:
assert not obj.object_name.endswith("/prefix.path")
assert metadata_count > 0
for i in range(NUM_WORKERS):
node = cluster.instances[f"node{i + 1}"]
node.query("DROP TABLE IF EXISTS test SYNC")


@ -1,11 +1,8 @@
<test>
<query>SELECT arrayReduce('count', range(100000000))</query>
<query>SELECT arrayReduce('sum', range(100000000))</query>
<query>SELECT arrayReduceInRanges('count', [(1, 100000000)], range(100000000))</query>
<query>SELECT arrayReduceInRanges('sum', [(1, 100000000)], range(100000000))</query>
<query>SELECT arrayReduceInRanges('count', arrayZip(range(1000000), range(1000000)), range(100000000))[123456]</query>
<query>SELECT arrayReduceInRanges('sum', arrayZip(range(1000000), range(1000000)), range(100000000))[123456]</query>
<query>SELECT arrayReduce('count', range(1000000)) FROM numbers_mt(500000000) format Null</query>
<query>SELECT arrayReduce('sum', range(1000000)) FROM numbers_mt(500000000) format Null</query>
<query>SELECT arrayReduceInRanges('count', [(1, 1000000)], range(1000000)) FROM numbers_mt(500000000) format Null</query>
<query>SELECT arrayReduceInRanges('sum', [(1, 1000000)], range(1000000)) FROM numbers_mt(500000000) format Null</query>
<query>SELECT arrayReduceInRanges('count', arrayZip(range(1000000), range(1000000)), range(1000000))[123456]</query>
<query>SELECT arrayReduceInRanges('sum', arrayZip(range(1000000), range(1000000)), range(1000000))[123456]</query>
</test>


@ -10,8 +10,8 @@
PARTITION BY toYYYYMM(d) ORDER BY key
</create_query>
<fill_query>INSERT INTO optimized_select_final SELECT toDate('2000-01-01'), 2*number, randomPrintableASCII(1000) FROM numbers(2500000)</fill_query>
<fill_query>INSERT INTO optimized_select_final SELECT toDate('2020-01-01'), 2*number+1, randomPrintableASCII(1000) FROM numbers(2500000)</fill_query>
<fill_query>INSERT INTO optimized_select_final SELECT toDate('2000-01-01'), 2*number, randomPrintableASCII(1000) FROM numbers(1000000)</fill_query>
<fill_query>INSERT INTO optimized_select_final SELECT toDate('2020-01-01'), 2*number+1, randomPrintableASCII(1000) FROM numbers(1000000)</fill_query>
<query>SELECT * FROM optimized_select_final FINAL FORMAT Null SETTINGS max_threads = 8</query>
<query>SELECT * FROM optimized_select_final FINAL WHERE key % 10 = 0 FORMAT Null</query>


@ -5,15 +5,16 @@
ORDER BY (c1, c2)
SETTINGS min_rows_for_wide_part = 1000000000 AS
SELECT *
FROM generateRandom('c1 UInt32, c2 UInt64, s1 String, arr1 Array(UInt32), c3 UInt64, s2 String', 0, 30, 30)
FROM generateRandom('c1 UInt32, c2 UInt64, s1 String, arr1 Array(UInt32), c3 UInt64, s2 String', 0, 5, 6)
LIMIT 50000000
SETTINGS max_insert_threads = 8
</create_query>
<settings>
<max_threads>8</max_threads>
</settings>
<query short="1">SELECT count() FROM mt_comp_parts WHERE NOT ignore(c1)</query>
<query short="1">SELECT count() FROM mt_comp_parts WHERE NOT ignore(s1)</query>
<query>SELECT count() FROM mt_comp_parts WHERE NOT ignore(c2, s1, arr1, s2)</query>
<query>SELECT count() FROM mt_comp_parts WHERE NOT ignore(c1, s1, c3)</query>
<query>SELECT count() FROM mt_comp_parts WHERE NOT ignore(c1, c2, c3)</query>


@ -118,7 +118,7 @@ then
# far in the future and have unrelated test changes.
base=$(git -C right/ch merge-base pr origin/master)
git -C right/ch diff --name-only "$base" pr -- . | tee all-changed-files.txt
git -C right/ch diff --name-only "$base" pr -- tests/performance/*.xml | tee changed-test-definitions.txt
git -C right/ch diff --name-only --diff-filter=d "$base" pr -- tests/performance/*.xml | tee changed-test-definitions.txt
git -C right/ch diff --name-only "$base" pr -- :!tests/performance/*.xml :!docker/test/performance-comparison | tee other-changed-files.txt
fi


@ -345,6 +345,16 @@ for query_index in queries_to_run:
print(f"display-name\t{query_index}\t{tsv_escape(query_display_name)}")
for conn_index, c in enumerate(all_connections):
try:
c.execute("SYSTEM JEMALLOC PURGE")
print(f"purging jemalloc arenas\t{conn_index}\t{c.last_query.elapsed}")
except KeyboardInterrupt:
raise
except:
continue
# Prewarm: run once on both servers. Helps to bring the data into memory,
# precompile the queries, etc.
# A query might not run on the old server if it uses a function added in the


@ -1,11 +0,0 @@
<!-- https://github.com/ClickHouse/ClickHouse/issues/37900 -->
<test>
<create_query>create table views_max_insert_threads_null (a UInt64) Engine = Null</create_query>
<create_query>create materialized view views_max_insert_threads_mv Engine = Null AS select now() as ts, max(a) from views_max_insert_threads_null group by ts</create_query>
<query>insert into views_max_insert_threads_null select * from numbers_mt(3000000000) settings max_threads = 16, max_insert_threads=16</query>
<drop_query>drop table if exists views_max_insert_threads_null</drop_query>
<drop_query>drop table if exists views_max_insert_threads_mv</drop_query>
</test>


@ -8,13 +8,13 @@
40
41
0
41
2 42
2 42
43
0
43
11
11


@ -1,5 +1,4 @@
#!/usr/bin/env python3
# Tags: no-parallel
import os
import sys
@ -17,6 +16,7 @@ log = None
with client(name="client1>", log=log) as client1, client(
name="client2>", log=log
) as client2:
database_name = os.environ["CLICKHOUSE_DATABASE"]
client1.expect(prompt)
client2.expect(prompt)
@ -31,40 +31,38 @@ with client(name="client1>", log=log) as client1, client(
client2.send("SET allow_experimental_analyzer = 0")
client2.expect(prompt)
client1.send("CREATE DATABASE IF NOT EXISTS 01062_window_view_event_hop_watch_asc")
client1.send(f"DROP TABLE IF EXISTS {database_name}.mt")
client1.expect(prompt)
client1.send("DROP TABLE IF EXISTS 01062_window_view_event_hop_watch_asc.mt")
client1.expect(prompt)
client1.send("DROP TABLE IF EXISTS 01062_window_view_event_hop_watch_asc.wv SYNC")
client1.send(f"DROP TABLE IF EXISTS {database_name}.wv SYNC")
client1.expect(prompt)
client1.send(
"CREATE TABLE 01062_window_view_event_hop_watch_asc.mt(a Int32, timestamp DateTime) ENGINE=MergeTree ORDER BY tuple()"
f"CREATE TABLE {database_name}.mt(a Int32, timestamp DateTime) ENGINE=MergeTree ORDER BY tuple()"
)
client1.expect(prompt)
client1.send(
"CREATE WINDOW VIEW 01062_window_view_event_hop_watch_asc.wv ENGINE Memory WATERMARK=ASCENDING AS SELECT count(a) AS count, hopEnd(wid) AS w_end FROM 01062_window_view_event_hop_watch_asc.mt GROUP BY hop(timestamp, INTERVAL '2' SECOND, INTERVAL '3' SECOND, 'US/Samoa') AS wid"
f"CREATE WINDOW VIEW {database_name}.wv ENGINE Memory WATERMARK=ASCENDING AS SELECT count(a) AS count, hopEnd(wid) AS w_end FROM {database_name}.mt GROUP BY hop(timestamp, INTERVAL '2' SECOND, INTERVAL '3' SECOND, 'US/Samoa') AS wid"
)
client1.expect(prompt)
client1.send("WATCH 01062_window_view_event_hop_watch_asc.wv")
client1.send(f"WATCH {database_name}.wv")
client1.expect("Query id" + end_of_block)
client1.expect("Progress: 0.00 rows.*\\)")
client2.send(
"INSERT INTO 01062_window_view_event_hop_watch_asc.mt VALUES (1, toDateTime('1990/01/01 12:00:00', 'US/Samoa'));"
f"INSERT INTO {database_name}.mt VALUES (1, toDateTime('1990/01/01 12:00:00', 'US/Samoa'));"
)
client2.expect(prompt)
client2.send(
"INSERT INTO 01062_window_view_event_hop_watch_asc.mt VALUES (1, toDateTime('1990/01/01 12:00:05', 'US/Samoa'));"
f"INSERT INTO {database_name}.mt VALUES (1, toDateTime('1990/01/01 12:00:05', 'US/Samoa'));"
)
client2.expect(prompt)
client1.expect("1*" + end_of_block)
client2.send(
"INSERT INTO 01062_window_view_event_hop_watch_asc.mt VALUES (1, toDateTime('1990/01/01 12:00:06', 'US/Samoa'));"
f"INSERT INTO {database_name}.mt VALUES (1, toDateTime('1990/01/01 12:00:06', 'US/Samoa'));"
)
client2.expect(prompt)
client2.send(
"INSERT INTO 01062_window_view_event_hop_watch_asc.mt VALUES (1, toDateTime('1990/01/01 12:00:10', 'US/Samoa'));"
f"INSERT INTO {database_name}.mt VALUES (1, toDateTime('1990/01/01 12:00:10', 'US/Samoa'));"
)
client2.expect(prompt)
client1.expect("1" + end_of_block)
@ -77,9 +75,7 @@ with client(name="client1>", log=log) as client1, client(
if match.groups()[1]:
client1.send(client1.command)
client1.expect(prompt)
client1.send("DROP TABLE 01062_window_view_event_hop_watch_asc.wv SYNC")
client1.send(f"DROP TABLE {database_name}.wv SYNC")
client1.expect(prompt)
client1.send("DROP TABLE 01062_window_view_event_hop_watch_asc.mt")
client1.expect(prompt)
client1.send("DROP DATABASE IF EXISTS 01062_window_view_event_hop_watch_asc")
client1.send(f"DROP TABLE {database_name}.mt")
client1.expect(prompt)


@ -1,29 +1,31 @@
#!/usr/bin/env bash
# Tags: no-parallel, no-fasttest
# Tags: no-fasttest, no-parallel
# no-parallel: FIXME: Timing issues with INSERT + DETACH (https://github.com/ClickHouse/ClickHouse/pull/67610/files#r1700345054)
CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
# shellcheck source=../shell_config.sh
. "$CURDIR"/../shell_config.sh
$CLICKHOUSE_CLIENT -q "DROP DATABASE IF EXISTS test_01107"
$CLICKHOUSE_CLIENT -q "CREATE DATABASE test_01107 ENGINE=Atomic"
$CLICKHOUSE_CLIENT -q "CREATE TABLE test_01107.mt (n UInt64) ENGINE=MergeTree() ORDER BY tuple()"
NEW_DATABASE=test_01107_${CLICKHOUSE_DATABASE}
$CLICKHOUSE_CLIENT -q "DROP DATABASE IF EXISTS ${NEW_DATABASE}"
$CLICKHOUSE_CLIENT -q "CREATE DATABASE ${NEW_DATABASE} ENGINE=Atomic"
$CLICKHOUSE_CLIENT -q "CREATE TABLE ${NEW_DATABASE}.mt (n UInt64) ENGINE=MergeTree() ORDER BY tuple()"
$CLICKHOUSE_CLIENT --function_sleep_max_microseconds_per_block 60000000 -q "INSERT INTO test_01107.mt SELECT number + sleepEachRow(3) FROM numbers(5)" &
$CLICKHOUSE_CLIENT --function_sleep_max_microseconds_per_block 60000000 -q "INSERT INTO ${NEW_DATABASE}.mt SELECT number + sleepEachRow(3) FROM numbers(5)" &
sleep 1
$CLICKHOUSE_CLIENT -q "DETACH TABLE test_01107.mt" --database_atomic_wait_for_drop_and_detach_synchronously=0
$CLICKHOUSE_CLIENT -q "ATTACH TABLE test_01107.mt" --database_atomic_wait_for_drop_and_detach_synchronously=0 2>&1 | grep -F "Code: 57" > /dev/null && echo "OK"
$CLICKHOUSE_CLIENT -q "DETACH DATABASE test_01107" --database_atomic_wait_for_drop_and_detach_synchronously=0 2>&1 | grep -F "Code: 219" > /dev/null && echo "OK"
$CLICKHOUSE_CLIENT -q "DETACH TABLE ${NEW_DATABASE}.mt" --database_atomic_wait_for_drop_and_detach_synchronously=0
$CLICKHOUSE_CLIENT -q "ATTACH TABLE ${NEW_DATABASE}.mt" --database_atomic_wait_for_drop_and_detach_synchronously=0 2>&1 | grep -F "Code: 57" > /dev/null && echo "OK"
$CLICKHOUSE_CLIENT -q "DETACH DATABASE ${NEW_DATABASE}" --database_atomic_wait_for_drop_and_detach_synchronously=0 2>&1 | grep -F "Code: 219" > /dev/null && echo "OK"
wait
$CLICKHOUSE_CLIENT -q "ATTACH TABLE test_01107.mt"
$CLICKHOUSE_CLIENT -q "SELECT count(n), sum(n) FROM test_01107.mt"
$CLICKHOUSE_CLIENT -q "DETACH DATABASE test_01107" --database_atomic_wait_for_drop_and_detach_synchronously=0
$CLICKHOUSE_CLIENT -q "ATTACH DATABASE test_01107"
$CLICKHOUSE_CLIENT -q "SELECT count(n), sum(n) FROM test_01107.mt"
$CLICKHOUSE_CLIENT -q "ATTACH TABLE ${NEW_DATABASE}.mt"
$CLICKHOUSE_CLIENT -q "SELECT count(n), sum(n) FROM ${NEW_DATABASE}.mt"
$CLICKHOUSE_CLIENT -q "DETACH DATABASE ${NEW_DATABASE}" --database_atomic_wait_for_drop_and_detach_synchronously=0
$CLICKHOUSE_CLIENT -q "ATTACH DATABASE ${NEW_DATABASE}"
$CLICKHOUSE_CLIENT -q "SELECT count(n), sum(n) FROM ${NEW_DATABASE}.mt"
$CLICKHOUSE_CLIENT --function_sleep_max_microseconds_per_block 60000000 -q "INSERT INTO test_01107.mt SELECT number + sleepEachRow(1) FROM numbers(5)" && echo "end" &
$CLICKHOUSE_CLIENT --function_sleep_max_microseconds_per_block 60000000 -q "INSERT INTO ${NEW_DATABASE}.mt SELECT number + sleepEachRow(1) FROM numbers(5)" && echo "end" &
sleep 1
$CLICKHOUSE_CLIENT -q "DROP DATABASE test_01107" --database_atomic_wait_for_drop_and_detach_synchronously=0 && sleep 1 && echo "dropped"
$CLICKHOUSE_CLIENT -q "DROP DATABASE ${NEW_DATABASE}" --database_atomic_wait_for_drop_and_detach_synchronously=0 && sleep 1 && echo "dropped"
wait


@ -1,4 +1,4 @@
-- Tags: zookeeper, no-parallel
-- Tags: zookeeper
DROP TABLE IF EXISTS r_prop_table1;
DROP TABLE IF EXISTS r_prop_table2;


@ -1,4 +1,4 @@
-- Tags: no-random-merge-tree-settings, no-tsan, no-debug, no-object-storage
-- Tags: no-random-merge-tree-settings, no-random-settings, no-tsan, no-debug, no-object-storage, long
-- no-tsan: too slow
-- no-object-storage: for remote tables we use thread pool even when reading with one stream, so memory consumption is higher
@ -16,7 +16,7 @@ CREATE TABLE adaptive_table(
value String
) ENGINE MergeTree()
ORDER BY key
SETTINGS index_granularity_bytes=1048576,
SETTINGS index_granularity_bytes = 1048576,
min_bytes_for_wide_part = 0,
min_rows_for_wide_part = 0,
enable_vertical_merge_algorithm = 0;


@ -1,10 +1,4 @@
-- Tags: no-parallel
DROP DATABASE IF EXISTS database_for_range_dict;
CREATE DATABASE database_for_range_dict;
CREATE TABLE database_for_range_dict.date_table
CREATE TABLE date_table
(
CountryID UInt64,
StartDate Date,
@ -14,11 +8,11 @@ CREATE TABLE database_for_range_dict.date_table
ENGINE = MergeTree()
ORDER BY CountryID;
INSERT INTO database_for_range_dict.date_table VALUES(1, toDate('2019-05-05'), toDate('2019-05-20'), 0.33);
INSERT INTO database_for_range_dict.date_table VALUES(1, toDate('2019-05-21'), toDate('2019-05-30'), 0.42);
INSERT INTO database_for_range_dict.date_table VALUES(2, toDate('2019-05-21'), toDate('2019-05-30'), 0.46);
INSERT INTO date_table VALUES(1, toDate('2019-05-05'), toDate('2019-05-20'), 0.33);
INSERT INTO date_table VALUES(1, toDate('2019-05-21'), toDate('2019-05-30'), 0.42);
INSERT INTO date_table VALUES(2, toDate('2019-05-21'), toDate('2019-05-30'), 0.46);
CREATE DICTIONARY database_for_range_dict.range_dictionary
CREATE DICTIONARY range_dictionary
(
CountryID UInt64,
StartDate Date,
@ -26,7 +20,7 @@ CREATE DICTIONARY database_for_range_dict.range_dictionary
Tax Float64 DEFAULT 0.2
)
PRIMARY KEY CountryID
SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 'date_table' DB 'database_for_range_dict'))
SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 'date_table' DB currentDatabase()))
LIFETIME(MIN 1 MAX 1000)
LAYOUT(RANGE_HASHED())
RANGE(MIN StartDate MAX EndDate)
@ -35,30 +29,30 @@ SETTINGS(dictionary_use_async_executor=1, max_threads=8)
SELECT 'Dictionary not nullable';
SELECT 'dictGet';
SELECT dictGet('database_for_range_dict.range_dictionary', 'Tax', toUInt64(1), toDate('2019-05-15'));
SELECT dictGet('database_for_range_dict.range_dictionary', 'Tax', toUInt64(1), toDate('2019-05-29'));
SELECT dictGet('database_for_range_dict.range_dictionary', 'Tax', toUInt64(2), toDate('2019-05-29'));
SELECT dictGet('database_for_range_dict.range_dictionary', 'Tax', toUInt64(2), toDate('2019-05-31'));
SELECT dictGetOrDefault('database_for_range_dict.range_dictionary', 'Tax', toUInt64(2), toDate('2019-05-31'), 0.4);
SELECT dictGet('range_dictionary', 'Tax', toUInt64(1), toDate('2019-05-15'));
SELECT dictGet('range_dictionary', 'Tax', toUInt64(1), toDate('2019-05-29'));
SELECT dictGet('range_dictionary', 'Tax', toUInt64(2), toDate('2019-05-29'));
SELECT dictGet('range_dictionary', 'Tax', toUInt64(2), toDate('2019-05-31'));
SELECT dictGetOrDefault('range_dictionary', 'Tax', toUInt64(2), toDate('2019-05-31'), 0.4);
SELECT 'dictHas';
SELECT dictHas('database_for_range_dict.range_dictionary', toUInt64(1), toDate('2019-05-15'));
SELECT dictHas('database_for_range_dict.range_dictionary', toUInt64(1), toDate('2019-05-29'));
SELECT dictHas('database_for_range_dict.range_dictionary', toUInt64(2), toDate('2019-05-29'));
SELECT dictHas('database_for_range_dict.range_dictionary', toUInt64(2), toDate('2019-05-31'));
SELECT dictHas('range_dictionary', toUInt64(1), toDate('2019-05-15'));
SELECT dictHas('range_dictionary', toUInt64(1), toDate('2019-05-29'));
SELECT dictHas('range_dictionary', toUInt64(2), toDate('2019-05-29'));
SELECT dictHas('range_dictionary', toUInt64(2), toDate('2019-05-31'));
SELECT 'select columns from dictionary';
SELECT 'allColumns';
SELECT * FROM database_for_range_dict.range_dictionary ORDER BY CountryID, StartDate, EndDate;
SELECT * FROM range_dictionary ORDER BY CountryID, StartDate, EndDate;
SELECT 'noColumns';
SELECT 1 FROM database_for_range_dict.range_dictionary ORDER BY CountryID, StartDate, EndDate;
SELECT 1 FROM range_dictionary ORDER BY CountryID, StartDate, EndDate;
SELECT 'onlySpecificColumns';
SELECT CountryID, StartDate, Tax FROM database_for_range_dict.range_dictionary ORDER BY CountryID, StartDate, EndDate;
SELECT CountryID, StartDate, Tax FROM range_dictionary ORDER BY CountryID, StartDate, EndDate;
SELECT 'onlySpecificColumn';
SELECT Tax FROM database_for_range_dict.range_dictionary ORDER BY CountryID, StartDate, EndDate;
SELECT Tax FROM range_dictionary ORDER BY CountryID, StartDate, EndDate;
DROP DICTIONARY database_for_range_dict.range_dictionary;
DROP TABLE database_for_range_dict.date_table;
DROP DICTIONARY range_dictionary;
DROP TABLE date_table;
CREATE TABLE database_for_range_dict.date_table
CREATE TABLE date_table
(
CountryID UInt64,
StartDate Date,
@ -68,11 +62,11 @@ CREATE TABLE database_for_range_dict.date_table
ENGINE = MergeTree()
ORDER BY CountryID;
INSERT INTO database_for_range_dict.date_table VALUES(1, toDate('2019-05-05'), toDate('2019-05-20'), 0.33);
INSERT INTO database_for_range_dict.date_table VALUES(1, toDate('2019-05-21'), toDate('2019-05-30'), 0.42);
INSERT INTO database_for_range_dict.date_table VALUES(2, toDate('2019-05-21'), toDate('2019-05-30'), NULL);
INSERT INTO date_table VALUES(1, toDate('2019-05-05'), toDate('2019-05-20'), 0.33);
INSERT INTO date_table VALUES(1, toDate('2019-05-21'), toDate('2019-05-30'), 0.42);
INSERT INTO date_table VALUES(2, toDate('2019-05-21'), toDate('2019-05-30'), NULL);
CREATE DICTIONARY database_for_range_dict.range_dictionary_nullable
CREATE DICTIONARY range_dictionary_nullable
(
CountryID UInt64,
StartDate Date,
@ -80,35 +74,33 @@ CREATE DICTIONARY database_for_range_dict.range_dictionary_nullable
Tax Nullable(Float64) DEFAULT 0.2
)
PRIMARY KEY CountryID
SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 'date_table' DB 'database_for_range_dict'))
SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 'date_table' DB currentDatabase()))
LIFETIME(MIN 1 MAX 1000)
LAYOUT(RANGE_HASHED())
RANGE(MIN StartDate MAX EndDate);
SELECT 'Dictionary nullable';
SELECT 'dictGet';
SELECT dictGet('database_for_range_dict.range_dictionary_nullable', 'Tax', toUInt64(1), toDate('2019-05-15'));
SELECT dictGet('database_for_range_dict.range_dictionary_nullable', 'Tax', toUInt64(1), toDate('2019-05-29'));
SELECT dictGet('database_for_range_dict.range_dictionary_nullable', 'Tax', toUInt64(2), toDate('2019-05-29'));
SELECT dictGet('database_for_range_dict.range_dictionary_nullable', 'Tax', toUInt64(2), toDate('2019-05-31'));
SELECT dictGetOrDefault('database_for_range_dict.range_dictionary_nullable', 'Tax', toUInt64(2), toDate('2019-05-31'), 0.4);
SELECT dictGet('range_dictionary_nullable', 'Tax', toUInt64(1), toDate('2019-05-15'));
SELECT dictGet('range_dictionary_nullable', 'Tax', toUInt64(1), toDate('2019-05-29'));
SELECT dictGet('range_dictionary_nullable', 'Tax', toUInt64(2), toDate('2019-05-29'));
SELECT dictGet('range_dictionary_nullable', 'Tax', toUInt64(2), toDate('2019-05-31'));
SELECT dictGetOrDefault('range_dictionary_nullable', 'Tax', toUInt64(2), toDate('2019-05-31'), 0.4);
SELECT 'dictHas';
SELECT dictHas('database_for_range_dict.range_dictionary_nullable', toUInt64(1), toDate('2019-05-15'));
SELECT dictHas('database_for_range_dict.range_dictionary_nullable', toUInt64(1), toDate('2019-05-29'));
SELECT dictHas('database_for_range_dict.range_dictionary_nullable', toUInt64(2), toDate('2019-05-29'));
SELECT dictHas('database_for_range_dict.range_dictionary_nullable', toUInt64(2), toDate('2019-05-31'));
SELECT dictHas('range_dictionary_nullable', toUInt64(1), toDate('2019-05-15'));
SELECT dictHas('range_dictionary_nullable', toUInt64(1), toDate('2019-05-29'));
SELECT dictHas('range_dictionary_nullable', toUInt64(2), toDate('2019-05-29'));
SELECT dictHas('range_dictionary_nullable', toUInt64(2), toDate('2019-05-31'));
SELECT 'select columns from dictionary';
SELECT 'allColumns';
SELECT * FROM database_for_range_dict.range_dictionary_nullable ORDER BY CountryID, StartDate, EndDate;
SELECT * FROM range_dictionary_nullable ORDER BY CountryID, StartDate, EndDate;
SELECT 'noColumns';
SELECT 1 FROM database_for_range_dict.range_dictionary_nullable ORDER BY CountryID, StartDate, EndDate;
SELECT 1 FROM range_dictionary_nullable ORDER BY CountryID, StartDate, EndDate;
SELECT 'onlySpecificColumns';
SELECT CountryID, StartDate, Tax FROM database_for_range_dict.range_dictionary_nullable ORDER BY CountryID, StartDate, EndDate;
SELECT CountryID, StartDate, Tax FROM range_dictionary_nullable ORDER BY CountryID, StartDate, EndDate;
SELECT 'onlySpecificColumn';
SELECT Tax FROM database_for_range_dict.range_dictionary_nullable ORDER BY CountryID, StartDate, EndDate;
SELECT Tax FROM range_dictionary_nullable ORDER BY CountryID, StartDate, EndDate;
DROP DICTIONARY database_for_range_dict.range_dictionary_nullable;
DROP TABLE database_for_range_dict.date_table;
DROP DATABASE database_for_range_dict;
DROP DICTIONARY range_dictionary_nullable;
DROP TABLE date_table;


@ -52,6 +52,8 @@ CREATE TABLE system.clusters
`database_shard_name` String,
`database_replica_name` String,
`is_active` Nullable(UInt8),
`replication_lag` Nullable(UInt32),
`recovery_time` Nullable(UInt64),
`name` String ALIAS cluster
)
ENGINE = SystemClusters
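The two new Nullable columns can be queried alongside the existing replica status; a small illustrative query (the cluster name 'rdb' matches the integration test above and is only an example):

```sql
-- replication_lag and recovery_time are populated only for clusters that belong to a
-- Replicated database; for ordinary clusters they are NULL.
SELECT cluster, database_replica_name, is_active, replication_lag, recovery_time
FROM system.clusters
WHERE cluster = 'rdb';
```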


@ -1,7 +1,5 @@
#!/usr/bin/env bash
# Tags: no-parallel
set -e
CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
@ -14,11 +12,11 @@ $CLICKHOUSE_CLIENT -q "SELECT plu(1, 1) SETTINGS allow_experimental_analyzer = 1
$CLICKHOUSE_CLIENT -q "SELECT uniqExac(1, 1) SETTINGS allow_experimental_analyzer = 1;" 2>&1 \
| grep "Maybe you meant: \['uniqExact'\]" &>/dev/null;
$CLICKHOUSE_CLIENT -q "DROP FUNCTION IF EXISTS test_user_defined_function;"
$CLICKHOUSE_CLIENT -q "CREATE FUNCTION test_user_defined_function AS x -> x + 1;"
$CLICKHOUSE_CLIENT -q "SELECT test_user_defined_functio(1) SETTINGS allow_experimental_analyzer = 1;" 2>&1 \
| grep "Maybe you meant: \['test_user_defined_function'\]" &>/dev/null;
$CLICKHOUSE_CLIENT -q "DROP FUNCTION test_user_defined_function";
$CLICKHOUSE_CLIENT -q "DROP FUNCTION IF EXISTS test_user_defined_function_$CLICKHOUSE_DATABASE;"
$CLICKHOUSE_CLIENT -q "CREATE FUNCTION test_user_defined_function_$CLICKHOUSE_DATABASE AS x -> x + 1;"
$CLICKHOUSE_CLIENT -q "SELECT test_user_defined_function_${CLICKHOUSE_DATABASE}A(1) SETTINGS allow_experimental_analyzer = 1;" 2>&1 \
| grep -E "Maybe you meant: \[.*'test_user_defined_function_$CLICKHOUSE_DATABASE'.*\]" &>/dev/null;
$CLICKHOUSE_CLIENT -q "DROP FUNCTION test_user_defined_function_$CLICKHOUSE_DATABASE";
$CLICKHOUSE_CLIENT -q "WITH (x -> x + 1) AS lambda_function SELECT lambda_functio(1) SETTINGS allow_experimental_analyzer = 1;" 2>&1 \
| grep "Maybe you meant: \['lambda_function'\]" &>/dev/null;


@ -11,23 +11,28 @@ allow_create_index_without_type 0
allow_custom_error_code_in_throwif 0
allow_ddl 1
allow_deprecated_database_ordinary 0
allow_deprecated_error_prone_window_functions 0
allow_deprecated_snowflake_conversion_functions 0
allow_deprecated_syntax_for_merge_tree 0
allow_distributed_ddl 1
allow_drop_detached 0
allow_execute_multiif_columnar 1
allow_experimental_alter_materialized_view_structure 1
allow_experimental_analyzer 0
allow_experimental_analyzer 1
allow_experimental_annoy_index 0
allow_experimental_bigint_types 1
allow_experimental_codecs 0
allow_experimental_database_atomic 1
allow_experimental_database_materialized_mysql 0
allow_experimental_database_materialized_postgresql 0
allow_experimental_database_replicated 0
allow_experimental_database_replicated 1
allow_experimental_dynamic_type 0
allow_experimental_full_text_index 0
allow_experimental_funnel_functions 0
allow_experimental_geo_types 1
allow_experimental_hash_functions 0
allow_experimental_inverted_index 0
allow_experimental_join_condition 0
allow_experimental_lightweight_delete 1
allow_experimental_live_view 0
allow_experimental_map_type 1
@ -40,12 +45,15 @@ allow_experimental_query_cache 1
allow_experimental_query_deduplication 0
allow_experimental_refreshable_materialized_view 0
allow_experimental_s3queue 1
allow_experimental_shared_merge_tree 0
allow_experimental_shared_merge_tree 1
allow_experimental_statistic 0
allow_experimental_statistics 0
allow_experimental_undrop_table_query 1
allow_experimental_usearch_index 0
allow_experimental_variant_type 0
allow_experimental_window_functions 1
allow_experimental_window_view 0
allow_get_client_http_header 0
allow_hyperscan 1
allow_introspection_functions 0
allow_named_collection_override_by_default 1
@ -58,17 +66,21 @@ allow_prefetched_read_pool_for_remote_filesystem 1
allow_push_predicate_when_subquery_contains_with 1
allow_settings_after_format_in_insert 0
allow_simdjson 1
allow_statistic_optimize 0
allow_statistics_optimize 0
allow_suspicious_codecs 0
allow_suspicious_fixed_string_types 0
allow_suspicious_indices 0
allow_suspicious_low_cardinality_types 0
allow_suspicious_primary_key 0
allow_suspicious_ttl_expressions 0
allow_suspicious_variant_types 0
allow_unrestricted_reads_from_keeper 0
alter_move_to_space_execute_async 0
alter_partition_verbose_result 0
alter_sync 1
analyze_index_with_space_filling_curves 1
analyzer_compatibility_join_using_top_level_identifier 0
annoy_index_search_k_nodes -1
any_join_distinct_right_table_keys 0
apply_deleted_mask 1
@ -76,20 +88,42 @@ apply_mutations_on_fly 0
asterisk_include_alias_columns 0
asterisk_include_materialized_columns 0
async_insert 0
async_insert_busy_timeout_decrease_rate 0.2
async_insert_busy_timeout_increase_rate 0.2
async_insert_busy_timeout_max_ms 200
async_insert_busy_timeout_min_ms 50
async_insert_busy_timeout_ms 200
async_insert_cleanup_timeout_ms 1000
async_insert_deduplicate 0
async_insert_max_data_size 1000000
async_insert_max_data_size 10485760
async_insert_max_query_number 450
async_insert_poll_timeout_ms 10
async_insert_stale_timeout_ms 0
async_insert_threads 16
async_insert_use_adaptive_busy_timeout 1
async_query_sending_for_remote 1
async_socket_for_remote 1
azure_allow_parallel_part_upload 1
azure_create_new_file_on_insert 0
azure_ignore_file_doesnt_exist 0
azure_list_object_keys_size 1000
azure_max_blocks_in_multipart_upload 50000
azure_max_inflight_parts_for_one_file 20
azure_max_single_part_copy_size 268435456
azure_max_single_part_upload_size 104857600
azure_max_single_read_retries 4
azure_max_unexpected_write_error_retries 4
azure_max_upload_part_size 5368709120
azure_min_upload_part_size 16777216
azure_sdk_max_retries 10
azure_sdk_retry_initial_backoff_ms 10
azure_sdk_retry_max_backoff_ms 1000
azure_skip_empty_files 0
azure_strict_upload_part_size 0
azure_throw_on_zero_files_match 0
azure_truncate_on_insert 0
azure_upload_part_size_multiply_factor 2
azure_upload_part_size_multiply_parts_count_threshold 500
background_buffer_flush_schedule_pool_size 16
background_common_pool_size 8
background_distributed_schedule_pool_size 16
@ -107,6 +141,7 @@ backup_restore_keeper_max_retries 20
backup_restore_keeper_retry_initial_backoff_ms 100
backup_restore_keeper_retry_max_backoff_ms 5000
backup_restore_keeper_value_max_size 1048576
backup_restore_s3_retry_attempts 1000
backup_threads 16
bool_false_representation false
bool_true_representation true
@ -115,6 +150,7 @@ calculate_text_stack_trace 1
cancel_http_readonly_queries_on_client_close 0
cast_ipv4_ipv6_default_on_conversion_error 0
cast_keep_nullable 0
cast_string_to_dynamic_use_inference 0
check_query_single_value_result 1
check_referential_table_dependencies 0
check_table_dependencies 1
@ -123,6 +159,7 @@ cloud_mode 0
cloud_mode_engine 1
cluster_for_parallel_replicas
collect_hash_table_stats_during_aggregation 1
collect_hash_table_stats_during_joins 1
column_names_for_schema_inference
compatibility
compatibility_ignore_auto_increment_in_create_table 0
@ -141,9 +178,12 @@ count_distinct_optimization 0
create_index_ignore_unique 0
create_replicated_merge_tree_fault_injection_probability 0
create_table_empty_primary_key_by_default 0
cross_join_min_bytes_to_compress 1073741824
cross_join_min_rows_to_compress 10000000
cross_to_inner_join_rewrite 1
data_type_default_nullable 0
database_atomic_wait_for_drop_and_detach_synchronously 0
database_replicated_allow_heavy_create 0
database_replicated_allow_only_replicated_engine 0
database_replicated_allow_replicated_engine_arguments 1
database_replicated_always_detach_permanently 0
@ -156,15 +196,19 @@ date_time_overflow_behavior ignore
decimal_check_overflow 1
deduplicate_blocks_in_dependent_materialized_views 0
default_database_engine Atomic
default_materialized_view_sql_security DEFINER
default_max_bytes_in_join 1000000000
default_table_engine None
default_normal_view_sql_security INVOKER
default_table_engine MergeTree
default_temporary_table_engine Memory
default_view_definer CURRENT_USER
describe_compact_output 0
describe_extend_object_types 0
describe_include_subcolumns 0
describe_include_virtual_columns 0
dialect clickhouse
dictionary_use_async_executor 0
dictionary_validate_primary_key_type 0
distinct_overflow_mode throw
distributed_aggregation_memory_efficient 1
distributed_background_insert_batch 0
@ -182,6 +226,7 @@ distributed_directory_monitor_sleep_time_ms 100
distributed_directory_monitor_split_batch_on_failure 0
distributed_foreground_insert 0
distributed_group_by_no_merge 0
distributed_insert_skip_read_only_replicas 0
distributed_product_mode deny
distributed_push_down_limit 1
distributed_replica_error_cap 1000
@ -191,6 +236,7 @@ do_not_merge_across_partitions_select_final 0
drain_timeout 3
empty_result_for_aggregation_by_constant_keys_on_empty_set 1
empty_result_for_aggregation_by_empty_set 0
enable_blob_storage_log 1
enable_debug_queries 0
enable_deflate_qpl_codec 0
enable_early_constant_folding 1
@ -205,6 +251,7 @@ enable_job_stack_trace 0
enable_lightweight_delete 1
enable_memory_bound_merging_of_aggregation_results 1
enable_multiple_prewhere_read_steps 1
enable_named_columns_in_function_tuple 1
enable_optimize_predicate_expression 1
enable_optimize_predicate_expression_to_final_subquery 1
enable_order_by_all 1
@ -216,7 +263,9 @@ enable_sharing_sets_for_mutations 1
enable_software_prefetch_in_aggregation 1
enable_unaligned_array_join 0
enable_url_encoding 1
enable_vertical_final 1
enable_writes_to_query_cache 1
enable_zstd_qat_codec 0
engine_file_allow_create_multiple_files 0
engine_file_empty_if_not_exists 0
engine_file_skip_empty_files 0
@ -231,10 +280,12 @@ external_storage_max_read_rows 0
external_storage_rw_timeout_sec 300
external_table_functions_use_nulls 1
external_table_strict_query 0
extract_key_value_pairs_max_pairs_per_row 1000
extract_kvp_max_pairs_per_row 1000
extremes 0
fallback_to_stale_replicas_for_distributed_queries 1
filesystem_cache_max_download_size 137438953472
filesystem_cache_reserve_space_wait_lock_timeout_milliseconds 1000
filesystem_cache_segments_batch_size 20
filesystem_prefetch_max_memory_usage 1073741824
filesystem_prefetch_min_bytes_for_single_read_task 2097152
@ -278,7 +329,9 @@ format_regexp_escaping_rule Raw
format_regexp_skip_unmatched 0
format_schema
format_template_resultset
format_template_resultset_format
format_template_row
format_template_row_format
format_template_rows_between_delimiter \n
format_tsv_null_representation \\N
formatdatetime_f_prints_single_zero 0
@ -288,8 +341,11 @@ fsync_metadata 1
function_implementation
function_json_value_return_type_allow_complex 0
function_json_value_return_type_allow_nullable 0
function_locate_has_mysql_compatible_argument_order 1
function_range_max_elements_in_block 500000000
function_sleep_max_microseconds_per_block 3000000
function_visible_width_behavior 1
geo_distance_returns_float64_on_float64_arguments 1
glob_expansion_max_elements 1000
grace_hash_join_initial_buckets 1
grace_hash_join_max_buckets 1024
@ -300,8 +356,10 @@ group_by_use_nulls 0
handle_kafka_error_mode default
handshake_timeout_ms 10000
hdfs_create_new_file_on_insert 0
hdfs_ignore_file_doesnt_exist 0
hdfs_replication 0
hdfs_skip_empty_files 0
hdfs_throw_on_zero_files_match 0
hdfs_truncate_on_insert 0
hedged_connection_timeout_ms 50
hsts_max_age 0
@ -326,10 +384,14 @@ http_skip_not_found_url_for_globs 1
http_wait_end_of_query 0
http_write_exception_in_output_format 1
http_zlib_compression_level 3
iceberg_engine_ignore_schema_evolution 0
idle_connection_timeout 3600
ignore_cold_parts_seconds 0
ignore_data_skipping_indices
ignore_drop_queries_probability 0
ignore_materialized_views_with_dropped_target_table 0
ignore_on_cluster_for_replicated_access_entities_queries 0
ignore_on_cluster_for_replicated_named_collections_queries 0
ignore_on_cluster_for_replicated_udf_queries 0
implicit_transaction 0
input_format_allow_errors_num 0
@ -341,12 +403,14 @@ input_format_arrow_import_nested 0
input_format_arrow_skip_columns_with_unsupported_types_in_schema_inference 0
input_format_avro_allow_missing_fields 0
input_format_avro_null_as_default 0
input_format_binary_decode_types_in_binary_format 0
input_format_bson_skip_fields_with_unsupported_types_in_schema_inference 0
input_format_capn_proto_skip_fields_with_unsupported_types_in_schema_inference 0
input_format_csv_allow_cr_end_of_line 0
input_format_csv_allow_variable_number_of_columns 0
input_format_csv_allow_whitespace_or_tab_as_delimiter 0
input_format_csv_arrays_as_nested_csv 0
input_format_csv_deserialize_separate_columns_into_tuple 1
input_format_csv_detect_header 1
input_format_csv_empty_as_default 1
input_format_csv_enum_as_number 0
@ -354,29 +418,37 @@ input_format_csv_skip_first_lines 0
input_format_csv_skip_trailing_empty_lines 0
input_format_csv_trim_whitespaces 1
input_format_csv_try_infer_numbers_from_strings 0
input_format_csv_try_infer_strings_from_quoted_tuples 1
input_format_csv_use_best_effort_in_schema_inference 1
input_format_csv_use_default_on_bad_values 0
input_format_custom_allow_variable_number_of_columns 0
input_format_custom_detect_header 1
input_format_custom_skip_trailing_empty_lines 0
input_format_defaults_for_omitted_fields 1
input_format_force_null_for_omitted_fields 0
input_format_hive_text_allow_variable_number_of_columns 1
input_format_hive_text_collection_items_delimiter 
input_format_hive_text_fields_delimiter 
input_format_hive_text_map_keys_delimiter 
input_format_import_nested_json 0
input_format_ipv4_default_on_conversion_error 0
input_format_ipv6_default_on_conversion_error 0
input_format_json_case_insensitive_column_matching 0
input_format_json_compact_allow_variable_number_of_columns 0
input_format_json_defaults_for_missing_elements_in_named_tuple 1
input_format_json_ignore_unknown_keys_in_named_tuple 1
input_format_json_ignore_unnecessary_fields 1
input_format_json_infer_incomplete_types_as_strings 1
input_format_json_named_tuples_as_objects 1
input_format_json_read_arrays_as_strings 1
input_format_json_read_bools_as_numbers 1
input_format_json_read_bools_as_strings 1
input_format_json_read_numbers_as_strings 1
input_format_json_read_objects_as_strings 1
input_format_json_throw_on_bad_escape_sequence 1
input_format_json_try_infer_named_tuples_from_objects 1
input_format_json_try_infer_numbers_from_strings 0
input_format_json_use_string_type_for_ambiguous_paths_in_named_tuples_inference_from_objects 0
input_format_json_validate_types_from_metadata 1
input_format_max_bytes_to_read_for_schema_inference 33554432
input_format_max_rows_to_read_for_schema_inference 25000
@ -384,11 +456,13 @@ input_format_msgpack_number_of_columns 0
input_format_mysql_dump_map_column_names 1
input_format_mysql_dump_table_name
input_format_native_allow_types_conversion 1
input_format_native_decode_types_in_binary_format 0
input_format_null_as_default 1
input_format_orc_allow_missing_columns 1
input_format_orc_case_insensitive_column_matching 0
input_format_orc_filter_push_down 1
input_format_orc_import_nested 0
input_format_orc_reader_time_zone_name GMT
input_format_orc_row_batch_size 100000
input_format_orc_skip_columns_with_unsupported_types_in_schema_inference 0
input_format_orc_use_fast_decoder 1
@ -398,17 +472,21 @@ input_format_parquet_case_insensitive_column_matching 0
input_format_parquet_filter_push_down 1
input_format_parquet_import_nested 0
input_format_parquet_local_file_min_bytes_for_seek 8192
input_format_parquet_max_block_size 8192
input_format_parquet_max_block_size 65409
input_format_parquet_prefer_block_bytes 16744704
input_format_parquet_preserve_order 0
input_format_parquet_skip_columns_with_unsupported_types_in_schema_inference 0
input_format_parquet_use_native_reader 0
input_format_protobuf_flatten_google_wrappers 0
input_format_protobuf_skip_fields_with_unsupported_types_in_schema_inference 0
input_format_record_errors_file_path
input_format_skip_unknown_fields 1
input_format_try_infer_dates 1
input_format_try_infer_datetimes 1
input_format_try_infer_exponent_floats 0
input_format_try_infer_integers 1
input_format_tsv_allow_variable_number_of_columns 0
input_format_tsv_crlf_end_of_line 0
input_format_tsv_detect_header 1
input_format_tsv_empty_as_default 0
input_format_tsv_enum_as_number 0
@ -450,7 +528,12 @@ joined_subquery_requires_alias 1
kafka_disable_num_consumers_limit 0
kafka_max_wait_ms 5000
keeper_map_strict_mode 0
keeper_max_retries 10
keeper_retry_initial_backoff_ms 100
keeper_retry_max_backoff_ms 5000
legacy_column_name_of_tuple_literal 0
lightweight_deletes_sync 2
lightweight_mutation_projection_mode throw
limit 0
live_view_heartbeat_interval 15
load_balancing random
@ -461,7 +544,7 @@ local_filesystem_read_prefetch 0
lock_acquire_timeout 120
log_comment
log_formatted_queries 0
log_processors_profiles 0
log_processors_profiles 1
log_profile_events 1
log_queries 1
log_queries_cut_to_length 100000
@ -474,6 +557,8 @@ log_query_views 1
low_cardinality_allow_in_native_format 1
low_cardinality_max_dictionary_size 8192
low_cardinality_use_single_dictionary_for_part 0
materialize_skip_indexes_on_insert 1
materialize_statistics_on_insert 1
materialize_ttl_after_modify 1
materialized_views_ignore_errors 0
max_alter_threads \'auto(16)\'
@ -501,6 +586,7 @@ max_distributed_depth 5
max_download_buffer_size 10485760
max_download_threads 4
max_entries_for_hash_table_stats 10000
max_estimated_execution_time 0
max_execution_speed 0
max_execution_speed_bytes 0
max_execution_time 0
@ -528,7 +614,9 @@ max_network_bandwidth_for_user 0
max_network_bytes 0
max_number_of_partitions_for_independent_aggregation 128
max_parallel_replicas 1
max_parser_backtracks 1000000
max_parser_depth 1000
max_parsing_threads \'auto(16)\'
max_partition_size_to_drop 50000000000
max_partitions_per_insert_block 100
max_partitions_to_read -1
@ -537,6 +625,7 @@ max_query_size 262144
max_read_buffer_size 1048576
max_read_buffer_size_local_fs 131072
max_read_buffer_size_remote_fs 0
max_recursive_cte_evaluation_depth 1000
max_remote_read_network_bandwidth 0
max_remote_read_network_bandwidth_for_server 0
max_remote_write_network_bandwidth 0
@ -549,7 +638,7 @@ max_result_rows 0
max_rows_in_distinct 0
max_rows_in_join 0
max_rows_in_set 0
max_rows_in_set_to_optimize_join 100000
max_rows_in_set_to_optimize_join 0
max_rows_to_group_by 0
max_rows_to_read 0
max_rows_to_read_leaf 0
@ -557,6 +646,7 @@ max_rows_to_sort 0
max_rows_to_transfer 0
max_sessions_for_user 0
max_size_to_preallocate_for_aggregation 100000000
max_size_to_preallocate_for_joins 100000000
max_streams_for_merge_tree_reading 0
max_streams_multiplier_for_merge_tables 5
max_streams_to_max_threads_ratio 1
@ -592,6 +682,7 @@ merge_tree_min_bytes_per_task_for_remote_reading 4194304
merge_tree_min_rows_for_concurrent_read 163840
merge_tree_min_rows_for_concurrent_read_for_remote_filesystem 163840
merge_tree_min_rows_for_seek 0
merge_tree_read_split_ranges_into_intersecting_and_non_intersecting_injection_probability 0
merge_tree_use_const_size_tasks_for_remote_reading 1
metrics_perf_events_enabled 0
metrics_perf_events_list
@ -604,6 +695,8 @@ min_count_to_compile_expression 3
min_count_to_compile_sort_description 3
min_execution_speed 0
min_execution_speed_bytes 0
min_external_table_block_size_bytes 268402944
min_external_table_block_size_rows 1048449
min_free_disk_space_for_temporary_data 0
min_hit_rate_to_use_consecutive_keys_optimization 0.5
min_insert_block_size_bytes 268402944
@ -619,8 +712,8 @@ mutations_execute_subqueries_on_initiator 0
mutations_max_literal_size_to_replace 16384
mutations_sync 0
mysql_datatypes_support_level
mysql_map_fixed_string_to_text_in_show_columns 0
mysql_map_string_to_text_in_show_columns 0
mysql_map_fixed_string_to_text_in_show_columns 1
mysql_map_string_to_text_in_show_columns 1
mysql_max_rows_to_insert 65536
network_compression_method LZ4
network_zstd_compression_level 1
@ -647,6 +740,7 @@ optimize_group_by_constant_keys 1
optimize_group_by_function_keys 1
optimize_if_chain_to_multiif 0
optimize_if_transform_strings_to_enum 0
optimize_injective_functions_in_group_by 1
optimize_injective_functions_inside_uniq 1
optimize_min_equality_disjunction_chain_length 3
optimize_min_inequality_conjunction_chain_length 3
@ -664,7 +758,7 @@ optimize_redundant_functions_in_order_by 1
optimize_respect_aliases 1
optimize_rewrite_aggregate_function_with_if 1
optimize_rewrite_array_exists_to_has 0
optimize_rewrite_sum_if_to_count_if 0
optimize_rewrite_sum_if_to_count_if 1
optimize_skip_merged_partitions 0
optimize_skip_unused_shards 0
optimize_skip_unused_shards_limit 1000
@ -674,9 +768,10 @@ optimize_sorting_by_input_stream_properties 1
optimize_substitute_columns 0
optimize_syntax_fuse_functions 0
optimize_throw_if_noop 0
optimize_time_filter_with_preimage 1
optimize_trivial_approximate_count_query 0
optimize_trivial_count_query 1
optimize_trivial_insert_select 1
optimize_trivial_insert_select 0
optimize_uniq_to_count 1
optimize_use_implicit_projections 1
optimize_use_projections 1
@ -685,13 +780,19 @@ os_thread_priority 0
output_format_arrow_compression_method lz4_frame
output_format_arrow_fixed_string_as_fixed_byte_array 1
output_format_arrow_low_cardinality_as_dictionary 0
output_format_arrow_string_as_string 0
output_format_arrow_string_as_string 1
output_format_arrow_use_64_bit_indexes_for_dictionary 0
output_format_arrow_use_signed_indexes_for_dictionary 1
output_format_avro_codec
output_format_avro_rows_in_file 1
output_format_avro_string_column_pattern
output_format_avro_sync_interval 16384
output_format_binary_encode_types_in_binary_format 0
output_format_bson_string_as_string 0
output_format_compression_level 3
output_format_compression_zstd_window_log 0
output_format_csv_crlf_end_of_line 0
output_format_csv_serialize_tuple_into_separate_columns 1
output_format_decimal_trailing_zeros 0
output_format_enable_streaming 0
output_format_json_array_of_rows 0
@ -705,27 +806,34 @@ output_format_json_skip_null_value_in_named_tuples 0
output_format_json_validate_utf8 0
output_format_markdown_escape_special_characters 0
output_format_msgpack_uuid_representation ext
output_format_orc_compression_method lz4
output_format_native_encode_types_in_binary_format 0
output_format_orc_compression_method zstd
output_format_orc_row_index_stride 10000
output_format_orc_string_as_string 0
output_format_orc_string_as_string 1
output_format_parallel_formatting 1
output_format_parquet_batch_size 1024
output_format_parquet_compliant_nested_types 1
output_format_parquet_compression_method lz4
output_format_parquet_compression_method zstd
output_format_parquet_data_page_size 1048576
output_format_parquet_fixed_string_as_fixed_byte_array 1
output_format_parquet_parallel_encoding 1
output_format_parquet_row_group_size 1000000
output_format_parquet_row_group_size_bytes 536870912
output_format_parquet_string_as_string 0
output_format_parquet_use_custom_encoder 0
output_format_parquet_string_as_string 1
output_format_parquet_use_custom_encoder 1
output_format_parquet_version 2.latest
output_format_pretty_color 1
output_format_parquet_write_page_index 1
output_format_pretty_color auto
output_format_pretty_display_footer_column_names 1
output_format_pretty_display_footer_column_names_min_rows 50
output_format_pretty_grid_charset UTF-8
output_format_pretty_highlight_digit_groups 1
output_format_pretty_max_column_pad_width 250
output_format_pretty_max_rows 10000
output_format_pretty_max_value_width 10000
output_format_pretty_row_numbers 0
output_format_pretty_max_value_width_apply_for_single_value 0
output_format_pretty_row_numbers 1
output_format_pretty_single_large_number_tip_threshold 1000000
output_format_protobuf_nullables_with_google_wrappers 0
output_format_schema
output_format_sql_insert_include_column_names 1
@ -734,15 +842,22 @@ output_format_sql_insert_quote_names 1
output_format_sql_insert_table_name table
output_format_sql_insert_use_replace 0
output_format_tsv_crlf_end_of_line 0
output_format_values_escape_quote_with_quote 0
output_format_write_statistics 1
page_cache_inject_eviction 0
parallel_distributed_insert_select 0
parallel_replica_offset 0
parallel_replicas_allow_in_with_subquery 1
parallel_replicas_count 0
parallel_replicas_custom_key
parallel_replicas_custom_key_filter_type default
parallel_replicas_custom_key_range_lower 0
parallel_replicas_custom_key_range_upper 0
parallel_replicas_for_non_replicated_merge_tree 0
parallel_replicas_mark_segment_size 128
parallel_replicas_min_number_of_granules_to_enable 0
parallel_replicas_min_number_of_rows_per_replica 0
parallel_replicas_prefer_local_join 1
parallel_replicas_single_task_marks_count_multiplier 2
parallel_view_processing 0
parallelize_output_from_storages 1
@ -755,11 +870,14 @@ parts_to_delay_insert 0
parts_to_throw_insert 0
periodic_live_view_refresh 60
poll_interval 10
postgresql_connection_attempt_timeout 2
postgresql_connection_pool_auto_close_connection 0
postgresql_connection_pool_retries 2
postgresql_connection_pool_size 16
postgresql_connection_pool_wait_timeout 5000
precise_float_parsing 0
prefer_column_name_to_alias 0
prefer_external_sort_block_bytes 16744704
prefer_global_in_and_join 0
prefer_localhost_replica 1
prefer_warmed_unmerged_parts_seconds 0
@ -767,7 +885,7 @@ preferred_block_size_bytes 1000000
preferred_max_column_in_block_size_bytes 0
preferred_optimize_projection_name
prefetch_buffer_size 1048576
print_pretty_type_names 0
print_pretty_type_names 1
priority 0
query_cache_compress_entries 1
query_cache_max_entries 0
@ -778,8 +896,10 @@ query_cache_nondeterministic_function_handling throw
query_cache_share_between_users 0
query_cache_squash_partial_results 1
query_cache_store_results_of_queries_with_nondeterministic_functions 0
query_cache_system_table_handling throw
query_cache_ttl 60
query_plan_aggregation_in_order 1
query_plan_convert_outer_join_to_inner_join 1
query_plan_enable_multithreading_after_window_functions 1
query_plan_enable_optimizations 1
query_plan_execute_functions_after_sorting 1
@ -788,6 +908,8 @@ query_plan_lift_up_array_join 1
query_plan_lift_up_union 1
query_plan_max_optimizations_to_apply 10000
query_plan_merge_expressions 1
query_plan_merge_filters 0
query_plan_optimize_prewhere 1
query_plan_optimize_primary_key 1
query_plan_optimize_projection 1
query_plan_push_down_limit 1
@ -806,7 +928,9 @@ read_backoff_min_events 2
read_backoff_min_interval_between_events_ms 1000
read_backoff_min_latency_ms 1000
read_from_filesystem_cache_if_exists_otherwise_bypass_cache 0
read_from_page_cache_if_exists_otherwise_bypass_cache 0
read_in_order_two_level_merge_threshold 100
read_in_order_use_buffering 1
read_overflow_mode throw
read_overflow_mode_leaf throw
read_priority 0
@ -835,17 +959,20 @@ result_overflow_mode throw
rewrite_count_distinct_if_with_count_distinct_implementation 0
s3_allow_parallel_part_upload 1
s3_check_objects_after_upload 0
s3_connect_timeout_ms 1000
s3_create_new_file_on_insert 0
s3_disable_checksum 0
s3_http_connection_pool_size 1000
s3_ignore_file_doesnt_exist 0
s3_list_object_keys_size 1000
s3_max_connections 1024
s3_max_get_burst 0
s3_max_get_rps 0
s3_max_inflight_parts_for_one_file 20
s3_max_part_number 10000
s3_max_put_burst 0
s3_max_put_rps 0
s3_max_redirects 10
s3_max_single_operation_copy_size 33554432
s3_max_single_part_upload_size 33554432
s3_max_single_read_retries 4
s3_max_unexpected_write_error_retries 4
@ -860,6 +987,8 @@ s3_truncate_on_insert 0
s3_upload_part_size_multiply_factor 2
s3_upload_part_size_multiply_parts_count_threshold 500
s3_use_adaptive_timeouts 1
s3_validate_request_settings 1
s3queue_allow_experimental_sharded_mode 0
s3queue_default_zookeeper_path /clickhouse/s3queue/
s3queue_enable_logging_to_s3queue_log 0
schema_inference_cache_require_modification_time_for_url 1
@ -887,6 +1016,8 @@ sleep_after_receiving_query_ms 0
sleep_in_send_data_ms 0
sleep_in_send_tables_status_ms 0
sort_overflow_mode throw
split_intersecting_parts_ranges_into_layers_final 1
split_parts_ranges_into_intersecting_and_non_intersecting_final 1
splitby_max_substrings_includes_remaining_string 0
stop_refreshable_materialized_views_on_startup 0
storage_file_read_method pread
@ -898,8 +1029,10 @@ stream_poll_timeout_ms 500
system_events_show_zero_values 0
table_function_remote_max_addresses 1000
tcp_keep_alive_timeout 290
temporary_data_in_cache_reserve_space_wait_lock_timeout_milliseconds 600000
temporary_files_codec LZ4
temporary_live_view_timeout 1
throw_if_deduplication_in_dependent_materialized_views_enabled_with_async_insert 1
throw_if_no_data_to_insert 1
throw_on_error_from_cache_on_write_operations 0
throw_on_max_partitions_per_insert_block 1
@ -912,8 +1045,10 @@ totals_mode after_having_exclusive
trace_profile_events 0
transfer_overflow_mode throw
transform_null_in 0
traverse_shadow_remote_data_paths 0
union_default_mode
unknown_packet_in_send_data 0
update_insert_deduplication_token_in_dependent_materialized_views 0
use_cache_for_count_from_files 1
use_client_time_zone 0
use_compact_format_in_distributed_parts_names 1
@ -923,12 +1058,15 @@ use_index_for_in_with_subqueries 1
use_index_for_in_with_subqueries_max_values 0
use_local_cache_for_remote_storage 1
use_mysql_types_in_show_columns 0
use_page_cache_for_disks_without_file_cache 0
use_query_cache 0
use_skip_indexes 1
use_skip_indexes_if_final 0
use_structure_from_insertion_table_in_table_functions 2
use_uncompressed_cache 0
use_variant_as_common_type 0
use_with_fill_by_sorting_prefix 1
validate_experimental_and_suspicious_types_inside_nested_types 1
validate_polygons 1
wait_changes_become_visible_after_commit_mode wait_unknown
wait_for_async_insert 1

View File

@ -7,12 +7,12 @@ CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
. "$CUR_DIR"/../shell_config.sh
# Note that this is a broad check. A per version check is done in the upgrade test
# Baseline generated with 23.12.1
# clickhouse local --query "select name, default from system.settings order by name format TSV" > 02995_baseline_23_12_1.tsv
# Baseline generated with 24.7.2
# clickhouse local --query "select name, default from system.settings order by name format TSV" > 02995_baseline_24_7_2.tsv
$CLICKHOUSE_LOCAL --query "
WITH old_settings AS
(
SELECT * FROM file('${CUR_DIR}/02995_baseline_23_12_1.tsv', 'TSV', 'name String, default String')
SELECT * FROM file('${CUR_DIR}/02995_baseline_24_7_2.tsv', 'TSV', 'name String, default String')
),
new_settings AS
(
@ -21,7 +21,7 @@ $CLICKHOUSE_LOCAL --query "
)
SELECT * FROM
(
SELECT 'PLEASE ADD THE NEW SETTING TO SettingsChangesHistory.h: ' || name || ' WAS ADDED',
SELECT 'PLEASE ADD THE NEW SETTING TO SettingsChangesHistory.cpp: ' || name || ' WAS ADDED',
FROM new_settings
WHERE (name NOT IN (
SELECT name
@ -29,17 +29,17 @@ $CLICKHOUSE_LOCAL --query "
)) AND (name NOT IN (
SELECT arrayJoin(tupleElement(changes, 'name'))
FROM system.settings_changes
WHERE splitByChar('.', version())[1] >= '24'
WHERE splitByChar('.', version)[1]::UInt64 >= 24 AND splitByChar('.', version)[2]::UInt64 > 7
))
UNION ALL
(
SELECT 'PLEASE ADD THE SETTING VALUE CHANGE TO SettingsChangesHistory.h: ' || name || ' WAS CHANGED FROM ' || old_settings.default || ' TO ' || new_settings.default,
SELECT 'PLEASE ADD THE SETTING VALUE CHANGE TO SettingsChangesHistory.cpp: ' || name || ' WAS CHANGED FROM ' || old_settings.default || ' TO ' || new_settings.default,
FROM new_settings
LEFT JOIN old_settings ON new_settings.name = old_settings.name
WHERE (new_settings.default != old_settings.default) AND (name NOT IN (
SELECT arrayJoin(tupleElement(changes, 'name'))
FROM system.settings_changes
WHERE splitByChar('.', version())[1] >= '24'
WHERE splitByChar('.', version)[1]::UInt64 >= 24 AND splitByChar('.', version)[2]::UInt64 > 7
))
)
)

View File

@ -1,4 +1,5 @@
-- Tags: long
-- Tags: long, no-random-merge-tree-settings
-- no-random-merge-tree-settings - times out in private
DROP TABLE IF EXISTS build;
DROP TABLE IF EXISTS skewed_probe;

View File

@ -0,0 +1,4 @@
0
2
0
2

View File

@ -0,0 +1,11 @@
-- Tags: no-parallel
CREATE DATABASE rdb1 ENGINE = Replicated('/test/test_replication_lag_metric', 'shard1', 'replica1');
CREATE DATABASE rdb2 ENGINE = Replicated('/test/test_replication_lag_metric', 'shard1', 'replica2');
SET distributed_ddl_task_timeout = 0;
CREATE TABLE rdb1.t (id UInt32) ENGINE = ReplicatedMergeTree ORDER BY id;
SELECT replication_lag FROM system.clusters WHERE cluster IN ('rdb1', 'rdb2') ORDER BY cluster ASC, replica_num ASC;
DROP DATABASE rdb1;
DROP DATABASE rdb2;

View File

@ -0,0 +1,3 @@
ba
\N
1 111111111111111111111111111111111111111

View File

@ -0,0 +1,26 @@
SET allow_experimental_analyzer = 1;
SELECT concat(materialize(toLowCardinality('b')), 'a') FROM remote('127.0.0.{1,2}', system, one) GROUP BY 'a';
SELECT concat(NULLIF(1, materialize(toLowCardinality(1))), concat(NULLIF(1, 1))) FROM remote('127.0.0.{1,2}', system, one) GROUP BY concat(NULLIF(1, 1));
DROP TABLE IF EXISTS test__fuzz_21;
CREATE TABLE test__fuzz_21
(
`x` Decimal(18, 10)
)
ENGINE = MergeTree
ORDER BY x;
INSERT INTO test__fuzz_21 VALUES (1), (2), (3);
WITH (
SELECT CAST(toFixedString(toFixedString(materialize(toFixedString('111111111111111111111111111111111111111', 39)), 39), 39), 'UInt128')
) AS v
SELECT
coalesce(materialize(toLowCardinality(toNullable(1))), 10, NULL),
max(v)
FROM remote('127.0.0.{1,2}', currentDatabase(), test__fuzz_21)
GROUP BY
coalesce(NULL),
coalesce(1, 10, 10, materialize(NULL));

View File

@ -1733,6 +1733,7 @@ groupBitmap
groupBitmapAnd
groupBitmapOr
groupBitmapXor
groupConcat
groupUniqArray
grouparray
grouparrayinsertat
@ -1749,6 +1750,7 @@ groupbitmapor
groupbitmapxor
groupbitor
groupbitxor
groupconcat
groupuniqarray
grpc
grpcio

View File

@ -6,9 +6,11 @@ v24.5.4.49-stable 2024-07-01
v24.5.3.5-stable 2024-06-13
v24.5.2.34-stable 2024-06-13
v24.5.1.1763-stable 2024-06-01
v24.4.4.113-stable 2024-08-02
v24.4.3.25-stable 2024-06-14
v24.4.2.141-stable 2024-06-07
v24.4.1.2088-stable 2024-05-01
v24.3.6.48-lts 2024-08-02
v24.3.5.46-lts 2024-07-03
v24.3.4.147-lts 2024-06-13
v24.3.3.102-lts 2024-05-01
