Merge branch 'master' into miscellaneous-2

This commit is contained in:
Alexey Milovidov 2024-08-02 17:43:33 +02:00
commit e3d03ce3f9
71 changed files with 2163 additions and 912 deletions


@@ -34,17 +34,13 @@ curl https://clickhouse.com/ | sh
 Every month we get together with the community (users, contributors, customers, those interested in learning more about ClickHouse) to discuss what is coming in the latest release. If you are interested in sharing what you've built on ClickHouse, let us know.
-* [v24.7 Community Call](https://clickhouse.com/company/events/v24-7-community-release-call) - Jul 30
+* [v24.8 Community Call](https://clickhouse.com/company/events/v24-8-community-release-call) - August 29
 ## Upcoming Events
 Keep an eye out for upcoming meetups and events around the world. Somewhere else you want us to be? Please feel free to reach out to tyler `<at>` clickhouse `<dot>` com. You can also peruse [ClickHouse Events](https://clickhouse.com/company/news-events) for a list of all upcoming trainings, meetups, speaking engagements, etc.
-* [ClickHouse Meetup in Paris](https://www.meetup.com/clickhouse-france-user-group/events/300783448/) - Jul 9
-* [ClickHouse Cloud - Live Update Call](https://clickhouse.com/company/events/202407-cloud-update-live) - Jul 9
-* [ClickHouse Meetup @ Ramp - New York City](https://www.meetup.com/clickhouse-new-york-user-group/events/300595845/) - Jul 9
-* [AWS Summit in New York](https://clickhouse.com/company/events/2024-07-awssummit-nyc) - Jul 10
-* [ClickHouse Meetup @ Klaviyo - Boston](https://www.meetup.com/clickhouse-boston-user-group/events/300907870) - Jul 11
+* MORE COMING SOON!
 ## Recent Recordings
 * **Recent Meetup Videos**: [Meetup Playlist](https://www.youtube.com/playlist?list=PL0Z2YDlm0b3iNDUzpY1S3L_iV4nARda_U) Whenever possible recordings of the ClickHouse Community Meetups are edited and presented as individual talks. Currently featuring "Modern SQL in 2023", "Fast, Concurrent, and Consistent Asynchronous INSERTS in ClickHouse", and "Full-Text Indices: Design and Experiments"


@@ -0,0 +1,39 @@
---
sidebar_position: 1
sidebar_label: 2024
---
# 2024 Changelog
### ClickHouse release v24.3.6.48-lts (b2d33c3c45d) FIXME as compared to v24.3.5.46-lts (fe54cead6b6)
#### Critical Bug Fix (crash, LOGICAL_ERROR, data loss, RBAC)
* Backported in [#66889](https://github.com/ClickHouse/ClickHouse/issues/66889): Fix unexpected size of low cardinality column in function calls. [#65298](https://github.com/ClickHouse/ClickHouse/pull/65298) ([Raúl Marín](https://github.com/Algunenano)).
* Backported in [#66687](https://github.com/ClickHouse/ClickHouse/issues/66687): Fix the VALID UNTIL clause in the user definition resetting after a restart. Closes [#66405](https://github.com/ClickHouse/ClickHouse/issues/66405). [#66409](https://github.com/ClickHouse/ClickHouse/pull/66409) ([Nikolay Degterinsky](https://github.com/evillique)).
* Backported in [#67497](https://github.com/ClickHouse/ClickHouse/issues/67497): Fix crash in DistributedAsyncInsert when connection is empty. [#67219](https://github.com/ClickHouse/ClickHouse/pull/67219) ([Pablo Marcos](https://github.com/pamarcos)).
#### Bug Fix (user-visible misbehavior in an official stable release)
* Backported in [#66324](https://github.com/ClickHouse/ClickHouse/issues/66324): Add missing settings `input_format_csv_skip_first_lines/input_format_tsv_skip_first_lines/input_format_csv_try_infer_numbers_from_strings/input_format_csv_try_infer_strings_from_quoted_tuples` to the schema inference cache because they can change the resulting schema. This prevents incorrect schema inference results when these settings are changed. [#65980](https://github.com/ClickHouse/ClickHouse/pull/65980) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#66151](https://github.com/ClickHouse/ClickHouse/issues/66151): Fixed buffer overflow bug in `unbin`/`unhex` implementation. [#66106](https://github.com/ClickHouse/ClickHouse/pull/66106) ([Nikita Taranov](https://github.com/nickitat)).
* Backported in [#66451](https://github.com/ClickHouse/ClickHouse/issues/66451): Fixed a bug in ZooKeeper client: a session could get stuck in unusable state after receiving a hardware error from ZooKeeper. For example, this might happen due to "soft memory limit" in ClickHouse Keeper. [#66140](https://github.com/ClickHouse/ClickHouse/pull/66140) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Backported in [#66222](https://github.com/ClickHouse/ClickHouse/issues/66222): Fix issue in SumIfToCountIfVisitor and signed integers. [#66146](https://github.com/ClickHouse/ClickHouse/pull/66146) ([Raúl Marín](https://github.com/Algunenano)).
* Backported in [#66676](https://github.com/ClickHouse/ClickHouse/issues/66676): Fix handling limit for `system.numbers_mt` when no index can be used. [#66231](https://github.com/ClickHouse/ClickHouse/pull/66231) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Backported in [#66602](https://github.com/ClickHouse/ClickHouse/issues/66602): Fixed how the ClickHouse server detects the maximum number of usable CPU cores as specified by cgroups v2 if the server runs in a container such as Docker. In more detail, containers often run their process in the root cgroup which has an empty name. In that case, ClickHouse ignored the CPU limits set by cgroups v2. [#66237](https://github.com/ClickHouse/ClickHouse/pull/66237) ([filimonov](https://github.com/filimonov)).
* Backported in [#66356](https://github.com/ClickHouse/ClickHouse/issues/66356): Fix the `Not-ready set` error when a subquery with `IN` is used in the constraint. [#66261](https://github.com/ClickHouse/ClickHouse/pull/66261) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#66970](https://github.com/ClickHouse/ClickHouse/issues/66970): Fix `Column identifier is already registered` error with `group_by_use_nulls=true` and new analyzer. [#66400](https://github.com/ClickHouse/ClickHouse/pull/66400) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#66967](https://github.com/ClickHouse/ClickHouse/issues/66967): Fix `Cannot find column` error for queries with constant expression in `GROUP BY` key and new analyzer enabled. [#66433](https://github.com/ClickHouse/ClickHouse/pull/66433) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#66718](https://github.com/ClickHouse/ClickHouse/issues/66718): Correctly track memory for `Allocator::realloc`. [#66548](https://github.com/ClickHouse/ClickHouse/pull/66548) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#66949](https://github.com/ClickHouse/ClickHouse/issues/66949): Fix an invalid result for queries with `WINDOW`. This could happen when `PARTITION` columns have sparse serialization and window functions are executed in parallel. [#66579](https://github.com/ClickHouse/ClickHouse/pull/66579) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#66946](https://github.com/ClickHouse/ClickHouse/issues/66946): Fix `Method getResultType is not supported for QUERY query node` error when scalar subquery was used as the first argument of IN (with new analyzer). [#66655](https://github.com/ClickHouse/ClickHouse/pull/66655) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#67629](https://github.com/ClickHouse/ClickHouse/issues/67629): Fix for occasional deadlock in Context::getDDLWorker. [#66843](https://github.com/ClickHouse/ClickHouse/pull/66843) ([Alexander Gololobov](https://github.com/davenger)).
* Backported in [#67193](https://github.com/ClickHouse/ClickHouse/issues/67193): Fixed `TRUNCATE DATABASE` stopping replication as if it were a `DROP DATABASE` query. [#67129](https://github.com/ClickHouse/ClickHouse/pull/67129) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Backported in [#67375](https://github.com/ClickHouse/ClickHouse/issues/67375): Fix error `Cannot convert column because it is non constant in source stream but must be constant in result.` for a query that reads from the `Merge` table over the `Distributed` table with one shard. [#67146](https://github.com/ClickHouse/ClickHouse/pull/67146) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#67572](https://github.com/ClickHouse/ClickHouse/issues/67572): Fix execution of nested short-circuit functions. [#67520](https://github.com/ClickHouse/ClickHouse/pull/67520) ([Kruglov Pavel](https://github.com/Avogar)).
#### NOT FOR CHANGELOG / INSIGNIFICANT
* Backported in [#66422](https://github.com/ClickHouse/ClickHouse/issues/66422): Ignore subquery for IN in DDLLoadingDependencyVisitor. [#66395](https://github.com/ClickHouse/ClickHouse/pull/66395) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#66855](https://github.com/ClickHouse/ClickHouse/issues/66855): Fix data race in S3::ClientCache. [#66644](https://github.com/ClickHouse/ClickHouse/pull/66644) ([Konstantin Morozov](https://github.com/k-morozov)).
* Backported in [#67055](https://github.com/ClickHouse/ClickHouse/issues/67055): Increase asio pool size in case the server is tiny. [#66761](https://github.com/ClickHouse/ClickHouse/pull/66761) ([alesapin](https://github.com/alesapin)).
* Backported in [#66943](https://github.com/ClickHouse/ClickHouse/issues/66943): Small fix in realloc memory tracking. [#66820](https://github.com/ClickHouse/ClickHouse/pull/66820) ([Antonio Andelic](https://github.com/antonio2368)).


@@ -0,0 +1,73 @@
---
sidebar_position: 1
sidebar_label: 2024
---
# 2024 Changelog
### ClickHouse release v24.4.4.113-stable (d63a54957bd) FIXME as compared to v24.4.3.25-stable (a915dd4eda4)
#### Improvement
* Backported in [#65884](https://github.com/ClickHouse/ClickHouse/issues/65884): Always start Keeper with sufficient amount of threads in global thread pool. [#64444](https://github.com/ClickHouse/ClickHouse/pull/64444) ([Duc Canh Le](https://github.com/canhld94)).
* Backported in [#65303](https://github.com/ClickHouse/ClickHouse/issues/65303): Restored the previous behaviour of how ClickHouse works with and interprets Tuples in CSV format. This change effectively reverts https://github.com/ClickHouse/ClickHouse/pull/60994 and makes the new behaviour available only under a few settings: `output_format_csv_serialize_tuple_into_separate_columns`, `input_format_csv_deserialize_separate_columns_into_tuple` and `input_format_csv_try_infer_strings_from_quoted_tuples`. [#65170](https://github.com/ClickHouse/ClickHouse/pull/65170) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Backported in [#65894](https://github.com/ClickHouse/ClickHouse/issues/65894): Respect cgroup CPU limit in Keeper. [#65819](https://github.com/ClickHouse/ClickHouse/pull/65819) ([Antonio Andelic](https://github.com/antonio2368)).
#### Critical Bug Fix (crash, LOGICAL_ERROR, data loss, RBAC)
* Backported in [#65372](https://github.com/ClickHouse/ClickHouse/issues/65372): Fix a bug in ClickHouse Keeper that causes digest mismatch during closing session. [#65198](https://github.com/ClickHouse/ClickHouse/pull/65198) ([Aleksei Filatov](https://github.com/aalexfvk)).
* Backported in [#66883](https://github.com/ClickHouse/ClickHouse/issues/66883): Fix unexpected size of low cardinality column in function calls. [#65298](https://github.com/ClickHouse/ClickHouse/pull/65298) ([Raúl Marín](https://github.com/Algunenano)).
* Backported in [#65435](https://github.com/ClickHouse/ClickHouse/issues/65435): Forbid `QUALIFY` clause in the old analyzer. The old analyzer ignored `QUALIFY`, so it could lead to unexpected data removal in mutations. [#65356](https://github.com/ClickHouse/ClickHouse/pull/65356) ([Dmitry Novik](https://github.com/novikd)).
* Backported in [#65448](https://github.com/ClickHouse/ClickHouse/issues/65448): Use correct memory alignment for Distinct combinator. Previously, crash could happen because of invalid memory allocation when the combinator was used. [#65379](https://github.com/ClickHouse/ClickHouse/pull/65379) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#65710](https://github.com/ClickHouse/ClickHouse/issues/65710): Fix crash in maxIntersections. [#65689](https://github.com/ClickHouse/ClickHouse/pull/65689) ([Raúl Marín](https://github.com/Algunenano)).
* Backported in [#66689](https://github.com/ClickHouse/ClickHouse/issues/66689): Fix the VALID UNTIL clause in the user definition resetting after a restart. Closes [#66405](https://github.com/ClickHouse/ClickHouse/issues/66405). [#66409](https://github.com/ClickHouse/ClickHouse/pull/66409) ([Nikolay Degterinsky](https://github.com/evillique)).
* Backported in [#67499](https://github.com/ClickHouse/ClickHouse/issues/67499): Fix crash in DistributedAsyncInsert when connection is empty. [#67219](https://github.com/ClickHouse/ClickHouse/pull/67219) ([Pablo Marcos](https://github.com/pamarcos)).
#### Bug Fix (user-visible misbehavior in an official stable release)
* Backported in [#65353](https://github.com/ClickHouse/ClickHouse/issues/65353): Fix possible abort on uncaught exception in ~WriteBufferFromFileDescriptor in StatusFile. [#64206](https://github.com/ClickHouse/ClickHouse/pull/64206) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#65060](https://github.com/ClickHouse/ClickHouse/issues/65060): Fix the `Expression nodes list expected 1 projection names` and `Unknown expression or identifier` errors for queries with aliases to `GLOBAL IN`. [#64517](https://github.com/ClickHouse/ClickHouse/pull/64517) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#65329](https://github.com/ClickHouse/ClickHouse/issues/65329): Fix the crash loop when restoring from backup is blocked by creating an MV with a definer that hasn't been restored yet. [#64595](https://github.com/ClickHouse/ClickHouse/pull/64595) ([pufit](https://github.com/pufit)).
* Backported in [#64833](https://github.com/ClickHouse/ClickHouse/issues/64833): Fix bug which could lead to non-working TTLs with expressions. [#64694](https://github.com/ClickHouse/ClickHouse/pull/64694) ([alesapin](https://github.com/alesapin)).
* Backported in [#65086](https://github.com/ClickHouse/ClickHouse/issues/65086): Fix removing the `WHERE` and `PREWHERE` expressions, which are always true (for the new analyzer). [#64695](https://github.com/ClickHouse/ClickHouse/pull/64695) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#65540](https://github.com/ClickHouse/ClickHouse/issues/65540): Fix crash for `ALTER TABLE ... ON CLUSTER ... MODIFY SQL SECURITY`. [#64957](https://github.com/ClickHouse/ClickHouse/pull/64957) ([pufit](https://github.com/pufit)).
* Backported in [#65578](https://github.com/ClickHouse/ClickHouse/issues/65578): Fix crash on destroying AccessControl: add explicit shutdown. [#64993](https://github.com/ClickHouse/ClickHouse/pull/64993) ([Vitaly Baranov](https://github.com/vitlibar)).
* Backported in [#65161](https://github.com/ClickHouse/ClickHouse/issues/65161): Fix pushing arithmetic operations out of aggregation. In the new analyzer, optimization was applied only once. [#65104](https://github.com/ClickHouse/ClickHouse/pull/65104) ([Dmitry Novik](https://github.com/novikd)).
* Backported in [#65616](https://github.com/ClickHouse/ClickHouse/issues/65616): Fix aggregate function name rewriting in the new analyzer. [#65110](https://github.com/ClickHouse/ClickHouse/pull/65110) ([Dmitry Novik](https://github.com/novikd)).
* Backported in [#65730](https://github.com/ClickHouse/ClickHouse/issues/65730): Eliminate injective function in argument of functions `uniq*` recursively. This used to work correctly but was broken in the new analyzer. [#65140](https://github.com/ClickHouse/ClickHouse/pull/65140) ([Duc Canh Le](https://github.com/canhld94)).
* Backported in [#65668](https://github.com/ClickHouse/ClickHouse/issues/65668): Disable the `non-intersecting-parts` optimization for queries with `FINAL` when the `read-in-order` optimization is enabled. This could lead to an incorrect query result. As a workaround, disable `do_not_merge_across_partitions_select_final` and `split_parts_ranges_into_intersecting_and_non_intersecting_final` until this fix is merged. [#65505](https://github.com/ClickHouse/ClickHouse/pull/65505) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#65786](https://github.com/ClickHouse/ClickHouse/issues/65786): Fixed a bug in MergeJoin: a column in sparse serialisation might be treated as a column of its nested type even though the required conversion wasn't performed. [#65632](https://github.com/ClickHouse/ClickHouse/pull/65632) ([Nikita Taranov](https://github.com/nickitat)).
* Backported in [#65810](https://github.com/ClickHouse/ClickHouse/issues/65810): Fix invalid exceptions in function `parseDateTime` with `%F` and `%D` placeholders. [#65768](https://github.com/ClickHouse/ClickHouse/pull/65768) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#65931](https://github.com/ClickHouse/ClickHouse/issues/65931): For queries that read from `PostgreSQL`, cancel the internal `PostgreSQL` query if the ClickHouse query is finished. Otherwise, `ClickHouse` query cannot be canceled until the internal `PostgreSQL` query is finished. [#65771](https://github.com/ClickHouse/ClickHouse/pull/65771) ([Maksim Kita](https://github.com/kitaisreal)).
* Backported in [#65826](https://github.com/ClickHouse/ClickHouse/issues/65826): Fix a bug in short circuit logic when old analyzer and dictGetOrDefault is used. [#65802](https://github.com/ClickHouse/ClickHouse/pull/65802) ([jsc0218](https://github.com/jsc0218)).
* Backported in [#66299](https://github.com/ClickHouse/ClickHouse/issues/66299): Better handling of join conditions involving `IS NULL` checks (for example `ON (a = b AND (a IS NOT NULL) AND (b IS NOT NULL) ) OR ( (a IS NULL) AND (b IS NULL) )` is rewritten to `ON a <=> b`), and fix an incorrect optimization when conditions other than `IS NULL` are present. [#65835](https://github.com/ClickHouse/ClickHouse/pull/65835) ([vdimir](https://github.com/vdimir)).
* Backported in [#66326](https://github.com/ClickHouse/ClickHouse/issues/66326): Add missing settings `input_format_csv_skip_first_lines/input_format_tsv_skip_first_lines/input_format_csv_try_infer_numbers_from_strings/input_format_csv_try_infer_strings_from_quoted_tuples` to the schema inference cache because they can change the resulting schema. This prevents incorrect schema inference results when these settings are changed. [#65980](https://github.com/ClickHouse/ClickHouse/pull/65980) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#66153](https://github.com/ClickHouse/ClickHouse/issues/66153): Fixed buffer overflow bug in `unbin`/`unhex` implementation. [#66106](https://github.com/ClickHouse/ClickHouse/pull/66106) ([Nikita Taranov](https://github.com/nickitat)).
* Backported in [#66459](https://github.com/ClickHouse/ClickHouse/issues/66459): Fixed a bug in ZooKeeper client: a session could get stuck in unusable state after receiving a hardware error from ZooKeeper. For example, this might happen due to "soft memory limit" in ClickHouse Keeper. [#66140](https://github.com/ClickHouse/ClickHouse/pull/66140) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Backported in [#66224](https://github.com/ClickHouse/ClickHouse/issues/66224): Fix issue in SumIfToCountIfVisitor and signed integers. [#66146](https://github.com/ClickHouse/ClickHouse/pull/66146) ([Raúl Marín](https://github.com/Algunenano)).
* Backported in [#66267](https://github.com/ClickHouse/ClickHouse/issues/66267): Don't throw `TIMEOUT_EXCEEDED` for `none_only_active` mode of `distributed_ddl_output_mode`. [#66218](https://github.com/ClickHouse/ClickHouse/pull/66218) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Backported in [#66678](https://github.com/ClickHouse/ClickHouse/issues/66678): Fix handling limit for `system.numbers_mt` when no index can be used. [#66231](https://github.com/ClickHouse/ClickHouse/pull/66231) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Backported in [#66603](https://github.com/ClickHouse/ClickHouse/issues/66603): Fixed how the ClickHouse server detects the maximum number of usable CPU cores as specified by cgroups v2 if the server runs in a container such as Docker. In more detail, containers often run their process in the root cgroup which has an empty name. In that case, ClickHouse ignored the CPU limits set by cgroups v2. [#66237](https://github.com/ClickHouse/ClickHouse/pull/66237) ([filimonov](https://github.com/filimonov)).
* Backported in [#66358](https://github.com/ClickHouse/ClickHouse/issues/66358): Fix the `Not-ready set` error when a subquery with `IN` is used in the constraint. [#66261](https://github.com/ClickHouse/ClickHouse/pull/66261) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#66971](https://github.com/ClickHouse/ClickHouse/issues/66971): Fix `Column identifier is already registered` error with `group_by_use_nulls=true` and new analyzer. [#66400](https://github.com/ClickHouse/ClickHouse/pull/66400) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#66968](https://github.com/ClickHouse/ClickHouse/issues/66968): Fix `Cannot find column` error for queries with constant expression in `GROUP BY` key and new analyzer enabled. [#66433](https://github.com/ClickHouse/ClickHouse/pull/66433) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#66719](https://github.com/ClickHouse/ClickHouse/issues/66719): Correctly track memory for `Allocator::realloc`. [#66548](https://github.com/ClickHouse/ClickHouse/pull/66548) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#66950](https://github.com/ClickHouse/ClickHouse/issues/66950): Fix an invalid result for queries with `WINDOW`. This could happen when `PARTITION` columns have sparse serialization and window functions are executed in parallel. [#66579](https://github.com/ClickHouse/ClickHouse/pull/66579) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#66947](https://github.com/ClickHouse/ClickHouse/issues/66947): Fix `Method getResultType is not supported for QUERY query node` error when scalar subquery was used as the first argument of IN (with new analyzer). [#66655](https://github.com/ClickHouse/ClickHouse/pull/66655) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#67631](https://github.com/ClickHouse/ClickHouse/issues/67631): Fix for occasional deadlock in Context::getDDLWorker. [#66843](https://github.com/ClickHouse/ClickHouse/pull/66843) ([Alexander Gololobov](https://github.com/davenger)).
* Backported in [#67195](https://github.com/ClickHouse/ClickHouse/issues/67195): Fixed `TRUNCATE DATABASE` stopping replication as if it were a `DROP DATABASE` query. [#67129](https://github.com/ClickHouse/ClickHouse/pull/67129) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Backported in [#67377](https://github.com/ClickHouse/ClickHouse/issues/67377): Fix error `Cannot convert column because it is non constant in source stream but must be constant in result.` for a query that reads from the `Merge` table over the `Distributed` table with one shard. [#67146](https://github.com/ClickHouse/ClickHouse/pull/67146) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#67240](https://github.com/ClickHouse/ClickHouse/issues/67240): This closes [#67156](https://github.com/ClickHouse/ClickHouse/issues/67156). This closes [#66447](https://github.com/ClickHouse/ClickHouse/issues/66447). The bug was introduced in https://github.com/ClickHouse/ClickHouse/pull/62907. [#67178](https://github.com/ClickHouse/ClickHouse/pull/67178) ([Maksim Kita](https://github.com/kitaisreal)).
* Backported in [#67574](https://github.com/ClickHouse/ClickHouse/issues/67574): Fix execution of nested short-circuit functions. [#67520](https://github.com/ClickHouse/ClickHouse/pull/67520) ([Kruglov Pavel](https://github.com/Avogar)).
#### NOT FOR CHANGELOG / INSIGNIFICANT
* Backported in [#65410](https://github.com/ClickHouse/ClickHouse/issues/65410): Re-enable OpenSSL session caching. [#65111](https://github.com/ClickHouse/ClickHouse/pull/65111) ([Robert Schulze](https://github.com/rschu1ze)).
* Backported in [#65903](https://github.com/ClickHouse/ClickHouse/issues/65903): Fix bug with session closing in Keeper. [#65735](https://github.com/ClickHouse/ClickHouse/pull/65735) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#66385](https://github.com/ClickHouse/ClickHouse/issues/66385): Disable broken cases from 02911_join_on_nullsafe_optimization. [#66310](https://github.com/ClickHouse/ClickHouse/pull/66310) ([vdimir](https://github.com/vdimir)).
* Backported in [#66424](https://github.com/ClickHouse/ClickHouse/issues/66424): Ignore subquery for IN in DDLLoadingDependencyVisitor. [#66395](https://github.com/ClickHouse/ClickHouse/pull/66395) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#66542](https://github.com/ClickHouse/ClickHouse/issues/66542): Add additional log masking in CI. [#66523](https://github.com/ClickHouse/ClickHouse/pull/66523) ([Raúl Marín](https://github.com/Algunenano)).
* Backported in [#66857](https://github.com/ClickHouse/ClickHouse/issues/66857): Fix data race in S3::ClientCache. [#66644](https://github.com/ClickHouse/ClickHouse/pull/66644) ([Konstantin Morozov](https://github.com/k-morozov)).
* Backported in [#66873](https://github.com/ClickHouse/ClickHouse/issues/66873): Support one more case in JOIN ON ... IS NULL. [#66725](https://github.com/ClickHouse/ClickHouse/pull/66725) ([vdimir](https://github.com/vdimir)).
* Backported in [#67057](https://github.com/ClickHouse/ClickHouse/issues/67057): Increase asio pool size in case the server is tiny. [#66761](https://github.com/ClickHouse/ClickHouse/pull/66761) ([alesapin](https://github.com/alesapin)).
* Backported in [#66944](https://github.com/ClickHouse/ClickHouse/issues/66944): Small fix in realloc memory tracking. [#66820](https://github.com/ClickHouse/ClickHouse/pull/66820) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#67250](https://github.com/ClickHouse/ClickHouse/issues/67250): Followup [#66725](https://github.com/ClickHouse/ClickHouse/issues/66725). [#66869](https://github.com/ClickHouse/ClickHouse/pull/66869) ([vdimir](https://github.com/vdimir)).
* Backported in [#67410](https://github.com/ClickHouse/ClickHouse/issues/67410): CI: Fix build results for release branches. [#67402](https://github.com/ClickHouse/ClickHouse/pull/67402) ([Max K.](https://github.com/maxknv)).


@@ -0,0 +1,90 @@
---
slug: /en/sql-reference/aggregate-functions/reference/groupconcat
sidebar_position: 363
sidebar_label: groupConcat
title: groupConcat
---
Calculates a concatenated string from a group of strings, optionally separated by a delimiter, and optionally limited by a maximum number of elements.
**Syntax**
``` sql
groupConcat(expression [, delimiter] [, limit]);
```
**Arguments**
- `expression` — The expression or column name that outputs strings to be concatenated.
- `delimiter` — A [string](../../../sql-reference/data-types/string.md) that will be used to separate concatenated values. This parameter is optional and defaults to an empty string if not specified.
- `limit` — A positive [integer](../../../sql-reference/data-types/int-uint.md) specifying the maximum number of elements to concatenate. If more elements are present, excess elements are ignored. This parameter is optional.
:::note
If `delimiter` is specified without `limit`, it must be the first parameter following the expression. If both `delimiter` and `limit` are specified, `delimiter` must precede `limit`.
:::
**Returned value**
- Returns a [string](../../../sql-reference/data-types/string.md) consisting of the concatenated values of the column or expression. If the group has no elements, or only NULL elements, the result is a nullable string with a NULL value.
**Examples**
Input table:
``` text
┌─id─┬─Name─┐
│  1 │ John │
│  2 │ Jane │
│  3 │ Bob  │
└────┴──────┘
```
1. Basic usage without a delimiter:
Query:
``` sql
SELECT groupConcat(Name) FROM Employees;
```
Result:
``` text
JohnJaneBob
```
This concatenates all names into one continuous string without any separator.
2. Using comma as a delimiter:
Query:
``` sql
SELECT groupConcat(Name, ', ') FROM Employees;
```
Result:
``` text
John, Jane, Bob
```
This output shows the names separated by a comma followed by a space.
3. Limiting the number of concatenated elements:
Query:
``` sql
SELECT groupConcat(Name, ', ', 2) FROM Employees;
```
Result:
``` text
John, Jane
```
This query limits the output to the first two names, even though there are more names in the table.


@@ -14,7 +14,10 @@ public:
         , re_gen(key_template)
     {
     }
-    DB::ObjectStorageKey generate(const String &, bool) const override { return DB::ObjectStorageKey::createAsAbsolute(re_gen.generate()); }
+    DB::ObjectStorageKey generate(const String &, bool /* is_directory */, const std::optional<String> & /* key_prefix */) const override
+    {
+        return DB::ObjectStorageKey::createAsAbsolute(re_gen.generate());
+    }
 private:
     String key_template;
@@ -29,7 +32,7 @@ public:
         : key_prefix(std::move(key_prefix_))
     {}
-    DB::ObjectStorageKey generate(const String &, bool) const override
+    DB::ObjectStorageKey generate(const String &, bool /* is_directory */, const std::optional<String> & /* key_prefix */) const override
     {
         /// Path to store the new S3 object.
@@ -60,7 +63,8 @@ public:
         : key_prefix(std::move(key_prefix_))
     {}
-    DB::ObjectStorageKey generate(const String & path, bool) const override
+    DB::ObjectStorageKey
+    generate(const String & path, bool /* is_directory */, const std::optional<String> & /* key_prefix */) const override
     {
         return DB::ObjectStorageKey::createAsRelative(key_prefix, path);
     }


@@ -1,6 +1,7 @@
 #pragma once
 #include <memory>
+#include <optional>
 #include "ObjectStorageKey.h"
 namespace DB
@@ -11,7 +12,11 @@ class IObjectStorageKeysGenerator
 public:
     virtual ~IObjectStorageKeysGenerator() = default;
-    virtual ObjectStorageKey generate(const String & path, bool is_directory) const = 0;
+    /// Generates an object storage key based on a path in the virtual filesystem.
+    /// @param path - Path in the virtual filesystem.
+    /// @param is_directory - If the path in the virtual filesystem corresponds to a directory.
+    /// @param key_prefix - Optional key prefix for the generated object storage key. If provided, this prefix will be added to the beginning of the generated key.
+    virtual ObjectStorageKey generate(const String & path, bool is_directory, const std::optional<String> & key_prefix) const = 0;
 };
 using ObjectStorageKeysGeneratorPtr = std::shared_ptr<IObjectStorageKeysGenerator>;
@@ -57,265 +57,446 @@ String ClickHouseVersion::toString() const
/// Note: please check if the key already exists to prevent duplicate entries.
static std::initializer_list<std::pair<ClickHouseVersion, SettingsChangesHistory::SettingsChanges>> settings_changes_history_initializer =
{
    {"24.12",
        {
        }
    },
    {"24.11",
        {
        }
    },
    {"24.10",
        {
        }
    },
    {"24.9",
        {
        }
    },
    {"24.8",
        {
            {"merge_tree_min_bytes_per_task_for_remote_reading", 4194304, 2097152, "Value is unified with `filesystem_prefetch_min_bytes_for_single_read_task`"},
        }
    },
    {"24.7",
        {
            {"output_format_parquet_write_page_index", false, true, "Add a possibility to write page index into parquet files."},
            {"output_format_binary_encode_types_in_binary_format", false, false, "Added new setting to allow to write type names in binary format in RowBinaryWithNamesAndTypes output format"},
            {"input_format_binary_decode_types_in_binary_format", false, false, "Added new setting to allow to read type names in binary format in RowBinaryWithNamesAndTypes input format"},
            {"output_format_native_encode_types_in_binary_format", false, false, "Added new setting to allow to write type names in binary format in Native output format"},
            {"input_format_native_decode_types_in_binary_format", false, false, "Added new setting to allow to read type names in binary format in Native output format"},
            {"read_in_order_use_buffering", false, true, "Use buffering before merging while reading in order of primary key"},
            {"enable_named_columns_in_function_tuple", false, true, "Generate named tuples in function tuple() when all names are unique and can be treated as unquoted identifiers."},
            {"input_format_json_case_insensitive_column_matching", false, false, "Ignore case when matching JSON keys with CH columns."},
            {"optimize_trivial_insert_select", true, false, "The optimization does not make sense in many cases."},
            {"dictionary_validate_primary_key_type", false, false, "Validate primary key type for dictionaries. By default id type for simple layouts will be implicitly converted to UInt64."},
            {"collect_hash_table_stats_during_joins", false, true, "New setting."},
            {"max_size_to_preallocate_for_joins", 0, 100'000'000, "New setting."},
            {"input_format_orc_reader_time_zone_name", "GMT", "GMT", "The time zone name for ORC row reader, the default ORC row reader's time zone is GMT."},
            {"lightweight_mutation_projection_mode", "throw", "throw", "When lightweight delete happens on a table with projection(s), the possible operations include throw the exception as projection exists, or drop all projection related to this table then do lightweight delete."},
            {"database_replicated_allow_heavy_create", true, false, "Long-running DDL queries (CREATE AS SELECT and POPULATE) for Replicated database engine was forbidden"},
            {"query_plan_merge_filters", false, false, "Allow to merge filters in the query plan"},
            {"azure_sdk_max_retries", 10, 10, "Maximum number of retries in azure sdk"},
            {"azure_sdk_retry_initial_backoff_ms", 10, 10, "Minimal backoff between retries in azure sdk"},
            {"azure_sdk_retry_max_backoff_ms", 1000, 1000, "Maximal backoff between retries in azure sdk"},
            {"ignore_on_cluster_for_replicated_named_collections_queries", false, false, "Ignore ON CLUSTER clause for replicated named collections management queries."},
            {"backup_restore_s3_retry_attempts", 1000, 1000, "Setting for Aws::Client::RetryStrategy, Aws::Client does retries itself, 0 means no retries. It takes place only for backup/restore."},
            {"postgresql_connection_attempt_timeout", 2, 2, "Allow to control 'connect_timeout' parameter of PostgreSQL connection."},
            {"postgresql_connection_pool_retries", 2, 2, "Allow to control the number of retries in PostgreSQL connection pool."}
        }
    },
    {"24.6",
        {
            {"materialize_skip_indexes_on_insert", true, true, "Added new setting to allow to disable materialization of skip indexes on insert"},
            {"materialize_statistics_on_insert", true, true, "Added new setting to allow to disable materialization of statistics on insert"},
            {"input_format_parquet_use_native_reader", false, false, "When reading Parquet files, to use native reader instead of arrow reader."},
            {"hdfs_throw_on_zero_files_match", false, false, "Allow to throw an error when ListObjects request cannot match any files in HDFS engine instead of empty query result"},
            {"azure_throw_on_zero_files_match", false, false, "Allow to throw an error when ListObjects request cannot match any files in AzureBlobStorage engine instead of empty query result"},
            {"s3_validate_request_settings", true, true, "Allow to disable S3 request settings validation"},
            {"allow_experimental_full_text_index", false, false, "Enable experimental full-text index"},
            {"azure_skip_empty_files", false, false, "Allow to skip empty files in azure table engine"},
            {"hdfs_ignore_file_doesnt_exist", false, false, "Allow to return 0 rows when the requested files don't exist instead of throwing an exception in HDFS table engine"},
            {"azure_ignore_file_doesnt_exist", false, false, "Allow to return 0 rows when the requested files don't exist instead of throwing an exception in AzureBlobStorage table engine"},
            {"s3_ignore_file_doesnt_exist", false, false, "Allow to return 0 rows when the requested files don't exist instead of throwing an exception in S3 table engine"},
            {"s3_max_part_number", 10000, 10000, "Maximum part number number for s3 upload part"},
            {"s3_max_single_operation_copy_size", 32 * 1024 * 1024, 32 * 1024 * 1024, "Maximum size for a single copy operation in s3"},
            {"input_format_parquet_max_block_size", 8192, DEFAULT_BLOCK_SIZE, "Increase block size for parquet reader."},
            {"input_format_parquet_prefer_block_bytes", 0, DEFAULT_BLOCK_SIZE * 256, "Average block bytes output by parquet reader."},
            {"enable_blob_storage_log", true, true, "Write information about blob storage operations to system.blob_storage_log table"},
            {"allow_deprecated_snowflake_conversion_functions", true, false, "Disabled deprecated functions snowflakeToDateTime[64] and dateTime[64]ToSnowflake."},
            {"allow_statistic_optimize", false, false, "Old setting which popped up here being renamed."},
            {"allow_experimental_statistic", false, false, "Old setting which popped up here being renamed."},
            {"allow_statistics_optimize", false, false, "The setting was renamed. The previous name is `allow_statistic_optimize`."},
            {"allow_experimental_statistics", false, false, "The setting was renamed. The previous name is `allow_experimental_statistic`."},
            {"enable_vertical_final", false, true, "Enable vertical final by default again after fixing bug"},
            {"parallel_replicas_custom_key_range_lower", 0, 0, "Add settings to control the range filter when using parallel replicas with dynamic shards"},
            {"parallel_replicas_custom_key_range_upper", 0, 0, "Add settings to control the range filter when using parallel replicas with dynamic shards. A value of 0 disables the upper limit"},
            {"output_format_pretty_display_footer_column_names", 0, 1, "Add a setting to display column names in the footer if there are many rows. Threshold value is controlled by output_format_pretty_display_footer_column_names_min_rows."},
            {"output_format_pretty_display_footer_column_names_min_rows", 0, 50, "Add a setting to control the threshold value for setting output_format_pretty_display_footer_column_names_min_rows. Default 50."},
            {"output_format_csv_serialize_tuple_into_separate_columns", true, true, "A new way of how interpret tuples in CSV format was added."},
            {"input_format_csv_deserialize_separate_columns_into_tuple", true, true, "A new way of how interpret tuples in CSV format was added."},
            {"input_format_csv_try_infer_strings_from_quoted_tuples", true, true, "A new way of how interpret tuples in CSV format was added."},
        }
    },
    {"24.5",
        {
            {"allow_deprecated_error_prone_window_functions", true, false, "Allow usage of deprecated error prone window functions (neighbor, runningAccumulate, runningDifferenceStartingWithFirstValue, runningDifference)"},
            {"allow_experimental_join_condition", false, false, "Support join with inequal conditions which involve columns from both left and right table. e.g. t1.y < t2.y."},
            {"input_format_tsv_crlf_end_of_line", false, false, "Enables reading of CRLF line endings with TSV formats"},
            {"output_format_parquet_use_custom_encoder", false, true, "Enable custom Parquet encoder."},
            {"cross_join_min_rows_to_compress", 0, 10000000, "Minimal count of rows to compress block in CROSS JOIN. Zero value means - disable this threshold. This block is compressed when any of the two thresholds (by rows or by bytes) are reached."},
            {"cross_join_min_bytes_to_compress", 0, 1_GiB, "Minimal size of block to compress in CROSS JOIN. Zero value means - disable this threshold. This block is compressed when any of the two thresholds (by rows or by bytes) are reached."},
            {"http_max_chunk_size", 0, 0, "Internal limitation"},
            {"prefer_external_sort_block_bytes", 0, DEFAULT_BLOCK_SIZE * 256, "Prefer maximum block bytes for external sort, reduce the memory usage during merging."},
            {"input_format_force_null_for_omitted_fields", false, false, "Disable type-defaults for omitted fields when needed"},
            {"cast_string_to_dynamic_use_inference", false, false, "Add setting to allow converting String to Dynamic through parsing"},
            {"allow_experimental_dynamic_type", false, false, "Add new experimental Dynamic type"},
            {"azure_max_blocks_in_multipart_upload", 50000, 50000, "Maximum number of blocks in multipart upload for Azure."},
        }
    },
    {"24.4",
        {
            {"input_format_json_throw_on_bad_escape_sequence", true, true, "Allow to save JSON strings with bad escape sequences"},
            {"max_parsing_threads", 0, 0, "Add a separate setting to control number of threads in parallel parsing from files"},
            {"ignore_drop_queries_probability", 0, 0, "Allow to ignore drop queries in server with specified probability for testing purposes"},
            {"lightweight_deletes_sync", 2, 2, "The same as 'mutation_sync', but controls only execution of lightweight deletes"},
            {"query_cache_system_table_handling", "save", "throw", "The query cache no longer caches results of queries against system tables"},
            {"input_format_json_ignore_unnecessary_fields", false, true, "Ignore unnecessary fields and not parse them. Enabling this may not throw exceptions on json strings of invalid format or with duplicated fields"},
            {"input_format_hive_text_allow_variable_number_of_columns", false, true, "Ignore extra columns in Hive Text input (if file has more columns than expected) and treat missing fields in Hive Text input as default values."},
            {"allow_experimental_database_replicated", false, true, "Database engine Replicated is now in Beta stage"},
            {"temporary_data_in_cache_reserve_space_wait_lock_timeout_milliseconds", (10 * 60 * 1000), (10 * 60 * 1000), "Wait time to lock cache for sapce reservation in temporary data in filesystem cache"},
            {"optimize_rewrite_sum_if_to_count_if", false, true, "Only available for the analyzer, where it works correctly"},
            {"azure_allow_parallel_part_upload", "true", "true", "Use multiple threads for azure multipart upload."},
            {"max_recursive_cte_evaluation_depth", DBMS_RECURSIVE_CTE_MAX_EVALUATION_DEPTH, DBMS_RECURSIVE_CTE_MAX_EVALUATION_DEPTH, "Maximum limit on recursive CTE evaluation depth"},
            {"query_plan_convert_outer_join_to_inner_join", false, true, "Allow to convert OUTER JOIN to INNER JOIN if filter after JOIN always filters default values"},
        }
    },
    {"24.3",
        {
            {"s3_connect_timeout_ms", 1000, 1000, "Introduce new dedicated setting for s3 connection timeout"},
            {"allow_experimental_shared_merge_tree", false, true, "The setting is obsolete"},
            {"use_page_cache_for_disks_without_file_cache", false, false, "Added userspace page cache"},
            {"read_from_page_cache_if_exists_otherwise_bypass_cache", false, false, "Added userspace page cache"},
            {"page_cache_inject_eviction", false, false, "Added userspace page cache"},
            {"default_table_engine", "None", "MergeTree", "Set default table engine to MergeTree for better usability"},
            {"input_format_json_use_string_type_for_ambiguous_paths_in_named_tuples_inference_from_objects", false, false, "Allow to use String type for ambiguous paths during named tuple inference from JSON objects"},
            {"traverse_shadow_remote_data_paths", false, false, "Traverse shadow directory when query system.remote_data_paths."},
            {"throw_if_deduplication_in_dependent_materialized_views_enabled_with_async_insert", false, true, "Deduplication in dependent materialized view cannot work together with async inserts."},
            {"parallel_replicas_allow_in_with_subquery", false, true, "If true, subquery for IN will be executed on every follower replica"},
            {"log_processors_profiles", false, true, "Enable by default"},
            {"function_locate_has_mysql_compatible_argument_order", false, true, "Increase compatibility with MySQL's locate function."},
            {"allow_suspicious_primary_key", true, false, "Forbid suspicious PRIMARY KEY/ORDER BY for MergeTree (i.e. SimpleAggregateFunction)"},
            {"filesystem_cache_reserve_space_wait_lock_timeout_milliseconds", 1000, 1000, "Wait time to lock cache for sapce reservation in filesystem cache"},
            {"max_parser_backtracks", 0, 1000000, "Limiting the complexity of parsing"},
            {"analyzer_compatibility_join_using_top_level_identifier", false, false, "Force to resolve identifier in JOIN USING from projection"},
            {"distributed_insert_skip_read_only_replicas", false, false, "If true, INSERT into Distributed will skip read-only replicas"},
            {"keeper_max_retries", 10, 10, "Max retries for general keeper operations"},
            {"keeper_retry_initial_backoff_ms", 100, 100, "Initial backoff timeout for general keeper operations"},
            {"keeper_retry_max_backoff_ms", 5000, 5000, "Max backoff timeout for general keeper operations"},
            {"s3queue_allow_experimental_sharded_mode", false, false, "Enable experimental sharded mode of S3Queue table engine. It is experimental because it will be rewritten"},
            {"allow_experimental_analyzer", false, true, "Enable analyzer and planner by default."},
            {"merge_tree_read_split_ranges_into_intersecting_and_non_intersecting_injection_probability", 0.0, 0.0, "For testing of `PartsSplitter` - split read ranges into intersecting and non intersecting every time you read from MergeTree with the specified probability."},
            {"allow_get_client_http_header", false, false, "Introduced a new function."},
            {"output_format_pretty_row_numbers", false, true, "It is better for usability."},
            {"output_format_pretty_max_value_width_apply_for_single_value", true, false, "Single values in Pretty formats won't be cut."},
            {"output_format_parquet_string_as_string", false, true, "ClickHouse allows arbitrary binary data in the String data type, which is typically UTF-8. Parquet/ORC/Arrow Strings only support UTF-8. That's why you can choose which Arrow's data type to use for the ClickHouse String data type - String or Binary. While Binary would be more correct and compatible, using String by default will correspond to user expectations in most cases."},
            {"output_format_orc_string_as_string", false, true, "ClickHouse allows arbitrary binary data in the String data type, which is typically UTF-8. Parquet/ORC/Arrow Strings only support UTF-8. That's why you can choose which Arrow's data type to use for the ClickHouse String data type - String or Binary. While Binary would be more correct and compatible, using String by default will correspond to user expectations in most cases."},
            {"output_format_arrow_string_as_string", false, true, "ClickHouse allows arbitrary binary data in the String data type, which is typically UTF-8. Parquet/ORC/Arrow Strings only support UTF-8. That's why you can choose which Arrow's data type to use for the ClickHouse String data type - String or Binary. While Binary would be more correct and compatible, using String by default will correspond to user expectations in most cases."},
            {"output_format_parquet_compression_method", "lz4", "zstd", "Parquet/ORC/Arrow support many compression methods, including lz4 and zstd. ClickHouse supports each and every compression method. Some inferior tools, such as 'duckdb', lack support for the faster `lz4` compression method, that's why we set zstd by default."},
            {"output_format_orc_compression_method", "lz4", "zstd", "Parquet/ORC/Arrow support many compression methods, including lz4 and zstd. ClickHouse supports each and every compression method. Some inferior tools, such as 'duckdb', lack support for the faster `lz4` compression method, that's why we set zstd by default."},
            {"output_format_pretty_highlight_digit_groups", false, true, "If enabled and if output is a terminal, highlight every digit corresponding to the number of thousands, millions, etc. with underline."},
            {"geo_distance_returns_float64_on_float64_arguments", false, true, "Increase the default precision."},
            {"azure_max_inflight_parts_for_one_file", 20, 20, "The maximum number of a concurrent loaded parts in multipart upload request. 0 means unlimited."},
            {"azure_strict_upload_part_size", 0, 0, "The exact size of part to upload during multipart upload to Azure blob storage."},
            {"azure_min_upload_part_size", 16*1024*1024, 16*1024*1024, "The minimum size of part to upload during multipart upload to Azure blob storage."},
            {"azure_max_upload_part_size", 5ull*1024*1024*1024, 5ull*1024*1024*1024, "The maximum size of part to upload during multipart upload to Azure blob storage."},
            {"azure_upload_part_size_multiply_factor", 2, 2, "Multiply azure_min_upload_part_size by this factor each time azure_multiply_parts_count_threshold parts were uploaded from a single write to Azure blob storage."},
            {"azure_upload_part_size_multiply_parts_count_threshold", 500, 500, "Each time this number of parts was uploaded to Azure blob storage, azure_min_upload_part_size is multiplied by azure_upload_part_size_multiply_factor."},
            {"output_format_csv_serialize_tuple_into_separate_columns", true, true, "A new way of how interpret tuples in CSV format was added."},
            {"input_format_csv_deserialize_separate_columns_into_tuple", true, true, "A new way of how interpret tuples in CSV format was added."},
            {"input_format_csv_try_infer_strings_from_quoted_tuples", true, true, "A new way of how interpret tuples in CSV format was added."},
        }
    },
    {"24.2",
        {
            {"allow_suspicious_variant_types", true, false, "Don't allow creating Variant type with suspicious variants by default"},
            {"validate_experimental_and_suspicious_types_inside_nested_types", false, true, "Validate usage of experimental and suspicious types inside nested types"},
            {"output_format_values_escape_quote_with_quote", false, false, "If true escape ' with '', otherwise quoted with \\'"},
            {"output_format_pretty_single_large_number_tip_threshold", 0, 1'000'000, "Print a readable number tip on the right side of the table if the block consists of a single number which exceeds this value (except 0)"},
            {"input_format_try_infer_exponent_floats", true, false, "Don't infer floats in exponential notation by default"},
            {"query_plan_optimize_prewhere", true, true, "Allow to push down filter to PREWHERE expression for supported storages"},
            {"async_insert_max_data_size", 1000000, 10485760, "The previous value appeared to be too small."},
            {"async_insert_poll_timeout_ms", 10, 10, "Timeout in milliseconds for polling data from asynchronous insert queue"},
            {"async_insert_use_adaptive_busy_timeout", false, true, "Use adaptive asynchronous insert timeout"},
            {"async_insert_busy_timeout_min_ms", 50, 50, "The minimum value of the asynchronous insert timeout in milliseconds; it also serves as the initial value, which may be increased later by the adaptive algorithm"},
            {"async_insert_busy_timeout_max_ms", 200, 200, "The minimum value of the asynchronous insert timeout in milliseconds; async_insert_busy_timeout_ms is aliased to async_insert_busy_timeout_max_ms"},
            {"async_insert_busy_timeout_increase_rate", 0.2, 0.2, "The exponential growth rate at which the adaptive asynchronous insert timeout increases"},
            {"async_insert_busy_timeout_decrease_rate", 0.2, 0.2, "The exponential growth rate at which the adaptive asynchronous insert timeout decreases"},
{"format_template_row_format", "", "", "Template row format string can be set directly in query"}, {"allow_get_client_http_header", false, false, "Introduced a new function."},
{"format_template_resultset_format", "", "", "Template result set format string can be set in query"}, {"output_format_pretty_row_numbers", false, true, "It is better for usability."},
{"split_parts_ranges_into_intersecting_and_non_intersecting_final", true, true, "Allow to split parts ranges into intersecting and non intersecting during FINAL optimization"}, {"output_format_pretty_max_value_width_apply_for_single_value", true, false, "Single values in Pretty formats won't be cut."},
{"split_intersecting_parts_ranges_into_layers_final", true, true, "Allow to split intersecting parts ranges into layers during FINAL optimization"}, {"output_format_parquet_string_as_string", false, true, "ClickHouse allows arbitrary binary data in the String data type, which is typically UTF-8. Parquet/ORC/Arrow Strings only support UTF-8. That's why you can choose which Arrow's data type to use for the ClickHouse String data type - String or Binary. While Binary would be more correct and compatible, using String by default will correspond to user expectations in most cases."},
{"azure_max_single_part_copy_size", 256*1024*1024, 256*1024*1024, "The maximum size of object to copy using single part copy to Azure blob storage."}, {"output_format_orc_string_as_string", false, true, "ClickHouse allows arbitrary binary data in the String data type, which is typically UTF-8. Parquet/ORC/Arrow Strings only support UTF-8. That's why you can choose which Arrow's data type to use for the ClickHouse String data type - String or Binary. While Binary would be more correct and compatible, using String by default will correspond to user expectations in most cases."},
{"min_external_table_block_size_rows", DEFAULT_INSERT_BLOCK_SIZE, DEFAULT_INSERT_BLOCK_SIZE, "Squash blocks passed to external table to specified size in rows, if blocks are not big enough"}, {"output_format_arrow_string_as_string", false, true, "ClickHouse allows arbitrary binary data in the String data type, which is typically UTF-8. Parquet/ORC/Arrow Strings only support UTF-8. That's why you can choose which Arrow's data type to use for the ClickHouse String data type - String or Binary. While Binary would be more correct and compatible, using String by default will correspond to user expectations in most cases."},
{"min_external_table_block_size_bytes", DEFAULT_INSERT_BLOCK_SIZE * 256, DEFAULT_INSERT_BLOCK_SIZE * 256, "Squash blocks passed to external table to specified size in bytes, if blocks are not big enough."}, {"output_format_parquet_compression_method", "lz4", "zstd", "Parquet/ORC/Arrow support many compression methods, including lz4 and zstd. ClickHouse supports each and every compression method. Some inferior tools, such as 'duckdb', lack support for the faster `lz4` compression method, that's why we set zstd by default."},
{"parallel_replicas_prefer_local_join", true, true, "If true, and JOIN can be executed with parallel replicas algorithm, and all storages of right JOIN part are *MergeTree, local JOIN will be used instead of GLOBAL JOIN."}, {"output_format_orc_compression_method", "lz4", "zstd", "Parquet/ORC/Arrow support many compression methods, including lz4 and zstd. ClickHouse supports each and every compression method. Some inferior tools, such as 'duckdb', lack support for the faster `lz4` compression method, that's why we set zstd by default."},
{"optimize_time_filter_with_preimage", true, true, "Optimize Date and DateTime predicates by converting functions into equivalent comparisons without conversions (e.g. toYear(col) = 2023 -> col >= '2023-01-01' AND col <= '2023-12-31')"}, {"output_format_pretty_highlight_digit_groups", false, true, "If enabled and if output is a terminal, highlight every digit corresponding to the number of thousands, millions, etc. with underline."},
{"extract_key_value_pairs_max_pairs_per_row", 0, 0, "Max number of pairs that can be produced by the `extractKeyValuePairs` function. Used as a safeguard against consuming too much memory."}, {"geo_distance_returns_float64_on_float64_arguments", false, true, "Increase the default precision."},
{"default_view_definer", "CURRENT_USER", "CURRENT_USER", "Allows to set default `DEFINER` option while creating a view"}, {"azure_max_inflight_parts_for_one_file", 20, 20, "The maximum number of a concurrent loaded parts in multipart upload request. 0 means unlimited."},
{"default_materialized_view_sql_security", "DEFINER", "DEFINER", "Allows to set a default value for SQL SECURITY option when creating a materialized view"}, {"azure_strict_upload_part_size", 0, 0, "The exact size of part to upload during multipart upload to Azure blob storage."},
{"default_normal_view_sql_security", "INVOKER", "INVOKER", "Allows to set default `SQL SECURITY` option while creating a normal view"}, {"azure_min_upload_part_size", 16*1024*1024, 16*1024*1024, "The minimum size of part to upload during multipart upload to Azure blob storage."},
{"mysql_map_string_to_text_in_show_columns", false, true, "Reduce the configuration effort to connect ClickHouse with BI tools."}, {"azure_max_upload_part_size", 5ull*1024*1024*1024, 5ull*1024*1024*1024, "The maximum size of part to upload during multipart upload to Azure blob storage."},
{"mysql_map_fixed_string_to_text_in_show_columns", false, true, "Reduce the configuration effort to connect ClickHouse with BI tools."}, {"azure_upload_part_size_multiply_factor", 2, 2, "Multiply azure_min_upload_part_size by this factor each time azure_multiply_parts_count_threshold parts were uploaded from a single write to Azure blob storage."},
}}, {"azure_upload_part_size_multiply_parts_count_threshold", 500, 500, "Each time this number of parts was uploaded to Azure blob storage, azure_min_upload_part_size is multiplied by azure_upload_part_size_multiply_factor."},
{"24.1", {{"print_pretty_type_names", false, true, "Better user experience."}, {"output_format_csv_serialize_tuple_into_separate_columns", true, true, "A new way of how interpret tuples in CSV format was added."},
{"input_format_json_read_bools_as_strings", false, true, "Allow to read bools as strings in JSON formats by default"}, {"input_format_csv_deserialize_separate_columns_into_tuple", true, true, "A new way of how interpret tuples in CSV format was added."},
{"output_format_arrow_use_signed_indexes_for_dictionary", false, true, "Use signed indexes type for Arrow dictionaries by default as it's recommended"}, {"input_format_csv_try_infer_strings_from_quoted_tuples", true, true, "A new way of how interpret tuples in CSV format was added."},
{"allow_experimental_variant_type", false, false, "Add new experimental Variant type"}, }
{"use_variant_as_common_type", false, false, "Allow to use Variant in if/multiIf if there is no common type"}, },
{"output_format_arrow_use_64_bit_indexes_for_dictionary", false, false, "Allow to use 64 bit indexes type in Arrow dictionaries"}, {"24.2",
{"parallel_replicas_mark_segment_size", 128, 128, "Add new setting to control segment size in new parallel replicas coordinator implementation"}, {
{"ignore_materialized_views_with_dropped_target_table", false, false, "Add new setting to allow to ignore materialized views with dropped target table"}, {"allow_suspicious_variant_types", true, false, "Don't allow creating Variant type with suspicious variants by default"},
{"output_format_compression_level", 3, 3, "Allow to change compression level in the query output"}, {"validate_experimental_and_suspicious_types_inside_nested_types", false, true, "Validate usage of experimental and suspicious types inside nested types"},
{"output_format_compression_zstd_window_log", 0, 0, "Allow to change zstd window log in the query output when zstd compression is used"}, {"output_format_values_escape_quote_with_quote", false, false, "If true escape ' with '', otherwise quoted with \\'"},
{"enable_zstd_qat_codec", false, false, "Add new ZSTD_QAT codec"}, {"output_format_pretty_single_large_number_tip_threshold", 0, 1'000'000, "Print a readable number tip on the right side of the table if the block consists of a single number which exceeds this value (except 0)"},
{"enable_vertical_final", false, true, "Use vertical final by default"}, {"input_format_try_infer_exponent_floats", true, false, "Don't infer floats in exponential notation by default"},
{"output_format_arrow_use_64_bit_indexes_for_dictionary", false, false, "Allow to use 64 bit indexes type in Arrow dictionaries"}, {"query_plan_optimize_prewhere", true, true, "Allow to push down filter to PREWHERE expression for supported storages"},
{"max_rows_in_set_to_optimize_join", 100000, 0, "Disable join optimization as it prevents from read in order optimization"}, {"async_insert_max_data_size", 1000000, 10485760, "The previous value appeared to be too small."},
{"output_format_pretty_color", true, "auto", "Setting is changed to allow also for auto value, disabling ANSI escapes if output is not a tty"}, {"async_insert_poll_timeout_ms", 10, 10, "Timeout in milliseconds for polling data from asynchronous insert queue"},
{"function_visible_width_behavior", 0, 1, "We changed the default behavior of `visibleWidth` to be more precise"}, {"async_insert_use_adaptive_busy_timeout", false, true, "Use adaptive asynchronous insert timeout"},
{"max_estimated_execution_time", 0, 0, "Separate max_execution_time and max_estimated_execution_time"}, {"async_insert_busy_timeout_min_ms", 50, 50, "The minimum value of the asynchronous insert timeout in milliseconds; it also serves as the initial value, which may be increased later by the adaptive algorithm"},
{"iceberg_engine_ignore_schema_evolution", false, false, "Allow to ignore schema evolution in Iceberg table engine"}, {"async_insert_busy_timeout_max_ms", 200, 200, "The minimum value of the asynchronous insert timeout in milliseconds; async_insert_busy_timeout_ms is aliased to async_insert_busy_timeout_max_ms"},
{"optimize_injective_functions_in_group_by", false, true, "Replace injective functions by it's arguments in GROUP BY section in analyzer"}, {"async_insert_busy_timeout_increase_rate", 0.2, 0.2, "The exponential growth rate at which the adaptive asynchronous insert timeout increases"},
{"update_insert_deduplication_token_in_dependent_materialized_views", false, false, "Allow to update insert deduplication token with table identifier during insert in dependent materialized views"}, {"async_insert_busy_timeout_decrease_rate", 0.2, 0.2, "The exponential growth rate at which the adaptive asynchronous insert timeout decreases"},
{"azure_max_unexpected_write_error_retries", 4, 4, "The maximum number of retries in case of unexpected errors during Azure blob storage write"}, {"format_template_row_format", "", "", "Template row format string can be set directly in query"},
{"split_parts_ranges_into_intersecting_and_non_intersecting_final", false, true, "Allow to split parts ranges into intersecting and non intersecting during FINAL optimization"}, {"format_template_resultset_format", "", "", "Template result set format string can be set in query"},
{"split_intersecting_parts_ranges_into_layers_final", true, true, "Allow to split intersecting parts ranges into layers during FINAL optimization"}}}, {"split_parts_ranges_into_intersecting_and_non_intersecting_final", true, true, "Allow to split parts ranges into intersecting and non intersecting during FINAL optimization"},
{"23.12", {{"allow_suspicious_ttl_expressions", true, false, "It is a new setting, and in previous versions the behavior was equivalent to allowing."}, {"split_intersecting_parts_ranges_into_layers_final", true, true, "Allow to split intersecting parts ranges into layers during FINAL optimization"},
{"input_format_parquet_allow_missing_columns", false, true, "Allow missing columns in Parquet files by default"}, {"azure_max_single_part_copy_size", 256*1024*1024, 256*1024*1024, "The maximum size of object to copy using single part copy to Azure blob storage."},
{"input_format_orc_allow_missing_columns", false, true, "Allow missing columns in ORC files by default"}, {"min_external_table_block_size_rows", DEFAULT_INSERT_BLOCK_SIZE, DEFAULT_INSERT_BLOCK_SIZE, "Squash blocks passed to external table to specified size in rows, if blocks are not big enough"},
{"input_format_arrow_allow_missing_columns", false, true, "Allow missing columns in Arrow files by default"}}}, {"min_external_table_block_size_bytes", DEFAULT_INSERT_BLOCK_SIZE * 256, DEFAULT_INSERT_BLOCK_SIZE * 256, "Squash blocks passed to external table to specified size in bytes, if blocks are not big enough."},
{"23.11", {{"parsedatetime_parse_without_leading_zeros", false, true, "Improved compatibility with MySQL DATE_FORMAT/STR_TO_DATE"}}}, {"parallel_replicas_prefer_local_join", true, true, "If true, and JOIN can be executed with parallel replicas algorithm, and all storages of right JOIN part are *MergeTree, local JOIN will be used instead of GLOBAL JOIN."},
{"23.9", {{"optimize_group_by_constant_keys", false, true, "Optimize group by constant keys by default"}, {"optimize_time_filter_with_preimage", true, true, "Optimize Date and DateTime predicates by converting functions into equivalent comparisons without conversions (e.g. toYear(col) = 2023 -> col >= '2023-01-01' AND col <= '2023-12-31')"},
{"input_format_json_try_infer_named_tuples_from_objects", false, true, "Try to infer named Tuples from JSON objects by default"}, {"extract_key_value_pairs_max_pairs_per_row", 0, 0, "Max number of pairs that can be produced by the `extractKeyValuePairs` function. Used as a safeguard against consuming too much memory."},
{"input_format_json_read_numbers_as_strings", false, true, "Allow to read numbers as strings in JSON formats by default"}, {"default_view_definer", "CURRENT_USER", "CURRENT_USER", "Allows to set default `DEFINER` option while creating a view"},
{"input_format_json_read_arrays_as_strings", false, true, "Allow to read arrays as strings in JSON formats by default"}, {"default_materialized_view_sql_security", "DEFINER", "DEFINER", "Allows to set a default value for SQL SECURITY option when creating a materialized view"},
{"input_format_json_infer_incomplete_types_as_strings", false, true, "Allow to infer incomplete types as Strings in JSON formats by default"}, {"default_normal_view_sql_security", "INVOKER", "INVOKER", "Allows to set default `SQL SECURITY` option while creating a normal view"},
{"input_format_json_try_infer_numbers_from_strings", true, false, "Don't infer numbers from strings in JSON formats by default to prevent possible parsing errors"}, {"mysql_map_string_to_text_in_show_columns", false, true, "Reduce the configuration effort to connect ClickHouse with BI tools."},
{"http_write_exception_in_output_format", false, true, "Output valid JSON/XML on exception in HTTP streaming."}}}, {"mysql_map_fixed_string_to_text_in_show_columns", false, true, "Reduce the configuration effort to connect ClickHouse with BI tools."},
{"23.8", {{"rewrite_count_distinct_if_with_count_distinct_implementation", false, true, "Rewrite countDistinctIf with count_distinct_implementation configuration"}}}, }
{"23.7", {{"function_sleep_max_microseconds_per_block", 0, 3000000, "In previous versions, the maximum sleep time of 3 seconds was applied only for `sleep`, but not for `sleepEachRow` function. In the new version, we introduce this setting. If you set compatibility with the previous versions, we will disable the limit altogether."}}}, },
{"23.6", {{"http_send_timeout", 180, 30, "3 minutes seems crazy long. Note that this is timeout for a single network write call, not for the whole upload operation."}, {"24.1",
{"http_receive_timeout", 180, 30, "See http_send_timeout."}}}, {
{"23.5", {{"input_format_parquet_preserve_order", true, false, "Allow Parquet reader to reorder rows for better parallelism."}, {"print_pretty_type_names", false, true, "Better user experience."},
{"parallelize_output_from_storages", false, true, "Allow parallelism when executing queries that read from file/url/s3/etc. This may reorder rows."}, {"input_format_json_read_bools_as_strings", false, true, "Allow to read bools as strings in JSON formats by default"},
{"use_with_fill_by_sorting_prefix", false, true, "Columns preceding WITH FILL columns in ORDER BY clause form sorting prefix. Rows with different values in sorting prefix are filled independently"}, {"output_format_arrow_use_signed_indexes_for_dictionary", false, true, "Use signed indexes type for Arrow dictionaries by default as it's recommended"},
{"output_format_parquet_compliant_nested_types", false, true, "Change an internal field name in output Parquet file schema."}}}, {"allow_experimental_variant_type", false, false, "Add new experimental Variant type"},
{"23.4", {{"allow_suspicious_indices", true, false, "If true, index can defined with identical expressions"}, {"use_variant_as_common_type", false, false, "Allow to use Variant in if/multiIf if there is no common type"},
{"allow_nonconst_timezone_arguments", true, false, "Allow non-const timezone arguments in certain time-related functions like toTimeZone(), fromUnixTimestamp*(), snowflakeToDateTime*()."}, {"output_format_arrow_use_64_bit_indexes_for_dictionary", false, false, "Allow to use 64 bit indexes type in Arrow dictionaries"},
{"connect_timeout_with_failover_ms", 50, 1000, "Increase default connect timeout because of async connect"}, {"parallel_replicas_mark_segment_size", 128, 128, "Add new setting to control segment size in new parallel replicas coordinator implementation"},
{"connect_timeout_with_failover_secure_ms", 100, 1000, "Increase default secure connect timeout because of async connect"}, {"ignore_materialized_views_with_dropped_target_table", false, false, "Add new setting to allow to ignore materialized views with dropped target table"},
{"hedged_connection_timeout_ms", 100, 50, "Start new connection in hedged requests after 50 ms instead of 100 to correspond with previous connect timeout"}, {"output_format_compression_level", 3, 3, "Allow to change compression level in the query output"},
{"formatdatetime_f_prints_single_zero", true, false, "Improved compatibility with MySQL DATE_FORMAT()/STR_TO_DATE()"}, {"output_format_compression_zstd_window_log", 0, 0, "Allow to change zstd window log in the query output when zstd compression is used"},
{"formatdatetime_parsedatetime_m_is_month_name", false, true, "Improved compatibility with MySQL DATE_FORMAT/STR_TO_DATE"}}}, {"enable_zstd_qat_codec", false, false, "Add new ZSTD_QAT codec"},
{"23.3", {{"output_format_parquet_version", "1.0", "2.latest", "Use latest Parquet format version for output format"}, {"enable_vertical_final", false, true, "Use vertical final by default"},
{"input_format_json_ignore_unknown_keys_in_named_tuple", false, true, "Improve parsing JSON objects as named tuples"}, {"output_format_arrow_use_64_bit_indexes_for_dictionary", false, false, "Allow to use 64 bit indexes type in Arrow dictionaries"},
{"input_format_native_allow_types_conversion", false, true, "Allow types conversion in Native input forma"}, {"max_rows_in_set_to_optimize_join", 100000, 0, "Disable join optimization as it prevents from read in order optimization"},
{"output_format_arrow_compression_method", "none", "lz4_frame", "Use lz4 compression in Arrow output format by default"}, {"output_format_pretty_color", true, "auto", "Setting is changed to allow also for auto value, disabling ANSI escapes if output is not a tty"},
{"output_format_parquet_compression_method", "snappy", "lz4", "Use lz4 compression in Parquet output format by default"}, {"function_visible_width_behavior", 0, 1, "We changed the default behavior of `visibleWidth` to be more precise"},
{"output_format_orc_compression_method", "none", "lz4_frame", "Use lz4 compression in ORC output format by default"}, {"max_estimated_execution_time", 0, 0, "Separate max_execution_time and max_estimated_execution_time"},
{"async_query_sending_for_remote", false, true, "Create connections and send query async across shards"}}}, {"iceberg_engine_ignore_schema_evolution", false, false, "Allow to ignore schema evolution in Iceberg table engine"},
{"23.2", {{"output_format_parquet_fixed_string_as_fixed_byte_array", false, true, "Use Parquet FIXED_LENGTH_BYTE_ARRAY type for FixedString by default"}, {"optimize_injective_functions_in_group_by", false, true, "Replace injective functions by it's arguments in GROUP BY section in analyzer"},
{"output_format_arrow_fixed_string_as_fixed_byte_array", false, true, "Use Arrow FIXED_SIZE_BINARY type for FixedString by default"}, {"update_insert_deduplication_token_in_dependent_materialized_views", false, false, "Allow to update insert deduplication token with table identifier during insert in dependent materialized views"},
{"query_plan_remove_redundant_distinct", false, true, "Remove redundant Distinct step in query plan"}, {"azure_max_unexpected_write_error_retries", 4, 4, "The maximum number of retries in case of unexpected errors during Azure blob storage write"},
{"optimize_duplicate_order_by_and_distinct", true, false, "Remove duplicate ORDER BY and DISTINCT if it's possible"}, {"split_parts_ranges_into_intersecting_and_non_intersecting_final", false, true, "Allow to split parts ranges into intersecting and non intersecting during FINAL optimization"},
{"insert_keeper_max_retries", 0, 20, "Enable reconnections to Keeper on INSERT, improve reliability"}}}, {"split_intersecting_parts_ranges_into_layers_final", true, true, "Allow to split intersecting parts ranges into layers during FINAL optimization"}
{"23.1", {{"input_format_json_read_objects_as_strings", 0, 1, "Enable reading nested json objects as strings while object type is experimental"}, }
{"input_format_json_defaults_for_missing_elements_in_named_tuple", false, true, "Allow missing elements in JSON objects while reading named tuples by default"}, },
{"input_format_csv_detect_header", false, true, "Detect header in CSV format by default"}, {"23.12",
{"input_format_tsv_detect_header", false, true, "Detect header in TSV format by default"}, {
{"input_format_custom_detect_header", false, true, "Detect header in CustomSeparated format by default"}, {"allow_suspicious_ttl_expressions", true, false, "It is a new setting, and in previous versions the behavior was equivalent to allowing."},
{"query_plan_remove_redundant_sorting", false, true, "Remove redundant sorting in query plan. For example, sorting steps related to ORDER BY clauses in subqueries"}}}, {"input_format_parquet_allow_missing_columns", false, true, "Allow missing columns in Parquet files by default"},
{"22.12", {{"max_size_to_preallocate_for_aggregation", 10'000'000, 100'000'000, "This optimizes performance"}, {"input_format_orc_allow_missing_columns", false, true, "Allow missing columns in ORC files by default"},
{"query_plan_aggregation_in_order", 0, 1, "Enable some refactoring around query plan"}, {"input_format_arrow_allow_missing_columns", false, true, "Allow missing columns in Arrow files by default"}
{"format_binary_max_string_size", 0, 1_GiB, "Prevent allocating large amount of memory"}}}, }
{"22.11", {{"use_structure_from_insertion_table_in_table_functions", 0, 2, "Improve using structure from insertion table in table functions"}}}, },
{"22.9", {{"force_grouping_standard_compatibility", false, true, "Make GROUPING function output the same as in SQL standard and other DBMS"}}}, {"23.11",
{"22.7", {{"cross_to_inner_join_rewrite", 1, 2, "Force rewrite comma join to inner"}, {
{"enable_positional_arguments", false, true, "Enable positional arguments feature by default"}, {"parsedatetime_parse_without_leading_zeros", false, true, "Improved compatibility with MySQL DATE_FORMAT/STR_TO_DATE"}
{"format_csv_allow_single_quotes", true, false, "Most tools don't treat single quote in CSV specially, don't do it by default too"}}}, }
{"22.6", {{"output_format_json_named_tuples_as_objects", false, true, "Allow to serialize named tuples as JSON objects in JSON formats by default"}, },
{"input_format_skip_unknown_fields", false, true, "Optimize reading subset of columns for some input formats"}}}, {"23.9",
{"22.5", {{"memory_overcommit_ratio_denominator", 0, 1073741824, "Enable memory overcommit feature by default"}, {
{"memory_overcommit_ratio_denominator_for_user", 0, 1073741824, "Enable memory overcommit feature by default"}}}, {"optimize_group_by_constant_keys", false, true, "Optimize group by constant keys by default"},
{"22.4", {{"allow_settings_after_format_in_insert", true, false, "Do not allow SETTINGS after FORMAT for INSERT queries because ClickHouse interpret SETTINGS as some values, which is misleading"}}}, {"input_format_json_try_infer_named_tuples_from_objects", false, true, "Try to infer named Tuples from JSON objects by default"},
{"22.3", {{"cast_ipv4_ipv6_default_on_conversion_error", true, false, "Make functions cast(value, 'IPv4') and cast(value, 'IPv6') behave same as toIPv4 and toIPv6 functions"}}}, {"input_format_json_read_numbers_as_strings", false, true, "Allow to read numbers as strings in JSON formats by default"},
{"21.12", {{"stream_like_engine_allow_direct_select", true, false, "Do not allow direct select for Kafka/RabbitMQ/FileLog by default"}}}, {"input_format_json_read_arrays_as_strings", false, true, "Allow to read arrays as strings in JSON formats by default"},
{"21.9", {{"output_format_decimal_trailing_zeros", true, false, "Do not output trailing zeros in text representation of Decimal types by default for better looking output"}, {"input_format_json_infer_incomplete_types_as_strings", false, true, "Allow to infer incomplete types as Strings in JSON formats by default"},
{"use_hedged_requests", false, true, "Enable Hedged Requests feature by default"}}}, {"input_format_json_try_infer_numbers_from_strings", true, false, "Don't infer numbers from strings in JSON formats by default to prevent possible parsing errors"},
{"21.7", {{"legacy_column_name_of_tuple_literal", true, false, "Add this setting only for compatibility reasons. It makes sense to set to 'true', while doing rolling update of cluster from version lower than 21.7 to higher"}}}, {"http_write_exception_in_output_format", false, true, "Output valid JSON/XML on exception in HTTP streaming."}
{"21.5", {{"async_socket_for_remote", false, true, "Fix all problems and turn on asynchronous reads from socket for remote queries by default again"}}}, }
{"21.3", {{"async_socket_for_remote", true, false, "Turn off asynchronous reads from socket for remote queries because of some problems"}, },
{"optimize_normalize_count_variants", false, true, "Rewrite aggregate functions that semantically equals to count() as count() by default"}, {"23.8",
{"normalize_function_names", false, true, "Normalize function names to their canonical names, this was needed for projection query routing"}}}, {
{"21.2", {{"enable_global_with_statement", false, true, "Propagate WITH statements to UNION queries and all subqueries by default"}}}, {"rewrite_count_distinct_if_with_count_distinct_implementation", false, true, "Rewrite countDistinctIf with count_distinct_implementation configuration"}
{"21.1", {{"insert_quorum_parallel", false, true, "Use parallel quorum inserts by default. It is significantly more convenient to use than sequential quorum inserts"}, }
{"input_format_null_as_default", false, true, "Allow to insert NULL as default for input formats by default"}, },
{"optimize_on_insert", false, true, "Enable data optimization on INSERT by default for better user experience"}, {"23.7",
{
{"function_sleep_max_microseconds_per_block", 0, 3000000, "In previous versions, the maximum sleep time of 3 seconds was applied only for `sleep`, but not for `sleepEachRow` function. In the new version, we introduce this setting. If you set compatibility with the previous versions, we will disable the limit altogether."}
}
},
{"23.6",
{
{"http_send_timeout", 180, 30, "3 minutes seems crazy long. Note that this is timeout for a single network write call, not for the whole upload operation."},
{"http_receive_timeout", 180, 30, "See http_send_timeout."}
}
},
{"23.5",
{
{"input_format_parquet_preserve_order", true, false, "Allow Parquet reader to reorder rows for better parallelism."},
{"parallelize_output_from_storages", false, true, "Allow parallelism when executing queries that read from file/url/s3/etc. This may reorder rows."},
{"use_with_fill_by_sorting_prefix", false, true, "Columns preceding WITH FILL columns in ORDER BY clause form sorting prefix. Rows with different values in sorting prefix are filled independently"},
{"output_format_parquet_compliant_nested_types", false, true, "Change an internal field name in output Parquet file schema."}
}
},
{"23.4",
{
{"allow_suspicious_indices", true, false, "If true, an index can be defined with identical expressions"},
{"allow_nonconst_timezone_arguments", true, false, "Allow non-const timezone arguments in certain time-related functions like toTimeZone(), fromUnixTimestamp*(), snowflakeToDateTime*()."},
{"connect_timeout_with_failover_ms", 50, 1000, "Increase default connect timeout because of async connect"},
{"connect_timeout_with_failover_secure_ms", 100, 1000, "Increase default secure connect timeout because of async connect"},
{"hedged_connection_timeout_ms", 100, 50, "Start new connection in hedged requests after 50 ms instead of 100 to correspond with previous connect timeout"},
{"formatdatetime_f_prints_single_zero", true, false, "Improved compatibility with MySQL DATE_FORMAT()/STR_TO_DATE()"},
{"formatdatetime_parsedatetime_m_is_month_name", false, true, "Improved compatibility with MySQL DATE_FORMAT/STR_TO_DATE"}
}
},
{"23.3",
{
{"output_format_parquet_version", "1.0", "2.latest", "Use latest Parquet format version for output format"},
{"input_format_json_ignore_unknown_keys_in_named_tuple", false, true, "Improve parsing JSON objects as named tuples"},
{"input_format_native_allow_types_conversion", false, true, "Allow types conversion in Native input format"},
{"output_format_arrow_compression_method", "none", "lz4_frame", "Use lz4 compression in Arrow output format by default"},
{"output_format_parquet_compression_method", "snappy", "lz4", "Use lz4 compression in Parquet output format by default"},
{"output_format_orc_compression_method", "none", "lz4_frame", "Use lz4 compression in ORC output format by default"},
{"async_query_sending_for_remote", false, true, "Create connections and send query async across shards"}
}
},
{"23.2",
{
{"output_format_parquet_fixed_string_as_fixed_byte_array", false, true, "Use Parquet FIXED_LENGTH_BYTE_ARRAY type for FixedString by default"},
{"output_format_arrow_fixed_string_as_fixed_byte_array", false, true, "Use Arrow FIXED_SIZE_BINARY type for FixedString by default"},
{"query_plan_remove_redundant_distinct", false, true, "Remove redundant Distinct step in query plan"},
{"optimize_duplicate_order_by_and_distinct", true, false, "Remove duplicate ORDER BY and DISTINCT if it's possible"},
{"insert_keeper_max_retries", 0, 20, "Enable reconnections to Keeper on INSERT, improve reliability"}
}
},
{"23.1",
{
{"input_format_json_read_objects_as_strings", 0, 1, "Enable reading nested json objects as strings while object type is experimental"},
{"input_format_json_defaults_for_missing_elements_in_named_tuple", false, true, "Allow missing elements in JSON objects while reading named tuples by default"},
{"input_format_csv_detect_header", false, true, "Detect header in CSV format by default"},
{"input_format_tsv_detect_header", false, true, "Detect header in TSV format by default"},
{"input_format_custom_detect_header", false, true, "Detect header in CustomSeparated format by default"},
{"query_plan_remove_redundant_sorting", false, true, "Remove redundant sorting in query plan. For example, sorting steps related to ORDER BY clauses in subqueries"}
}
},
{"22.12",
{
{"max_size_to_preallocate_for_aggregation", 10'000'000, 100'000'000, "This optimizes performance"},
{"query_plan_aggregation_in_order", 0, 1, "Enable some refactoring around query plan"},
{"format_binary_max_string_size", 0, 1_GiB, "Prevent allocating large amount of memory"}
}
},
{"22.11",
{
{"use_structure_from_insertion_table_in_table_functions", 0, 2, "Improve using structure from insertion table in table functions"}
}
},
{"22.9",
{
{"force_grouping_standard_compatibility", false, true, "Make GROUPING function output the same as in SQL standard and other DBMS"}
}
},
{"22.7",
{
{"cross_to_inner_join_rewrite", 1, 2, "Force rewrite comma join to inner"},
{"enable_positional_arguments", false, true, "Enable positional arguments feature by default"},
{"format_csv_allow_single_quotes", true, false, "Most tools don't treat single quote in CSV specially, don't do it by default too"}
}
},
{"22.6",
{
{"output_format_json_named_tuples_as_objects", false, true, "Allow to serialize named tuples as JSON objects in JSON formats by default"},
{"input_format_skip_unknown_fields", false, true, "Optimize reading subset of columns for some input formats"}
}
},
{"22.5",
{
{"memory_overcommit_ratio_denominator", 0, 1073741824, "Enable memory overcommit feature by default"},
{"memory_overcommit_ratio_denominator_for_user", 0, 1073741824, "Enable memory overcommit feature by default"}
}
},
{"22.4",
{
{"allow_settings_after_format_in_insert", true, false, "Do not allow SETTINGS after FORMAT for INSERT queries because ClickHouse interprets SETTINGS as some values, which is misleading"}
}
},
{"22.3",
{
{"cast_ipv4_ipv6_default_on_conversion_error", true, false, "Make functions cast(value, 'IPv4') and cast(value, 'IPv6') behave same as toIPv4 and toIPv6 functions"}
}
},
{"21.12",
{
{"stream_like_engine_allow_direct_select", true, false, "Do not allow direct select for Kafka/RabbitMQ/FileLog by default"}
}
},
{"21.9",
{
{"output_format_decimal_trailing_zeros", true, false, "Do not output trailing zeros in text representation of Decimal types by default for better looking output"},
{"use_hedged_requests", false, true, "Enable Hedged Requests feature by default"}
}
},
{"21.7",
{
{"legacy_column_name_of_tuple_literal", true, false, "Add this setting only for compatibility reasons. It makes sense to set it to 'true' while doing a rolling update of a cluster from a version lower than 21.7 to a higher one"}
}
},
{"21.5",
{
{"async_socket_for_remote", false, true, "Fix all problems and turn on asynchronous reads from socket for remote queries by default again"}
}
},
{"21.3",
{
{"async_socket_for_remote", true, false, "Turn off asynchronous reads from socket for remote queries because of some problems"},
{"optimize_normalize_count_variants", false, true, "Rewrite aggregate functions that are semantically equal to count() as count() by default"},
{"normalize_function_names", false, true, "Normalize function names to their canonical names, this was needed for projection query routing"}
}
},
{"21.2",
{
{"enable_global_with_statement", false, true, "Propagate WITH statements to UNION queries and all subqueries by default"}
}
},
{"21.1",
{
{"insert_quorum_parallel", false, true, "Use parallel quorum inserts by default. It is significantly more convenient to use than sequential quorum inserts"},
{"input_format_null_as_default", false, true, "Allow to insert NULL as default for input formats by default"},
{"optimize_on_insert", false, true, "Enable data optimization on INSERT by default for better user experience"},
{"use_compact_format_in_distributed_parts_names", false, true, "Use compact format for async INSERT into Distributed tables by default"}
}
},
{"20.10",
{
{"format_regexp_escaping_rule", "Escaped", "Raw", "Use Raw as the default escaping rule for Regexp format to make the behaviour more like what users expect"}
}
},
{"20.7",
{
{"show_table_uuid_in_table_create_query_if_not_nil", true, false, "Stop showing UUID of the table in its CREATE query for Engine=Atomic"}
}
},
{"20.5",
{
{"input_format_with_names_use_header", false, true, "Enable using header with names for formats with WithNames/WithNamesAndTypes suffixes"},
{"allow_suspicious_codecs", true, false, "Don't allow to specify meaningless compression codecs"}
}
},
{"20.4",
{
{"validate_polygons", false, true, "Throw exception if polygon is invalid in function pointInPolygon by default instead of returning possibly wrong results"}
}
},
{"19.18",
{
{"enable_scalar_subquery_optimization", false, true, "Prevent scalar subqueries from (de)serializing large scalar values and possibly avoid running the same subquery more than once"}
}
},
{"19.14",
{
{"any_join_distinct_right_table_keys", true, false, "Disable ANY RIGHT and ANY FULL JOINs by default to avoid inconsistency"}
}
},
{"19.12",
{
{"input_format_defaults_for_omitted_fields", false, true, "Enable calculation of complex default expressions for omitted fields for some input formats, because it should be the expected behaviour"}
}
},
{"19.5",
{
{"max_partitions_per_insert_block", 0, 100, "Add a limit for the number of partitions in one block"}
}
},
{"18.12.17",
{
{"enable_optimize_predicate_expression", 0, 1, "Optimize predicates to subqueries by default"}
}
},
};
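The list above maps each release version to the setting defaults that changed in it; ClickHouse's `compatibility` setting uses such a history to roll defaults back. Below is a minimal, self-contained sketch of that lookup idea. All names here (`SettingChange`, `settingsForCompatibility`) are illustrative, not ClickHouse's actual API, and version strings are compared lexicographically for brevity, which the real code does not do.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <vector>

// One changed setting: name, old default, new default, reason.
struct SettingChange
{
    std::string name;
    std::string old_value;
    std::string new_value;
    std::string reason;
};

// Version -> changes introduced in that release. std::greater puts the
// newest version first, so iteration walks backwards through history.
// Caveat: lexicographic string comparison is a simplification ("22.12"
// would sort before "22.3"); real code compares parsed version parts.
using ChangesHistory = std::map<std::string, std::vector<SettingChange>, std::greater<>>;

// Collect the defaults to restore for compatibility with target_version:
// every change made in a release strictly newer than the target is rolled
// back to its old value; the oldest applicable change wins.
std::map<std::string, std::string>
settingsForCompatibility(const ChangesHistory & history, const std::string & target_version)
{
    std::map<std::string, std::string> restored;
    for (const auto & [version, changes] : history)
    {
        if (version <= target_version)
            break; // this and all older releases already had the old defaults
        for (const auto & change : changes)
            restored[change.name] = change.old_value; // older versions overwrite newer ones
    }
    return restored;
}
```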

View File

@@ -1,3 +1,4 @@
+#include <optional>
 #include <Disks/ObjectStorages/AzureBlobStorage/AzureObjectStorage.h>
 #include "Common/Exception.h"
@@ -117,7 +118,8 @@ AzureObjectStorage::AzureObjectStorage(
 {
 }
-ObjectStorageKey AzureObjectStorage::generateObjectKeyForPath(const std::string & /* path */) const
+ObjectStorageKey
+AzureObjectStorage::generateObjectKeyForPath(const std::string & /* path */, const std::optional<std::string> & /* key_prefix */) const
 {
     return ObjectStorageKey::createAsRelative(getRandomASCIIString(32));
 }

View File

@@ -101,7 +101,7 @@ public:
     const std::string & config_prefix,
     ContextPtr context) override;
-    ObjectStorageKey generateObjectKeyForPath(const std::string & path) const override;
+    ObjectStorageKey generateObjectKeyForPath(const std::string & path, const std::optional<std::string> & key_prefix) const override;
     bool isRemote() const override { return true; }

View File

@@ -34,14 +34,16 @@ FileCache::Key CachedObjectStorage::getCacheKey(const std::string & path) const
     return cache->createKeyForPath(path);
 }
-ObjectStorageKey CachedObjectStorage::generateObjectKeyForPath(const std::string & path) const
+ObjectStorageKey
+CachedObjectStorage::generateObjectKeyForPath(const std::string & path, const std::optional<std::string> & key_prefix) const
 {
-    return object_storage->generateObjectKeyForPath(path);
+    return object_storage->generateObjectKeyForPath(path, key_prefix);
 }
-ObjectStorageKey CachedObjectStorage::generateObjectKeyPrefixForDirectoryPath(const std::string & path) const
+ObjectStorageKey
+CachedObjectStorage::generateObjectKeyPrefixForDirectoryPath(const std::string & path, const std::optional<std::string> & key_prefix) const
 {
-    return object_storage->generateObjectKeyPrefixForDirectoryPath(path);
+    return object_storage->generateObjectKeyPrefixForDirectoryPath(path, key_prefix);
 }
 ReadSettings CachedObjectStorage::patchSettings(const ReadSettings & read_settings) const

View File

@@ -98,9 +98,10 @@ public:
     const std::string & getCacheName() const override { return cache_config_name; }
-    ObjectStorageKey generateObjectKeyForPath(const std::string & path) const override;
+    ObjectStorageKey generateObjectKeyForPath(const std::string & path, const std::optional<std::string> & key_prefix) const override;
-    ObjectStorageKey generateObjectKeyPrefixForDirectoryPath(const std::string & path) const override;
+    ObjectStorageKey
+    generateObjectKeyPrefixForDirectoryPath(const std::string & path, const std::optional<std::string> & key_prefix) const override;
     void setKeysGenerator(ObjectStorageKeysGeneratorPtr gen) override { object_storage->setKeysGenerator(gen); }

View File

@@ -1,5 +1,7 @@
-#include "CommonPathPrefixKeyGenerator.h"
+#include <Disks/ObjectStorages/CommonPathPrefixKeyGenerator.h>
+#include <Disks/ObjectStorages/InMemoryPathMap.h>
+#include <Common/SharedLockGuard.h>
 #include <Common/getRandomASCIIString.h>
 #include <deque>
@@ -9,21 +11,22 @@
 namespace DB
 {
-CommonPathPrefixKeyGenerator::CommonPathPrefixKeyGenerator(
-    String key_prefix_, SharedMutex & shared_mutex_, std::weak_ptr<PathMap> path_map_)
-    : storage_key_prefix(key_prefix_), shared_mutex(shared_mutex_), path_map(std::move(path_map_))
+CommonPathPrefixKeyGenerator::CommonPathPrefixKeyGenerator(String key_prefix_, std::weak_ptr<InMemoryPathMap> path_map_)
+    : storage_key_prefix(key_prefix_), path_map(std::move(path_map_))
 {
 }
-ObjectStorageKey CommonPathPrefixKeyGenerator::generate(const String & path, bool is_directory) const
+ObjectStorageKey
+CommonPathPrefixKeyGenerator::generate(const String & path, bool is_directory, const std::optional<String> & key_prefix) const
 {
-    const auto & [object_key_prefix, suffix_parts] = getLongestObjectKeyPrefix(path);
+    const auto & [object_key_prefix, suffix_parts]
+        = getLongestObjectKeyPrefix(is_directory ? std::filesystem::path(path).parent_path().string() : path);
-    auto key = std::filesystem::path(object_key_prefix.empty() ? storage_key_prefix : object_key_prefix);
+    auto key = std::filesystem::path(object_key_prefix);
     /// The longest prefix is the same as path, meaning that the path is already mapped.
     if (suffix_parts.empty())
-        return ObjectStorageKey::createAsRelative(std::move(key));
+        return ObjectStorageKey::createAsRelative(key_prefix.has_value() ? *key_prefix : storage_key_prefix, std::move(key));
     /// File and top-level directory paths are mapped as is.
     if (!is_directory || object_key_prefix.empty())
@@ -39,7 +42,7 @@ ObjectStorageKey CommonPathPrefixKeyGenerator::generate(const String & path, boo
         key /= getRandomASCIIString(part_size);
     }
-    return ObjectStorageKey::createAsRelative(key);
+    return ObjectStorageKey::createAsRelative(key_prefix.has_value() ? *key_prefix : storage_key_prefix, key);
 }
 std::tuple<std::string, std::vector<std::string>> CommonPathPrefixKeyGenerator::getLongestObjectKeyPrefix(const std::string & path) const
@@ -47,14 +50,13 @@ std::tuple<std::string, std::vector<std::string>> CommonPathPrefixKeyGenerator::
     std::filesystem::path p(path);
     std::deque<std::string> dq;
-    std::shared_lock lock(shared_mutex);
-    auto ptr = path_map.lock();
+    const auto ptr = path_map.lock();
+    SharedLockGuard lock(ptr->mutex);
     while (p != p.root_path())
     {
-        auto it = ptr->find(p / "");
+        auto it = ptr->map.find(p);
-        if (it != ptr->end())
+        if (it != ptr->map.end())
         {
             std::vector<std::string> vec(std::make_move_iterator(dq.begin()), std::make_move_iterator(dq.end()));
             return std::make_tuple(it->second, std::move(vec));
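The `getLongestObjectKeyPrefix()` change above walks a path's ancestors upward until one is found in the local-to-remote map, returning the mapped prefix plus the unresolved components. A standalone sketch of that walk, assuming a plain `std::map` without the component-count comparator or locking (the helper name `longestMappedPrefix` is illustrative):

```cpp
#include <cassert>
#include <deque>
#include <filesystem>
#include <map>
#include <string>
#include <tuple>
#include <vector>

// Local path -> remote object key prefix, as in the path map above.
using PathMap = std::map<std::filesystem::path, std::string>;

// Walk from `path` toward the root; the first ancestor present in the map
// is the longest mapped prefix. Components trimmed along the way are
// returned as the unresolved suffix.
std::tuple<std::string, std::vector<std::string>>
longestMappedPrefix(const PathMap & map, const std::string & path)
{
    std::filesystem::path p(path);
    std::deque<std::string> suffix;
    while (p != p.root_path() && !p.empty())
    {
        auto it = map.find(p);
        if (it != map.end())
            return {it->second, std::vector<std::string>(suffix.begin(), suffix.end())};
        suffix.push_front(p.filename().string());
        p = p.parent_path();
    }
    // No ancestor is mapped: empty prefix, the whole path is the suffix.
    return {"", std::vector<std::string>(suffix.begin(), suffix.end())};
}
```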

View File

@@ -1,14 +1,15 @@
 #pragma once
 #include <Common/ObjectStorageKeyGenerator.h>
-#include <Common/SharedMutex.h>
 #include <filesystem>
 #include <map>
+#include <optional>
 namespace DB
 {
+/// Deprecated. Used for backward compatibility with plain rewritable disks without a separate metadata layout.
 /// Object storage key generator used specifically with the
 /// MetadataStorageFromPlainObjectStorage if multiple writes are allowed.
@@ -18,15 +19,16 @@ namespace DB
 ///
 /// The key generator ensures that the original directory hierarchy is
 /// preserved, which is required for the MergeTree family.
+struct InMemoryPathMap;
 class CommonPathPrefixKeyGenerator : public IObjectStorageKeysGenerator
 {
 public:
     /// Local to remote path map. Leverages filesystem::path comparator for paths.
-    using PathMap = std::map<std::filesystem::path, std::string>;
-    explicit CommonPathPrefixKeyGenerator(String key_prefix_, SharedMutex & shared_mutex_, std::weak_ptr<PathMap> path_map_);
+    explicit CommonPathPrefixKeyGenerator(String key_prefix_, std::weak_ptr<InMemoryPathMap> path_map_);
-    ObjectStorageKey generate(const String & path, bool is_directory) const override;
+    ObjectStorageKey generate(const String & path, bool is_directory, const std::optional<String> & key_prefix) const override;
 private:
     /// Longest key prefix and unresolved parts of the source path.
@@ -34,8 +36,7 @@ private:
     const String storage_key_prefix;
-    SharedMutex & shared_mutex;
-    std::weak_ptr<PathMap> path_map;
+    std::weak_ptr<InMemoryPathMap> path_map;
 };
 }

View File

@@ -537,7 +537,7 @@ struct CopyFileObjectStorageOperation final : public IDiskObjectStorageOperation
     for (const auto & object_from : source_blobs)
     {
-        auto object_key = destination_object_storage.generateObjectKeyForPath(to_path);
+        auto object_key = destination_object_storage.generateObjectKeyForPath(to_path, std::nullopt /* key_prefix */);
         auto object_to = StoredObject(object_key.serialize());
         object_storage.copyObjectToAnotherObjectStorage(object_from, object_to,read_settings,write_settings, destination_object_storage);
@@ -738,7 +738,7 @@ std::unique_ptr<WriteBufferFromFileBase> DiskObjectStorageTransaction::writeFile
     const WriteSettings & settings,
     bool autocommit)
 {
-    auto object_key = object_storage.generateObjectKeyForPath(path);
+    auto object_key = object_storage.generateObjectKeyForPath(path, std::nullopt /* key_prefix */);
     std::optional<ObjectAttributes> object_attributes;
     if (metadata_helper)
@@ -835,7 +835,7 @@ void DiskObjectStorageTransaction::writeFileUsingBlobWritingFunction(
     const String & path, WriteMode mode, WriteBlobFunction && write_blob_function)
 {
     /// This function is a simplified and adapted version of DiskObjectStorageTransaction::writeFile().
-    auto object_key = object_storage.generateObjectKeyForPath(path);
+    auto object_key = object_storage.generateObjectKeyForPath(path, std::nullopt /* key_prefix */);
     std::optional<ObjectAttributes> object_attributes;
     if (metadata_helper)

View File

@@ -0,0 +1,51 @@
+#include "FlatDirectoryStructureKeyGenerator.h"
+#include <Disks/ObjectStorages/InMemoryPathMap.h>
+#include "Common/ObjectStorageKey.h"
+#include <Common/SharedLockGuard.h>
+#include <Common/SharedMutex.h>
+#include <Common/getRandomASCIIString.h>
+#include <optional>
+#include <shared_mutex>
+#include <string>
+namespace DB
+{
+FlatDirectoryStructureKeyGenerator::FlatDirectoryStructureKeyGenerator(String storage_key_prefix_, std::weak_ptr<InMemoryPathMap> path_map_)
+    : storage_key_prefix(storage_key_prefix_), path_map(std::move(path_map_))
+{
+}
+ObjectStorageKey FlatDirectoryStructureKeyGenerator::generate(const String & path, bool is_directory, const std::optional<String> & key_prefix) const
+{
+    if (is_directory)
+        chassert(path.ends_with('/'));
+    const auto p = std::filesystem::path(path);
+    auto directory = p.parent_path();
+    std::optional<std::filesystem::path> remote_path;
+    {
+        const auto ptr = path_map.lock();
+        SharedLockGuard lock(ptr->mutex);
+        auto it = ptr->map.find(p);
+        if (it != ptr->map.end())
+            return ObjectStorageKey::createAsRelative(key_prefix.has_value() ? *key_prefix : storage_key_prefix, it->second);
+        it = ptr->map.find(directory);
+        if (it != ptr->map.end())
+            remote_path = it->second;
+    }
+    constexpr size_t part_size = 32;
+    std::filesystem::path key = remote_path.has_value() ? *remote_path
+        : is_directory                                  ? std::filesystem::path(getRandomASCIIString(part_size))
+                                                        : directory;
+    if (!is_directory)
+        key /= p.filename();
+    return ObjectStorageKey::createAsRelative(key_prefix.has_value() ? *key_prefix : storage_key_prefix, key);
+}
+}
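The new generator above resolves a key in three steps: an exactly mapped path reuses its remote key, a file under a mapped directory inherits the directory's remote key plus its own filename, and anything else gets a fresh random key. A simplified, lock-free sketch of that resolution (the name `resolveFlatKey` is illustrative; random-key generation is stubbed out as `std::nullopt`):

```cpp
#include <cassert>
#include <filesystem>
#include <map>
#include <optional>
#include <string>

// Local directory/file path -> remote key, as kept by the in-memory path map.
using PathMap = std::map<std::filesystem::path, std::string>;

// Simplified version of FlatDirectoryStructureKeyGenerator::generate():
// exact matches reuse the mapped key; files inherit the parent directory's
// remote key plus their own filename; anything else would receive a
// freshly generated random key in the real generator.
std::optional<std::string> resolveFlatKey(const PathMap & map, const std::string & path)
{
    const std::filesystem::path p(path);
    if (auto it = map.find(p); it != map.end())
        return it->second;
    if (auto it = map.find(p.parent_path()); it != map.end())
        return (std::filesystem::path(it->second) / p.filename()).string();
    return std::nullopt; // stands in for getRandomASCIIString() in the sketch
}
```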

View File

@@ -0,0 +1,23 @@
+#pragma once
+#include <Common/ObjectStorageKeyGenerator.h>
+#include <memory>
+namespace DB
+{
+struct InMemoryPathMap;
+class FlatDirectoryStructureKeyGenerator : public IObjectStorageKeysGenerator
+{
+public:
+    explicit FlatDirectoryStructureKeyGenerator(String storage_key_prefix_, std::weak_ptr<InMemoryPathMap> path_map_);
+    ObjectStorageKey generate(const String & path, bool is_directory, const std::optional<String> & key_prefix) const override;
+private:
+    const String storage_key_prefix;
+    std::weak_ptr<InMemoryPathMap> path_map;
+};
+}

View File

@@ -4,8 +4,8 @@
 #include <Storages/ObjectStorage/HDFS/WriteBufferFromHDFS.h>
 #include <Storages/ObjectStorage/HDFS/HDFSCommon.h>
-#include <Storages/ObjectStorage/HDFS/ReadBufferFromHDFS.h>
 #include <Disks/IO/ReadBufferFromRemoteFSGather.h>
+#include <Storages/ObjectStorage/HDFS/ReadBufferFromHDFS.h>
 #include <Common/getRandomASCIIString.h>
 #include <Common/logger_useful.h>
@@ -53,7 +53,8 @@ std::string HDFSObjectStorage::extractObjectKeyFromURL(const StoredObject & obje
     return path;
 }
-ObjectStorageKey HDFSObjectStorage::generateObjectKeyForPath(const std::string & /* path */) const
+ObjectStorageKey
+HDFSObjectStorage::generateObjectKeyForPath(const std::string & /* path */, const std::optional<std::string> & /* key_prefix */) const
 {
     initializeHDFSFS();
     /// what ever data_source_description.description value is, consider that key as relative key

View File

@@ -111,7 +111,7 @@ public:
     const std::string & config_prefix,
     ContextPtr context) override;
-    ObjectStorageKey generateObjectKeyForPath(const std::string & path) const override;
+    ObjectStorageKey generateObjectKeyForPath(const std::string & path, const std::optional<std::string> & key_prefix) const override;
     bool isRemote() const override { return true; }

View File

@@ -232,10 +232,11 @@ public:
     /// Generate blob name for passed absolute local path.
     /// Path can be generated either independently or based on `path`.
-    virtual ObjectStorageKey generateObjectKeyForPath(const std::string & path) const = 0;
+    virtual ObjectStorageKey generateObjectKeyForPath(const std::string & path, const std::optional<std::string> & key_prefix) const = 0;
     /// Object key prefix for local paths in the directory 'path'.
-    virtual ObjectStorageKey generateObjectKeyPrefixForDirectoryPath(const std::string & /* path */) const
+    virtual ObjectStorageKey
+    generateObjectKeyPrefixForDirectoryPath(const std::string & /* path */, const std::optional<std::string> & /* key_prefix */) const
     {
         throw Exception(ErrorCodes::NOT_IMPLEMENTED, "Method 'generateObjectKeyPrefixForDirectoryPath' is not implemented");
     }

View File

@@ -0,0 +1,37 @@
+#pragma once
+#include <filesystem>
+#include <map>
+#include <base/defines.h>
+#include <Common/SharedMutex.h>
+namespace DB
+{
+struct InMemoryPathMap
+{
+    struct PathComparator
+    {
+        bool operator()(const std::filesystem::path & path1, const std::filesystem::path & path2) const
+        {
+            auto d1 = std::distance(path1.begin(), path1.end());
+            auto d2 = std::distance(path2.begin(), path2.end());
+            if (d1 != d2)
+                return d1 < d2;
+            return path1 < path2;
+        }
+    };
+    /// Local -> Remote path.
+    using Map = std::map<std::filesystem::path, std::string, PathComparator>;
+    mutable SharedMutex mutex;
+#ifdef OS_LINUX
+    Map TSA_GUARDED_BY(mutex) map;
+/// std::shared_mutex may not be annotated with the 'capability' attribute in libcxx.
+#else
+    Map map;
+#endif
+};
+}
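The `PathComparator` in the new header orders paths by component count before falling back to lexicographic comparison, so a parent directory always sorts before any of its descendants. A standalone copy of the comparator with a small usage example showing that ordering:

```cpp
#include <cassert>
#include <filesystem>
#include <iterator>
#include <set>

// Same ordering rule as InMemoryPathMap::PathComparator above:
// fewer path components sort first; ties are broken lexicographically.
struct PathComparator
{
    bool operator()(const std::filesystem::path & path1, const std::filesystem::path & path2) const
    {
        auto d1 = std::distance(path1.begin(), path1.end());
        auto d2 = std::distance(path2.begin(), path2.end());
        if (d1 != d2)
            return d1 < d2;
        return path1 < path2;
    }
};
```

With this comparator, "a/b" sorts after "z" despite 'a' < 'z', because depth wins over spelling.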

View File

@@ -1,15 +1,15 @@
 #include <Disks/ObjectStorages/Local/LocalObjectStorage.h>
-#include <Interpreters/Context.h>
+#include <filesystem>
-#include <Common/filesystemHelpers.h>
+#include <Disks/IO/AsynchronousBoundedReadBuffer.h>
-#include <Common/logger_useful.h>
 #include <Disks/IO/ReadBufferFromRemoteFSGather.h>
 #include <Disks/IO/createReadBufferFromFileBase.h>
-#include <Disks/IO/AsynchronousBoundedReadBuffer.h>
 #include <IO/WriteBufferFromFile.h>
 #include <IO/copyData.h>
+#include <Interpreters/Context.h>
+#include <Common/filesystemHelpers.h>
 #include <Common/getRandomASCIIString.h>
-#include <filesystem>
+#include <Common/logger_useful.h>
 namespace fs = std::filesystem;
@@ -222,7 +222,8 @@ std::unique_ptr<IObjectStorage> LocalObjectStorage::cloneObjectStorage(
     throw Exception(ErrorCodes::NOT_IMPLEMENTED, "cloneObjectStorage() is not implemented for LocalObjectStorage");
 }
-ObjectStorageKey LocalObjectStorage::generateObjectKeyForPath(const std::string & /* path */) const
+ObjectStorageKey
+LocalObjectStorage::generateObjectKeyForPath(const std::string & /* path */, const std::optional<std::string> & /* key_prefix */) const
 {
     constexpr size_t key_name_total_size = 32;
     return ObjectStorageKey::createAsRelative(key_prefix, getRandomASCIIString(key_name_total_size));

View File

@@ -81,7 +81,7 @@ public:
     const std::string & config_prefix,
     ContextPtr context) override;
-    ObjectStorageKey generateObjectKeyForPath(const std::string & path) const override;
+    ObjectStorageKey generateObjectKeyForPath(const std::string & path, const std::optional<std::string> & key_prefix) const override;
     bool isRemote() const override { return false; }

View File

@@ -1,5 +1,6 @@
 #include "MetadataStorageFromPlainObjectStorage.h"
 #include <Disks/IDisk.h>
+#include <Disks/ObjectStorages/InMemoryPathMap.h>
 #include <Disks/ObjectStorages/MetadataStorageFromPlainObjectStorageOperations.h>
 #include <Disks/ObjectStorages/StaticDirectoryIterator.h>
@@ -7,6 +8,7 @@
 #include <filesystem>
 #include <tuple>
+#include <unordered_set>
 namespace DB
 {
@@ -41,7 +43,7 @@ bool MetadataStorageFromPlainObjectStorage::exists(const std::string & path) con
 {
     /// NOTE: exists() cannot be used here since it works only for existing
     /// key, and does not work for some intermediate path.
-    auto object_key = object_storage->generateObjectKeyForPath(path);
+    auto object_key = object_storage->generateObjectKeyForPath(path, std::nullopt /* key_prefix */);
     return object_storage->existsOrHasAnyChild(object_key.serialize());
 }
@@ -53,7 +55,7 @@ bool MetadataStorageFromPlainObjectStorage::isFile(const std::string & path) con
 bool MetadataStorageFromPlainObjectStorage::isDirectory(const std::string & path) const
 {
-    auto key_prefix = object_storage->generateObjectKeyForPath(path).serialize();
+    auto key_prefix = object_storage->generateObjectKeyForPath(path, std::nullopt /* key_prefix */).serialize();
     auto directory = std::filesystem::path(std::move(key_prefix)) / "";
     return object_storage->existsOrHasAnyChild(directory);
@@ -61,7 +63,7 @@ bool MetadataStorageFromPlainObjectStorage::isDirectory(const std::string & path
 uint64_t MetadataStorageFromPlainObjectStorage::getFileSize(const String & path) const
 {
-    auto object_key = object_storage->generateObjectKeyForPath(path);
+    auto object_key = object_storage->generateObjectKeyForPath(path, std::nullopt /* key_prefix */);
     auto metadata = object_storage->tryGetObjectMetadata(object_key.serialize());
     if (metadata)
         return metadata->size_bytes;
@@ -70,7 +72,7 @@ uint64_t MetadataStorageFromPlainObjectStorage::getFileSize(const String & path)
 std::vector<std::string> MetadataStorageFromPlainObjectStorage::listDirectory(const std::string & path) const
 {
-    auto key_prefix = object_storage->generateObjectKeyForPath(path).serialize();
+    auto key_prefix = object_storage->generateObjectKeyForPath(path, std::nullopt /* key_prefix */).serialize();
     RelativePathsWithMetadata files;
     std::string abs_key = key_prefix;
@@ -79,14 +81,27 @@ std::vector<std::string> MetadataStorageFromPlainObjectStorage::listDirectory(co
     object_storage->listObjects(abs_key, files, 0);
-    return getDirectChildrenOnDisk(abs_key, files, path);
+    std::unordered_set<std::string> result;
+    for (const auto & elem : files)
+    {
+        const auto & p = elem->relative_path;
+        chassert(p.find(abs_key) == 0);
+        const auto child_pos = abs_key.size();
+        /// string::npos is ok.
+        const auto slash_pos = p.find('/', child_pos);
+        if (slash_pos == std::string::npos)
+            result.emplace(p.substr(child_pos));
+        else
+            result.emplace(p.substr(child_pos, slash_pos - child_pos));
+    }
+    return std::vector<std::string>(std::make_move_iterator(result.begin()), std::make_move_iterator(result.end()));
 }
 DirectoryIteratorPtr MetadataStorageFromPlainObjectStorage::iterateDirectory(const std::string & path) const
 {
     /// Required for MergeTree
     auto paths = listDirectory(path);
-    // Prepend path, since iterateDirectory() includes path, unlike listDirectory()
+    /// Prepend path, since iterateDirectory() includes path, unlike listDirectory()
     std::for_each(paths.begin(), paths.end(), [&](auto & child) { child = fs::path(path) / child; });
     std::vector<std::filesystem::path> fs_paths(paths.begin(), paths.end());
     return std::make_unique<StaticDirectoryIterator>(std::move(fs_paths));
@@ -95,29 +110,10 @@ DirectoryIteratorPtr MetadataStorageFromPlainObjectStorage::iterateDirectory(con
 StoredObjects MetadataStorageFromPlainObjectStorage::getStorageObjects(const std::string & path) const
 {
     size_t object_size = getFileSize(path);
-    auto object_key = object_storage->generateObjectKeyForPath(path);
+    auto object_key = object_storage->generateObjectKeyForPath(path, std::nullopt /* key_prefix */);
     return {StoredObject(object_key.serialize(), path, object_size)};
 }
-std::vector<std::string> MetadataStorageFromPlainObjectStorage::getDirectChildrenOnDisk(
-    const std::string & storage_key, const RelativePathsWithMetadata & remote_paths, const std::string & /* local_path */) const
{
std::unordered_set<std::string> duplicates_filter;
for (const auto & elem : remote_paths)
{
const auto & path = elem->relative_path;
chassert(path.find(storage_key) == 0);
const auto child_pos = storage_key.size();
/// string::npos is ok.
const auto slash_pos = path.find('/', child_pos);
if (slash_pos == std::string::npos)
duplicates_filter.emplace(path.substr(child_pos));
else
duplicates_filter.emplace(path.substr(child_pos, slash_pos - child_pos));
}
return std::vector<std::string>(std::make_move_iterator(duplicates_filter.begin()), std::make_move_iterator(duplicates_filter.end()));
}
const IMetadataStorage & MetadataStorageFromPlainObjectStorageTransaction::getStorageForNonTransactionalReads() const const IMetadataStorage & MetadataStorageFromPlainObjectStorageTransaction::getStorageForNonTransactionalReads() const
{ {
return metadata_storage; return metadata_storage;
@ -125,7 +121,7 @@ const IMetadataStorage & MetadataStorageFromPlainObjectStorageTransaction::getSt
void MetadataStorageFromPlainObjectStorageTransaction::unlinkFile(const std::string & path) void MetadataStorageFromPlainObjectStorageTransaction::unlinkFile(const std::string & path)
{ {
auto object_key = metadata_storage.object_storage->generateObjectKeyForPath(path); auto object_key = metadata_storage.object_storage->generateObjectKeyForPath(path, std::nullopt /* key_prefix */);
auto object = StoredObject(object_key.serialize()); auto object = StoredObject(object_key.serialize());
metadata_storage.object_storage->removeObject(object); metadata_storage.object_storage->removeObject(object);
} }
@ -140,7 +136,7 @@ void MetadataStorageFromPlainObjectStorageTransaction::removeDirectory(const std
else else
{ {
addOperation(std::make_unique<MetadataStorageFromPlainObjectStorageRemoveDirectoryOperation>( addOperation(std::make_unique<MetadataStorageFromPlainObjectStorageRemoveDirectoryOperation>(
normalizeDirectoryPath(path), *metadata_storage.getPathMap(), object_storage)); normalizeDirectoryPath(path), *metadata_storage.getPathMap(), object_storage, metadata_storage.getMetadataKeyPrefix()));
} }
} }
@ -150,9 +146,11 @@ void MetadataStorageFromPlainObjectStorageTransaction::createDirectory(const std
return; return;
auto normalized_path = normalizeDirectoryPath(path); auto normalized_path = normalizeDirectoryPath(path);
auto key_prefix = object_storage->generateObjectKeyPrefixForDirectoryPath(normalized_path).serialize();
auto op = std::make_unique<MetadataStorageFromPlainObjectStorageCreateDirectoryOperation>( auto op = std::make_unique<MetadataStorageFromPlainObjectStorageCreateDirectoryOperation>(
std::move(normalized_path), std::move(key_prefix), *metadata_storage.getPathMap(), object_storage); std::move(normalized_path),
*metadata_storage.getPathMap(),
object_storage,
metadata_storage.getMetadataKeyPrefix());
addOperation(std::move(op)); addOperation(std::move(op));
} }
@ -167,7 +165,11 @@ void MetadataStorageFromPlainObjectStorageTransaction::moveDirectory(const std::
throwNotImplemented(); throwNotImplemented();
addOperation(std::make_unique<MetadataStorageFromPlainObjectStorageMoveDirectoryOperation>( addOperation(std::make_unique<MetadataStorageFromPlainObjectStorageMoveDirectoryOperation>(
normalizeDirectoryPath(path_from), normalizeDirectoryPath(path_to), *metadata_storage.getPathMap(), object_storage)); normalizeDirectoryPath(path_from),
normalizeDirectoryPath(path_to),
*metadata_storage.getPathMap(),
object_storage,
metadata_storage.getMetadataKeyPrefix()));
} }
void MetadataStorageFromPlainObjectStorageTransaction::addBlobToMetadata( void MetadataStorageFromPlainObjectStorageTransaction::addBlobToMetadata(
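The child-listing logic inlined into `listDirectory` above (formerly the `getDirectChildrenOnDisk` method) reduces a flat object listing to the set of direct children under a key prefix: everything up to the next `/` is a child name, and the `unordered_set` deduplicates entries coming from deeper paths. A standalone sketch of that reduction, with hypothetical names and plain `assert` in place of ClickHouse's `chassert`:

```cpp
#include <cassert>
#include <string>
#include <unordered_set>
#include <vector>

// Collect the direct children of `key_prefix` from a flat object listing.
// A key with no further '/' is a file; otherwise the segment before the
// next '/' names a (possibly repeated) subdirectory.
std::unordered_set<std::string> getDirectChildren(const std::string & key_prefix, const std::vector<std::string> & keys)
{
    std::unordered_set<std::string> result;
    for (const auto & key : keys)
    {
        assert(key.compare(0, key_prefix.size(), key_prefix) == 0);
        const auto child_pos = key_prefix.size();
        const auto slash_pos = key.find('/', child_pos); // npos means a plain file
        if (slash_pos == std::string::npos)
            result.emplace(key.substr(child_pos));
        else
            result.emplace(key.substr(child_pos, slash_pos - child_pos));
    }
    return result;
}
```

For `"disk/data/"` with keys `disk/data/a.bin`, `disk/data/part_1/x.bin`, `disk/data/part_1/y.bin`, this yields the two children `a.bin` and `part_1`.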

View File

@@ -2,14 +2,18 @@

 #include <Disks/IDisk.h>
 #include <Disks/ObjectStorages/IMetadataStorage.h>
+#include <Disks/ObjectStorages/InMemoryPathMap.h>
 #include <Disks/ObjectStorages/MetadataOperationsHolder.h>
 #include <Disks/ObjectStorages/MetadataStorageTransactionState.h>

 #include <map>
+#include <string>
+#include <unordered_set>

 namespace DB
 {

+struct InMemoryPathMap;
 struct UnlinkMetadataFileOperationOutcome;
 using UnlinkMetadataFileOperationOutcomePtr = std::shared_ptr<UnlinkMetadataFileOperationOutcome>;
@@ -25,10 +29,6 @@ using UnlinkMetadataFileOperationOutcomePtr = std::shared_ptr<UnlinkMetadataFile
 /// structure as on disk MergeTree, and does not require metadata from a local disk to restore.
 class MetadataStorageFromPlainObjectStorage : public IMetadataStorage
 {
-public:
-    /// Local path prefixes mapped to storage key prefixes.
-    using PathMap = std::map<std::filesystem::path, std::string>;
-
 private:
     friend class MetadataStorageFromPlainObjectStorageTransaction;
@@ -78,10 +78,11 @@ public:
     bool supportsStat() const override { return false; }

 protected:
-    virtual std::shared_ptr<PathMap> getPathMap() const { throwNotImplemented(); }
+    /// Get the object storage prefix for storing metadata files.
+    virtual std::string getMetadataKeyPrefix() const { return object_storage->getCommonKeyPrefix(); }

-    virtual std::vector<std::string> getDirectChildrenOnDisk(
-        const std::string & storage_key, const RelativePathsWithMetadata & remote_paths, const std::string & local_path) const;
+    /// Returns a map of virtual filesystem paths to paths in the object storage.
+    virtual std::shared_ptr<InMemoryPathMap> getPathMap() const { throwNotImplemented(); }
 };

 class MetadataStorageFromPlainObjectStorageTransaction final : public IMetadataTransaction, private MetadataOperationsHolder

View File

@@ -1,8 +1,10 @@
 #include "MetadataStorageFromPlainObjectStorageOperations.h"
+#include <Disks/ObjectStorages/InMemoryPathMap.h>
 #include <IO/ReadHelpers.h>
 #include <IO/WriteHelpers.h>
 #include <Common/Exception.h>
+#include <Common/SharedLockGuard.h>
 #include <Common/logger_useful.h>

 namespace DB
@@ -20,29 +22,45 @@ namespace
 constexpr auto PREFIX_PATH_FILE_NAME = "prefix.path";

+ObjectStorageKey createMetadataObjectKey(const std::string & object_key_prefix, const std::string & metadata_key_prefix)
+{
+    auto prefix = std::filesystem::path(metadata_key_prefix) / object_key_prefix;
+    return ObjectStorageKey::createAsRelative(prefix.string(), PREFIX_PATH_FILE_NAME);
+}
 }

 MetadataStorageFromPlainObjectStorageCreateDirectoryOperation::MetadataStorageFromPlainObjectStorageCreateDirectoryOperation(
-    std::filesystem::path && path_,
-    std::string && key_prefix_,
-    MetadataStorageFromPlainObjectStorage::PathMap & path_map_,
-    ObjectStoragePtr object_storage_)
-    : path(std::move(path_)), key_prefix(key_prefix_), path_map(path_map_), object_storage(object_storage_)
+    std::filesystem::path && path_, InMemoryPathMap & path_map_, ObjectStoragePtr object_storage_, const std::string & metadata_key_prefix_)
+    : path(std::move(path_))
+    , path_map(path_map_)
+    , object_storage(object_storage_)
+    , metadata_key_prefix(metadata_key_prefix_)
+    , object_key_prefix(object_storage->generateObjectKeyPrefixForDirectoryPath(path, "" /* object_key_prefix */).serialize())
 {
+    chassert(path.string().ends_with('/'));
 }

 void MetadataStorageFromPlainObjectStorageCreateDirectoryOperation::execute(std::unique_lock<SharedMutex> &)
 {
-    if (path_map.contains(path))
-        return;
+    /// parent_path() removes the trailing '/'
+    const auto base_path = path.parent_path();
+    {
+        SharedLockGuard lock(path_map.mutex);
+        if (path_map.map.contains(base_path))
+            return;
+    }

-    LOG_TRACE(getLogger("MetadataStorageFromPlainObjectStorageCreateDirectoryOperation"), "Creating metadata for directory '{}'", path);
+    auto metadata_object_key = createMetadataObjectKey(object_key_prefix, metadata_key_prefix);

-    auto object_key = ObjectStorageKey::createAsRelative(key_prefix, PREFIX_PATH_FILE_NAME);
+    LOG_TRACE(
+        getLogger("MetadataStorageFromPlainObjectStorageCreateDirectoryOperation"),
+        "Creating metadata for directory '{}' with remote path='{}'",
+        path,
+        metadata_object_key.serialize());

-    auto object = StoredObject(object_key.serialize(), path / PREFIX_PATH_FILE_NAME);
+    auto metadata_object = StoredObject(/*remote_path*/ metadata_object_key.serialize(), /*local_path*/ path / PREFIX_PATH_FILE_NAME);
     auto buf = object_storage->writeObject(
-        object,
+        metadata_object,
         WriteMode::Rewrite,
         /* object_attributes */ std::nullopt,
         /* buf_size */ DBMS_DEFAULT_BUFFER_SIZE,
@@ -50,8 +68,12 @@ void MetadataStorageFromPlainObjectStorageCreateDirectoryOperation::execute(std:
     write_created = true;

-    [[maybe_unused]] auto result = path_map.emplace(path, std::move(key_prefix));
-    chassert(result.second);
+    {
+        std::lock_guard lock(path_map.mutex);
+        auto & map = path_map.map;
+        [[maybe_unused]] auto result = map.emplace(base_path, object_key_prefix);
+        chassert(result.second);
+    }

     auto metric = object_storage->getMetadataStorageMetrics().directory_map_size;
     CurrentMetrics::add(metric, 1);
@@ -66,58 +88,81 @@ void MetadataStorageFromPlainObjectStorageCreateDirectoryOperation::execute(std:
 void MetadataStorageFromPlainObjectStorageCreateDirectoryOperation::undo(std::unique_lock<SharedMutex> &)
 {
-    auto object_key = ObjectStorageKey::createAsRelative(key_prefix, PREFIX_PATH_FILE_NAME);
+    auto metadata_object_key = createMetadataObjectKey(object_key_prefix, metadata_key_prefix);
     if (write_finalized)
     {
-        path_map.erase(path);
+        const auto base_path = path.parent_path();
+        {
+            std::lock_guard lock(path_map.mutex);
+            path_map.map.erase(base_path);
+        }
         auto metric = object_storage->getMetadataStorageMetrics().directory_map_size;
         CurrentMetrics::sub(metric, 1);

-        object_storage->removeObject(StoredObject(object_key.serialize(), path / PREFIX_PATH_FILE_NAME));
+        object_storage->removeObject(StoredObject(metadata_object_key.serialize(), path / PREFIX_PATH_FILE_NAME));
    }
     else if (write_created)
-        object_storage->removeObjectIfExists(StoredObject(object_key.serialize(), path / PREFIX_PATH_FILE_NAME));
+        object_storage->removeObjectIfExists(StoredObject(metadata_object_key.serialize(), path / PREFIX_PATH_FILE_NAME));
 }
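The new `createMetadataObjectKey` helper composes the remote location of a directory's marker file as `<metadata_key_prefix>/<object_key_prefix>/prefix.path`. A sketch of just the path arithmetic, where `metadataObjectKey` is a hypothetical helper returning a plain string (the real function returns an `ObjectStorageKey` via `createAsRelative`):

```cpp
#include <filesystem>
#include <string>

// Where a directory's "prefix.path" marker lands in the object store:
// <metadata_key_prefix>/<object_key_prefix>/prefix.path
// std::filesystem::path::operator/ inserts exactly one separator between
// components, whether or not the inputs carry trailing slashes.
std::string metadataObjectKey(const std::string & metadata_key_prefix, const std::string & object_key_prefix)
{
    auto prefix = std::filesystem::path(metadata_key_prefix) / object_key_prefix;
    return (prefix / "prefix.path").string();
}
```

Both `metadataObjectKey("disk/__meta/", "abc/")` and `metadataObjectKey("disk/__meta", "abc")` produce `disk/__meta/abc/prefix.path`, which is why the constructor can store a serialized `object_key_prefix` without worrying about separator normalization.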
 MetadataStorageFromPlainObjectStorageMoveDirectoryOperation::MetadataStorageFromPlainObjectStorageMoveDirectoryOperation(
     std::filesystem::path && path_from_,
     std::filesystem::path && path_to_,
-    MetadataStorageFromPlainObjectStorage::PathMap & path_map_,
-    ObjectStoragePtr object_storage_)
-    : path_from(std::move(path_from_)), path_to(std::move(path_to_)), path_map(path_map_), object_storage(object_storage_)
+    InMemoryPathMap & path_map_,
+    ObjectStoragePtr object_storage_,
+    const std::string & metadata_key_prefix_)
+    : path_from(std::move(path_from_))
+    , path_to(std::move(path_to_))
+    , path_map(path_map_)
+    , object_storage(object_storage_)
+    , metadata_key_prefix(metadata_key_prefix_)
 {
+    chassert(path_from.string().ends_with('/'));
+    chassert(path_to.string().ends_with('/'));
 }

 std::unique_ptr<WriteBufferFromFileBase> MetadataStorageFromPlainObjectStorageMoveDirectoryOperation::createWriteBuf(
     const std::filesystem::path & expected_path, const std::filesystem::path & new_path, bool validate_content)
 {
-    auto expected_it = path_map.find(expected_path);
-    if (expected_it == path_map.end())
-        throw Exception(ErrorCodes::FILE_DOESNT_EXIST, "Metadata object for the expected (source) path '{}' does not exist", expected_path);
+    std::filesystem::path remote_path;
+    {
+        SharedLockGuard lock(path_map.mutex);
+        auto & map = path_map.map;
+        /// parent_path() removes the trailing '/'.
+        auto expected_it = map.find(expected_path.parent_path());
+        if (expected_it == map.end())
+            throw Exception(
+                ErrorCodes::FILE_DOESNT_EXIST, "Metadata object for the expected (source) path '{}' does not exist", expected_path);

-    if (path_map.contains(new_path))
-        throw Exception(ErrorCodes::FILE_ALREADY_EXISTS, "Metadata object for the new (destination) path '{}' already exists", new_path);
+        if (map.contains(new_path.parent_path()))
+            throw Exception(
+                ErrorCodes::FILE_ALREADY_EXISTS, "Metadata object for the new (destination) path '{}' already exists", new_path);

-    auto object_key = ObjectStorageKey::createAsRelative(expected_it->second, PREFIX_PATH_FILE_NAME);
+        remote_path = expected_it->second;
+    }

-    auto object = StoredObject(object_key.serialize(), expected_path / PREFIX_PATH_FILE_NAME);
+    auto metadata_object_key = createMetadataObjectKey(remote_path, metadata_key_prefix);
+
+    auto metadata_object
+        = StoredObject(/*remote_path*/ metadata_object_key.serialize(), /*local_path*/ expected_path / PREFIX_PATH_FILE_NAME);

     if (validate_content)
     {
         std::string data;
-        auto read_buf = object_storage->readObject(object);
+        auto read_buf = object_storage->readObject(metadata_object);
         readStringUntilEOF(data, *read_buf);
         if (data != path_from)
             throw Exception(
                 ErrorCodes::INCORRECT_DATA,
                 "Incorrect data for object key {}, expected {}, got {}",
-                object_key.serialize(),
+                metadata_object_key.serialize(),
                 expected_path,
                 data);
     }

     auto write_buf = object_storage->writeObject(
-        object,
+        metadata_object,
         WriteMode::Rewrite,
         /* object_attributes */ std::nullopt,
         /*buf_size*/ DBMS_DEFAULT_BUFFER_SIZE,
@@ -136,8 +181,16 @@ void MetadataStorageFromPlainObjectStorageMoveDirectoryOperation::execute(std::u
     writeString(path_to.string(), *write_buf);
     write_buf->finalize();

-    [[maybe_unused]] auto result = path_map.emplace(path_to, path_map.extract(path_from).mapped());
-    chassert(result.second);
+    /// parent_path() removes the trailing '/'.
+    auto base_path_to = path_to.parent_path();
+    auto base_path_from = path_from.parent_path();
+
+    {
+        std::lock_guard lock(path_map.mutex);
+        auto & map = path_map.map;
+        [[maybe_unused]] auto result = map.emplace(base_path_to, map.extract(base_path_from).mapped());
+        chassert(result.second);
+    }

     write_finalized = true;
 }
@@ -145,7 +198,11 @@ void MetadataStorageFromPlainObjectStorageMoveDirectoryOperation::execute(std::u
 void MetadataStorageFromPlainObjectStorageMoveDirectoryOperation::undo(std::unique_lock<SharedMutex> &)
 {
     if (write_finalized)
-        path_map.emplace(path_from, path_map.extract(path_to).mapped());
+    {
+        std::lock_guard lock(path_map.mutex);
+        auto & map = path_map.map;
+        map.emplace(path_from.parent_path(), map.extract(path_to.parent_path()).mapped());
+    }

     if (write_created)
     {
@@ -156,25 +213,37 @@ void MetadataStorageFromPlainObjectStorageMoveDirectoryOperation::undo(std::uniq
 }

 MetadataStorageFromPlainObjectStorageRemoveDirectoryOperation::MetadataStorageFromPlainObjectStorageRemoveDirectoryOperation(
-    std::filesystem::path && path_, MetadataStorageFromPlainObjectStorage::PathMap & path_map_, ObjectStoragePtr object_storage_)
-    : path(std::move(path_)), path_map(path_map_), object_storage(object_storage_)
+    std::filesystem::path && path_, InMemoryPathMap & path_map_, ObjectStoragePtr object_storage_, const std::string & metadata_key_prefix_)
+    : path(std::move(path_)), path_map(path_map_), object_storage(object_storage_), metadata_key_prefix(metadata_key_prefix_)
 {
+    chassert(path.string().ends_with('/'));
 }

 void MetadataStorageFromPlainObjectStorageRemoveDirectoryOperation::execute(std::unique_lock<SharedMutex> & /* metadata_lock */)
 {
-    auto path_it = path_map.find(path);
-    if (path_it == path_map.end())
-        return;
+    /// parent_path() removes the trailing '/'
+    const auto base_path = path.parent_path();
+    {
+        SharedLockGuard lock(path_map.mutex);
+        auto & map = path_map.map;
+        auto path_it = map.find(base_path);
+        if (path_it == map.end())
+            return;
+        key_prefix = path_it->second;
+    }

     LOG_TRACE(getLogger("MetadataStorageFromPlainObjectStorageRemoveDirectoryOperation"), "Removing directory '{}'", path);

-    key_prefix = path_it->second;
-    auto object_key = ObjectStorageKey::createAsRelative(key_prefix, PREFIX_PATH_FILE_NAME);
-    auto object = StoredObject(object_key.serialize(), path / PREFIX_PATH_FILE_NAME);
-    object_storage->removeObject(object);
-    path_map.erase(path_it);
+    auto metadata_object_key = createMetadataObjectKey(key_prefix, metadata_key_prefix);
+    auto metadata_object = StoredObject(/*remote_path*/ metadata_object_key.serialize(), /*local_path*/ path / PREFIX_PATH_FILE_NAME);
+    object_storage->removeObject(metadata_object);
+
+    {
+        std::lock_guard lock(path_map.mutex);
+        auto & map = path_map.map;
+        map.erase(base_path);
+    }

     auto metric = object_storage->getMetadataStorageMetrics().directory_map_size;
     CurrentMetrics::sub(metric, 1);
@@ -189,10 +258,10 @@ void MetadataStorageFromPlainObjectStorageRemoveDirectoryOperation::undo(std::un
     if (!removed)
         return;

-    auto object_key = ObjectStorageKey::createAsRelative(key_prefix, PREFIX_PATH_FILE_NAME);
-    auto object = StoredObject(object_key.serialize(), path / PREFIX_PATH_FILE_NAME);
+    auto metadata_object_key = createMetadataObjectKey(key_prefix, metadata_key_prefix);
+    auto metadata_object = StoredObject(metadata_object_key.serialize(), path / PREFIX_PATH_FILE_NAME);
     auto buf = object_storage->writeObject(
-        object,
+        metadata_object,
         WriteMode::Rewrite,
         /* object_attributes */ std::nullopt,
         /* buf_size */ DBMS_DEFAULT_BUFFER_SIZE,
@@ -200,7 +269,11 @@ void MetadataStorageFromPlainObjectStorageRemoveDirectoryOperation::undo(std::un
     writeString(path.string(), *buf);
     buf->finalize();

-    path_map.emplace(path, std::move(key_prefix));
+    {
+        std::lock_guard lock(path_map.mutex);
+        auto & map = path_map.map;
+        map.emplace(path.parent_path(), std::move(key_prefix));
+    }
     auto metric = object_storage->getMetadataStorageMetrics().directory_map_size;
     CurrentMetrics::add(metric, 1);
 }
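All three operations now assert a trailing `/` on directory paths and call `parent_path()` before touching the map. With a trailing separator, `std::filesystem::path::parent_path()` strips only the separator (the filename component is empty), which is what makes the resulting `base_path` a stable map key. A small sketch of that behavior, with a hypothetical helper name:

```cpp
#include <cassert>
#include <filesystem>

// With a trailing '/', the last component of the path is an empty filename,
// so parent_path() drops only the separator, not the last directory.
std::filesystem::path stripTrailingSlash(const std::filesystem::path & dir)
{
    assert(!dir.empty() && dir.string().back() == '/');
    return dir.parent_path();
}
```

So `stripTrailingSlash("store/a/b/")` is `store/a/b`, whereas calling `parent_path()` on the already-normalized `store/a/b` would climb to `store/a`; the trailing-slash invariant is what keeps the two cases apart.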

View File

@@ -1,6 +1,7 @@
 #pragma once

 #include <Disks/ObjectStorages/IMetadataOperation.h>
+#include <Disks/ObjectStorages/InMemoryPathMap.h>
 #include <Disks/ObjectStorages/MetadataStorageFromPlainObjectStorage.h>

 #include <filesystem>
@@ -13,20 +14,21 @@ class MetadataStorageFromPlainObjectStorageCreateDirectoryOperation final : publ
 {
 private:
     std::filesystem::path path;
-    std::string key_prefix;
-    MetadataStorageFromPlainObjectStorage::PathMap & path_map;
+    InMemoryPathMap & path_map;
     ObjectStoragePtr object_storage;
+    const std::string metadata_key_prefix;
+    const std::string object_key_prefix;

     bool write_created = false;
     bool write_finalized = false;

 public:
-    // Assuming that paths are normalized.
     MetadataStorageFromPlainObjectStorageCreateDirectoryOperation(
+        /// path_ must end with a trailing '/'.
         std::filesystem::path && path_,
-        std::string && key_prefix_,
-        MetadataStorageFromPlainObjectStorage::PathMap & path_map_,
-        ObjectStoragePtr object_storage_);
+        InMemoryPathMap & path_map_,
+        ObjectStoragePtr object_storage_,
+        const std::string & metadata_key_prefix_);

     void execute(std::unique_lock<SharedMutex> & metadata_lock) override;
     void undo(std::unique_lock<SharedMutex> & metadata_lock) override;
@@ -37,8 +39,9 @@ class MetadataStorageFromPlainObjectStorageMoveDirectoryOperation final : public
 private:
     std::filesystem::path path_from;
     std::filesystem::path path_to;
-    MetadataStorageFromPlainObjectStorage::PathMap & path_map;
+    InMemoryPathMap & path_map;
     ObjectStoragePtr object_storage;
+    const std::string metadata_key_prefix;

     bool write_created = false;
     bool write_finalized = false;
@@ -48,10 +51,12 @@ private:
 public:
     MetadataStorageFromPlainObjectStorageMoveDirectoryOperation(
+        /// Both path_from_ and path_to_ must end with a trailing '/'.
         std::filesystem::path && path_from_,
         std::filesystem::path && path_to_,
-        MetadataStorageFromPlainObjectStorage::PathMap & path_map_,
-        ObjectStoragePtr object_storage_);
+        InMemoryPathMap & path_map_,
+        ObjectStoragePtr object_storage_,
+        const std::string & metadata_key_prefix_);

     void execute(std::unique_lock<SharedMutex> & metadata_lock) override;
@@ -63,15 +68,20 @@ class MetadataStorageFromPlainObjectStorageRemoveDirectoryOperation final : publ
 private:
     std::filesystem::path path;
-    MetadataStorageFromPlainObjectStorage::PathMap & path_map;
+    InMemoryPathMap & path_map;
     ObjectStoragePtr object_storage;
+    const std::string metadata_key_prefix;

     std::string key_prefix;
     bool removed = false;

 public:
     MetadataStorageFromPlainObjectStorageRemoveDirectoryOperation(
-        std::filesystem::path && path_, MetadataStorageFromPlainObjectStorage::PathMap & path_map_, ObjectStoragePtr object_storage_);
+        /// path_ must end with a trailing '/'.
+        std::filesystem::path && path_,
+        InMemoryPathMap & path_map_,
+        ObjectStoragePtr object_storage_,
+        const std::string & metadata_key_prefix_);

     void execute(std::unique_lock<SharedMutex> & metadata_lock) override;
     void undo(std::unique_lock<SharedMutex> & metadata_lock) override;

View File

@@ -1,9 +1,14 @@
+#include <Disks/ObjectStorages/FlatDirectoryStructureKeyGenerator.h>
+#include <Disks/ObjectStorages/InMemoryPathMap.h>
 #include <Disks/ObjectStorages/MetadataStorageFromPlainRewritableObjectStorage.h>
 #include <Disks/ObjectStorages/ObjectStorageIterator.h>

+#include <unordered_set>
 #include <IO/ReadHelpers.h>
-#include <IO/SharedThreadPools.h>
 #include <IO/S3Common.h>
+#include <IO/SharedThreadPools.h>
+#include "Common/SharedLockGuard.h"
+#include "Common/SharedMutex.h"
 #include <Common/ErrorCodes.h>
 #include <Common/logger_useful.h>
 #include "CommonPathPrefixKeyGenerator.h"
@@ -21,14 +26,28 @@ namespace
 {

 constexpr auto PREFIX_PATH_FILE_NAME = "prefix.path";
+constexpr auto METADATA_PATH_TOKEN = "__meta/";

-MetadataStorageFromPlainObjectStorage::PathMap loadPathPrefixMap(const std::string & root, ObjectStoragePtr object_storage)
-{
-    MetadataStorageFromPlainObjectStorage::PathMap result;
+/// Use a separate layout for metadata if:
+/// 1. The disk endpoint does not contain any objects yet (empty), OR
+/// 2. The metadata is already stored behind a separate endpoint.
+/// Otherwise, store metadata along with regular data for backward compatibility.
+std::string getMetadataKeyPrefix(ObjectStoragePtr object_storage)
+{
+    const auto common_key_prefix = std::filesystem::path(object_storage->getCommonKeyPrefix());
+    const auto metadata_key_prefix = std::filesystem::path(common_key_prefix) / METADATA_PATH_TOKEN;
+    return !object_storage->existsOrHasAnyChild(metadata_key_prefix / "") && object_storage->existsOrHasAnyChild(common_key_prefix / "")
+        ? common_key_prefix
+        : metadata_key_prefix;
+}
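The backward-compatibility rule in `getMetadataKeyPrefix` reduces to two booleans: does a separate `__meta/` layout already exist, and does the disk hold any data at all. A sketch with hypothetical names (`chooseMetadataKeyPrefix` stands in for the real function, which probes the object storage via `existsOrHasAnyChild` instead of taking flags):

```cpp
#include <string>

// Keep metadata inline with data only for pre-existing disks that never
// had a separate "__meta/" layout; empty or already-migrated disks get
// the separate prefix.
std::string chooseMetadataKeyPrefix(const std::string & common_prefix, bool has_meta_dir, bool has_any_data)
{
    const std::string separate = common_prefix + "__meta/";
    return (!has_meta_dir && has_any_data) ? common_prefix : separate;
}
```

So a legacy non-empty disk stays on `disk/`, while both an empty disk and a disk that already has `disk/__meta/` resolve to `disk/__meta/`.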
std::shared_ptr<InMemoryPathMap> loadPathPrefixMap(const std::string & metadata_key_prefix, ObjectStoragePtr object_storage)
{
auto result = std::make_shared<InMemoryPathMap>();
using Map = InMemoryPathMap::Map;
ThreadPool & pool = getIOThreadPool().get(); ThreadPool & pool = getIOThreadPool().get();
ThreadPoolCallbackRunnerLocal<void> runner(pool, "PlainRWMetaLoad"); ThreadPoolCallbackRunnerLocal<void> runner(pool, "PlainRWMetaLoad");
std::mutex mutex;
LoggerPtr log = getLogger("MetadataStorageFromPlainObjectStorage"); LoggerPtr log = getLogger("MetadataStorageFromPlainObjectStorage");
@ -39,102 +58,107 @@ MetadataStorageFromPlainObjectStorage::PathMap loadPathPrefixMap(const std::stri
LOG_DEBUG(log, "Loading metadata"); LOG_DEBUG(log, "Loading metadata");
size_t num_files = 0; size_t num_files = 0;
for (auto iterator = object_storage->iterate(root, 0); iterator->isValid(); iterator->next()) for (auto iterator = object_storage->iterate(metadata_key_prefix, 0); iterator->isValid(); iterator->next())
{ {
++num_files; ++num_files;
auto file = iterator->current(); auto file = iterator->current();
String path = file->getPath(); String path = file->getPath();
auto remote_path = std::filesystem::path(path); auto remote_metadata_path = std::filesystem::path(path);
if (remote_path.filename() != PREFIX_PATH_FILE_NAME) if (remote_metadata_path.filename() != PREFIX_PATH_FILE_NAME)
            continue;

-        runner([remote_path, path, &object_storage, &result, &mutex, &log, &settings]
-        {
+        runner(
+            [remote_metadata_path, path, &object_storage, &result, &log, &settings, &metadata_key_prefix]
+            {
                setThreadName("PlainRWMetaLoad");

                StoredObject object{path};
                String local_path;

                try
                {
                    auto read_buf = object_storage->readObject(object, settings);
                    readStringUntilEOF(local_path, *read_buf);
                }
#if USE_AWS_S3
                catch (const S3Exception & e)
                {
                    /// It is ok if a directory was removed just now.
                    /// We support attaching a filesystem that is concurrently modified by someone else.
                    if (e.getS3ErrorCode() == Aws::S3::S3Errors::NO_SUCH_KEY)
                        return;
                    throw;
                }
#endif
                catch (...)
                {
                    throw;
                }

-               chassert(remote_path.has_parent_path());
-               std::pair<MetadataStorageFromPlainObjectStorage::PathMap::iterator, bool> res;
-               {
-                   std::lock_guard lock(mutex);
-                   res = result.emplace(local_path, remote_path.parent_path());
-               }
+               chassert(remote_metadata_path.has_parent_path());
+               chassert(remote_metadata_path.string().starts_with(metadata_key_prefix));
+               auto suffix = remote_metadata_path.string().substr(metadata_key_prefix.size());
+               auto remote_path = std::filesystem::path(std::move(suffix));
+               std::pair<Map::iterator, bool> res;
+               {
+                   std::lock_guard lock(result->mutex);
+                   res = result->map.emplace(std::filesystem::path(local_path).parent_path(), remote_path.parent_path());
+               }

                /// This can happen if table replication is enabled, then the same local path is written
                /// in `prefix.path` of each replica.
                /// TODO: should replicated tables (e.g., RMT) be explicitly disallowed?
                if (!res.second)
                    LOG_WARNING(
                        log,
                        "The local path '{}' is already mapped to a remote path '{}', ignoring: '{}'",
                        local_path,
                        res.first->second,
                        remote_path.parent_path().string());
            });
    }

    runner.waitForAllToFinishAndRethrowFirstError();
-   LOG_DEBUG(log, "Loaded metadata for {} files, found {} directories", num_files, result.size());
+   {
+       SharedLockGuard lock(result->mutex);
+       LOG_DEBUG(log, "Loaded metadata for {} files, found {} directories", num_files, result->map.size());

        auto metric = object_storage->getMetadataStorageMetrics().directory_map_size;
-       CurrentMetrics::add(metric, result.size());
+       CurrentMetrics::add(metric, result->map.size());
+   }
    return result;
}
-std::vector<std::string> getDirectChildrenOnRewritableDisk(
+void getDirectChildrenOnDiskImpl(
    const std::string & storage_key,
    const RelativePathsWithMetadata & remote_paths,
    const std::string & local_path,
-   const MetadataStorageFromPlainObjectStorage::PathMap & local_path_prefixes,
-   SharedMutex & shared_mutex)
+   const InMemoryPathMap & path_map,
+   std::unordered_set<std::string> & result)
{
-   using PathMap = MetadataStorageFromPlainObjectStorage::PathMap;
-   std::unordered_set<std::string> duplicates_filter;
-
-   /// Map remote paths into local subdirectories.
-   std::unordered_map<PathMap::mapped_type, PathMap::key_type> remote_to_local_subdir;
+   /// Directories are retrieved from the in-memory path map.
    {
-       std::shared_lock lock(shared_mutex);
-       auto end_it = local_path_prefixes.end();
+       SharedLockGuard lock(path_map.mutex);
+       const auto & local_path_prefixes = path_map.map;
+       const auto end_it = local_path_prefixes.end();
        for (auto it = local_path_prefixes.lower_bound(local_path); it != end_it; ++it)
        {
-           const auto & [k, v] = std::make_tuple(it->first.string(), it->second);
+           const auto & [k, _] = std::make_tuple(it->first.string(), it->second);
            if (!k.starts_with(local_path))
                break;

            auto slash_num = count(k.begin() + local_path.size(), k.end(), '/');
-           if (slash_num != 1)
-               continue;
+           /// The local_path_prefixes comparator ensures that the paths with the smallest number of
+           /// hops from the local_path are iterated first. The paths do not end with '/', hence
+           /// break the loop if the number of slashes is greater than 0.
+           if (slash_num != 0)
+               break;

-           chassert(k.back() == '/');
-           remote_to_local_subdir.emplace(v, std::string(k.begin() + local_path.size(), k.end() - 1));
+           result.emplace(std::string(k.begin() + local_path.size(), k.end()) + "/");
        }
    }

+   /// Files.
    auto skip_list = std::set<std::string>{PREFIX_PATH_FILE_NAME};
    for (const auto & elem : remote_paths)
    {
@@ -149,22 +173,9 @@ std::vector<std::string> getDirectChildrenOnRewritableDisk(
        /// File names.
        auto filename = path.substr(child_pos);
        if (!skip_list.contains(filename))
-           duplicates_filter.emplace(std::move(filename));
-       }
-       else
-       {
-           /// Subdirectories.
-           auto it = remote_to_local_subdir.find(path.substr(0, slash_pos));
-           /// Mapped subdirectories.
-           if (it != remote_to_local_subdir.end())
-               duplicates_filter.emplace(it->second);
-           /// The remote subdirectory name is the same as the local subdirectory.
-           else
-               duplicates_filter.emplace(path.substr(child_pos, slash_pos - child_pos));
-       }
+           result.emplace(std::move(filename));
+       }
    }
-
-   return std::vector<std::string>(std::make_move_iterator(duplicates_filter.begin()), std::make_move_iterator(duplicates_filter.end()));
}
}
@@ -172,7 +183,8 @@ std::vector<std::string> getDirectChildrenOnRewritableDisk(
MetadataStorageFromPlainRewritableObjectStorage::MetadataStorageFromPlainRewritableObjectStorage(
    ObjectStoragePtr object_storage_, String storage_path_prefix_)
    : MetadataStorageFromPlainObjectStorage(object_storage_, storage_path_prefix_)
-   , path_map(std::make_shared<PathMap>(loadPathPrefixMap(object_storage->getCommonKeyPrefix(), object_storage)))
+   , metadata_key_prefix(DB::getMetadataKeyPrefix(object_storage))
+   , path_map(loadPathPrefixMap(metadata_key_prefix, object_storage))
{
    if (object_storage->isWriteOnce())
        throw Exception(
@@ -180,20 +192,85 @@ MetadataStorageFromPlainRewritableObjectStorage::MetadataStorageFromPlainRewrita
            "MetadataStorageFromPlainRewritableObjectStorage is not compatible with write-once storage '{}'",
            object_storage->getName());

-   auto keys_gen = std::make_shared<CommonPathPrefixKeyGenerator>(object_storage->getCommonKeyPrefix(), metadata_mutex, path_map);
-   object_storage->setKeysGenerator(keys_gen);
+   if (useSeparateLayoutForMetadata())
+   {
+       /// Use flat directory structure if the metadata is stored separately from the table data.
+       auto keys_gen = std::make_shared<FlatDirectoryStructureKeyGenerator>(object_storage->getCommonKeyPrefix(), path_map);
+       object_storage->setKeysGenerator(keys_gen);
+   }
+   else
+   {
+       auto keys_gen = std::make_shared<CommonPathPrefixKeyGenerator>(object_storage->getCommonKeyPrefix(), path_map);
+       object_storage->setKeysGenerator(keys_gen);
+   }
}

MetadataStorageFromPlainRewritableObjectStorage::~MetadataStorageFromPlainRewritableObjectStorage()
{
    auto metric = object_storage->getMetadataStorageMetrics().directory_map_size;
-   CurrentMetrics::sub(metric, path_map->size());
+   CurrentMetrics::sub(metric, path_map->map.size());
}

-std::vector<std::string> MetadataStorageFromPlainRewritableObjectStorage::getDirectChildrenOnDisk(
-   const std::string & storage_key, const RelativePathsWithMetadata & remote_paths, const std::string & local_path) const
+bool MetadataStorageFromPlainRewritableObjectStorage::exists(const std::string & path) const
{
-   return getDirectChildrenOnRewritableDisk(storage_key, remote_paths, local_path, *getPathMap(), metadata_mutex);
+   if (MetadataStorageFromPlainObjectStorage::exists(path))
+       return true;
+
+   if (useSeparateLayoutForMetadata())
+   {
+       auto key_prefix = object_storage->generateObjectKeyForPath(path, getMetadataKeyPrefix()).serialize();
+       return object_storage->existsOrHasAnyChild(key_prefix);
+   }
+
+   return false;
}

+bool MetadataStorageFromPlainRewritableObjectStorage::isDirectory(const std::string & path) const
+{
+   if (useSeparateLayoutForMetadata())
+   {
+       auto directory = std::filesystem::path(object_storage->generateObjectKeyForPath(path, getMetadataKeyPrefix()).serialize()) / "";
+       return object_storage->existsOrHasAnyChild(directory);
+   }
+   else
+       return MetadataStorageFromPlainObjectStorage::isDirectory(path);
+}
+
+std::vector<std::string> MetadataStorageFromPlainRewritableObjectStorage::listDirectory(const std::string & path) const
+{
+   auto key_prefix = object_storage->generateObjectKeyForPath(path, "" /* key_prefix */).serialize();
+
+   RelativePathsWithMetadata files;
+   auto abs_key = std::filesystem::path(object_storage->getCommonKeyPrefix()) / key_prefix / "";
+
+   object_storage->listObjects(abs_key, files, 0);
+
+   std::unordered_set<std::string> directories;
+   getDirectChildrenOnDisk(abs_key, files, std::filesystem::path(path) / "", directories);
+   /// List empty directories that are identified by the `prefix.path` metadata files. This is required to, e.g., remove
+   /// metadata along with regular files.
+   if (useSeparateLayoutForMetadata())
+   {
+       auto metadata_key = std::filesystem::path(getMetadataKeyPrefix()) / key_prefix / "";
+       RelativePathsWithMetadata metadata_files;
+       object_storage->listObjects(metadata_key, metadata_files, 0);
+       getDirectChildrenOnDisk(metadata_key, metadata_files, std::filesystem::path(path) / "", directories);
+   }
+
+   return std::vector<std::string>(std::make_move_iterator(directories.begin()), std::make_move_iterator(directories.end()));
+}
+
+void MetadataStorageFromPlainRewritableObjectStorage::getDirectChildrenOnDisk(
+   const std::string & storage_key,
+   const RelativePathsWithMetadata & remote_paths,
+   const std::string & local_path,
+   std::unordered_set<std::string> & result) const
+{
+   getDirectChildrenOnDiskImpl(storage_key, remote_paths, local_path, *getPathMap(), result);
+}
+
+bool MetadataStorageFromPlainRewritableObjectStorage::useSeparateLayoutForMetadata() const
+{
+   return getMetadataKeyPrefix() != object_storage->getCommonKeyPrefix();
+}

}
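The diff above replaces a bare `PathMap` plus an external `SharedMutex` with an `InMemoryPathMap` that bundles the map and its mutex, so every call site locks the mutex that actually guards the data. A simplified model of that pattern (hypothetical names, not the ClickHouse types):

```cpp
#include <cassert>
#include <filesystem>
#include <map>
#include <mutex>
#include <shared_mutex>
#include <string>

/// Simplified stand-in for InMemoryPathMap: the directory map and the mutex
/// that guards it travel together, so call sites cannot lock the wrong mutex.
struct PathMapModel
{
    mutable std::shared_mutex mutex;
    std::map<std::filesystem::path, std::filesystem::path> map;

    /// Writers take an exclusive lock; emplace reports whether the key was new,
    /// which is how the loader detects duplicate local paths across replicas.
    bool addMapping(const std::filesystem::path & local, const std::filesystem::path & remote)
    {
        std::lock_guard lock(mutex);
        return map.emplace(local, remote).second;
    }

    /// Readers take a shared lock, mirroring SharedLockGuard in the patch.
    size_t size() const
    {
        std::shared_lock lock(mutex);
        return map.size();
    }
};
```
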

View File

@@ -3,6 +3,7 @@
#include <Disks/ObjectStorages/MetadataStorageFromPlainObjectStorage.h>

#include <memory>
+#include <unordered_set>

namespace DB

@@ -11,18 +12,29 @@ namespace DB
class MetadataStorageFromPlainRewritableObjectStorage final : public MetadataStorageFromPlainObjectStorage
{
private:
-   std::shared_ptr<PathMap> path_map;
+   const std::string metadata_key_prefix;
+   std::shared_ptr<InMemoryPathMap> path_map;

public:
    MetadataStorageFromPlainRewritableObjectStorage(ObjectStoragePtr object_storage_, String storage_path_prefix_);
    ~MetadataStorageFromPlainRewritableObjectStorage() override;

    MetadataStorageType getType() const override { return MetadataStorageType::PlainRewritable; }

+   bool exists(const std::string & path) const override;
+   bool isDirectory(const std::string & path) const override;
+   std::vector<std::string> listDirectory(const std::string & path) const override;
+
protected:
-   std::shared_ptr<PathMap> getPathMap() const override { return path_map; }
-   std::vector<std::string> getDirectChildrenOnDisk(
-       const std::string & storage_key, const RelativePathsWithMetadata & remote_paths, const std::string & local_path) const override;
+   std::string getMetadataKeyPrefix() const override { return metadata_key_prefix; }
+   std::shared_ptr<InMemoryPathMap> getPathMap() const override { return path_map; }
+   void getDirectChildrenOnDisk(
+       const std::string & storage_key,
+       const RelativePathsWithMetadata & remote_paths,
+       const std::string & local_path,
+       std::unordered_set<std::string> & result) const;
+
+private:
+   bool useSeparateLayoutForMetadata() const;
};

}

View File

@@ -26,7 +26,7 @@ public:
    bool isPlain() const override { return true; }

-   ObjectStorageKey generateObjectKeyForPath(const std::string & path) const override
+   ObjectStorageKey generateObjectKeyForPath(const std::string & path, const std::optional<std::string> & /* key_prefix */) const override
    {
        return ObjectStorageKey::createAsRelative(BaseObjectStorage::getCommonKeyPrefix(), path);
    }

View File

@@ -1,5 +1,7 @@
#pragma once

+#include <optional>
+#include <string>
#include <Disks/ObjectStorages/IObjectStorage.h>
#include <Common/ObjectStorageKeyGenerator.h>
#include "CommonPathPrefixKeyGenerator.h"

@@ -33,9 +35,10 @@ public:
    bool isPlain() const override { return true; }

-   ObjectStorageKey generateObjectKeyForPath(const std::string & path) const override;
+   ObjectStorageKey generateObjectKeyForPath(const std::string & path, const std::optional<std::string> & key_prefix) const override;

-   ObjectStorageKey generateObjectKeyPrefixForDirectoryPath(const std::string & path) const override;
+   ObjectStorageKey
+   generateObjectKeyPrefixForDirectoryPath(const std::string & path, const std::optional<std::string> & key_prefix) const override;

    void setKeysGenerator(ObjectStorageKeysGeneratorPtr gen) override { key_generator = gen; }

@@ -46,20 +49,22 @@ private:
template <typename BaseObjectStorage>
-ObjectStorageKey PlainRewritableObjectStorage<BaseObjectStorage>::generateObjectKeyForPath(const std::string & path) const
+ObjectStorageKey PlainRewritableObjectStorage<BaseObjectStorage>::generateObjectKeyForPath(
+   const std::string & path, const std::optional<std::string> & key_prefix) const
{
    if (!key_generator)
        throw Exception(ErrorCodes::LOGICAL_ERROR, "Key generator is not set");

-   return key_generator->generate(path, /* is_directory */ false);
+   return key_generator->generate(path, /* is_directory */ false, key_prefix);
}

template <typename BaseObjectStorage>
-ObjectStorageKey PlainRewritableObjectStorage<BaseObjectStorage>::generateObjectKeyPrefixForDirectoryPath(const std::string & path) const
+ObjectStorageKey PlainRewritableObjectStorage<BaseObjectStorage>::generateObjectKeyPrefixForDirectoryPath(
+   const std::string & path, const std::optional<std::string> & key_prefix) const
{
    if (!key_generator)
        throw Exception(ErrorCodes::LOGICAL_ERROR, "Key generator is not set");

-   return key_generator->generate(path, /* is_directory */ true);
+   return key_generator->generate(path, /* is_directory */ true, key_prefix);
}

}

View File

@@ -79,7 +79,7 @@ bool checkBatchRemove(S3ObjectStorage & storage)
    /// We are using generateObjectKeyForPath() which returns random object key.
    /// That generated key is placed in a right directory where we should have write access.
    const String path = fmt::format("clickhouse_remove_objects_capability_{}", getServerUUID());
-   const auto key = storage.generateObjectKeyForPath(path);
+   const auto key = storage.generateObjectKeyForPath(path, {} /* key_prefix */);

    StoredObject object(key.serialize(), path);

    try
    {

View File

@@ -624,12 +624,12 @@ std::unique_ptr<IObjectStorage> S3ObjectStorage::cloneObjectStorage(
        std::move(new_client), std::move(new_s3_settings), new_uri, s3_capabilities, key_generator, disk_name);
}

-ObjectStorageKey S3ObjectStorage::generateObjectKeyForPath(const std::string & path) const
+ObjectStorageKey S3ObjectStorage::generateObjectKeyForPath(const std::string & path, const std::optional<std::string> & key_prefix) const
{
    if (!key_generator)
        throw Exception(ErrorCodes::LOGICAL_ERROR, "Key generator is not set");

-   return key_generator->generate(path, /* is_directory */ false);
+   return key_generator->generate(path, /* is_directory */ false, key_prefix);
}

std::shared_ptr<const S3::Client> S3ObjectStorage::getS3StorageClient()

View File

@@ -164,7 +164,7 @@ public:
    bool supportParallelWrite() const override { return true; }

-   ObjectStorageKey generateObjectKeyForPath(const std::string & path) const override;
+   ObjectStorageKey generateObjectKeyForPath(const std::string & path, const std::optional<std::string> & key_prefix) const override;

    bool isReadOnly() const override { return s3_settings.get()->read_only; }

View File

@@ -82,7 +82,7 @@ public:
        const std::string & config_prefix,
        ContextPtr context) override;

-   ObjectStorageKey generateObjectKeyForPath(const std::string & path) const override
+   ObjectStorageKey generateObjectKeyForPath(const std::string & path, const std::optional<std::string> & /* key_prefix */) const override
    {
        return ObjectStorageKey::createAsRelative(path);
    }

View File

@@ -3494,18 +3494,22 @@ DDLWorker & Context::getDDLWorker() const
    if (shared->ddl_worker_startup_task)
        waitLoad(shared->ddl_worker_startup_task); // Just wait and do not prioritize, because it depends on all load and startup tasks

-   SharedLockGuard lock(shared->mutex);
-   if (!shared->ddl_worker)
    {
-       if (!hasZooKeeper())
-           throw Exception(ErrorCodes::NO_ELEMENTS_IN_CONFIG, "There is no Zookeeper configuration in server config");
-
-       if (!hasDistributedDDL())
-           throw Exception(ErrorCodes::NO_ELEMENTS_IN_CONFIG, "There is no DistributedDDL configuration in server config");
-
-       throw Exception(ErrorCodes::NO_ELEMENTS_IN_CONFIG, "DDL background thread is not initialized");
+       /// Only acquire the lock for reading ddl_worker field.
+       /// hasZooKeeper() and hasDistributedDDL() acquire the same lock as well and double acquisition of the lock in shared mode can lead
+       /// to a deadlock if an exclusive lock attempt is made in the meantime by another thread.
+       SharedLockGuard lock(shared->mutex);
+       if (shared->ddl_worker)
+           return *shared->ddl_worker;
    }
-   return *shared->ddl_worker;
+
+   if (!hasZooKeeper())
+       throw Exception(ErrorCodes::NO_ELEMENTS_IN_CONFIG, "There is no Zookeeper configuration in server config");
+
+   if (!hasDistributedDDL())
+       throw Exception(ErrorCodes::NO_ELEMENTS_IN_CONFIG, "There is no DistributedDDL configuration in server config");
+
+   throw Exception(ErrorCodes::NO_ELEMENTS_IN_CONFIG, "DDL background thread is not initialized");
}

zkutil::ZooKeeperPtr Context::getZooKeeper() const
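The `getDDLWorker()` change above narrows the shared lock to the `ddl_worker` read because re-acquiring a readers-writer lock in shared mode while an exclusive waiter is queued can deadlock. The safe shape is to scope the first shared lock and call the other lock-taking helpers only after it is released; a schematic sketch of that pattern (assumed names, not the real Context API):

```cpp
#include <cassert>
#include <shared_mutex>
#include <stdexcept>
#include <string>

struct Worker { std::string name = "ddl"; };

struct MiniContext
{
    mutable std::shared_mutex mutex;
    Worker * worker = nullptr;

    bool hasZooKeeper() const
    {
        std::shared_lock lock(mutex); /// takes the same mutex again, in shared mode
        return true;
    }

    Worker & getWorker()
    {
        {
            /// Scope the shared lock to this read only: calling hasZooKeeper()
            /// while still holding it would acquire `mutex` twice in shared
            /// mode, which can deadlock if an exclusive-lock attempt lands
            /// between the two acquisitions.
            std::shared_lock lock(mutex);
            if (worker)
                return *worker;
        }
        if (!hasZooKeeper()) /// safe: the first shared lock is already released
            throw std::runtime_error("no ZooKeeper config");
        throw std::runtime_error("worker is not initialized");
    }
};
```
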

View File

@@ -1407,7 +1407,10 @@ void DatabaseCatalog::waitTableFinallyDropped(const UUID & uuid)
    });

    /// TSA doesn't support unique_lock
-   if (TSA_SUPPRESS_WARNING_FOR_READ(tables_marked_dropped_ids).contains(uuid))
+   const bool has_table = TSA_SUPPRESS_WARNING_FOR_READ(tables_marked_dropped_ids).contains(uuid);
+   LOG_DEBUG(log, "Done waiting for the table {} to be dropped. The outcome: {}", toString(uuid), has_table ? "table still exists" : "table dropped successfully");
+
+   if (has_table)
        throw Exception(ErrorCodes::UNFINISHED, "Did not finish dropping the table with UUID {} because the server is shutting down, "
            "will finish after restart", uuid);
}

View File

@@ -43,6 +43,7 @@
#include <Common/ZooKeeper/Types.h>
#include <Common/ZooKeeper/ZooKeeper.h>
#include <Common/ZooKeeper/ZooKeeperConstants.h>
+#include <Common/ZooKeeper/ZooKeeperRetries.h>

#include <Backups/BackupEntriesCollector.h>
#include <Backups/IBackupCoordination.h>

@@ -78,6 +79,7 @@ namespace ErrorCodes
    extern const int LOGICAL_ERROR;
    extern const int LIMIT_EXCEEDED;
    extern const int CANNOT_RESTORE_TABLE;
+   extern const int INVALID_STATE;
}

namespace

@@ -120,7 +122,7 @@ public:
        : SinkToStorage(header), storage(storage_), context(std::move(context_))
    {
        auto primary_key = storage.getPrimaryKey();
-       assert(primary_key.size() == 1);
+       chassert(primary_key.size() == 1);
        primary_key_pos = getHeader().getPositionByName(primary_key[0]);
    }
@@ -171,81 +173,94 @@ public:
    template <bool for_update>
    void finalize(bool strict)
    {
-       auto zookeeper = storage.getClient();
-       auto keys_limit = storage.keysLimit();
+       const auto & settings = context->getSettingsRef();
+       ZooKeeperRetriesControl zk_retry{
+           getName(),
+           getLogger(getName()),
+           ZooKeeperRetriesInfo{
+               settings.insert_keeper_max_retries,
+               settings.insert_keeper_retry_initial_backoff_ms,
+               settings.insert_keeper_retry_max_backoff_ms},
+           context->getProcessListElement()};

+       zk_retry.retryLoop([&]()
+       {
+           auto zookeeper = storage.getClient();
+           auto keys_limit = storage.keysLimit();
+
            size_t current_keys_num = 0;
            size_t new_keys_num = 0;

            // We use keys limit as a soft limit so we ignore some cases when it can be still exceeded
            // (e.g if parallel insert queries are being run)
            if (keys_limit != 0)
            {
                Coordination::Stat data_stat;
                zookeeper->get(storage.dataPath(), &data_stat);
                current_keys_num = data_stat.numChildren;
            }

            std::vector<std::string> key_paths;
            key_paths.reserve(new_values.size());
            for (const auto & [key, _] : new_values)
                key_paths.push_back(storage.fullPathForKey(key));

            zkutil::ZooKeeper::MultiExistsResponse results;

            if constexpr (!for_update)
            {
                if (!strict)
                    results = zookeeper->exists(key_paths);
            }

            Coordination::Requests requests;
            requests.reserve(key_paths.size());
            for (size_t i = 0; i < key_paths.size(); ++i)
            {
                auto key = fs::path(key_paths[i]).filename();
                if constexpr (for_update)
                {
                    int32_t version = -1;
                    if (strict)
                        version = versions.at(key);

                    requests.push_back(zkutil::makeSetRequest(key_paths[i], new_values[key], version));
                }
                else
                {
                    if (!strict && results[i].error == Coordination::Error::ZOK)
                    {
                        requests.push_back(zkutil::makeSetRequest(key_paths[i], new_values[key], -1));
                    }
                    else
                    {
                        requests.push_back(zkutil::makeCreateRequest(key_paths[i], new_values[key], zkutil::CreateMode::Persistent));
                        ++new_keys_num;
                    }
                }
            }

            if (new_keys_num != 0)
            {
                auto will_be = current_keys_num + new_keys_num;
                if (keys_limit != 0 && will_be > keys_limit)
                    throw Exception(
                        ErrorCodes::LIMIT_EXCEEDED,
                        "Limit would be exceeded by inserting {} new key(s). Limit is {}, while the number of keys would be {}",
                        new_keys_num,
                        keys_limit,
                        will_be);
            }

            zookeeper->multi(requests, /* check_session_valid */ true);
+       });
    }
};
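`finalize()` is now wrapped in `ZooKeeperRetriesControl::retryLoop`, which re-runs the whole batch (re-fetching the client) on transient Keeper errors, with a growing backoff between attempts. A generic sketch of that retry-with-backoff contract (parameter names invented for illustration; the sleep is elided so the logic stays testable):

```cpp
#include <algorithm>
#include <cassert>
#include <stdexcept>

/// Minimal retry loop in the spirit of ZooKeeperRetriesControl::retryLoop:
/// run `op`, and on a transient failure retry up to `max_retries` times,
/// doubling the backoff each attempt (capped at `max_backoff_ms`).
template <typename Op>
int retryLoop(Op && op, int max_retries, int initial_backoff_ms, int max_backoff_ms)
{
    int backoff = initial_backoff_ms;
    for (int attempt = 0;; ++attempt)
    {
        try
        {
            op();
            return attempt; /// number of retries that were needed
        }
        catch (const std::runtime_error &)
        {
            if (attempt >= max_retries)
                throw; /// retries exhausted, rethrow the last error
            backoff = std::min(backoff * 2, max_backoff_ms); /// exponential backoff, capped
        }
    }
}
```

Because the whole body re-runs, every step inside the lambda has to be idempotent or detect its own partial success, which is exactly why the patched constructor below checks `zk_retry.isRetry()` before treating `ZNODEEXISTS` as an error.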
template <typename KeyContainer>
-class StorageKeeperMapSource : public ISource
+class StorageKeeperMapSource : public ISource, WithContext
{
    const StorageKeeperMap & storage;
    size_t max_block_size;

@@ -276,8 +291,15 @@ public:
        KeyContainerPtr container_,
        KeyContainerIter begin_,
        KeyContainerIter end_,
-       bool with_version_column_)
-       : ISource(getHeader(header, with_version_column_)), storage(storage_), max_block_size(max_block_size_), container(std::move(container_)), it(begin_), end(end_)
+       bool with_version_column_,
+       ContextPtr context_)
+       : ISource(getHeader(header, with_version_column_))
+       , WithContext(std::move(context_))
+       , storage(storage_)
+       , max_block_size(max_block_size_)
+       , container(std::move(container_))
+       , it(begin_)
+       , end(end_)
        , with_version_column(with_version_column_)
    {
    }

@@ -302,12 +324,12 @@ public:
            for (auto & raw_key : raw_keys)
                raw_key = base64Encode(raw_key, /* url_encoding */ true);

-           return storage.getBySerializedKeys(raw_keys, nullptr, with_version_column);
+           return storage.getBySerializedKeys(raw_keys, nullptr, with_version_column, getContext());
        }
        else
        {
            size_t elem_num = std::min(max_block_size, static_cast<size_t>(end - it));
-           auto chunk = storage.getBySerializedKeys(std::span{it, it + elem_num}, nullptr, with_version_column);
+           auto chunk = storage.getBySerializedKeys(std::span{it, it + elem_num}, nullptr, with_version_column, getContext());
            it += elem_num;
            return chunk;
        }
@ -386,104 +408,192 @@ StorageKeeperMap::StorageKeeperMap(
if (attach) if (attach)
{ {
checkTable<false>(); checkTable<false>(context_);
return; return;
} }
auto client = getClient(); const auto & settings = context_->getSettingsRef();
ZooKeeperRetriesControl zk_retry{
getName(),
getLogger(getName()),
ZooKeeperRetriesInfo{settings.keeper_max_retries, settings.keeper_retry_initial_backoff_ms, settings.keeper_retry_max_backoff_ms},
context_->getProcessListElement()};
if (zk_root_path != "/" && !client->exists(zk_root_path)) zk_retry.retryLoop(
{ [&]
LOG_TRACE(log, "Creating root path {}", zk_root_path); {
client->createAncestors(zk_root_path); auto client = getClient();
client->createIfNotExists(zk_root_path, "");
}
if (zk_root_path != "/" && !client->exists(zk_root_path))
{
LOG_TRACE(log, "Creating root path {}", zk_root_path);
client->createAncestors(zk_root_path);
client->createIfNotExists(zk_root_path, "");
}
});
std::shared_ptr<zkutil::EphemeralNodeHolder> metadata_drop_lock;
int32_t drop_lock_version = -1;
for (size_t i = 0; i < 1000; ++i) for (size_t i = 0; i < 1000; ++i)
{ {
std::string stored_metadata_string; bool success = false;
auto exists = client->tryGet(zk_metadata_path, stored_metadata_string); zk_retry.retryLoop(
[&]
if (exists)
{
// this requires same name for columns
// maybe we can do a smarter comparison for columns and primary key expression
if (stored_metadata_string != metadata_string)
throw Exception(
ErrorCodes::BAD_ARGUMENTS,
"Path {} is already used but the stored table definition doesn't match. Stored metadata: {}",
zk_root_path,
stored_metadata_string);
auto code = client->tryCreate(zk_table_path, "", zkutil::CreateMode::Persistent);
/// A table on the same Keeper path already exists, we just appended our table id to subscribe as a new replica
/// We still don't know if the table matches the expected metadata so table_is_valid is not changed
/// It will be checked lazily on the first operation
if (code == Coordination::Error::ZOK)
return;
if (code != Coordination::Error::ZNONODE)
throw zkutil::KeeperException(code, "Failed to create table on path {} because a table with same UUID already exists", zk_root_path);
/// ZNONODE means we dropped zk_tables_path but didn't finish drop completely
}
if (client->exists(zk_dropped_path))
{
LOG_INFO(log, "Removing leftover nodes");
auto code = client->tryCreate(zk_dropped_lock_path, "", zkutil::CreateMode::Ephemeral);
if (code == Coordination::Error::ZNONODE)
{ {
LOG_INFO(log, "Someone else removed leftover nodes"); auto client = getClient();
} std::string stored_metadata_string;
else if (code == Coordination::Error::ZNODEEXISTS) auto exists = client->tryGet(zk_metadata_path, stored_metadata_string);
{
LOG_INFO(log, "Someone else is removing leftover nodes");
continue;
}
else if (code != Coordination::Error::ZOK)
{
throw Coordination::Exception::fromPath(code, zk_dropped_lock_path);
}
else
{
auto metadata_drop_lock = zkutil::EphemeralNodeHolder::existing(zk_dropped_lock_path, *client);
if (!dropTable(client, metadata_drop_lock))
continue;
}
}
if (exists)
{
// this requires same name for columns
// maybe we can do a smarter comparison for columns and primary key expression
if (stored_metadata_string != metadata_string)
throw Exception(
ErrorCodes::BAD_ARGUMENTS,
"Path {} is already used but the stored table definition doesn't match. Stored metadata: {}",
zk_root_path,
stored_metadata_string);
auto code = client->tryCreate(zk_table_path, "", zkutil::CreateMode::Persistent);
/// A table on the same Keeper path already exists, we just appended our table id to subscribe as a new replica
/// We still don't know if the table matches the expected metadata so table_status is not changed
/// It will be checked lazily on the first operation
if (code == Coordination::Error::ZOK)
{
success = true;
return;
}
/// We most likely created the path but got a timeout or disconnect
if (code == Coordination::Error::ZNODEEXISTS && zk_retry.isRetry())
{
success = true;
return;
}
if (code != Coordination::Error::ZNONODE)
throw zkutil::KeeperException(
code, "Failed to create table on path {} because a table with same UUID already exists", zk_root_path);
/// ZNONODE means we dropped zk_tables_path but didn't finish drop completely
}
if (client->exists(zk_dropped_path))
{
LOG_INFO(log, "Removing leftover nodes");
bool drop_finished = false;
if (zk_retry.isRetry() && metadata_drop_lock != nullptr && drop_lock_version != -1)
{
/// if we have leftover lock from previous try, we need to recreate the ephemeral with our session
Coordination::Requests drop_lock_requests{
zkutil::makeRemoveRequest(zk_dropped_lock_path, drop_lock_version),
zkutil::makeCreateRequest(zk_dropped_lock_path, "", zkutil::CreateMode::Ephemeral),
};
Coordination::Responses drop_lock_responses;
auto lock_code = client->tryMulti(drop_lock_requests, drop_lock_responses);
if (lock_code == Coordination::Error::ZBADVERSION)
{
LOG_INFO(log, "Someone else is removing leftover nodes");
metadata_drop_lock->setAlreadyRemoved();
metadata_drop_lock.reset();
return;
}
if (drop_lock_responses[0]->error == Coordination::Error::ZNONODE)
{
/// someone else removed metadata nodes or the previous ephemeral node expired
/// we will try creating dropped lock again to make sure
metadata_drop_lock->setAlreadyRemoved();
metadata_drop_lock.reset();
}
else if (lock_code == Coordination::Error::ZOK)
{
metadata_drop_lock->setAlreadyRemoved();
metadata_drop_lock = zkutil::EphemeralNodeHolder::existing(zk_dropped_lock_path, *client);
drop_lock_version = -1;
Coordination::Stat lock_stat;
client->get(zk_dropped_lock_path, &lock_stat);
drop_lock_version = lock_stat.version;
if (!dropTable(client, metadata_drop_lock))
{
metadata_drop_lock.reset();
return;
}
drop_finished = true;
}
}
if (!drop_finished)
{
auto code = client->tryCreate(zk_dropped_lock_path, "", zkutil::CreateMode::Ephemeral);
if (code == Coordination::Error::ZNONODE)
{
LOG_INFO(log, "Someone else removed leftover nodes");
}
else if (code == Coordination::Error::ZNODEEXISTS)
{
LOG_INFO(log, "Someone else is removing leftover nodes");
return;
}
else if (code != Coordination::Error::ZOK)
{
throw Coordination::Exception::fromPath(code, zk_dropped_lock_path);
}
else
{
metadata_drop_lock = zkutil::EphemeralNodeHolder::existing(zk_dropped_lock_path, *client);
drop_lock_version = -1;
Coordination::Stat lock_stat;
client->get(zk_dropped_lock_path, &lock_stat);
drop_lock_version = lock_stat.version;
if (!dropTable(client, metadata_drop_lock))
{
metadata_drop_lock.reset();
return;
}
}
}
}
Coordination::Requests create_requests{
zkutil::makeCreateRequest(zk_metadata_path, metadata_string, zkutil::CreateMode::Persistent),
zkutil::makeCreateRequest(zk_data_path, metadata_string, zkutil::CreateMode::Persistent),
zkutil::makeCreateRequest(zk_tables_path, "", zkutil::CreateMode::Persistent),
zkutil::makeCreateRequest(zk_table_path, "", zkutil::CreateMode::Persistent),
};
Coordination::Responses create_responses;
auto code = client->tryMulti(create_requests, create_responses);
if (code == Coordination::Error::ZNODEEXISTS)
{
LOG_INFO(
log, "It looks like a table on path {} was created by another server at the same moment, will retry", zk_root_path);
return;
}
else if (code != Coordination::Error::ZOK)
{
zkutil::KeeperMultiException::check(code, create_requests, create_responses);
}
table_status = TableStatus::VALID;
/// we are the first table created for the specified Keeper path, i.e. we are the first replica
success = true;
});
if (success)
return;
}
throw Exception(
ErrorCodes::BAD_ARGUMENTS,
"Cannot create metadata for table, because it is removed concurrently or because "
"of wrong zk_root_path ({})",
zk_root_path);
}
@@ -496,7 +606,7 @@ Pipe StorageKeeperMap::read(
size_t max_block_size,
size_t num_streams)
{
checkTable<true>(context_);
storage_snapshot->check(column_names);
FieldVectorPtr filtered_keys;
@@ -529,8 +639,8 @@ Pipe StorageKeeperMap::read(
size_t num_keys = keys->size();
size_t num_threads = std::min<size_t>(num_streams, keys->size());
chassert(num_keys <= std::numeric_limits<uint32_t>::max());
chassert(num_threads <= std::numeric_limits<uint32_t>::max());
for (size_t thread_idx = 0; thread_idx < num_threads; ++thread_idx)
{
@@ -539,29 +649,59 @@ Pipe StorageKeeperMap::read(
using KeyContainer = typename KeyContainerPtr::element_type;
pipes.emplace_back(std::make_shared<StorageKeeperMapSource<KeyContainer>>(
*this, sample_block, max_block_size, keys, keys->begin() + begin, keys->begin() + end, with_version_column, context_));
}
return Pipe::unitePipes(std::move(pipes));
};
if (all_scan)
{
const auto & settings = context_->getSettingsRef();
ZooKeeperRetriesControl zk_retry{
getName(),
getLogger(getName()),
ZooKeeperRetriesInfo{
settings.keeper_max_retries,
settings.keeper_retry_initial_backoff_ms,
settings.keeper_retry_max_backoff_ms},
context_->getProcessListElement()};
std::vector<std::string> children;
zk_retry.retryLoop([&]
{
auto client = getClient();
children = client->getChildren(zk_data_path);
});
return process_keys(std::make_shared<std::vector<std::string>>(std::move(children)));
}
return process_keys(std::move(filtered_keys));
}
SinkToStoragePtr StorageKeeperMap::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr local_context, bool /*async_insert*/)
{
checkTable<true>(local_context);
return std::make_shared<StorageKeeperMapSink>(*this, metadata_snapshot->getSampleBlock(), local_context);
}
void StorageKeeperMap::truncate(const ASTPtr &, const StorageMetadataPtr &, ContextPtr local_context, TableExclusiveLockHolder &)
{
checkTable<true>(local_context);
const auto & settings = local_context->getSettingsRef();
ZooKeeperRetriesControl zk_retry{
getName(),
getLogger(getName()),
ZooKeeperRetriesInfo{
settings.keeper_max_retries,
settings.keeper_retry_initial_backoff_ms,
settings.keeper_retry_max_backoff_ms},
local_context->getProcessListElement()};
zk_retry.retryLoop([&]
{
auto client = getClient();
client->tryRemoveChildrenRecursive(zk_data_path, true);
});
}
bool StorageKeeperMap::dropTable(zkutil::ZooKeeperPtr zookeeper, const zkutil::EphemeralNodeHolder::Ptr & metadata_drop_lock)
@@ -605,7 +745,18 @@ bool StorageKeeperMap::dropTable(zkutil::ZooKeeperPtr zookeeper, const zkutil::EphemeralNodeHolder::Ptr & metadata_drop_lock)
void StorageKeeperMap::drop()
{
auto current_table_status = getTableStatus(getContext());
if (current_table_status == TableStatus::UNKNOWN)
{
static constexpr auto error_msg = "Failed to activate table because of connection issues. It will be activated "
"once a connection is established and metadata is verified";
throw Exception(ErrorCodes::INVALID_STATE, error_msg);
}
/// if only column metadata is wrong we can still drop the table correctly
if (current_table_status == TableStatus::INVALID_METADATA)
return;
auto client = getClient();
// we allow ZNONODE in case we got hardware error on previous drop
@@ -966,78 +1117,91 @@ UInt64 StorageKeeperMap::keysLimit() const
return keys_limit;
}
StorageKeeperMap::TableStatus StorageKeeperMap::getTableStatus(const ContextPtr & local_context) const
{
std::lock_guard lock{init_mutex};
if (table_status != TableStatus::UNKNOWN)
return table_status;
[&]
{
try
{
const auto & settings = local_context->getSettingsRef();
ZooKeeperRetriesControl zk_retry{
getName(),
getLogger(getName()),
ZooKeeperRetriesInfo{
settings.keeper_max_retries,
settings.keeper_retry_initial_backoff_ms,
settings.keeper_retry_max_backoff_ms},
local_context->getProcessListElement()};
zk_retry.retryLoop([&]
{
auto client = getClient();
Coordination::Stat metadata_stat;
auto stored_metadata_string = client->get(zk_metadata_path, &metadata_stat);
if (metadata_stat.numChildren == 0)
{
table_status = TableStatus::INVALID_KEEPER_STRUCTURE;
return;
}
if (metadata_string != stored_metadata_string)
{
LOG_ERROR(
log,
"Table definition does not match to the one stored in the path {}. Stored definition: {}",
zk_root_path,
stored_metadata_string);
table_status = TableStatus::INVALID_METADATA;
return;
}
// validate all metadata and data nodes are present
Coordination::Requests requests;
requests.push_back(zkutil::makeCheckRequest(zk_table_path, -1));
requests.push_back(zkutil::makeCheckRequest(zk_data_path, -1));
requests.push_back(zkutil::makeCheckRequest(zk_dropped_path, -1));
Coordination::Responses responses;
client->tryMulti(requests, responses);
table_status = TableStatus::INVALID_KEEPER_STRUCTURE;
if (responses[0]->error != Coordination::Error::ZOK)
{
LOG_ERROR(log, "Table node ({}) is missing", zk_table_path);
return;
}
if (responses[1]->error != Coordination::Error::ZOK)
{
LOG_ERROR(log, "Data node ({}) is missing", zk_data_path);
return;
}
if (responses[2]->error == Coordination::Error::ZOK)
{
LOG_ERROR(log, "Tables with root node {} are being dropped", zk_root_path);
return;
}
table_status = TableStatus::VALID;
});
}
catch (const Coordination::Exception & e)
{
tryLogCurrentException(log);
if (!Coordination::isHardwareError(e.code))
table_status = TableStatus::INVALID_KEEPER_STRUCTURE;
}
}();
return table_status;
}
Chunk StorageKeeperMap::getByKeys(const ColumnsWithTypeAndName & keys, PaddedPODArray<UInt8> & null_map, const Names &) const
@@ -1050,10 +1214,11 @@ Chunk StorageKeeperMap::getByKeys(const ColumnsWithTypeAndName & keys, PaddedPODArray<UInt8> & null_map, const Names &) const
if (raw_keys.size() != keys[0].column->size())
throw Exception(ErrorCodes::LOGICAL_ERROR, "Assertion failed: {} != {}", raw_keys.size(), keys[0].column->size());
return getBySerializedKeys(raw_keys, &null_map, /* version_column */ false, getContext());
}
Chunk StorageKeeperMap::getBySerializedKeys(
const std::span<const std::string> keys, PaddedPODArray<UInt8> * null_map, bool with_version, const ContextPtr & local_context) const
{
Block sample_block = getInMemoryMetadataPtr()->getSampleBlock();
MutableColumns columns = sample_block.cloneEmptyColumns();
@@ -1070,17 +1235,27 @@ Chunk StorageKeeperMap::getBySerializedKeys(const std::span<const std::string> keys
null_map->resize_fill(keys.size(), 1);
}
Strings full_key_paths;
full_key_paths.reserve(keys.size());
for (const auto & key : keys)
full_key_paths.emplace_back(fullPathForKey(key));
const auto & settings = local_context->getSettingsRef();
ZooKeeperRetriesControl zk_retry{
getName(),
getLogger(getName()),
ZooKeeperRetriesInfo{
settings.keeper_max_retries,
settings.keeper_retry_initial_backoff_ms,
settings.keeper_retry_max_backoff_ms},
local_context->getProcessListElement()};
zkutil::ZooKeeper::MultiTryGetResponse values;
zk_retry.retryLoop([&]
{
auto client = getClient();
values = client->tryGet(full_key_paths);
});
for (size_t i = 0; i < keys.size(); ++i)
{
@@ -1153,14 +1328,14 @@ void StorageKeeperMap::checkMutationIsPossible(const MutationCommands & commands
void StorageKeeperMap::mutate(const MutationCommands & commands, ContextPtr local_context)
{
checkTable<true>(local_context);
if (commands.empty())
return;
bool strict = local_context->getSettingsRef().keeper_map_strict_mode;
chassert(commands.size() == 1);
auto metadata_snapshot = getInMemoryMetadataPtr();
auto storage = getStorageID();
@@ -1168,16 +1343,16 @@ void StorageKeeperMap::mutate(const MutationCommands & commands, ContextPtr local_context)
if (commands.front().type == MutationCommand::Type::DELETE)
{
MutationsInterpreter::Settings mutation_settings(true);
mutation_settings.return_all_columns = true;
mutation_settings.return_mutated_rows = true;
auto interpreter = std::make_unique<MutationsInterpreter>(
storage_ptr,
metadata_snapshot,
commands,
local_context,
mutation_settings);
auto pipeline = QueryPipelineBuilder::getPipeline(interpreter->execute());
PullingPipelineExecutor executor(pipeline);
@@ -1186,8 +1361,6 @@ void StorageKeeperMap::mutate(const MutationCommands & commands, ContextPtr local_context)
auto primary_key_pos = header.getPositionByName(primary_key);
auto version_position = header.getPositionByName(std::string{version_column_name});
Block block;
while (executor.pull(block))
{
@@ -1215,7 +1388,23 @@ void StorageKeeperMap::mutate(const MutationCommands & commands, ContextPtr local_context)
}
Coordination::Responses responses;
const auto & settings = local_context->getSettingsRef();
ZooKeeperRetriesControl zk_retry{
getName(),
getLogger(getName()),
ZooKeeperRetriesInfo{
settings.keeper_max_retries,
settings.keeper_retry_initial_backoff_ms,
settings.keeper_retry_max_backoff_ms},
local_context->getProcessListElement()};
Coordination::Error status;
zk_retry.retryLoop([&]
{
auto client = getClient();
status = client->tryMulti(delete_requests, responses, /* check_session_valid */ true);
});
if (status == Coordination::Error::ZOK)
return;
@@ -1227,16 +1416,21 @@ void StorageKeeperMap::mutate(const MutationCommands & commands, ContextPtr local_context)
for (const auto & delete_request : delete_requests)
{
zk_retry.retryLoop([&]
{
auto client = getClient();
status = client->tryRemove(delete_request->getPath());
});
if (status != Coordination::Error::ZOK && status != Coordination::Error::ZNONODE)
throw zkutil::KeeperException::fromPath(status, delete_request->getPath());
}
}
return;
}
chassert(commands.front().type == MutationCommand::Type::UPDATE);
if (commands.front().column_to_update_expression.contains(primary_key))
throw Exception(ErrorCodes::BAD_ARGUMENTS, "Primary key cannot be updated (cannot update column {})", primary_key);

View File

@@ -54,7 +54,8 @@ public:
Names getPrimaryKey() const override { return {primary_key}; }
Chunk getByKeys(const ColumnsWithTypeAndName & keys, PaddedPODArray<UInt8> & null_map, const Names &) const override;
Chunk getBySerializedKeys(
std::span<const std::string> keys, PaddedPODArray<UInt8> * null_map, bool with_version, const ContextPtr & local_context) const;
Block getSampleBlock(const Names &) const override;
@@ -77,10 +78,10 @@ public:
UInt64 keysLimit() const;
template <bool throw_on_error>
void checkTable(const ContextPtr & local_context) const
{
auto current_table_status = getTableStatus(local_context);
if (table_status == TableStatus::UNKNOWN)
{
static constexpr auto error_msg = "Failed to activate table because of connection issues. It will be activated "
"once a connection is established and metadata is verified";
@@ -93,10 +94,10 @@ public:
}
if (current_table_status != TableStatus::VALID)
{
static constexpr auto error_msg
= "Failed to activate table because of invalid metadata in ZooKeeper. Please DROP/DETACH table";
if constexpr (throw_on_error)
throw Exception(ErrorCodes::INVALID_STATE, error_msg);
else
@@ -110,7 +111,15 @@ public:
private:
bool dropTable(zkutil::ZooKeeperPtr zookeeper, const zkutil::EphemeralNodeHolder::Ptr & metadata_drop_lock);
enum class TableStatus : uint8_t
{
UNKNOWN,
INVALID_METADATA,
INVALID_KEEPER_STRUCTURE,
VALID
};
TableStatus getTableStatus(const ContextPtr & context) const;
void restoreDataImpl(
const BackupPtr & backup,
@@ -142,7 +151,8 @@ private:
mutable zkutil::ZooKeeperPtr zookeeper_client{nullptr};
mutable std::mutex init_mutex;
mutable TableStatus table_status{TableStatus::UNKNOWN};
LoggerPtr log;
};

View File

@@ -0,0 +1,3 @@
<clickhouse>
<database_catalog_drop_table_concurrency>256</database_catalog_drop_table_concurrency>
</clickhouse>

View File

@@ -21,6 +21,7 @@ ln -sf $SRC_PATH/config.d/listen.xml $DEST_SERVER_PATH/config.d/
ln -sf $SRC_PATH/config.d/text_log.xml $DEST_SERVER_PATH/config.d/
ln -sf $SRC_PATH/config.d/blob_storage_log.xml $DEST_SERVER_PATH/config.d/
ln -sf $SRC_PATH/config.d/custom_settings_prefixes.xml $DEST_SERVER_PATH/config.d/
ln -sf $SRC_PATH/config.d/database_catalog_drop_table_concurrency.xml $DEST_SERVER_PATH/config.d/
ln -sf $SRC_PATH/config.d/enable_access_control_improvements.xml $DEST_SERVER_PATH/config.d/
ln -sf $SRC_PATH/config.d/macros.xml $DEST_SERVER_PATH/config.d/
ln -sf $SRC_PATH/config.d/secure_ports.xml $DEST_SERVER_PATH/config.d/

View File

@@ -1588,6 +1588,7 @@
"groupBitmapXorResample"
"groupBitmapXorSimpleState"
"groupBitmapXorState"
"groupConcat"
"groupUniqArray"
"groupUniqArrayArgMax"
"groupUniqArrayArgMin"

View File

@@ -142,7 +142,7 @@ of parallel workers for `pytest-xdist`.
$ export CLICKHOUSE_TESTS_BASE_CONFIG_DIR=$HOME/ClickHouse/programs/server/
$ export CLICKHOUSE_TESTS_SERVER_BIN_PATH=$HOME/ClickHouse/programs/clickhouse
$ export CLICKHOUSE_TESTS_ODBC_BRIDGE_BIN_PATH=$HOME/ClickHouse/programs/clickhouse-odbc-bridge
$ ./runner test_storage_s3_queue/test.py::test_max_set_age --count 10 -n 5
Start tests
=============================================================================== test session starts ================================================================================
platform linux -- Python 3.10.12, pytest-7.4.4, pluggy-1.5.0 -- /usr/bin/python3
@@ -188,6 +188,14 @@ docker build -t clickhouse/integration-test .
```
The helper container used by the `runner` script is in `docker/test/integration/runner/Dockerfile`.
It can be rebuilt with
```
cd docker/test/integration/runner
docker build -t clickhouse/integration-test-runner .
```
If your docker configuration doesn't allow access to the public internet with the docker build command, you may also need to add the option `--network=host` when rebuilding the image for local integration testing.
### Adding new tests ### Adding new tests

View File

@@ -2,7 +2,7 @@ version: '2.3'
services:
minio1:
image: minio/minio:RELEASE.2024-07-31T05-46-26Z
volumes:
- data1-1:/data1
- ${MINIO_CERTS_DIR:-}:/certs

View File

@@ -3922,7 +3922,11 @@ class ClickHouseInstance:
)
def contains_in_log(
self,
substring,
from_host=False,
filename="clickhouse-server.log",
exclusion_substring="",
):
if from_host:
# We check first that the file exists, but want to look for all rotated logs as well
@@ -3930,7 +3934,7 @@ class ClickHouseInstance:
[
"bash",
"-c",
f'[ -f {self.logs_dir}/(unknown) ] && zgrep -aH "{substring}" {self.logs_dir}/(unknown)* | ( [ -z "{exclusion_substring}" ] && cat || grep -v "${exclusion_substring}" ) || true',
]
)
else:
@@ -3938,7 +3942,7 @@ class ClickHouseInstance:
[
"bash",
"-c",
f'[ -f /var/log/clickhouse-server/(unknown) ] && zgrep -aH "{substring}" /var/log/clickhouse-server/(unknown) | ( [ -z "{exclusion_substring}" ] && cat || grep -v "${exclusion_substring}" ) || true',
]
)
return len(result) > 0

View File

@@ -0,0 +1,17 @@
<clickhouse>
<!-- Native interface with TLS.
You have to configure certificate to enable this interface.
See the openSSL section below.
-->
<tcp_port_secure>9440</tcp_port_secure>
<!-- Used with https_port and tcp_port_secure. Full ssl options list: https://github.com/ClickHouse-Extras/poco/blob/master/NetSSL_OpenSSL/include/Poco/Net/SSLManager.h#L71 -->
<openSSL replace="replace">
<server> <!-- Used for https server AND secure tcp port -->
<certificateFile>/etc/clickhouse-server/config.d/self-cert.pem</certificateFile>
<privateKeyFile>/etc/clickhouse-server/config.d/self-key.pem</privateKeyFile>
<caConfig>/etc/clickhouse-server/config.d/ca-cert.pem</caConfig>
<verificationMode>strict</verificationMode>
</server>
</openSSL>
</clickhouse>

View File

@@ -17,6 +17,19 @@ instance = cluster.add_instance(
"certs/self-cert.pem",
"certs/ca-cert.pem",
],
with_zookeeper=False,
)
node1 = cluster.add_instance(
"node1",
main_configs=[
"configs/ssl_config_strict.xml",
"certs/self-key.pem",
"certs/self-cert.pem",
"certs/ca-cert.pem",
],
with_zookeeper=False,
)
@@ -90,3 +103,25 @@ def test_connection_accept():
)
== "1\n"
)
def test_strict_reject():
with pytest.raises(Exception) as err:
execute_query_native(node1, "SELECT 1", "<clickhouse></clickhouse>")
assert "certificate verify failed" in str(err.value)
def test_strict_reject_with_config():
with pytest.raises(Exception) as err:
execute_query_native(node1, "SELECT 1", config_accept)
assert "alert certificate required" in str(err.value)
def test_strict_connection_reject():
with pytest.raises(Exception) as err:
execute_query_native(
node1,
"SELECT 1",
config_connection_accept.format(ip_address=f"{instance.ip_address}"),
)
assert "certificate verify failed" in str(err.value)

View File

@@ -0,0 +1,14 @@
<clickhouse>
<profiles>
<default>
<insert_keeper_max_retries>0</insert_keeper_max_retries>
<keeper_max_retries>0</keeper_max_retries>
</default>
</profiles>
<users>
<default>
<password></password>
<profile>default</profile>
</default>
</users>
</clickhouse>

View File

@@ -10,6 +10,7 @@ cluster = ClickHouseCluster(__file__)
node = cluster.add_instance(
"node",
main_configs=["configs/enable_keeper_map.xml"],
user_configs=["configs/keeper_retries.xml"],
with_zookeeper=True,
stay_alive=True,
)
@@ -46,7 +47,10 @@ def assert_keeper_exception_after_partition(query):
with PartitionManager() as pm:
pm.drop_instance_zk_connections(node)
try:
error = node.query_and_get_error_with_retry(
query,
sleep_time=1,
)
assert "Coordination::Exception" in error
except:
print_iptables_rules()
@@ -63,6 +67,7 @@ def run_query(query):
def test_keeper_map_without_zk(started_cluster):
run_query("DROP TABLE IF EXISTS test_keeper_map_without_zk SYNC")
assert_keeper_exception_after_partition(
"CREATE TABLE test_keeper_map_without_zk (key UInt64, value UInt64) ENGINE = KeeperMap('/test_keeper_map_without_zk') PRIMARY KEY(key);"
)
@@ -84,7 +89,8 @@ def test_keeper_map_without_zk(started_cluster):
node.restart_clickhouse(60)
try:
error = node.query_and_get_error_with_retry(
"SELECT * FROM test_keeper_map_without_zk",
sleep_time=1,
)
assert "Failed to activate table because of connection issues" in error
except:
@@ -101,12 +107,12 @@ def test_keeper_map_without_zk(started_cluster):
)
assert "Failed to activate table because of invalid metadata in ZooKeeper" in error
node.query("DETACH TABLE test_keeper_map_without_zk")
client.stop()
def test_keeper_map_with_failed_drop(started_cluster):
run_query("DROP TABLE IF EXISTS test_keeper_map_with_failed_drop SYNC")
run_query("DROP TABLE IF EXISTS test_keeper_map_with_failed_drop_another SYNC")
run_query(
"CREATE TABLE test_keeper_map_with_failed_drop (key UInt64, value UInt64) ENGINE = KeeperMap('/test_keeper_map_with_failed_drop') PRIMARY KEY(key);"
)

View File

@@ -0,0 +1,3 @@
+<clickhouse>
+    <keeper_map_path_prefix>/test_keeper_map</keeper_map_path_prefix>
+</clickhouse>

View File

@@ -0,0 +1,7 @@
+<clickhouse>
+    <zookeeper>
+        <enable_fault_injections_during_startup>1</enable_fault_injections_during_startup>
+        <send_fault_probability>0.005</send_fault_probability>
+        <recv_fault_probability>0.005</recv_fault_probability>
+    </zookeeper>
+</clickhouse>

View File

@@ -0,0 +1,14 @@
+<clickhouse>
+    <profiles>
+        <default>
+            <keeper_max_retries>20</keeper_max_retries>
+            <keeper_retry_max_backoff_ms>10000</keeper_retry_max_backoff_ms>
+        </default>
+    </profiles>
+    <users>
+        <default>
+            <password></password>
+            <profile>default</profile>
+        </default>
+    </users>
+</clickhouse>
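The profile above caps Keeper interaction at 20 retries with backoff growing up to 10 s, which is what lets queries survive the injected 0.5% send/recv faults. As a rough illustration of what such a policy does (a hedged sketch, not ClickHouse's implementation; `with_keeper_retries` and `flaky_op` are invented names):

```python
import time


def with_keeper_retries(op, max_retries=20, max_backoff_ms=10000, sleep=time.sleep):
    """Retry `op` on connection errors with capped exponential backoff,
    mirroring keeper_max_retries / keeper_retry_max_backoff_ms above.
    Illustrative sketch only."""
    backoff_ms = 100
    for attempt in range(1, max_retries + 1):
        try:
            return op()
        except ConnectionError:
            if attempt == max_retries:
                raise  # retries exhausted: surface the error
            sleep(backoff_ms / 1000.0)
            backoff_ms = min(backoff_ms * 2, max_backoff_ms)


# A fault-injected operation that fails a few times, then succeeds:
calls = {"n": 0}


def flaky_op():
    calls["n"] += 1
    if calls["n"] < 4:
        raise ConnectionError("simulated Keeper fault")
    return "ok"


result = with_keeper_retries(flaky_op, sleep=lambda _: None)
```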

View File

@@ -0,0 +1,75 @@
+import pytest
+
+from helpers.cluster import ClickHouseCluster
+
+import os
+
+CONFIG_DIR = os.path.join(os.path.dirname(os.path.realpath(__file__)), "configs")
+
+cluster = ClickHouseCluster(__file__)
+
+node = cluster.add_instance(
+    "node",
+    main_configs=["configs/enable_keeper_map.xml"],
+    user_configs=["configs/keeper_retries.xml"],
+    with_zookeeper=True,
+    stay_alive=True,
+)
+
+
+def start_clean_clickhouse():
+    # remove fault injection if present
+    if "fault_injection.xml" in node.exec_in_container(
+        ["bash", "-c", "ls /etc/clickhouse-server/config.d"]
+    ):
+        print("Removing fault injection")
+        node.exec_in_container(
+            ["bash", "-c", "rm /etc/clickhouse-server/config.d/fault_injection.xml"]
+        )
+        node.restart_clickhouse()
+
+
+@pytest.fixture(scope="module")
+def started_cluster():
+    try:
+        cluster.start()
+        yield cluster
+    finally:
+        cluster.shutdown()
+
+
+def repeat_query(query, repeat):
+    for _ in range(repeat):
+        node.query(
+            query,
+        )
+
+
+def test_queries(started_cluster):
+    start_clean_clickhouse()
+
+    node.query("DROP TABLE IF EXISTS keeper_map_retries SYNC")
+    node.stop_clickhouse()
+    node.copy_file_to_container(
+        os.path.join(CONFIG_DIR, "fault_injection.xml"),
+        "/etc/clickhouse-server/config.d/fault_injection.xml",
+    )
+    node.start_clickhouse()
+
+    repeat_count = 10
+
+    node.query(
+        "CREATE TABLE keeper_map_retries (a UInt64, b UInt64) Engine=KeeperMap('/keeper_map_retries') PRIMARY KEY a",
+    )
+
+    repeat_query(
+        "INSERT INTO keeper_map_retries SELECT number, number FROM numbers(500)",
+        repeat_count,
+    )
+    repeat_query("SELECT * FROM keeper_map_retries", repeat_count)
+    repeat_query(
+        "ALTER TABLE keeper_map_retries UPDATE b = 3 WHERE a > 2", repeat_count
+    )
+    repeat_query("ALTER TABLE keeper_map_retries DELETE WHERE a > 2", repeat_count)
+    repeat_query("TRUNCATE keeper_map_retries", repeat_count)

View File

@@ -13,6 +13,7 @@ node = cluster.add_instance(
     with_zookeeper=True,
     with_azurite=True,
 )
+base_search_query = "SELECT COUNT() FROM system.query_log WHERE query LIKE "

 @pytest.fixture(scope="module", autouse=True)

@@ -35,7 +36,7 @@ def check_logs(must_contain=[], must_not_contain=[]):
             .replace("]", "\\]")
             .replace("*", "\\*")
         )
-        assert node.contains_in_log(escaped_str)
+        assert node.contains_in_log(escaped_str, exclusion_substring=base_search_query)

     for str in must_not_contain:
         escaped_str = (

@@ -44,7 +45,9 @@ def check_logs(must_contain=[], must_not_contain=[]):
             .replace("]", "\\]")
             .replace("*", "\\*")
         )
-        assert not node.contains_in_log(escaped_str)
+        assert not node.contains_in_log(
+            escaped_str, exclusion_substring=base_search_query
+        )

     for str in must_contain:
         escaped_str = str.replace("'", "\\'")

@@ -60,7 +63,7 @@ def system_query_log_contains_search_pattern(search_pattern):
     return (
         int(
             node.query(
-                f"SELECT COUNT() FROM system.query_log WHERE query LIKE '%{search_pattern}%'"
+                f"{base_search_query}'%{search_pattern}%' AND query NOT LIKE '{base_search_query}%'"
             ).strip()
         )
        >= 1
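The `system_query_log_contains_search_pattern` helper guards against a self-match: the probe query that searches `system.query_log` for a sensitive pattern is itself logged and would otherwise match its own pattern. A minimal sketch of the exclusion logic (function and variable names here are illustrative, not part of the test):

```python
def query_log_contains(logged_queries, search_pattern, base_search_query):
    """Return True if any logged query contains `search_pattern`,
    excluding entries that are themselves probe queries (they start
    with `base_search_query`). Sketch of the self-match guard."""
    return any(
        search_pattern in query and not query.startswith(base_search_query)
        for query in logged_queries
    )


base = "SELECT COUNT() FROM system.query_log WHERE query LIKE "
log = [
    base + "'%secret123%'",  # the probe itself: must not count as a hit
    "INSERT INTO t VALUES ('secret123')",  # a real hit
]
found = query_log_contains(log, "secret123", base)
```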
@@ -105,7 +108,6 @@ def test_create_alter_user():
         must_not_contain=[
             password,
             "IDENTIFIED BY",
-            "IDENTIFIED BY",
             "IDENTIFIED WITH plaintext_password BY",
         ],
     )

@@ -366,10 +368,7 @@ def test_table_functions():
         f"remoteSecure(named_collection_6, addresses_expr = '127.{{2..11}}', database = 'default', table = 'remote_table', user = 'remote_user', password = '{password}')",
         f"s3('http://minio1:9001/root/data/test9.csv.gz', 'NOSIGN', 'CSV')",
         f"s3('http://minio1:9001/root/data/test10.csv.gz', 'minio', '{password}')",
-        (
-            f"deltaLake('http://minio1:9001/root/data/test11.csv.gz', 'minio', '{password}')",
-            "DNS_ERROR",
-        ),
+        f"deltaLake('http://minio1:9001/root/data/test11.csv.gz', 'minio', '{password}')",
         f"azureBlobStorage('{azure_conn_string}', 'cont', 'test_simple.csv', 'CSV')",
         f"azureBlobStorage('{azure_conn_string}', 'cont', 'test_simple_1.csv', 'CSV', 'none')",
         f"azureBlobStorage('{azure_conn_string}', 'cont', 'test_simple_2.csv', 'CSV', 'none', 'auto')",

View File

@@ -1,6 +1,6 @@
 <clickhouse>
     <background_schedule_pool_size>1</background_schedule_pool_size>
     <merge_tree>
-        <initialization_retry_period>5</initialization_retry_period>
+        <initialization_retry_period>3</initialization_retry_period>
     </merge_tree>
 </clickhouse>

View File

@@ -80,4 +80,8 @@ def test_startup_with_small_bg_pool_partitioned(started_cluster):
     assert_values()

     # check that we activate it in the end
-    node.query_with_retry("INSERT INTO replicated_table_partitioned VALUES(20, 30)")
+    node.query_with_retry(
+        "INSERT INTO replicated_table_partitioned VALUES(20, 30)",
+        retry_count=20,
+        sleep_time=3,
+    )

View File

@@ -139,6 +139,19 @@ def test(storage_policy):
             == insert_values_arr[i]
         )

+    metadata_it = cluster.minio_client.list_objects(
+        cluster.minio_bucket, "data/", recursive=True
+    )
+    metadata_count = 0
+    for obj in list(metadata_it):
+        if "/__meta/" in obj.object_name:
+            assert obj.object_name.endswith("/prefix.path")
+            metadata_count += 1
+        else:
+            assert not obj.object_name.endswith("/prefix.path")
+
+    assert metadata_count > 0
+
     for i in range(NUM_WORKERS):
         node = cluster.instances[f"node{i + 1}"]
         node.query("DROP TABLE IF EXISTS test SYNC")
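The new assertion partitions the bucket listing: every object under a `/__meta/` prefix must be a `prefix.path` file, no object outside `/__meta/` may end with that suffix, and at least one metadata object must exist. A standalone sketch of the same invariant over plain object names (the names below are illustrative):

```python
def check_metadata_layout(object_names):
    """Verify the metadata/data split of a bucket listing and return the
    number of metadata objects found. Sketch of the assertion above."""
    metadata_count = 0
    for name in object_names:
        if "/__meta/" in name:
            # metadata objects are exactly the prefix.path files
            assert name.endswith("/prefix.path")
            metadata_count += 1
        else:
            # data blobs must never look like metadata
            assert not name.endswith("/prefix.path")
    assert metadata_count > 0
    return metadata_count


names = [
    "data/abc/__meta/xyz/prefix.path",  # metadata object
    "data/abc/blob0",  # data object
]
meta_objects = check_metadata_layout(names)
```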

View File

@@ -1,5 +1,4 @@
 #!/usr/bin/env python3
-# Tags: no-parallel

 import os
 import sys

@@ -17,6 +16,7 @@ log = None
 with client(name="client1>", log=log) as client1, client(
     name="client2>", log=log
 ) as client2:
+    database_name = os.environ["CLICKHOUSE_DATABASE"]
     client1.expect(prompt)
     client2.expect(prompt)

@@ -31,40 +31,38 @@ with client(name="client1>", log=log) as client1, client(
     client2.send("SET allow_experimental_analyzer = 0")
     client2.expect(prompt)

-    client1.send("CREATE DATABASE IF NOT EXISTS 01062_window_view_event_hop_watch_asc")
+    client1.send(f"DROP TABLE IF EXISTS {database_name}.mt")
     client1.expect(prompt)
-    client1.send("DROP TABLE IF EXISTS 01062_window_view_event_hop_watch_asc.mt")
-    client1.expect(prompt)
-    client1.send("DROP TABLE IF EXISTS 01062_window_view_event_hop_watch_asc.wv SYNC")
+    client1.send(f"DROP TABLE IF EXISTS {database_name}.wv SYNC")
     client1.expect(prompt)

     client1.send(
-        "CREATE TABLE 01062_window_view_event_hop_watch_asc.mt(a Int32, timestamp DateTime) ENGINE=MergeTree ORDER BY tuple()"
+        f"CREATE TABLE {database_name}.mt(a Int32, timestamp DateTime) ENGINE=MergeTree ORDER BY tuple()"
     )
     client1.expect(prompt)
     client1.send(
-        "CREATE WINDOW VIEW 01062_window_view_event_hop_watch_asc.wv ENGINE Memory WATERMARK=ASCENDING AS SELECT count(a) AS count, hopEnd(wid) AS w_end FROM 01062_window_view_event_hop_watch_asc.mt GROUP BY hop(timestamp, INTERVAL '2' SECOND, INTERVAL '3' SECOND, 'US/Samoa') AS wid"
+        f"CREATE WINDOW VIEW {database_name}.wv ENGINE Memory WATERMARK=ASCENDING AS SELECT count(a) AS count, hopEnd(wid) AS w_end FROM {database_name}.mt GROUP BY hop(timestamp, INTERVAL '2' SECOND, INTERVAL '3' SECOND, 'US/Samoa') AS wid"
     )
     client1.expect(prompt)

-    client1.send("WATCH 01062_window_view_event_hop_watch_asc.wv")
+    client1.send(f"WATCH {database_name}.wv")
     client1.expect("Query id" + end_of_block)
     client1.expect("Progress: 0.00 rows.*\\)")
     client2.send(
-        "INSERT INTO 01062_window_view_event_hop_watch_asc.mt VALUES (1, toDateTime('1990/01/01 12:00:00', 'US/Samoa'));"
+        f"INSERT INTO {database_name}.mt VALUES (1, toDateTime('1990/01/01 12:00:00', 'US/Samoa'));"
     )
     client2.expect(prompt)
     client2.send(
-        "INSERT INTO 01062_window_view_event_hop_watch_asc.mt VALUES (1, toDateTime('1990/01/01 12:00:05', 'US/Samoa'));"
+        f"INSERT INTO {database_name}.mt VALUES (1, toDateTime('1990/01/01 12:00:05', 'US/Samoa'));"
     )
     client2.expect(prompt)
     client1.expect("1*" + end_of_block)
     client2.send(
-        "INSERT INTO 01062_window_view_event_hop_watch_asc.mt VALUES (1, toDateTime('1990/01/01 12:00:06', 'US/Samoa'));"
+        f"INSERT INTO {database_name}.mt VALUES (1, toDateTime('1990/01/01 12:00:06', 'US/Samoa'));"
     )
     client2.expect(prompt)
     client2.send(
-        "INSERT INTO 01062_window_view_event_hop_watch_asc.mt VALUES (1, toDateTime('1990/01/01 12:00:10', 'US/Samoa'));"
+        f"INSERT INTO {database_name}.mt VALUES (1, toDateTime('1990/01/01 12:00:10', 'US/Samoa'));"
     )
     client2.expect(prompt)
     client1.expect("1" + end_of_block)

@@ -77,9 +75,7 @@ with client(name="client1>", log=log) as client1, client(
     if match.groups()[1]:
         client1.send(client1.command)
         client1.expect(prompt)

-    client1.send("DROP TABLE 01062_window_view_event_hop_watch_asc.wv SYNC")
+    client1.send(f"DROP TABLE {database_name}.wv SYNC")
     client1.expect(prompt)
-    client1.send("DROP TABLE 01062_window_view_event_hop_watch_asc.mt")
-    client1.expect(prompt)
-    client1.send("DROP DATABASE IF EXISTS 01062_window_view_event_hop_watch_asc")
+    client1.send(f"DROP TABLE {database_name}.mt")
     client1.expect(prompt)

View File

@@ -1,29 +1,31 @@
 #!/usr/bin/env bash
-# Tags: no-parallel, no-fasttest
+# Tags: no-fasttest, no-parallel
+# no-parallel: FIXME: Timing issues with INSERT + DETACH (https://github.com/ClickHouse/ClickHouse/pull/67610/files#r1700345054)

 CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
 # shellcheck source=../shell_config.sh
 . "$CURDIR"/../shell_config.sh

-$CLICKHOUSE_CLIENT -q "DROP DATABASE IF EXISTS test_01107"
-$CLICKHOUSE_CLIENT -q "CREATE DATABASE test_01107 ENGINE=Atomic"
-$CLICKHOUSE_CLIENT -q "CREATE TABLE test_01107.mt (n UInt64) ENGINE=MergeTree() ORDER BY tuple()"
+NEW_DATABASE=test_01107_${CLICKHOUSE_DATABASE}
+$CLICKHOUSE_CLIENT -q "DROP DATABASE IF EXISTS ${NEW_DATABASE}"
+$CLICKHOUSE_CLIENT -q "CREATE DATABASE ${NEW_DATABASE} ENGINE=Atomic"
+$CLICKHOUSE_CLIENT -q "CREATE TABLE ${NEW_DATABASE}.mt (n UInt64) ENGINE=MergeTree() ORDER BY tuple()"

-$CLICKHOUSE_CLIENT --function_sleep_max_microseconds_per_block 60000000 -q "INSERT INTO test_01107.mt SELECT number + sleepEachRow(3) FROM numbers(5)" &
+$CLICKHOUSE_CLIENT --function_sleep_max_microseconds_per_block 60000000 -q "INSERT INTO ${NEW_DATABASE}.mt SELECT number + sleepEachRow(3) FROM numbers(5)" &
 sleep 1

-$CLICKHOUSE_CLIENT -q "DETACH TABLE test_01107.mt" --database_atomic_wait_for_drop_and_detach_synchronously=0
-$CLICKHOUSE_CLIENT -q "ATTACH TABLE test_01107.mt" --database_atomic_wait_for_drop_and_detach_synchronously=0 2>&1 | grep -F "Code: 57" > /dev/null && echo "OK"
-$CLICKHOUSE_CLIENT -q "DETACH DATABASE test_01107" --database_atomic_wait_for_drop_and_detach_synchronously=0 2>&1 | grep -F "Code: 219" > /dev/null && echo "OK"
+$CLICKHOUSE_CLIENT -q "DETACH TABLE ${NEW_DATABASE}.mt" --database_atomic_wait_for_drop_and_detach_synchronously=0
+$CLICKHOUSE_CLIENT -q "ATTACH TABLE ${NEW_DATABASE}.mt" --database_atomic_wait_for_drop_and_detach_synchronously=0 2>&1 | grep -F "Code: 57" > /dev/null && echo "OK"
+$CLICKHOUSE_CLIENT -q "DETACH DATABASE ${NEW_DATABASE}" --database_atomic_wait_for_drop_and_detach_synchronously=0 2>&1 | grep -F "Code: 219" > /dev/null && echo "OK"
 wait

-$CLICKHOUSE_CLIENT -q "ATTACH TABLE test_01107.mt"
-$CLICKHOUSE_CLIENT -q "SELECT count(n), sum(n) FROM test_01107.mt"
-$CLICKHOUSE_CLIENT -q "DETACH DATABASE test_01107" --database_atomic_wait_for_drop_and_detach_synchronously=0
-$CLICKHOUSE_CLIENT -q "ATTACH DATABASE test_01107"
-$CLICKHOUSE_CLIENT -q "SELECT count(n), sum(n) FROM test_01107.mt"
+$CLICKHOUSE_CLIENT -q "ATTACH TABLE ${NEW_DATABASE}.mt"
+$CLICKHOUSE_CLIENT -q "SELECT count(n), sum(n) FROM ${NEW_DATABASE}.mt"
+$CLICKHOUSE_CLIENT -q "DETACH DATABASE ${NEW_DATABASE}" --database_atomic_wait_for_drop_and_detach_synchronously=0
+$CLICKHOUSE_CLIENT -q "ATTACH DATABASE ${NEW_DATABASE}"
+$CLICKHOUSE_CLIENT -q "SELECT count(n), sum(n) FROM ${NEW_DATABASE}.mt"

-$CLICKHOUSE_CLIENT --function_sleep_max_microseconds_per_block 60000000 -q "INSERT INTO test_01107.mt SELECT number + sleepEachRow(1) FROM numbers(5)" && echo "end" &
+$CLICKHOUSE_CLIENT --function_sleep_max_microseconds_per_block 60000000 -q "INSERT INTO ${NEW_DATABASE}.mt SELECT number + sleepEachRow(1) FROM numbers(5)" && echo "end" &
 sleep 1
-$CLICKHOUSE_CLIENT -q "DROP DATABASE test_01107" --database_atomic_wait_for_drop_and_detach_synchronously=0 && sleep 1 && echo "dropped"
+$CLICKHOUSE_CLIENT -q "DROP DATABASE ${NEW_DATABASE}" --database_atomic_wait_for_drop_and_detach_synchronously=0 && sleep 1 && echo "dropped"
 wait

View File

@@ -1,4 +1,4 @@
--- Tags: zookeeper, no-parallel
+-- Tags: zookeeper

 DROP TABLE IF EXISTS r_prop_table1;
 DROP TABLE IF EXISTS r_prop_table2;

View File

@@ -1,10 +1,4 @@
--- Tags: no-parallel
-
-DROP DATABASE IF EXISTS database_for_range_dict;
-CREATE DATABASE database_for_range_dict;
-
-CREATE TABLE database_for_range_dict.date_table
+CREATE TABLE date_table
 (
     CountryID UInt64,
     StartDate Date,

@@ -14,11 +8,11 @@ CREATE TABLE database_for_range_dict.date_table
 ENGINE = MergeTree()
 ORDER BY CountryID;

-INSERT INTO database_for_range_dict.date_table VALUES(1, toDate('2019-05-05'), toDate('2019-05-20'), 0.33);
+INSERT INTO date_table VALUES(1, toDate('2019-05-05'), toDate('2019-05-20'), 0.33);
-INSERT INTO database_for_range_dict.date_table VALUES(1, toDate('2019-05-21'), toDate('2019-05-30'), 0.42);
+INSERT INTO date_table VALUES(1, toDate('2019-05-21'), toDate('2019-05-30'), 0.42);
-INSERT INTO database_for_range_dict.date_table VALUES(2, toDate('2019-05-21'), toDate('2019-05-30'), 0.46);
+INSERT INTO date_table VALUES(2, toDate('2019-05-21'), toDate('2019-05-30'), 0.46);

-CREATE DICTIONARY database_for_range_dict.range_dictionary
+CREATE DICTIONARY range_dictionary
 (
     CountryID UInt64,
     StartDate Date,

@@ -26,7 +20,7 @@ CREATE DICTIONARY database_for_range_dict.range_dictionary
     Tax Float64 DEFAULT 0.2
 )
 PRIMARY KEY CountryID
-SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 'date_table' DB 'database_for_range_dict'))
+SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 'date_table' DB currentDatabase()))
 LIFETIME(MIN 1 MAX 1000)
 LAYOUT(RANGE_HASHED())
 RANGE(MIN StartDate MAX EndDate)

@@ -35,30 +29,30 @@ SETTINGS(dictionary_use_async_executor=1, max_threads=8)
 SELECT 'Dictionary not nullable';
 SELECT 'dictGet';
-SELECT dictGet('database_for_range_dict.range_dictionary', 'Tax', toUInt64(1), toDate('2019-05-15'));
+SELECT dictGet('range_dictionary', 'Tax', toUInt64(1), toDate('2019-05-15'));
-SELECT dictGet('database_for_range_dict.range_dictionary', 'Tax', toUInt64(1), toDate('2019-05-29'));
+SELECT dictGet('range_dictionary', 'Tax', toUInt64(1), toDate('2019-05-29'));
-SELECT dictGet('database_for_range_dict.range_dictionary', 'Tax', toUInt64(2), toDate('2019-05-29'));
+SELECT dictGet('range_dictionary', 'Tax', toUInt64(2), toDate('2019-05-29'));
-SELECT dictGet('database_for_range_dict.range_dictionary', 'Tax', toUInt64(2), toDate('2019-05-31'));
+SELECT dictGet('range_dictionary', 'Tax', toUInt64(2), toDate('2019-05-31'));
-SELECT dictGetOrDefault('database_for_range_dict.range_dictionary', 'Tax', toUInt64(2), toDate('2019-05-31'), 0.4);
+SELECT dictGetOrDefault('range_dictionary', 'Tax', toUInt64(2), toDate('2019-05-31'), 0.4);
 SELECT 'dictHas';
-SELECT dictHas('database_for_range_dict.range_dictionary', toUInt64(1), toDate('2019-05-15'));
+SELECT dictHas('range_dictionary', toUInt64(1), toDate('2019-05-15'));
-SELECT dictHas('database_for_range_dict.range_dictionary', toUInt64(1), toDate('2019-05-29'));
+SELECT dictHas('range_dictionary', toUInt64(1), toDate('2019-05-29'));
-SELECT dictHas('database_for_range_dict.range_dictionary', toUInt64(2), toDate('2019-05-29'));
+SELECT dictHas('range_dictionary', toUInt64(2), toDate('2019-05-29'));
-SELECT dictHas('database_for_range_dict.range_dictionary', toUInt64(2), toDate('2019-05-31'));
+SELECT dictHas('range_dictionary', toUInt64(2), toDate('2019-05-31'));
 SELECT 'select columns from dictionary';
 SELECT 'allColumns';
-SELECT * FROM database_for_range_dict.range_dictionary ORDER BY CountryID, StartDate, EndDate;
+SELECT * FROM range_dictionary ORDER BY CountryID, StartDate, EndDate;
 SELECT 'noColumns';
-SELECT 1 FROM database_for_range_dict.range_dictionary ORDER BY CountryID, StartDate, EndDate;
+SELECT 1 FROM range_dictionary ORDER BY CountryID, StartDate, EndDate;
 SELECT 'onlySpecificColumns';
-SELECT CountryID, StartDate, Tax FROM database_for_range_dict.range_dictionary ORDER BY CountryID, StartDate, EndDate;
+SELECT CountryID, StartDate, Tax FROM range_dictionary ORDER BY CountryID, StartDate, EndDate;
 SELECT 'onlySpecificColumn';
-SELECT Tax FROM database_for_range_dict.range_dictionary ORDER BY CountryID, StartDate, EndDate;
+SELECT Tax FROM range_dictionary ORDER BY CountryID, StartDate, EndDate;

-DROP DICTIONARY database_for_range_dict.range_dictionary;
+DROP DICTIONARY range_dictionary;
-DROP TABLE database_for_range_dict.date_table;
+DROP TABLE date_table;

-CREATE TABLE database_for_range_dict.date_table
+CREATE TABLE date_table
 (
     CountryID UInt64,
     StartDate Date,

@@ -68,11 +62,11 @@ CREATE TABLE database_for_range_dict.date_table
 ENGINE = MergeTree()
 ORDER BY CountryID;

-INSERT INTO database_for_range_dict.date_table VALUES(1, toDate('2019-05-05'), toDate('2019-05-20'), 0.33);
+INSERT INTO date_table VALUES(1, toDate('2019-05-05'), toDate('2019-05-20'), 0.33);
-INSERT INTO database_for_range_dict.date_table VALUES(1, toDate('2019-05-21'), toDate('2019-05-30'), 0.42);
+INSERT INTO date_table VALUES(1, toDate('2019-05-21'), toDate('2019-05-30'), 0.42);
-INSERT INTO database_for_range_dict.date_table VALUES(2, toDate('2019-05-21'), toDate('2019-05-30'), NULL);
+INSERT INTO date_table VALUES(2, toDate('2019-05-21'), toDate('2019-05-30'), NULL);

-CREATE DICTIONARY database_for_range_dict.range_dictionary_nullable
+CREATE DICTIONARY range_dictionary_nullable
 (
     CountryID UInt64,
     StartDate Date,

@@ -80,35 +74,33 @@ CREATE DICTIONARY database_for_range_dict.range_dictionary_nullable
     Tax Nullable(Float64) DEFAULT 0.2
 )
 PRIMARY KEY CountryID
-SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 'date_table' DB 'database_for_range_dict'))
+SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 'date_table' DB currentDatabase()))
 LIFETIME(MIN 1 MAX 1000)
 LAYOUT(RANGE_HASHED())
 RANGE(MIN StartDate MAX EndDate);

 SELECT 'Dictionary nullable';
 SELECT 'dictGet';
-SELECT dictGet('database_for_range_dict.range_dictionary_nullable', 'Tax', toUInt64(1), toDate('2019-05-15'));
+SELECT dictGet('range_dictionary_nullable', 'Tax', toUInt64(1), toDate('2019-05-15'));
-SELECT dictGet('database_for_range_dict.range_dictionary_nullable', 'Tax', toUInt64(1), toDate('2019-05-29'));
+SELECT dictGet('range_dictionary_nullable', 'Tax', toUInt64(1), toDate('2019-05-29'));
-SELECT dictGet('database_for_range_dict.range_dictionary_nullable', 'Tax', toUInt64(2), toDate('2019-05-29'));
+SELECT dictGet('range_dictionary_nullable', 'Tax', toUInt64(2), toDate('2019-05-29'));
-SELECT dictGet('database_for_range_dict.range_dictionary_nullable', 'Tax', toUInt64(2), toDate('2019-05-31'));
+SELECT dictGet('range_dictionary_nullable', 'Tax', toUInt64(2), toDate('2019-05-31'));
-SELECT dictGetOrDefault('database_for_range_dict.range_dictionary_nullable', 'Tax', toUInt64(2), toDate('2019-05-31'), 0.4);
+SELECT dictGetOrDefault('range_dictionary_nullable', 'Tax', toUInt64(2), toDate('2019-05-31'), 0.4);
 SELECT 'dictHas';
-SELECT dictHas('database_for_range_dict.range_dictionary_nullable', toUInt64(1), toDate('2019-05-15'));
+SELECT dictHas('range_dictionary_nullable', toUInt64(1), toDate('2019-05-15'));
-SELECT dictHas('database_for_range_dict.range_dictionary_nullable', toUInt64(1), toDate('2019-05-29'));
+SELECT dictHas('range_dictionary_nullable', toUInt64(1), toDate('2019-05-29'));
-SELECT dictHas('database_for_range_dict.range_dictionary_nullable', toUInt64(2), toDate('2019-05-29'));
+SELECT dictHas('range_dictionary_nullable', toUInt64(2), toDate('2019-05-29'));
-SELECT dictHas('database_for_range_dict.range_dictionary_nullable', toUInt64(2), toDate('2019-05-31'));
+SELECT dictHas('range_dictionary_nullable', toUInt64(2), toDate('2019-05-31'));
 SELECT 'select columns from dictionary';
 SELECT 'allColumns';
-SELECT * FROM database_for_range_dict.range_dictionary_nullable ORDER BY CountryID, StartDate, EndDate;
+SELECT * FROM range_dictionary_nullable ORDER BY CountryID, StartDate, EndDate;
 SELECT 'noColumns';
-SELECT 1 FROM database_for_range_dict.range_dictionary_nullable ORDER BY CountryID, StartDate, EndDate;
+SELECT 1 FROM range_dictionary_nullable ORDER BY CountryID, StartDate, EndDate;
 SELECT 'onlySpecificColumns';
-SELECT CountryID, StartDate, Tax FROM database_for_range_dict.range_dictionary_nullable ORDER BY CountryID, StartDate, EndDate;
+SELECT CountryID, StartDate, Tax FROM range_dictionary_nullable ORDER BY CountryID, StartDate, EndDate;
 SELECT 'onlySpecificColumn';
-SELECT Tax FROM database_for_range_dict.range_dictionary_nullable ORDER BY CountryID, StartDate, EndDate;
+SELECT Tax FROM range_dictionary_nullable ORDER BY CountryID, StartDate, EndDate;

-DROP DICTIONARY database_for_range_dict.range_dictionary_nullable;
+DROP DICTIONARY range_dictionary_nullable;
-DROP TABLE database_for_range_dict.date_table;
+DROP TABLE date_table;
-DROP DATABASE database_for_range_dict;

View File

@@ -1,6 +1,7 @@
 #!/usr/bin/env bash
-# Tags: no-fasttest
+# Tags: no-fasttest, no-parallel
 # Tag no-fasttest: needs pv
+# Tag no-parallel: reads from a system table

 CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
 # shellcheck source=../shell_config.sh

@@ -12,9 +13,23 @@ ${CLICKHOUSE_CLIENT} --query "DROP TABLE IF EXISTS t; CREATE TABLE t (x UInt64)
 seq 1 1000 | pv --quiet --rate-limit 400 | ${CLICKHOUSE_CLIENT} --query "INSERT INTO t FORMAT TSV"

 # We check that the value of NetworkReceiveElapsedMicroseconds correctly includes the time spent waiting data from the client.
-${CLICKHOUSE_CLIENT} --query "SYSTEM FLUSH LOGS;
-    WITH ProfileEvents['NetworkReceiveElapsedMicroseconds'] AS time
-    SELECT time >= 1000000 ? 1 : time FROM system.query_log
-        WHERE current_database = currentDatabase() AND query_kind = 'Insert' AND event_date >= yesterday() AND type = 2 ORDER BY event_time DESC LIMIT 1;"
+result=$(${CLICKHOUSE_CLIENT} --query "SYSTEM FLUSH LOGS;
+    WITH ProfileEvents['NetworkReceiveElapsedMicroseconds'] AS elapsed_us
+    SELECT elapsed_us FROM system.query_log
+        WHERE current_database = currentDatabase() AND query_kind = 'Insert' AND event_date >= yesterday() AND type = 'QueryFinish'
+        ORDER BY event_time DESC LIMIT 1;")
+
+elapsed_us=$(echo $result | sed 's/ .*//')
+min_elapsed_us=1000000
+if [[ "$elapsed_us" -ge "$min_elapsed_us" ]]; then
+    echo 1
+else
+    # Print debug info
+    ${CLICKHOUSE_CLIENT} --query "
+        WITH ProfileEvents['NetworkReceiveElapsedMicroseconds'] AS elapsed_us
+        SELECT query_start_time_microseconds, event_time_microseconds, query_duration_ms, elapsed_us, query FROM system.query_log
+        WHERE current_database = currentDatabase() and event_date >= yesterday() AND type = 'QueryFinish' ORDER BY query_start_time;"
+fi

 ${CLICKHOUSE_CLIENT} --query "DROP TABLE t"

View File

@@ -1,7 +1,5 @@
 #!/usr/bin/env bash
-# Tags: no-parallel

 set -e

 CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)

@@ -14,11 +12,11 @@ $CLICKHOUSE_CLIENT -q "SELECT plu(1, 1) SETTINGS allow_experimental_analyzer = 1
 $CLICKHOUSE_CLIENT -q "SELECT uniqExac(1, 1) SETTINGS allow_experimental_analyzer = 1;" 2>&1 \
     | grep "Maybe you meant: \['uniqExact'\]" &>/dev/null;

-$CLICKHOUSE_CLIENT -q "DROP FUNCTION IF EXISTS test_user_defined_function;"
+$CLICKHOUSE_CLIENT -q "DROP FUNCTION IF EXISTS test_user_defined_function_$CLICKHOUSE_DATABASE;"
-$CLICKHOUSE_CLIENT -q "CREATE FUNCTION test_user_defined_function AS x -> x + 1;"
+$CLICKHOUSE_CLIENT -q "CREATE FUNCTION test_user_defined_function_$CLICKHOUSE_DATABASE AS x -> x + 1;"
-$CLICKHOUSE_CLIENT -q "SELECT test_user_defined_functio(1) SETTINGS allow_experimental_analyzer = 1;" 2>&1 \
-    | grep "Maybe you meant: \['test_user_defined_function'\]" &>/dev/null;
+$CLICKHOUSE_CLIENT -q "SELECT test_user_defined_function_${CLICKHOUSE_DATABASE}A(1) SETTINGS allow_experimental_analyzer = 1;" 2>&1 \
+    | grep -E "Maybe you meant: \[.*'test_user_defined_function_$CLICKHOUSE_DATABASE'.*\]" &>/dev/null;
-$CLICKHOUSE_CLIENT -q "DROP FUNCTION test_user_defined_function";
+$CLICKHOUSE_CLIENT -q "DROP FUNCTION test_user_defined_function_$CLICKHOUSE_DATABASE";

 $CLICKHOUSE_CLIENT -q "WITH (x -> x + 1) AS lambda_function SELECT lambda_functio(1) SETTINGS allow_experimental_analyzer = 1;" 2>&1 \
     | grep "Maybe you meant: \['lambda_function'\]" &>/dev/null;

View File

@@ -11,7 +11,12 @@ CREATE VIEW number_view as SELECT * FROM numbers(10) as tb;
 CREATE MATERIALIZED VIEW null_mv Engine = Log AS SELECT * FROM null_table LEFT JOIN number_view as tb USING number;
 CREATE TABLE null_table_buffer (number UInt64) ENGINE = Buffer(currentDatabase(), null_table, 1, 1, 1, 100, 200, 10000, 20000);
 INSERT INTO null_table_buffer VALUES (1);
-SELECT sleep(3) FORMAT Null;
+-- OPTIMIZE query should flush Buffer table, but still it is not guaranteed
+-- (see the comment StorageBuffer::optimize)
+-- But the combination of OPTIMIZE + sleep + OPTIMIZE should be enough.
+OPTIMIZE TABLE null_table_buffer;
+SELECT sleep(1) FORMAT Null;
+OPTIMIZE TABLE null_table_buffer;
 -- Insert about should've landed into `null_mv`
 SELECT count() FROM null_mv;
 1

View File

@@ -13,7 +13,13 @@ CREATE MATERIALIZED VIEW null_mv Engine = Log AS SELECT * FROM null_table LEFT J
 CREATE TABLE null_table_buffer (number UInt64) ENGINE = Buffer(currentDatabase(), null_table, 1, 1, 1, 100, 200, 10000, 20000);
 INSERT INTO null_table_buffer VALUES (1);
-SELECT sleep(3) FORMAT Null;
+
+-- OPTIMIZE query should flush Buffer table, but still it is not guaranteed
+-- (see the comment StorageBuffer::optimize)
+-- But the combination of OPTIMIZE + sleep + OPTIMIZE should be enough.
+OPTIMIZE TABLE null_table_buffer;
+SELECT sleep(1) FORMAT Null;
+OPTIMIZE TABLE null_table_buffer;
+
 -- Insert about should've landed into `null_mv`
 SELECT count() FROM null_mv;

View File

@ -13,20 +13,9 @@ $CLICKHOUSE_CLIENT -nm -q "
CREATE TABLE $database_name.02911_backup_restore_keeper_map3 (key UInt64, value String) Engine=KeeperMap('/' || currentDatabase() || '/test02911_different') PRIMARY KEY key; CREATE TABLE $database_name.02911_backup_restore_keeper_map3 (key UInt64, value String) Engine=KeeperMap('/' || currentDatabase() || '/test02911_different') PRIMARY KEY key;
" "
# KeeperMap table engine doesn't have internal retries for interaction with Keeper. Do it on our own, otherwise tests with overloaded server can be flaky. $CLICKHOUSE_CLIENT -nm -q "INSERT INTO $database_name.02911_backup_restore_keeper_map2 SELECT number, 'test' || toString(number) FROM system.numbers LIMIT 5000;"
while true
do
$CLICKHOUSE_CLIENT -nm -q "INSERT INTO $database_name.02911_backup_restore_keeper_map2 SELECT number, 'test' || toString(number) FROM system.numbers LIMIT 5000;
" 2>&1 | grep -q "KEEPER_EXCEPTION" && sleep 1 && continue
break
done
while true
do
$CLICKHOUSE_CLIENT -nm -q "INSERT INTO $database_name.02911_backup_restore_keeper_map3 SELECT number, 'test' || toString(number) FROM system.numbers LIMIT 3000;
" 2>&1 | grep -q "KEEPER_EXCEPTION" && sleep 1 && continue
break
done
$CLICKHOUSE_CLIENT -nm -q "INSERT INTO $database_name.02911_backup_restore_keeper_map3 SELECT number, 'test' || toString(number) FROM system.numbers LIMIT 3000;"
backup_path="$database_name" backup_path="$database_name"
for i in $(seq 1 3); do for i in $(seq 1 3); do
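The loop this diff removes is a generic retry-on-transient-error shell idiom. Stripped of ClickHouse specifics, it looks like the sketch below; `attempt` and the failure count are hypothetical stand-ins for the real `$CLICKHOUSE_CLIENT` call, made up for illustration:

```shell
# Retry pattern: keep retrying while the command reports a transient
# Keeper error, proceed as soon as it succeeds.
# `attempt` simulates the client call: it fails twice with
# KEEPER_EXCEPTION, then succeeds on the third try.
tries_file=$(mktemp)
echo 0 > "$tries_file"
attempt() {
    t=$(($(cat "$tries_file") + 1))
    echo "$t" > "$tries_file"
    if [ "$t" -lt 3 ]; then
        echo "KEEPER_EXCEPTION"    # simulated transient error
        return 1
    fi
    echo "OK"
}
while true
do
    attempt 2>&1 | grep -q "KEEPER_EXCEPTION" && sleep 1 && continue
    break
done
n=$(cat "$tries_file")
rm -f "$tries_file"
echo "succeeded after $n tries"
```

Because the pipeline runs `attempt` in a subshell, the counter lives in a temp file rather than a shell variable; the real test needed no counter at all, only the `grep`/`sleep`/`continue` chain.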


@@ -11,23 +11,28 @@ allow_create_index_without_type 0 allow_create_index_without_type 0
allow_custom_error_code_in_throwif 0 allow_custom_error_code_in_throwif 0
allow_ddl 1 allow_ddl 1
allow_deprecated_database_ordinary 0 allow_deprecated_database_ordinary 0
allow_deprecated_error_prone_window_functions 0
allow_deprecated_snowflake_conversion_functions 0
allow_deprecated_syntax_for_merge_tree 0 allow_deprecated_syntax_for_merge_tree 0
allow_distributed_ddl 1 allow_distributed_ddl 1
allow_drop_detached 0 allow_drop_detached 0
allow_execute_multiif_columnar 1 allow_execute_multiif_columnar 1
allow_experimental_alter_materialized_view_structure 1 allow_experimental_alter_materialized_view_structure 1
allow_experimental_analyzer 0 allow_experimental_analyzer 1
allow_experimental_annoy_index 0 allow_experimental_annoy_index 0
allow_experimental_bigint_types 1 allow_experimental_bigint_types 1
allow_experimental_codecs 0 allow_experimental_codecs 0
allow_experimental_database_atomic 1 allow_experimental_database_atomic 1
allow_experimental_database_materialized_mysql 0 allow_experimental_database_materialized_mysql 0
allow_experimental_database_materialized_postgresql 0 allow_experimental_database_materialized_postgresql 0
allow_experimental_database_replicated 0 allow_experimental_database_replicated 1
allow_experimental_dynamic_type 0
allow_experimental_full_text_index 0
allow_experimental_funnel_functions 0 allow_experimental_funnel_functions 0
allow_experimental_geo_types 1 allow_experimental_geo_types 1
allow_experimental_hash_functions 0 allow_experimental_hash_functions 0
allow_experimental_inverted_index 0 allow_experimental_inverted_index 0
allow_experimental_join_condition 0
allow_experimental_lightweight_delete 1 allow_experimental_lightweight_delete 1
allow_experimental_live_view 0 allow_experimental_live_view 0
allow_experimental_map_type 1 allow_experimental_map_type 1
@@ -40,12 +45,15 @@ allow_experimental_query_cache 1
allow_experimental_query_deduplication 0 allow_experimental_query_deduplication 0
allow_experimental_refreshable_materialized_view 0 allow_experimental_refreshable_materialized_view 0
allow_experimental_s3queue 1 allow_experimental_s3queue 1
allow_experimental_shared_merge_tree 0 allow_experimental_shared_merge_tree 1
allow_experimental_statistic 0
allow_experimental_statistics 0 allow_experimental_statistics 0
allow_experimental_undrop_table_query 1 allow_experimental_undrop_table_query 1
allow_experimental_usearch_index 0 allow_experimental_usearch_index 0
allow_experimental_variant_type 0
allow_experimental_window_functions 1 allow_experimental_window_functions 1
allow_experimental_window_view 0 allow_experimental_window_view 0
allow_get_client_http_header 0
allow_hyperscan 1 allow_hyperscan 1
allow_introspection_functions 0 allow_introspection_functions 0
allow_named_collection_override_by_default 1 allow_named_collection_override_by_default 1
@@ -58,17 +66,21 @@ allow_prefetched_read_pool_for_remote_filesystem 1
allow_push_predicate_when_subquery_contains_with 1 allow_push_predicate_when_subquery_contains_with 1
allow_settings_after_format_in_insert 0 allow_settings_after_format_in_insert 0
allow_simdjson 1 allow_simdjson 1
allow_statistic_optimize 0
allow_statistics_optimize 0 allow_statistics_optimize 0
allow_suspicious_codecs 0 allow_suspicious_codecs 0
allow_suspicious_fixed_string_types 0 allow_suspicious_fixed_string_types 0
allow_suspicious_indices 0 allow_suspicious_indices 0
allow_suspicious_low_cardinality_types 0 allow_suspicious_low_cardinality_types 0
allow_suspicious_primary_key 0
allow_suspicious_ttl_expressions 0 allow_suspicious_ttl_expressions 0
allow_suspicious_variant_types 0
allow_unrestricted_reads_from_keeper 0 allow_unrestricted_reads_from_keeper 0
alter_move_to_space_execute_async 0 alter_move_to_space_execute_async 0
alter_partition_verbose_result 0 alter_partition_verbose_result 0
alter_sync 1 alter_sync 1
analyze_index_with_space_filling_curves 1 analyze_index_with_space_filling_curves 1
analyzer_compatibility_join_using_top_level_identifier 0
annoy_index_search_k_nodes -1 annoy_index_search_k_nodes -1
any_join_distinct_right_table_keys 0 any_join_distinct_right_table_keys 0
apply_deleted_mask 1 apply_deleted_mask 1
@@ -76,20 +88,42 @@ apply_mutations_on_fly 0
asterisk_include_alias_columns 0 asterisk_include_alias_columns 0
asterisk_include_materialized_columns 0 asterisk_include_materialized_columns 0
async_insert 0 async_insert 0
async_insert_busy_timeout_decrease_rate 0.2
async_insert_busy_timeout_increase_rate 0.2
async_insert_busy_timeout_max_ms 200
async_insert_busy_timeout_min_ms 50
async_insert_busy_timeout_ms 200 async_insert_busy_timeout_ms 200
async_insert_cleanup_timeout_ms 1000 async_insert_cleanup_timeout_ms 1000
async_insert_deduplicate 0 async_insert_deduplicate 0
async_insert_max_data_size 1000000 async_insert_max_data_size 10485760
async_insert_max_query_number 450 async_insert_max_query_number 450
async_insert_poll_timeout_ms 10
async_insert_stale_timeout_ms 0 async_insert_stale_timeout_ms 0
async_insert_threads 16 async_insert_threads 16
async_insert_use_adaptive_busy_timeout 1
async_query_sending_for_remote 1 async_query_sending_for_remote 1
async_socket_for_remote 1 async_socket_for_remote 1
azure_allow_parallel_part_upload 1
azure_create_new_file_on_insert 0 azure_create_new_file_on_insert 0
azure_ignore_file_doesnt_exist 0
azure_list_object_keys_size 1000 azure_list_object_keys_size 1000
azure_max_blocks_in_multipart_upload 50000
azure_max_inflight_parts_for_one_file 20
azure_max_single_part_copy_size 268435456
azure_max_single_part_upload_size 104857600 azure_max_single_part_upload_size 104857600
azure_max_single_read_retries 4 azure_max_single_read_retries 4
azure_max_unexpected_write_error_retries 4
azure_max_upload_part_size 5368709120
azure_min_upload_part_size 16777216
azure_sdk_max_retries 10
azure_sdk_retry_initial_backoff_ms 10
azure_sdk_retry_max_backoff_ms 1000
azure_skip_empty_files 0
azure_strict_upload_part_size 0
azure_throw_on_zero_files_match 0
azure_truncate_on_insert 0 azure_truncate_on_insert 0
azure_upload_part_size_multiply_factor 2
azure_upload_part_size_multiply_parts_count_threshold 500
background_buffer_flush_schedule_pool_size 16 background_buffer_flush_schedule_pool_size 16
background_common_pool_size 8 background_common_pool_size 8
background_distributed_schedule_pool_size 16 background_distributed_schedule_pool_size 16
@@ -107,6 +141,7 @@ backup_restore_keeper_max_retries 20
backup_restore_keeper_retry_initial_backoff_ms 100 backup_restore_keeper_retry_initial_backoff_ms 100
backup_restore_keeper_retry_max_backoff_ms 5000 backup_restore_keeper_retry_max_backoff_ms 5000
backup_restore_keeper_value_max_size 1048576 backup_restore_keeper_value_max_size 1048576
backup_restore_s3_retry_attempts 1000
backup_threads 16 backup_threads 16
bool_false_representation false bool_false_representation false
bool_true_representation true bool_true_representation true
@@ -115,6 +150,7 @@ calculate_text_stack_trace 1
cancel_http_readonly_queries_on_client_close 0 cancel_http_readonly_queries_on_client_close 0
cast_ipv4_ipv6_default_on_conversion_error 0 cast_ipv4_ipv6_default_on_conversion_error 0
cast_keep_nullable 0 cast_keep_nullable 0
cast_string_to_dynamic_use_inference 0
check_query_single_value_result 1 check_query_single_value_result 1
check_referential_table_dependencies 0 check_referential_table_dependencies 0
check_table_dependencies 1 check_table_dependencies 1
@@ -123,6 +159,7 @@ cloud_mode 0
cloud_mode_engine 1 cloud_mode_engine 1
cluster_for_parallel_replicas cluster_for_parallel_replicas
collect_hash_table_stats_during_aggregation 1 collect_hash_table_stats_during_aggregation 1
collect_hash_table_stats_during_joins 1
column_names_for_schema_inference column_names_for_schema_inference
compatibility compatibility
compatibility_ignore_auto_increment_in_create_table 0 compatibility_ignore_auto_increment_in_create_table 0
@@ -141,9 +178,12 @@ count_distinct_optimization 0
create_index_ignore_unique 0 create_index_ignore_unique 0
create_replicated_merge_tree_fault_injection_probability 0 create_replicated_merge_tree_fault_injection_probability 0
create_table_empty_primary_key_by_default 0 create_table_empty_primary_key_by_default 0
cross_join_min_bytes_to_compress 1073741824
cross_join_min_rows_to_compress 10000000
cross_to_inner_join_rewrite 1 cross_to_inner_join_rewrite 1
data_type_default_nullable 0 data_type_default_nullable 0
database_atomic_wait_for_drop_and_detach_synchronously 0 database_atomic_wait_for_drop_and_detach_synchronously 0
database_replicated_allow_heavy_create 0
database_replicated_allow_only_replicated_engine 0 database_replicated_allow_only_replicated_engine 0
database_replicated_allow_replicated_engine_arguments 1 database_replicated_allow_replicated_engine_arguments 1
database_replicated_always_detach_permanently 0 database_replicated_always_detach_permanently 0
@@ -156,15 +196,19 @@ date_time_overflow_behavior ignore
decimal_check_overflow 1 decimal_check_overflow 1
deduplicate_blocks_in_dependent_materialized_views 0 deduplicate_blocks_in_dependent_materialized_views 0
default_database_engine Atomic default_database_engine Atomic
default_materialized_view_sql_security DEFINER
default_max_bytes_in_join 1000000000 default_max_bytes_in_join 1000000000
default_normal_view_sql_security INVOKER
default_table_engine None default_table_engine MergeTree
default_temporary_table_engine Memory default_temporary_table_engine Memory
default_view_definer CURRENT_USER
describe_compact_output 0 describe_compact_output 0
describe_extend_object_types 0 describe_extend_object_types 0
describe_include_subcolumns 0 describe_include_subcolumns 0
describe_include_virtual_columns 0 describe_include_virtual_columns 0
dialect clickhouse dialect clickhouse
dictionary_use_async_executor 0 dictionary_use_async_executor 0
dictionary_validate_primary_key_type 0
distinct_overflow_mode throw distinct_overflow_mode throw
distributed_aggregation_memory_efficient 1 distributed_aggregation_memory_efficient 1
distributed_background_insert_batch 0 distributed_background_insert_batch 0
@@ -182,6 +226,7 @@ distributed_directory_monitor_sleep_time_ms 100
distributed_directory_monitor_split_batch_on_failure 0 distributed_directory_monitor_split_batch_on_failure 0
distributed_foreground_insert 0 distributed_foreground_insert 0
distributed_group_by_no_merge 0 distributed_group_by_no_merge 0
distributed_insert_skip_read_only_replicas 0
distributed_product_mode deny distributed_product_mode deny
distributed_push_down_limit 1 distributed_push_down_limit 1
distributed_replica_error_cap 1000 distributed_replica_error_cap 1000
@@ -191,6 +236,7 @@ do_not_merge_across_partitions_select_final 0
drain_timeout 3 drain_timeout 3
empty_result_for_aggregation_by_constant_keys_on_empty_set 1 empty_result_for_aggregation_by_constant_keys_on_empty_set 1
empty_result_for_aggregation_by_empty_set 0 empty_result_for_aggregation_by_empty_set 0
enable_blob_storage_log 1
enable_debug_queries 0 enable_debug_queries 0
enable_deflate_qpl_codec 0 enable_deflate_qpl_codec 0
enable_early_constant_folding 1 enable_early_constant_folding 1
@@ -205,6 +251,7 @@ enable_job_stack_trace 0
enable_lightweight_delete 1 enable_lightweight_delete 1
enable_memory_bound_merging_of_aggregation_results 1 enable_memory_bound_merging_of_aggregation_results 1
enable_multiple_prewhere_read_steps 1 enable_multiple_prewhere_read_steps 1
enable_named_columns_in_function_tuple 1
enable_optimize_predicate_expression 1 enable_optimize_predicate_expression 1
enable_optimize_predicate_expression_to_final_subquery 1 enable_optimize_predicate_expression_to_final_subquery 1
enable_order_by_all 1 enable_order_by_all 1
@@ -216,7 +263,9 @@ enable_sharing_sets_for_mutations 1
enable_software_prefetch_in_aggregation 1 enable_software_prefetch_in_aggregation 1
enable_unaligned_array_join 0 enable_unaligned_array_join 0
enable_url_encoding 1 enable_url_encoding 1
enable_vertical_final 1
enable_writes_to_query_cache 1 enable_writes_to_query_cache 1
enable_zstd_qat_codec 0
engine_file_allow_create_multiple_files 0 engine_file_allow_create_multiple_files 0
engine_file_empty_if_not_exists 0 engine_file_empty_if_not_exists 0
engine_file_skip_empty_files 0 engine_file_skip_empty_files 0
@@ -231,10 +280,12 @@ external_storage_max_read_rows 0
external_storage_rw_timeout_sec 300 external_storage_rw_timeout_sec 300
external_table_functions_use_nulls 1 external_table_functions_use_nulls 1
external_table_strict_query 0 external_table_strict_query 0
extract_key_value_pairs_max_pairs_per_row 1000
extract_kvp_max_pairs_per_row 1000 extract_kvp_max_pairs_per_row 1000
extremes 0 extremes 0
fallback_to_stale_replicas_for_distributed_queries 1 fallback_to_stale_replicas_for_distributed_queries 1
filesystem_cache_max_download_size 137438953472 filesystem_cache_max_download_size 137438953472
filesystem_cache_reserve_space_wait_lock_timeout_milliseconds 1000
filesystem_cache_segments_batch_size 20 filesystem_cache_segments_batch_size 20
filesystem_prefetch_max_memory_usage 1073741824 filesystem_prefetch_max_memory_usage 1073741824
filesystem_prefetch_min_bytes_for_single_read_task 2097152 filesystem_prefetch_min_bytes_for_single_read_task 2097152
@@ -278,7 +329,9 @@ format_regexp_escaping_rule Raw
format_regexp_skip_unmatched 0 format_regexp_skip_unmatched 0
format_schema format_schema
format_template_resultset format_template_resultset
format_template_resultset_format
format_template_row format_template_row
format_template_row_format
format_template_rows_between_delimiter \n format_template_rows_between_delimiter \n
format_tsv_null_representation \\N format_tsv_null_representation \\N
formatdatetime_f_prints_single_zero 0 formatdatetime_f_prints_single_zero 0
@@ -288,8 +341,11 @@ fsync_metadata 1
function_implementation function_implementation
function_json_value_return_type_allow_complex 0 function_json_value_return_type_allow_complex 0
function_json_value_return_type_allow_nullable 0 function_json_value_return_type_allow_nullable 0
function_locate_has_mysql_compatible_argument_order 1
function_range_max_elements_in_block 500000000 function_range_max_elements_in_block 500000000
function_sleep_max_microseconds_per_block 3000000 function_sleep_max_microseconds_per_block 3000000
function_visible_width_behavior 1
geo_distance_returns_float64_on_float64_arguments 1
glob_expansion_max_elements 1000 glob_expansion_max_elements 1000
grace_hash_join_initial_buckets 1 grace_hash_join_initial_buckets 1
grace_hash_join_max_buckets 1024 grace_hash_join_max_buckets 1024
@@ -300,8 +356,10 @@ group_by_use_nulls 0
handle_kafka_error_mode default handle_kafka_error_mode default
handshake_timeout_ms 10000 handshake_timeout_ms 10000
hdfs_create_new_file_on_insert 0 hdfs_create_new_file_on_insert 0
hdfs_ignore_file_doesnt_exist 0
hdfs_replication 0 hdfs_replication 0
hdfs_skip_empty_files 0 hdfs_skip_empty_files 0
hdfs_throw_on_zero_files_match 0
hdfs_truncate_on_insert 0 hdfs_truncate_on_insert 0
hedged_connection_timeout_ms 50 hedged_connection_timeout_ms 50
hsts_max_age 0 hsts_max_age 0
@@ -326,10 +384,14 @@ http_skip_not_found_url_for_globs 1
http_wait_end_of_query 0 http_wait_end_of_query 0
http_write_exception_in_output_format 1 http_write_exception_in_output_format 1
http_zlib_compression_level 3 http_zlib_compression_level 3
iceberg_engine_ignore_schema_evolution 0
idle_connection_timeout 3600 idle_connection_timeout 3600
ignore_cold_parts_seconds 0 ignore_cold_parts_seconds 0
ignore_data_skipping_indices ignore_data_skipping_indices
ignore_drop_queries_probability 0
ignore_materialized_views_with_dropped_target_table 0
ignore_on_cluster_for_replicated_access_entities_queries 0 ignore_on_cluster_for_replicated_access_entities_queries 0
ignore_on_cluster_for_replicated_named_collections_queries 0
ignore_on_cluster_for_replicated_udf_queries 0 ignore_on_cluster_for_replicated_udf_queries 0
implicit_transaction 0 implicit_transaction 0
input_format_allow_errors_num 0 input_format_allow_errors_num 0
@@ -341,12 +403,14 @@ input_format_arrow_import_nested 0
input_format_arrow_skip_columns_with_unsupported_types_in_schema_inference 0 input_format_arrow_skip_columns_with_unsupported_types_in_schema_inference 0
input_format_avro_allow_missing_fields 0 input_format_avro_allow_missing_fields 0
input_format_avro_null_as_default 0 input_format_avro_null_as_default 0
input_format_binary_decode_types_in_binary_format 0
input_format_bson_skip_fields_with_unsupported_types_in_schema_inference 0 input_format_bson_skip_fields_with_unsupported_types_in_schema_inference 0
input_format_capn_proto_skip_fields_with_unsupported_types_in_schema_inference 0 input_format_capn_proto_skip_fields_with_unsupported_types_in_schema_inference 0
input_format_csv_allow_cr_end_of_line 0 input_format_csv_allow_cr_end_of_line 0
input_format_csv_allow_variable_number_of_columns 0 input_format_csv_allow_variable_number_of_columns 0
input_format_csv_allow_whitespace_or_tab_as_delimiter 0 input_format_csv_allow_whitespace_or_tab_as_delimiter 0
input_format_csv_arrays_as_nested_csv 0 input_format_csv_arrays_as_nested_csv 0
input_format_csv_deserialize_separate_columns_into_tuple 1
input_format_csv_detect_header 1 input_format_csv_detect_header 1
input_format_csv_empty_as_default 1 input_format_csv_empty_as_default 1
input_format_csv_enum_as_number 0 input_format_csv_enum_as_number 0
@@ -354,29 +418,37 @@ input_format_csv_skip_first_lines 0
input_format_csv_skip_trailing_empty_lines 0 input_format_csv_skip_trailing_empty_lines 0
input_format_csv_trim_whitespaces 1 input_format_csv_trim_whitespaces 1
input_format_csv_try_infer_numbers_from_strings 0 input_format_csv_try_infer_numbers_from_strings 0
input_format_csv_try_infer_strings_from_quoted_tuples 1
input_format_csv_use_best_effort_in_schema_inference 1 input_format_csv_use_best_effort_in_schema_inference 1
input_format_csv_use_default_on_bad_values 0 input_format_csv_use_default_on_bad_values 0
input_format_custom_allow_variable_number_of_columns 0 input_format_custom_allow_variable_number_of_columns 0
input_format_custom_detect_header 1 input_format_custom_detect_header 1
input_format_custom_skip_trailing_empty_lines 0 input_format_custom_skip_trailing_empty_lines 0
input_format_defaults_for_omitted_fields 1 input_format_defaults_for_omitted_fields 1
input_format_force_null_for_omitted_fields 0
input_format_hive_text_allow_variable_number_of_columns 1
input_format_hive_text_collection_items_delimiter  input_format_hive_text_collection_items_delimiter 
input_format_hive_text_fields_delimiter  input_format_hive_text_fields_delimiter 
input_format_hive_text_map_keys_delimiter  input_format_hive_text_map_keys_delimiter 
input_format_import_nested_json 0 input_format_import_nested_json 0
input_format_ipv4_default_on_conversion_error 0 input_format_ipv4_default_on_conversion_error 0
input_format_ipv6_default_on_conversion_error 0 input_format_ipv6_default_on_conversion_error 0
input_format_json_case_insensitive_column_matching 0
input_format_json_compact_allow_variable_number_of_columns 0 input_format_json_compact_allow_variable_number_of_columns 0
input_format_json_defaults_for_missing_elements_in_named_tuple 1 input_format_json_defaults_for_missing_elements_in_named_tuple 1
input_format_json_ignore_unknown_keys_in_named_tuple 1 input_format_json_ignore_unknown_keys_in_named_tuple 1
input_format_json_ignore_unnecessary_fields 1
input_format_json_infer_incomplete_types_as_strings 1 input_format_json_infer_incomplete_types_as_strings 1
input_format_json_named_tuples_as_objects 1 input_format_json_named_tuples_as_objects 1
input_format_json_read_arrays_as_strings 1 input_format_json_read_arrays_as_strings 1
input_format_json_read_bools_as_numbers 1 input_format_json_read_bools_as_numbers 1
input_format_json_read_bools_as_strings 1
input_format_json_read_numbers_as_strings 1 input_format_json_read_numbers_as_strings 1
input_format_json_read_objects_as_strings 1 input_format_json_read_objects_as_strings 1
input_format_json_throw_on_bad_escape_sequence 1
input_format_json_try_infer_named_tuples_from_objects 1 input_format_json_try_infer_named_tuples_from_objects 1
input_format_json_try_infer_numbers_from_strings 0 input_format_json_try_infer_numbers_from_strings 0
input_format_json_use_string_type_for_ambiguous_paths_in_named_tuples_inference_from_objects 0
input_format_json_validate_types_from_metadata 1 input_format_json_validate_types_from_metadata 1
input_format_max_bytes_to_read_for_schema_inference 33554432 input_format_max_bytes_to_read_for_schema_inference 33554432
input_format_max_rows_to_read_for_schema_inference 25000 input_format_max_rows_to_read_for_schema_inference 25000
@@ -384,11 +456,13 @@ input_format_msgpack_number_of_columns 0
input_format_mysql_dump_map_column_names 1 input_format_mysql_dump_map_column_names 1
input_format_mysql_dump_table_name input_format_mysql_dump_table_name
input_format_native_allow_types_conversion 1 input_format_native_allow_types_conversion 1
input_format_native_decode_types_in_binary_format 0
input_format_null_as_default 1 input_format_null_as_default 1
input_format_orc_allow_missing_columns 1 input_format_orc_allow_missing_columns 1
input_format_orc_case_insensitive_column_matching 0 input_format_orc_case_insensitive_column_matching 0
input_format_orc_filter_push_down 1 input_format_orc_filter_push_down 1
input_format_orc_import_nested 0 input_format_orc_import_nested 0
input_format_orc_reader_time_zone_name GMT
input_format_orc_row_batch_size 100000 input_format_orc_row_batch_size 100000
input_format_orc_skip_columns_with_unsupported_types_in_schema_inference 0 input_format_orc_skip_columns_with_unsupported_types_in_schema_inference 0
input_format_orc_use_fast_decoder 1 input_format_orc_use_fast_decoder 1
@@ -398,17 +472,21 @@ input_format_parquet_case_insensitive_column_matching 0
input_format_parquet_filter_push_down 1 input_format_parquet_filter_push_down 1
input_format_parquet_import_nested 0 input_format_parquet_import_nested 0
input_format_parquet_local_file_min_bytes_for_seek 8192 input_format_parquet_local_file_min_bytes_for_seek 8192
input_format_parquet_max_block_size 8192 input_format_parquet_max_block_size 65409
input_format_parquet_prefer_block_bytes 16744704
input_format_parquet_preserve_order 0 input_format_parquet_preserve_order 0
input_format_parquet_skip_columns_with_unsupported_types_in_schema_inference 0 input_format_parquet_skip_columns_with_unsupported_types_in_schema_inference 0
input_format_parquet_use_native_reader 0
input_format_protobuf_flatten_google_wrappers 0 input_format_protobuf_flatten_google_wrappers 0
input_format_protobuf_skip_fields_with_unsupported_types_in_schema_inference 0 input_format_protobuf_skip_fields_with_unsupported_types_in_schema_inference 0
input_format_record_errors_file_path input_format_record_errors_file_path
input_format_skip_unknown_fields 1 input_format_skip_unknown_fields 1
input_format_try_infer_dates 1 input_format_try_infer_dates 1
input_format_try_infer_datetimes 1 input_format_try_infer_datetimes 1
input_format_try_infer_exponent_floats 0
input_format_try_infer_integers 1 input_format_try_infer_integers 1
input_format_tsv_allow_variable_number_of_columns 0 input_format_tsv_allow_variable_number_of_columns 0
input_format_tsv_crlf_end_of_line 0
input_format_tsv_detect_header 1 input_format_tsv_detect_header 1
input_format_tsv_empty_as_default 0 input_format_tsv_empty_as_default 0
input_format_tsv_enum_as_number 0 input_format_tsv_enum_as_number 0
@@ -450,7 +528,12 @@ joined_subquery_requires_alias 1
kafka_disable_num_consumers_limit 0 kafka_disable_num_consumers_limit 0
kafka_max_wait_ms 5000 kafka_max_wait_ms 5000
keeper_map_strict_mode 0 keeper_map_strict_mode 0
keeper_max_retries 10
keeper_retry_initial_backoff_ms 100
keeper_retry_max_backoff_ms 5000
legacy_column_name_of_tuple_literal 0 legacy_column_name_of_tuple_literal 0
lightweight_deletes_sync 2
lightweight_mutation_projection_mode throw
limit 0 limit 0
live_view_heartbeat_interval 15 live_view_heartbeat_interval 15
load_balancing random load_balancing random
@@ -461,7 +544,7 @@ local_filesystem_read_prefetch 0
lock_acquire_timeout 120 lock_acquire_timeout 120
log_comment log_comment
log_formatted_queries 0 log_formatted_queries 0
log_processors_profiles 0 log_processors_profiles 1
log_profile_events 1 log_profile_events 1
log_queries 1 log_queries 1
log_queries_cut_to_length 100000 log_queries_cut_to_length 100000
@@ -474,6 +557,8 @@ log_query_views 1
low_cardinality_allow_in_native_format 1 low_cardinality_allow_in_native_format 1
low_cardinality_max_dictionary_size 8192 low_cardinality_max_dictionary_size 8192
low_cardinality_use_single_dictionary_for_part 0 low_cardinality_use_single_dictionary_for_part 0
materialize_skip_indexes_on_insert 1
materialize_statistics_on_insert 1
materialize_ttl_after_modify 1 materialize_ttl_after_modify 1
materialized_views_ignore_errors 0 materialized_views_ignore_errors 0
max_alter_threads \'auto(16)\' max_alter_threads \'auto(16)\'
@@ -501,6 +586,7 @@ max_distributed_depth 5
max_download_buffer_size 10485760 max_download_buffer_size 10485760
max_download_threads 4 max_download_threads 4
max_entries_for_hash_table_stats 10000 max_entries_for_hash_table_stats 10000
max_estimated_execution_time 0
max_execution_speed 0 max_execution_speed 0
max_execution_speed_bytes 0 max_execution_speed_bytes 0
max_execution_time 0 max_execution_time 0
@@ -528,7 +614,9 @@ max_network_bandwidth_for_user 0
max_network_bytes 0 max_network_bytes 0
max_number_of_partitions_for_independent_aggregation 128 max_number_of_partitions_for_independent_aggregation 128
max_parallel_replicas 1 max_parallel_replicas 1
max_parser_backtracks 1000000
max_parser_depth 1000 max_parser_depth 1000
max_parsing_threads \'auto(16)\'
max_partition_size_to_drop 50000000000 max_partition_size_to_drop 50000000000
max_partitions_per_insert_block 100 max_partitions_per_insert_block 100
max_partitions_to_read -1 max_partitions_to_read -1
@@ -537,6 +625,7 @@ max_query_size 262144
max_read_buffer_size 1048576 max_read_buffer_size 1048576
max_read_buffer_size_local_fs 131072 max_read_buffer_size_local_fs 131072
max_read_buffer_size_remote_fs 0 max_read_buffer_size_remote_fs 0
max_recursive_cte_evaluation_depth 1000
max_remote_read_network_bandwidth 0 max_remote_read_network_bandwidth 0
max_remote_read_network_bandwidth_for_server 0 max_remote_read_network_bandwidth_for_server 0
max_remote_write_network_bandwidth 0 max_remote_write_network_bandwidth 0
@@ -549,7 +638,7 @@ max_result_rows 0
max_rows_in_distinct 0 max_rows_in_distinct 0
max_rows_in_join 0 max_rows_in_join 0
max_rows_in_set 0 max_rows_in_set 0
max_rows_in_set_to_optimize_join 100000 max_rows_in_set_to_optimize_join 0
max_rows_to_group_by 0 max_rows_to_group_by 0
max_rows_to_read 0 max_rows_to_read 0
max_rows_to_read_leaf 0 max_rows_to_read_leaf 0
@@ -557,6 +646,7 @@ max_rows_to_sort 0
max_rows_to_transfer 0 max_rows_to_transfer 0
max_sessions_for_user 0 max_sessions_for_user 0
max_size_to_preallocate_for_aggregation 100000000 max_size_to_preallocate_for_aggregation 100000000
max_size_to_preallocate_for_joins 100000000
max_streams_for_merge_tree_reading 0 max_streams_for_merge_tree_reading 0
max_streams_multiplier_for_merge_tables 5 max_streams_multiplier_for_merge_tables 5
max_streams_to_max_threads_ratio 1 max_streams_to_max_threads_ratio 1
@@ -592,6 +682,7 @@ merge_tree_min_bytes_per_task_for_remote_reading 4194304
merge_tree_min_rows_for_concurrent_read 163840 merge_tree_min_rows_for_concurrent_read 163840
merge_tree_min_rows_for_concurrent_read_for_remote_filesystem 163840 merge_tree_min_rows_for_concurrent_read_for_remote_filesystem 163840
merge_tree_min_rows_for_seek 0 merge_tree_min_rows_for_seek 0
merge_tree_read_split_ranges_into_intersecting_and_non_intersecting_injection_probability 0
merge_tree_use_const_size_tasks_for_remote_reading 1 merge_tree_use_const_size_tasks_for_remote_reading 1
metrics_perf_events_enabled 0 metrics_perf_events_enabled 0
metrics_perf_events_list metrics_perf_events_list
@@ -604,6 +695,8 @@ min_count_to_compile_expression 3
min_count_to_compile_sort_description 3 min_count_to_compile_sort_description 3
min_execution_speed 0 min_execution_speed 0
min_execution_speed_bytes 0 min_execution_speed_bytes 0
min_external_table_block_size_bytes 268402944
min_external_table_block_size_rows 1048449
min_free_disk_space_for_temporary_data 0 min_free_disk_space_for_temporary_data 0
min_hit_rate_to_use_consecutive_keys_optimization 0.5 min_hit_rate_to_use_consecutive_keys_optimization 0.5
min_insert_block_size_bytes 268402944 min_insert_block_size_bytes 268402944
@@ -619,8 +712,8 @@ mutations_execute_subqueries_on_initiator 0
mutations_max_literal_size_to_replace 16384 mutations_max_literal_size_to_replace 16384
mutations_sync 0 mutations_sync 0
mysql_datatypes_support_level mysql_datatypes_support_level
mysql_map_fixed_string_to_text_in_show_columns 0 mysql_map_fixed_string_to_text_in_show_columns 1
mysql_map_string_to_text_in_show_columns 0 mysql_map_string_to_text_in_show_columns 1
mysql_max_rows_to_insert 65536 mysql_max_rows_to_insert 65536
network_compression_method LZ4 network_compression_method LZ4
network_zstd_compression_level 1 network_zstd_compression_level 1
@@ -647,6 +740,7 @@ optimize_group_by_constant_keys 1
optimize_group_by_function_keys 1 optimize_group_by_function_keys 1
optimize_if_chain_to_multiif 0 optimize_if_chain_to_multiif 0
optimize_if_transform_strings_to_enum 0 optimize_if_transform_strings_to_enum 0
optimize_injective_functions_in_group_by 1
optimize_injective_functions_inside_uniq 1 optimize_injective_functions_inside_uniq 1
optimize_min_equality_disjunction_chain_length 3 optimize_min_equality_disjunction_chain_length 3
optimize_min_inequality_conjunction_chain_length 3 optimize_min_inequality_conjunction_chain_length 3
@@ -664,7 +758,7 @@ optimize_redundant_functions_in_order_by 1
optimize_respect_aliases 1 optimize_respect_aliases 1
optimize_rewrite_aggregate_function_with_if 1 optimize_rewrite_aggregate_function_with_if 1
optimize_rewrite_array_exists_to_has 0 optimize_rewrite_array_exists_to_has 0
optimize_rewrite_sum_if_to_count_if 0 optimize_rewrite_sum_if_to_count_if 1
optimize_skip_merged_partitions 0 optimize_skip_merged_partitions 0
optimize_skip_unused_shards 0 optimize_skip_unused_shards 0
optimize_skip_unused_shards_limit 1000 optimize_skip_unused_shards_limit 1000
@@ -674,9 +768,10 @@ optimize_sorting_by_input_stream_properties 1
optimize_substitute_columns 0 optimize_substitute_columns 0
optimize_syntax_fuse_functions 0 optimize_syntax_fuse_functions 0
optimize_throw_if_noop 0 optimize_throw_if_noop 0
optimize_time_filter_with_preimage 1
optimize_trivial_approximate_count_query 0 optimize_trivial_approximate_count_query 0
optimize_trivial_count_query 1 optimize_trivial_count_query 1
optimize_trivial_insert_select 1 optimize_trivial_insert_select 0
optimize_uniq_to_count 1 optimize_uniq_to_count 1
optimize_use_implicit_projections 1 optimize_use_implicit_projections 1
optimize_use_projections 1 optimize_use_projections 1
@@ -685,13 +780,19 @@ os_thread_priority 0
output_format_arrow_compression_method lz4_frame output_format_arrow_compression_method lz4_frame
output_format_arrow_fixed_string_as_fixed_byte_array 1 output_format_arrow_fixed_string_as_fixed_byte_array 1
output_format_arrow_low_cardinality_as_dictionary 0 output_format_arrow_low_cardinality_as_dictionary 0
output_format_arrow_string_as_string 0 output_format_arrow_string_as_string 1
output_format_arrow_use_64_bit_indexes_for_dictionary 0
output_format_arrow_use_signed_indexes_for_dictionary 1
output_format_avro_codec output_format_avro_codec
output_format_avro_rows_in_file 1 output_format_avro_rows_in_file 1
output_format_avro_string_column_pattern output_format_avro_string_column_pattern
output_format_avro_sync_interval 16384 output_format_avro_sync_interval 16384
output_format_binary_encode_types_in_binary_format 0
output_format_bson_string_as_string 0 output_format_bson_string_as_string 0
output_format_compression_level 3
output_format_compression_zstd_window_log 0
output_format_csv_crlf_end_of_line 0 output_format_csv_crlf_end_of_line 0
output_format_csv_serialize_tuple_into_separate_columns 1
output_format_decimal_trailing_zeros 0 output_format_decimal_trailing_zeros 0
output_format_enable_streaming 0 output_format_enable_streaming 0
output_format_json_array_of_rows 0 output_format_json_array_of_rows 0
@@ -705,27 +806,34 @@ output_format_json_skip_null_value_in_named_tuples 0
output_format_json_validate_utf8 0 output_format_json_validate_utf8 0
output_format_markdown_escape_special_characters 0 output_format_markdown_escape_special_characters 0
output_format_msgpack_uuid_representation ext output_format_msgpack_uuid_representation ext
output_format_orc_compression_method lz4 output_format_native_encode_types_in_binary_format 0
output_format_orc_compression_method zstd
output_format_orc_row_index_stride 10000 output_format_orc_row_index_stride 10000
output_format_orc_string_as_string 0 output_format_orc_string_as_string 1
output_format_parallel_formatting 1 output_format_parallel_formatting 1
output_format_parquet_batch_size 1024 output_format_parquet_batch_size 1024
output_format_parquet_compliant_nested_types 1 output_format_parquet_compliant_nested_types 1
output_format_parquet_compression_method lz4 output_format_parquet_compression_method zstd
output_format_parquet_data_page_size 1048576 output_format_parquet_data_page_size 1048576
output_format_parquet_fixed_string_as_fixed_byte_array 1 output_format_parquet_fixed_string_as_fixed_byte_array 1
output_format_parquet_parallel_encoding 1 output_format_parquet_parallel_encoding 1
output_format_parquet_row_group_size 1000000 output_format_parquet_row_group_size 1000000
output_format_parquet_row_group_size_bytes 536870912 output_format_parquet_row_group_size_bytes 536870912
output_format_parquet_string_as_string 0 output_format_parquet_string_as_string 1
output_format_parquet_use_custom_encoder 0 output_format_parquet_use_custom_encoder 1
output_format_parquet_version 2.latest output_format_parquet_version 2.latest
output_format_pretty_color 1 output_format_parquet_write_page_index 1
output_format_pretty_color auto
output_format_pretty_display_footer_column_names 1
output_format_pretty_display_footer_column_names_min_rows 50
output_format_pretty_grid_charset UTF-8 output_format_pretty_grid_charset UTF-8
output_format_pretty_highlight_digit_groups 1
output_format_pretty_max_column_pad_width 250 output_format_pretty_max_column_pad_width 250
output_format_pretty_max_rows 10000 output_format_pretty_max_rows 10000
output_format_pretty_max_value_width 10000 output_format_pretty_max_value_width 10000
output_format_pretty_row_numbers 0 output_format_pretty_max_value_width_apply_for_single_value 0
output_format_pretty_row_numbers 1
output_format_pretty_single_large_number_tip_threshold 1000000
output_format_protobuf_nullables_with_google_wrappers 0 output_format_protobuf_nullables_with_google_wrappers 0
output_format_schema output_format_schema
output_format_sql_insert_include_column_names 1 output_format_sql_insert_include_column_names 1
@@ -734,15 +842,22 @@ output_format_sql_insert_quote_names 1
output_format_sql_insert_table_name table output_format_sql_insert_table_name table
output_format_sql_insert_use_replace 0 output_format_sql_insert_use_replace 0
output_format_tsv_crlf_end_of_line 0 output_format_tsv_crlf_end_of_line 0
output_format_values_escape_quote_with_quote 0
output_format_write_statistics 1 output_format_write_statistics 1
page_cache_inject_eviction 0
parallel_distributed_insert_select 0 parallel_distributed_insert_select 0
parallel_replica_offset 0 parallel_replica_offset 0
parallel_replicas_allow_in_with_subquery 1
parallel_replicas_count 0 parallel_replicas_count 0
parallel_replicas_custom_key parallel_replicas_custom_key
parallel_replicas_custom_key_filter_type default parallel_replicas_custom_key_filter_type default
parallel_replicas_custom_key_range_lower 0
parallel_replicas_custom_key_range_upper 0
parallel_replicas_for_non_replicated_merge_tree 0 parallel_replicas_for_non_replicated_merge_tree 0
parallel_replicas_mark_segment_size 128
parallel_replicas_min_number_of_granules_to_enable 0 parallel_replicas_min_number_of_granules_to_enable 0
parallel_replicas_min_number_of_rows_per_replica 0 parallel_replicas_min_number_of_rows_per_replica 0
parallel_replicas_prefer_local_join 1
parallel_replicas_single_task_marks_count_multiplier 2 parallel_replicas_single_task_marks_count_multiplier 2
parallel_view_processing 0 parallel_view_processing 0
parallelize_output_from_storages 1 parallelize_output_from_storages 1
@@ -755,11 +870,14 @@ parts_to_delay_insert 0
parts_to_throw_insert 0 parts_to_throw_insert 0
periodic_live_view_refresh 60 periodic_live_view_refresh 60
poll_interval 10 poll_interval 10
postgresql_connection_attempt_timeout 2
postgresql_connection_pool_auto_close_connection 0 postgresql_connection_pool_auto_close_connection 0
postgresql_connection_pool_retries 2
postgresql_connection_pool_size 16 postgresql_connection_pool_size 16
postgresql_connection_pool_wait_timeout 5000 postgresql_connection_pool_wait_timeout 5000
precise_float_parsing 0 precise_float_parsing 0
prefer_column_name_to_alias 0 prefer_column_name_to_alias 0
prefer_external_sort_block_bytes 16744704
prefer_global_in_and_join 0 prefer_global_in_and_join 0
prefer_localhost_replica 1 prefer_localhost_replica 1
prefer_warmed_unmerged_parts_seconds 0 prefer_warmed_unmerged_parts_seconds 0
@@ -767,7 +885,7 @@ preferred_block_size_bytes 1000000
preferred_max_column_in_block_size_bytes 0 preferred_max_column_in_block_size_bytes 0
preferred_optimize_projection_name preferred_optimize_projection_name
prefetch_buffer_size 1048576 prefetch_buffer_size 1048576
print_pretty_type_names 0 print_pretty_type_names 1
priority 0 priority 0
query_cache_compress_entries 1 query_cache_compress_entries 1
query_cache_max_entries 0 query_cache_max_entries 0
@@ -778,8 +896,10 @@ query_cache_nondeterministic_function_handling throw
query_cache_share_between_users 0 query_cache_share_between_users 0
query_cache_squash_partial_results 1 query_cache_squash_partial_results 1
query_cache_store_results_of_queries_with_nondeterministic_functions 0 query_cache_store_results_of_queries_with_nondeterministic_functions 0
query_cache_system_table_handling throw
query_cache_ttl 60 query_cache_ttl 60
query_plan_aggregation_in_order 1 query_plan_aggregation_in_order 1
query_plan_convert_outer_join_to_inner_join 1
query_plan_enable_multithreading_after_window_functions 1 query_plan_enable_multithreading_after_window_functions 1
query_plan_enable_optimizations 1 query_plan_enable_optimizations 1
query_plan_execute_functions_after_sorting 1 query_plan_execute_functions_after_sorting 1
@@ -788,6 +908,8 @@ query_plan_lift_up_array_join 1
query_plan_lift_up_union 1 query_plan_lift_up_union 1
query_plan_max_optimizations_to_apply 10000 query_plan_max_optimizations_to_apply 10000
query_plan_merge_expressions 1 query_plan_merge_expressions 1
query_plan_merge_filters 0
query_plan_optimize_prewhere 1
query_plan_optimize_primary_key 1 query_plan_optimize_primary_key 1
query_plan_optimize_projection 1 query_plan_optimize_projection 1
query_plan_push_down_limit 1 query_plan_push_down_limit 1
@@ -806,7 +928,9 @@ read_backoff_min_events 2
read_backoff_min_interval_between_events_ms 1000 read_backoff_min_interval_between_events_ms 1000
read_backoff_min_latency_ms 1000 read_backoff_min_latency_ms 1000
read_from_filesystem_cache_if_exists_otherwise_bypass_cache 0 read_from_filesystem_cache_if_exists_otherwise_bypass_cache 0
read_from_page_cache_if_exists_otherwise_bypass_cache 0
read_in_order_two_level_merge_threshold 100 read_in_order_two_level_merge_threshold 100
read_in_order_use_buffering 1
read_overflow_mode throw read_overflow_mode throw
read_overflow_mode_leaf throw read_overflow_mode_leaf throw
read_priority 0 read_priority 0
@@ -835,17 +959,20 @@ result_overflow_mode throw
rewrite_count_distinct_if_with_count_distinct_implementation 0 rewrite_count_distinct_if_with_count_distinct_implementation 0
s3_allow_parallel_part_upload 1 s3_allow_parallel_part_upload 1
s3_check_objects_after_upload 0 s3_check_objects_after_upload 0
s3_connect_timeout_ms 1000
s3_create_new_file_on_insert 0 s3_create_new_file_on_insert 0
s3_disable_checksum 0 s3_disable_checksum 0
s3_http_connection_pool_size 1000 s3_ignore_file_doesnt_exist 0
s3_list_object_keys_size 1000 s3_list_object_keys_size 1000
s3_max_connections 1024 s3_max_connections 1024
s3_max_get_burst 0 s3_max_get_burst 0
s3_max_get_rps 0 s3_max_get_rps 0
s3_max_inflight_parts_for_one_file 20 s3_max_inflight_parts_for_one_file 20
s3_max_part_number 10000
s3_max_put_burst 0 s3_max_put_burst 0
s3_max_put_rps 0 s3_max_put_rps 0
s3_max_redirects 10 s3_max_redirects 10
s3_max_single_operation_copy_size 33554432
s3_max_single_part_upload_size 33554432 s3_max_single_part_upload_size 33554432
s3_max_single_read_retries 4 s3_max_single_read_retries 4
s3_max_unexpected_write_error_retries 4 s3_max_unexpected_write_error_retries 4
@@ -860,6 +987,8 @@ s3_truncate_on_insert 0
s3_upload_part_size_multiply_factor 2 s3_upload_part_size_multiply_factor 2
s3_upload_part_size_multiply_parts_count_threshold 500 s3_upload_part_size_multiply_parts_count_threshold 500
s3_use_adaptive_timeouts 1 s3_use_adaptive_timeouts 1
s3_validate_request_settings 1
s3queue_allow_experimental_sharded_mode 0
s3queue_default_zookeeper_path /clickhouse/s3queue/ s3queue_default_zookeeper_path /clickhouse/s3queue/
s3queue_enable_logging_to_s3queue_log 0 s3queue_enable_logging_to_s3queue_log 0
schema_inference_cache_require_modification_time_for_url 1 schema_inference_cache_require_modification_time_for_url 1
@@ -887,6 +1016,8 @@ sleep_after_receiving_query_ms 0
sleep_in_send_data_ms 0 sleep_in_send_data_ms 0
sleep_in_send_tables_status_ms 0 sleep_in_send_tables_status_ms 0
sort_overflow_mode throw sort_overflow_mode throw
split_intersecting_parts_ranges_into_layers_final 1
split_parts_ranges_into_intersecting_and_non_intersecting_final 1
splitby_max_substrings_includes_remaining_string 0 splitby_max_substrings_includes_remaining_string 0
stop_refreshable_materialized_views_on_startup 0 stop_refreshable_materialized_views_on_startup 0
storage_file_read_method pread storage_file_read_method pread
@@ -898,8 +1029,10 @@ stream_poll_timeout_ms 500
system_events_show_zero_values 0 system_events_show_zero_values 0
table_function_remote_max_addresses 1000 table_function_remote_max_addresses 1000
tcp_keep_alive_timeout 290 tcp_keep_alive_timeout 290
temporary_data_in_cache_reserve_space_wait_lock_timeout_milliseconds 600000
temporary_files_codec LZ4 temporary_files_codec LZ4
temporary_live_view_timeout 1 temporary_live_view_timeout 1
throw_if_deduplication_in_dependent_materialized_views_enabled_with_async_insert 1
throw_if_no_data_to_insert 1 throw_if_no_data_to_insert 1
throw_on_error_from_cache_on_write_operations 0 throw_on_error_from_cache_on_write_operations 0
throw_on_max_partitions_per_insert_block 1 throw_on_max_partitions_per_insert_block 1
@@ -912,8 +1045,10 @@ totals_mode after_having_exclusive
trace_profile_events 0 trace_profile_events 0
transfer_overflow_mode throw transfer_overflow_mode throw
transform_null_in 0 transform_null_in 0
traverse_shadow_remote_data_paths 0
union_default_mode union_default_mode
unknown_packet_in_send_data 0 unknown_packet_in_send_data 0
update_insert_deduplication_token_in_dependent_materialized_views 0
use_cache_for_count_from_files 1 use_cache_for_count_from_files 1
use_client_time_zone 0 use_client_time_zone 0
use_compact_format_in_distributed_parts_names 1 use_compact_format_in_distributed_parts_names 1
@@ -923,12 +1058,15 @@ use_index_for_in_with_subqueries 1
use_index_for_in_with_subqueries_max_values 0 use_index_for_in_with_subqueries_max_values 0
use_local_cache_for_remote_storage 1 use_local_cache_for_remote_storage 1
use_mysql_types_in_show_columns 0 use_mysql_types_in_show_columns 0
use_page_cache_for_disks_without_file_cache 0
use_query_cache 0 use_query_cache 0
use_skip_indexes 1 use_skip_indexes 1
use_skip_indexes_if_final 0 use_skip_indexes_if_final 0
use_structure_from_insertion_table_in_table_functions 2 use_structure_from_insertion_table_in_table_functions 2
use_uncompressed_cache 0 use_uncompressed_cache 0
use_variant_as_common_type 0
use_with_fill_by_sorting_prefix 1 use_with_fill_by_sorting_prefix 1
validate_experimental_and_suspicious_types_inside_nested_types 1
validate_polygons 1 validate_polygons 1
wait_changes_become_visible_after_commit_mode wait_unknown wait_changes_become_visible_after_commit_mode wait_unknown
wait_for_async_insert 1 wait_for_async_insert 1
1 add_http_cors_header 0
11 allow_custom_error_code_in_throwif 0
12 allow_ddl 1
13 allow_deprecated_database_ordinary 0
14 allow_deprecated_error_prone_window_functions 0
15 allow_deprecated_snowflake_conversion_functions 0
16 allow_deprecated_syntax_for_merge_tree 0
17 allow_distributed_ddl 1
18 allow_drop_detached 0
19 allow_execute_multiif_columnar 1
20 allow_experimental_alter_materialized_view_structure 1
21 allow_experimental_analyzer 0 1
22 allow_experimental_annoy_index 0
23 allow_experimental_bigint_types 1
24 allow_experimental_codecs 0
25 allow_experimental_database_atomic 1
26 allow_experimental_database_materialized_mysql 0
27 allow_experimental_database_materialized_postgresql 0
28 allow_experimental_database_replicated 0 1
29 allow_experimental_dynamic_type 0
30 allow_experimental_full_text_index 0
31 allow_experimental_funnel_functions 0
32 allow_experimental_geo_types 1
33 allow_experimental_hash_functions 0
34 allow_experimental_inverted_index 0
35 allow_experimental_join_condition 0
36 allow_experimental_lightweight_delete 1
37 allow_experimental_live_view 0
38 allow_experimental_map_type 1
45 allow_experimental_query_deduplication 0
46 allow_experimental_refreshable_materialized_view 0
47 allow_experimental_s3queue 1
48 allow_experimental_shared_merge_tree 0 1
49 allow_experimental_statistic 0
50 allow_experimental_statistics 0
51 allow_experimental_undrop_table_query 1
52 allow_experimental_usearch_index 0
53 allow_experimental_variant_type 0
54 allow_experimental_window_functions 1
55 allow_experimental_window_view 0
56 allow_get_client_http_header 0
57 allow_hyperscan 1
58 allow_introspection_functions 0
59 allow_named_collection_override_by_default 1
66 allow_push_predicate_when_subquery_contains_with 1
67 allow_settings_after_format_in_insert 0
68 allow_simdjson 1
69 allow_statistic_optimize 0
70 allow_statistics_optimize 0
71 allow_suspicious_codecs 0
72 allow_suspicious_fixed_string_types 0
73 allow_suspicious_indices 0
74 allow_suspicious_low_cardinality_types 0
75 allow_suspicious_primary_key 0
76 allow_suspicious_ttl_expressions 0
77 allow_suspicious_variant_types 0
78 allow_unrestricted_reads_from_keeper 0
79 alter_move_to_space_execute_async 0
80 alter_partition_verbose_result 0
81 alter_sync 1
82 analyze_index_with_space_filling_curves 1
83 analyzer_compatibility_join_using_top_level_identifier 0
84 annoy_index_search_k_nodes -1
85 any_join_distinct_right_table_keys 0
86 apply_deleted_mask 1
88 asterisk_include_alias_columns 0
89 asterisk_include_materialized_columns 0
90 async_insert 0
91 async_insert_busy_timeout_decrease_rate 0.2
92 async_insert_busy_timeout_increase_rate 0.2
93 async_insert_busy_timeout_max_ms 200
94 async_insert_busy_timeout_min_ms 50
95 async_insert_busy_timeout_ms 200
96 async_insert_cleanup_timeout_ms 1000
97 async_insert_deduplicate 0
98 async_insert_max_data_size 1000000 10485760
99 async_insert_max_query_number 450
100 async_insert_poll_timeout_ms 10
101 async_insert_stale_timeout_ms 0
102 async_insert_threads 16
103 async_insert_use_adaptive_busy_timeout 1
104 async_query_sending_for_remote 1
105 async_socket_for_remote 1
106 azure_allow_parallel_part_upload 1
107 azure_create_new_file_on_insert 0
108 azure_ignore_file_doesnt_exist 0
109 azure_list_object_keys_size 1000
110 azure_max_blocks_in_multipart_upload 50000
111 azure_max_inflight_parts_for_one_file 20
112 azure_max_single_part_copy_size 268435456
113 azure_max_single_part_upload_size 104857600
114 azure_max_single_read_retries 4
115 azure_max_unexpected_write_error_retries 4
116 azure_max_upload_part_size 5368709120
117 azure_min_upload_part_size 16777216
118 azure_sdk_max_retries 10
119 azure_sdk_retry_initial_backoff_ms 10
120 azure_sdk_retry_max_backoff_ms 1000
121 azure_skip_empty_files 0
122 azure_strict_upload_part_size 0
123 azure_throw_on_zero_files_match 0
124 azure_truncate_on_insert 0
125 azure_upload_part_size_multiply_factor 2
126 azure_upload_part_size_multiply_parts_count_threshold 500
127 background_buffer_flush_schedule_pool_size 16
128 background_common_pool_size 8
129 background_distributed_schedule_pool_size 16
141 backup_restore_keeper_retry_initial_backoff_ms 100
142 backup_restore_keeper_retry_max_backoff_ms 5000
143 backup_restore_keeper_value_max_size 1048576
144 backup_restore_s3_retry_attempts 1000
145 backup_threads 16
146 bool_false_representation false
147 bool_true_representation true
150 cancel_http_readonly_queries_on_client_close 0
151 cast_ipv4_ipv6_default_on_conversion_error 0
152 cast_keep_nullable 0
153 cast_string_to_dynamic_use_inference 0
154 check_query_single_value_result 1
155 check_referential_table_dependencies 0
156 check_table_dependencies 1
159 cloud_mode_engine 1
160 cluster_for_parallel_replicas
161 collect_hash_table_stats_during_aggregation 1
162 collect_hash_table_stats_during_joins 1
163 column_names_for_schema_inference
164 compatibility
165 compatibility_ignore_auto_increment_in_create_table 0
178 create_index_ignore_unique 0
179 create_replicated_merge_tree_fault_injection_probability 0
180 create_table_empty_primary_key_by_default 0
181 cross_join_min_bytes_to_compress 1073741824
182 cross_join_min_rows_to_compress 10000000
183 cross_to_inner_join_rewrite 1
184 data_type_default_nullable 0
185 database_atomic_wait_for_drop_and_detach_synchronously 0
186 database_replicated_allow_heavy_create 0
187 database_replicated_allow_only_replicated_engine 0
188 database_replicated_allow_replicated_engine_arguments 1
189 database_replicated_always_detach_permanently 0
196 decimal_check_overflow 1
197 deduplicate_blocks_in_dependent_materialized_views 0
198 default_database_engine Atomic
199 default_materialized_view_sql_security DEFINER
200 default_max_bytes_in_join 1000000000
201 default_table_engine default_normal_view_sql_security None INVOKER
202 default_table_engine MergeTree
203 default_temporary_table_engine Memory
204 default_view_definer CURRENT_USER
205 describe_compact_output 0
206 describe_extend_object_types 0
207 describe_include_subcolumns 0
208 describe_include_virtual_columns 0
209 dialect clickhouse
210 dictionary_use_async_executor 0
211 dictionary_validate_primary_key_type 0
212 distinct_overflow_mode throw
213 distributed_aggregation_memory_efficient 1
214 distributed_background_insert_batch 0
226 distributed_directory_monitor_split_batch_on_failure 0
227 distributed_foreground_insert 0
228 distributed_group_by_no_merge 0
229 distributed_insert_skip_read_only_replicas 0
230 distributed_product_mode deny
231 distributed_push_down_limit 1
232 distributed_replica_error_cap 1000
236 drain_timeout 3
237 empty_result_for_aggregation_by_constant_keys_on_empty_set 1
238 empty_result_for_aggregation_by_empty_set 0
239 enable_blob_storage_log 1
240 enable_debug_queries 0
241 enable_deflate_qpl_codec 0
242 enable_early_constant_folding 1
251 enable_lightweight_delete 1
252 enable_memory_bound_merging_of_aggregation_results 1
253 enable_multiple_prewhere_read_steps 1
254 enable_named_columns_in_function_tuple 1
255 enable_optimize_predicate_expression 1
256 enable_optimize_predicate_expression_to_final_subquery 1
257 enable_order_by_all 1
263 enable_software_prefetch_in_aggregation 1
264 enable_unaligned_array_join 0
265 enable_url_encoding 1
266 enable_vertical_final 1
267 enable_writes_to_query_cache 1
268 enable_zstd_qat_codec 0
269 engine_file_allow_create_multiple_files 0
270 engine_file_empty_if_not_exists 0
271 engine_file_skip_empty_files 0
280 external_storage_rw_timeout_sec 300
281 external_table_functions_use_nulls 1
282 external_table_strict_query 0
283 extract_key_value_pairs_max_pairs_per_row 1000
284 extract_kvp_max_pairs_per_row 1000
285 extremes 0
286 fallback_to_stale_replicas_for_distributed_queries 1
287 filesystem_cache_max_download_size 137438953472
288 filesystem_cache_reserve_space_wait_lock_timeout_milliseconds 1000
289 filesystem_cache_segments_batch_size 20
290 filesystem_prefetch_max_memory_usage 1073741824
291 filesystem_prefetch_min_bytes_for_single_read_task 2097152
329 format_regexp_skip_unmatched 0
330 format_schema
331 format_template_resultset
332 format_template_resultset_format
333 format_template_row
334 format_template_row_format
335 format_template_rows_between_delimiter \n
336 format_tsv_null_representation \\N
337 formatdatetime_f_prints_single_zero 0
341 function_implementation
342 function_json_value_return_type_allow_complex 0
343 function_json_value_return_type_allow_nullable 0
344 function_locate_has_mysql_compatible_argument_order 1
345 function_range_max_elements_in_block 500000000
346 function_sleep_max_microseconds_per_block 3000000
347 function_visible_width_behavior 1
348 geo_distance_returns_float64_on_float64_arguments 1
349 glob_expansion_max_elements 1000
350 grace_hash_join_initial_buckets 1
351 grace_hash_join_max_buckets 1024
356 handle_kafka_error_mode default
357 handshake_timeout_ms 10000
358 hdfs_create_new_file_on_insert 0
359 hdfs_ignore_file_doesnt_exist 0
360 hdfs_replication 0
361 hdfs_skip_empty_files 0
362 hdfs_throw_on_zero_files_match 0
363 hdfs_truncate_on_insert 0
364 hedged_connection_timeout_ms 50
365 hsts_max_age 0
384 http_wait_end_of_query 0
385 http_write_exception_in_output_format 1
386 http_zlib_compression_level 3
387 iceberg_engine_ignore_schema_evolution 0
388 idle_connection_timeout 3600
389 ignore_cold_parts_seconds 0
390 ignore_data_skipping_indices
391 ignore_drop_queries_probability 0
392 ignore_materialized_views_with_dropped_target_table 0
393 ignore_on_cluster_for_replicated_access_entities_queries 0
394 ignore_on_cluster_for_replicated_named_collections_queries 0
395 ignore_on_cluster_for_replicated_udf_queries 0
396 implicit_transaction 0
397 input_format_allow_errors_num 0
403 input_format_arrow_skip_columns_with_unsupported_types_in_schema_inference 0
404 input_format_avro_allow_missing_fields 0
405 input_format_avro_null_as_default 0
406 input_format_binary_decode_types_in_binary_format 0
407 input_format_bson_skip_fields_with_unsupported_types_in_schema_inference 0
408 input_format_capn_proto_skip_fields_with_unsupported_types_in_schema_inference 0
409 input_format_csv_allow_cr_end_of_line 0
410 input_format_csv_allow_variable_number_of_columns 0
411 input_format_csv_allow_whitespace_or_tab_as_delimiter 0
412 input_format_csv_arrays_as_nested_csv 0
413 input_format_csv_deserialize_separate_columns_into_tuple 1
414 input_format_csv_detect_header 1
415 input_format_csv_empty_as_default 1
416 input_format_csv_enum_as_number 0
418 input_format_csv_skip_trailing_empty_lines 0
419 input_format_csv_trim_whitespaces 1
420 input_format_csv_try_infer_numbers_from_strings 0
421 input_format_csv_try_infer_strings_from_quoted_tuples 1
422 input_format_csv_use_best_effort_in_schema_inference 1
423 input_format_csv_use_default_on_bad_values 0
424 input_format_custom_allow_variable_number_of_columns 0
425 input_format_custom_detect_header 1
426 input_format_custom_skip_trailing_empty_lines 0
427 input_format_defaults_for_omitted_fields 1
428 input_format_force_null_for_omitted_fields 0
429 input_format_hive_text_allow_variable_number_of_columns 1
430 input_format_hive_text_collection_items_delimiter 
431 input_format_hive_text_fields_delimiter 
432 input_format_hive_text_map_keys_delimiter 
433 input_format_import_nested_json 0
434 input_format_ipv4_default_on_conversion_error 0
435 input_format_ipv6_default_on_conversion_error 0
436 input_format_json_case_insensitive_column_matching 0
437 input_format_json_compact_allow_variable_number_of_columns 0
438 input_format_json_defaults_for_missing_elements_in_named_tuple 1
439 input_format_json_ignore_unknown_keys_in_named_tuple 1
440 input_format_json_ignore_unnecessary_fields 1
441 input_format_json_infer_incomplete_types_as_strings 1
442 input_format_json_named_tuples_as_objects 1
443 input_format_json_read_arrays_as_strings 1
444 input_format_json_read_bools_as_numbers 1
445 input_format_json_read_bools_as_strings 1
446 input_format_json_read_numbers_as_strings 1
447 input_format_json_read_objects_as_strings 1
448 input_format_json_throw_on_bad_escape_sequence 1
449 input_format_json_try_infer_named_tuples_from_objects 1
450 input_format_json_try_infer_numbers_from_strings 0
451 input_format_json_use_string_type_for_ambiguous_paths_in_named_tuples_inference_from_objects 0
452 input_format_json_validate_types_from_metadata 1
453 input_format_max_bytes_to_read_for_schema_inference 33554432
454 input_format_max_rows_to_read_for_schema_inference 25000
456 input_format_mysql_dump_map_column_names 1
457 input_format_mysql_dump_table_name
458 input_format_native_allow_types_conversion 1
459 input_format_native_decode_types_in_binary_format 0
460 input_format_null_as_default 1
461 input_format_orc_allow_missing_columns 1
462 input_format_orc_case_insensitive_column_matching 0
463 input_format_orc_filter_push_down 1
464 input_format_orc_import_nested 0
465 input_format_orc_reader_time_zone_name GMT
466 input_format_orc_row_batch_size 100000
467 input_format_orc_skip_columns_with_unsupported_types_in_schema_inference 0
468 input_format_orc_use_fast_decoder 1
472 input_format_parquet_filter_push_down 1
473 input_format_parquet_import_nested 0
474 input_format_parquet_local_file_min_bytes_for_seek 8192
475 input_format_parquet_max_block_size 8192 65409
476 input_format_parquet_prefer_block_bytes 16744704
477 input_format_parquet_preserve_order 0
478 input_format_parquet_skip_columns_with_unsupported_types_in_schema_inference 0
479 input_format_parquet_use_native_reader 0
480 input_format_protobuf_flatten_google_wrappers 0
481 input_format_protobuf_skip_fields_with_unsupported_types_in_schema_inference 0
482 input_format_record_errors_file_path
483 input_format_skip_unknown_fields 1
484 input_format_try_infer_dates 1
485 input_format_try_infer_datetimes 1
486 input_format_try_infer_exponent_floats 0
487 input_format_try_infer_integers 1
488 input_format_tsv_allow_variable_number_of_columns 0
489 input_format_tsv_crlf_end_of_line 0
490 input_format_tsv_detect_header 1
491 input_format_tsv_empty_as_default 0
492 input_format_tsv_enum_as_number 0
528 kafka_disable_num_consumers_limit 0
529 kafka_max_wait_ms 5000
530 keeper_map_strict_mode 0
531 keeper_max_retries 10
532 keeper_retry_initial_backoff_ms 100
533 keeper_retry_max_backoff_ms 5000
534 legacy_column_name_of_tuple_literal 0
535 lightweight_deletes_sync 2
536 lightweight_mutation_projection_mode throw
537 limit 0
538 live_view_heartbeat_interval 15
539 load_balancing random
544 lock_acquire_timeout 120
545 log_comment
546 log_formatted_queries 0
547 log_processors_profiles 0 1
548 log_profile_events 1
549 log_queries 1
550 log_queries_cut_to_length 100000
557 low_cardinality_allow_in_native_format 1
558 low_cardinality_max_dictionary_size 8192
559 low_cardinality_use_single_dictionary_for_part 0
560 materialize_skip_indexes_on_insert 1
561 materialize_statistics_on_insert 1
562 materialize_ttl_after_modify 1
563 materialized_views_ignore_errors 0
564 max_alter_threads \'auto(16)\'
586 max_download_buffer_size 10485760
587 max_download_threads 4
588 max_entries_for_hash_table_stats 10000
589 max_estimated_execution_time 0
590 max_execution_speed 0
591 max_execution_speed_bytes 0
592 max_execution_time 0
614 max_network_bytes 0
615 max_number_of_partitions_for_independent_aggregation 128
616 max_parallel_replicas 1
617 max_parser_backtracks 1000000
618 max_parser_depth 1000
619 max_parsing_threads \'auto(16)\'
620 max_partition_size_to_drop 50000000000
621 max_partitions_per_insert_block 100
622 max_partitions_to_read -1
625 max_read_buffer_size 1048576
626 max_read_buffer_size_local_fs 131072
627 max_read_buffer_size_remote_fs 0
628 max_recursive_cte_evaluation_depth 1000
629 max_remote_read_network_bandwidth 0
630 max_remote_read_network_bandwidth_for_server 0
631 max_remote_write_network_bandwidth 0
638 max_rows_in_distinct 0
639 max_rows_in_join 0
640 max_rows_in_set 0
641 max_rows_in_set_to_optimize_join 100000 0
642 max_rows_to_group_by 0
643 max_rows_to_read 0
644 max_rows_to_read_leaf 0
646 max_rows_to_transfer 0
647 max_sessions_for_user 0
648 max_size_to_preallocate_for_aggregation 100000000
649 max_size_to_preallocate_for_joins 100000000
650 max_streams_for_merge_tree_reading 0
651 max_streams_multiplier_for_merge_tables 5
652 max_streams_to_max_threads_ratio 1
682 merge_tree_min_rows_for_concurrent_read 163840
683 merge_tree_min_rows_for_concurrent_read_for_remote_filesystem 163840
684 merge_tree_min_rows_for_seek 0
685 merge_tree_read_split_ranges_into_intersecting_and_non_intersecting_injection_probability 0
686 merge_tree_use_const_size_tasks_for_remote_reading 1
687 metrics_perf_events_enabled 0
688 metrics_perf_events_list
695 min_count_to_compile_sort_description 3
696 min_execution_speed 0
697 min_execution_speed_bytes 0
698 min_external_table_block_size_bytes 268402944
699 min_external_table_block_size_rows 1048449
700 min_free_disk_space_for_temporary_data 0
701 min_hit_rate_to_use_consecutive_keys_optimization 0.5
702 min_insert_block_size_bytes 268402944
712 mutations_max_literal_size_to_replace 16384
713 mutations_sync 0
714 mysql_datatypes_support_level
715 mysql_map_fixed_string_to_text_in_show_columns 0 1
716 mysql_map_string_to_text_in_show_columns 0 1
717 mysql_max_rows_to_insert 65536
718 network_compression_method LZ4
719 network_zstd_compression_level 1
740 optimize_group_by_function_keys 1
741 optimize_if_chain_to_multiif 0
742 optimize_if_transform_strings_to_enum 0
743 optimize_injective_functions_in_group_by 1
744 optimize_injective_functions_inside_uniq 1
745 optimize_min_equality_disjunction_chain_length 3
746 optimize_min_inequality_conjunction_chain_length 3
758 optimize_respect_aliases 1
759 optimize_rewrite_aggregate_function_with_if 1
760 optimize_rewrite_array_exists_to_has 0
761 optimize_rewrite_sum_if_to_count_if 0 1
762 optimize_skip_merged_partitions 0
763 optimize_skip_unused_shards 0
764 optimize_skip_unused_shards_limit 1000
768 optimize_substitute_columns 0
769 optimize_syntax_fuse_functions 0
770 optimize_throw_if_noop 0
771 optimize_time_filter_with_preimage 1
772 optimize_trivial_approximate_count_query 0
773 optimize_trivial_count_query 1
774 optimize_trivial_insert_select 1 0
775 optimize_uniq_to_count 1
776 optimize_use_implicit_projections 1
777 optimize_use_projections 1
780 output_format_arrow_compression_method lz4_frame
781 output_format_arrow_fixed_string_as_fixed_byte_array 1
782 output_format_arrow_low_cardinality_as_dictionary 0
783 output_format_arrow_string_as_string 0 1
784 output_format_arrow_use_64_bit_indexes_for_dictionary 0
785 output_format_arrow_use_signed_indexes_for_dictionary 1
786 output_format_avro_codec
787 output_format_avro_rows_in_file 1
788 output_format_avro_string_column_pattern
789 output_format_avro_sync_interval 16384
790 output_format_binary_encode_types_in_binary_format 0
791 output_format_bson_string_as_string 0
792 output_format_compression_level 3
793 output_format_compression_zstd_window_log 0
794 output_format_csv_crlf_end_of_line 0
795 output_format_csv_serialize_tuple_into_separate_columns 1
796 output_format_decimal_trailing_zeros 0
797 output_format_enable_streaming 0
798 output_format_json_array_of_rows 0
806 output_format_json_validate_utf8 0
807 output_format_markdown_escape_special_characters 0
808 output_format_msgpack_uuid_representation ext
809 output_format_native_encode_types_in_binary_format 0
810 output_format_orc_compression_method zstd
811 output_format_orc_row_index_stride 10000
812 output_format_orc_string_as_string 1
813 output_format_parallel_formatting 1
814 output_format_parquet_batch_size 1024
815 output_format_parquet_compliant_nested_types 1
816 output_format_parquet_compression_method zstd
817 output_format_parquet_data_page_size 1048576
818 output_format_parquet_fixed_string_as_fixed_byte_array 1
819 output_format_parquet_parallel_encoding 1
820 output_format_parquet_row_group_size 1000000
821 output_format_parquet_row_group_size_bytes 536870912
822 output_format_parquet_string_as_string 1
823 output_format_parquet_use_custom_encoder 1
824 output_format_parquet_version 2.latest
825 output_format_parquet_write_page_index 1
826 output_format_pretty_color auto
827 output_format_pretty_display_footer_column_names 1
828 output_format_pretty_display_footer_column_names_min_rows 50
829 output_format_pretty_grid_charset UTF-8
830 output_format_pretty_highlight_digit_groups 1
831 output_format_pretty_max_column_pad_width 250
832 output_format_pretty_max_rows 10000
833 output_format_pretty_max_value_width 10000
834 output_format_pretty_max_value_width_apply_for_single_value 0
835 output_format_pretty_row_numbers 1
836 output_format_pretty_single_large_number_tip_threshold 1000000
837 output_format_protobuf_nullables_with_google_wrappers 0
838 output_format_schema
839 output_format_sql_insert_include_column_names 1
842 output_format_sql_insert_table_name table
843 output_format_sql_insert_use_replace 0
844 output_format_tsv_crlf_end_of_line 0
845 output_format_values_escape_quote_with_quote 0
846 output_format_write_statistics 1
847 page_cache_inject_eviction 0
848 parallel_distributed_insert_select 0
849 parallel_replica_offset 0
850 parallel_replicas_allow_in_with_subquery 1
851 parallel_replicas_count 0
852 parallel_replicas_custom_key
853 parallel_replicas_custom_key_filter_type default
854 parallel_replicas_custom_key_range_lower 0
855 parallel_replicas_custom_key_range_upper 0
856 parallel_replicas_for_non_replicated_merge_tree 0
857 parallel_replicas_mark_segment_size 128
858 parallel_replicas_min_number_of_granules_to_enable 0
859 parallel_replicas_min_number_of_rows_per_replica 0
860 parallel_replicas_prefer_local_join 1
861 parallel_replicas_single_task_marks_count_multiplier 2
862 parallel_view_processing 0
863 parallelize_output_from_storages 1
870 parts_to_throw_insert 0
871 periodic_live_view_refresh 60
872 poll_interval 10
873 postgresql_connection_attempt_timeout 2
874 postgresql_connection_pool_auto_close_connection 0
875 postgresql_connection_pool_retries 2
876 postgresql_connection_pool_size 16
877 postgresql_connection_pool_wait_timeout 5000
878 precise_float_parsing 0
879 prefer_column_name_to_alias 0
880 prefer_external_sort_block_bytes 16744704
881 prefer_global_in_and_join 0
882 prefer_localhost_replica 1
883 prefer_warmed_unmerged_parts_seconds 0
885 preferred_max_column_in_block_size_bytes 0
886 preferred_optimize_projection_name
887 prefetch_buffer_size 1048576
888 print_pretty_type_names 1
889 priority 0
890 query_cache_compress_entries 1
891 query_cache_max_entries 0
896 query_cache_share_between_users 0
897 query_cache_squash_partial_results 1
898 query_cache_store_results_of_queries_with_nondeterministic_functions 0
899 query_cache_system_table_handling throw
900 query_cache_ttl 60
901 query_plan_aggregation_in_order 1
902 query_plan_convert_outer_join_to_inner_join 1
903 query_plan_enable_multithreading_after_window_functions 1
904 query_plan_enable_optimizations 1
905 query_plan_execute_functions_after_sorting 1
908 query_plan_lift_up_union 1
909 query_plan_max_optimizations_to_apply 10000
910 query_plan_merge_expressions 1
911 query_plan_merge_filters 0
912 query_plan_optimize_prewhere 1
913 query_plan_optimize_primary_key 1
914 query_plan_optimize_projection 1
915 query_plan_push_down_limit 1
928 read_backoff_min_interval_between_events_ms 1000
929 read_backoff_min_latency_ms 1000
930 read_from_filesystem_cache_if_exists_otherwise_bypass_cache 0
931 read_from_page_cache_if_exists_otherwise_bypass_cache 0
932 read_in_order_two_level_merge_threshold 100
933 read_in_order_use_buffering 1
934 read_overflow_mode throw
935 read_overflow_mode_leaf throw
936 read_priority 0
959 rewrite_count_distinct_if_with_count_distinct_implementation 0
960 s3_allow_parallel_part_upload 1
961 s3_check_objects_after_upload 0
962 s3_connect_timeout_ms 1000
963 s3_create_new_file_on_insert 0
964 s3_disable_checksum 0
965 s3_ignore_file_doesnt_exist 0
966 s3_list_object_keys_size 1000
967 s3_max_connections 1024
968 s3_max_get_burst 0
969 s3_max_get_rps 0
970 s3_max_inflight_parts_for_one_file 20
971 s3_max_part_number 10000
972 s3_max_put_burst 0
973 s3_max_put_rps 0
974 s3_max_redirects 10
975 s3_max_single_operation_copy_size 33554432
976 s3_max_single_part_upload_size 33554432
977 s3_max_single_read_retries 4
978 s3_max_unexpected_write_error_retries 4
987 s3_upload_part_size_multiply_factor 2
988 s3_upload_part_size_multiply_parts_count_threshold 500
989 s3_use_adaptive_timeouts 1
990 s3_validate_request_settings 1
991 s3queue_allow_experimental_sharded_mode 0
992 s3queue_default_zookeeper_path /clickhouse/s3queue/
993 s3queue_enable_logging_to_s3queue_log 0
994 schema_inference_cache_require_modification_time_for_url 1
1016 sleep_in_send_data_ms 0
1017 sleep_in_send_tables_status_ms 0
1018 sort_overflow_mode throw
1019 split_intersecting_parts_ranges_into_layers_final 1
1020 split_parts_ranges_into_intersecting_and_non_intersecting_final 1
1021 splitby_max_substrings_includes_remaining_string 0
1022 stop_refreshable_materialized_views_on_startup 0
1023 storage_file_read_method pread
1029 system_events_show_zero_values 0
1030 table_function_remote_max_addresses 1000
1031 tcp_keep_alive_timeout 290
1032 temporary_data_in_cache_reserve_space_wait_lock_timeout_milliseconds 600000
1033 temporary_files_codec LZ4
1034 temporary_live_view_timeout 1
1035 throw_if_deduplication_in_dependent_materialized_views_enabled_with_async_insert 1
1036 throw_if_no_data_to_insert 1
1037 throw_on_error_from_cache_on_write_operations 0
1038 throw_on_max_partitions_per_insert_block 1
1045 trace_profile_events 0
1046 transfer_overflow_mode throw
1047 transform_null_in 0
1048 traverse_shadow_remote_data_paths 0
1049 union_default_mode
1050 unknown_packet_in_send_data 0
1051 update_insert_deduplication_token_in_dependent_materialized_views 0
1052 use_cache_for_count_from_files 1
1053 use_client_time_zone 0
1054 use_compact_format_in_distributed_parts_names 1
1058 use_index_for_in_with_subqueries_max_values 0
1059 use_local_cache_for_remote_storage 1
1060 use_mysql_types_in_show_columns 0
1061 use_page_cache_for_disks_without_file_cache 0
1062 use_query_cache 0
1063 use_skip_indexes 1
1064 use_skip_indexes_if_final 0
1065 use_structure_from_insertion_table_in_table_functions 2
1066 use_uncompressed_cache 0
1067 use_variant_as_common_type 0
1068 use_with_fill_by_sorting_prefix 1
1069 validate_experimental_and_suspicious_types_inside_nested_types 1
1070 validate_polygons 1
1071 wait_changes_become_visible_after_commit_mode wait_unknown
1072 wait_for_async_insert 1
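The listing above is the new baseline TSV (one `name<TAB>default` pair per line) that the 02995 test diffs against the live `system.settings`. As a rough illustration only (not the project's actual test, which does this in SQL inside `clickhouse-local`), the comparison it performs can be sketched in plain Python; the file paths and helper names here are hypothetical:

```python
import csv

def load_baseline(path):
    """Read a settings baseline TSV (name<TAB>default) into a dict."""
    with open(path, newline="") as f:
        return {row[0]: row[1] for row in csv.reader(f, delimiter="\t") if row}

def diff_baselines(old, new):
    """Return (settings added in `new`, settings whose default changed).

    Mirrors the two checks in the test: new names must be registered in
    SettingsChangesHistory, and so must any default-value change.
    """
    added = sorted(set(new) - set(old))
    changed = sorted(
        (name, old[name], new[name])
        for name in set(new) & set(old)
        if new[name] != old[name]
    )
    return added, changed
```

For example, `diff_baselines(load_baseline("02995_baseline_24_7_2.tsv"), current)` would flag exactly the settings a contributor still needs to record in `SettingsChangesHistory.cpp`.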


@@ -7,12 +7,12 @@ CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
 . "$CUR_DIR"/../shell_config.sh

 # Note that this is a broad check. A per version check is done in the upgrade test
-# Baseline generated with 23.12.1
-# clickhouse local --query "select name, default from system.settings order by name format TSV" > 02995_baseline_23_12_1.tsv
+# Baseline generated with 24.7.2
+# clickhouse local --query "select name, default from system.settings order by name format TSV" > 02995_baseline_24_7_2.tsv
 $CLICKHOUSE_LOCAL --query "
   WITH old_settings AS
   (
-    SELECT * FROM file('${CUR_DIR}/02995_baseline_23_12_1.tsv', 'TSV', 'name String, default String')
+    SELECT * FROM file('${CUR_DIR}/02995_baseline_24_7_2.tsv', 'TSV', 'name String, default String')
   ),
   new_settings AS
   (
@@ -21,7 +21,7 @@ $CLICKHOUSE_LOCAL --query "
   )
   SELECT * FROM
   (
-    SELECT 'PLEASE ADD THE NEW SETTING TO SettingsChangesHistory.h: ' || name || ' WAS ADDED',
+    SELECT 'PLEASE ADD THE NEW SETTING TO SettingsChangesHistory.cpp: ' || name || ' WAS ADDED',
     FROM new_settings
     WHERE (name NOT IN (
       SELECT name
@@ -29,17 +29,17 @@ $CLICKHOUSE_LOCAL --query "
     )) AND (name NOT IN (
       SELECT arrayJoin(tupleElement(changes, 'name'))
       FROM system.settings_changes
-      WHERE splitByChar('.', version())[1] >= '24'
+      WHERE splitByChar('.', version)[1]::UInt64 >= 24 AND splitByChar('.', version)[2]::UInt64 > 7
     ))
     UNION ALL
     (
-      SELECT 'PLEASE ADD THE SETTING VALUE CHANGE TO SettingsChangesHistory.h: ' || name || ' WAS CHANGED FROM ' || old_settings.default || ' TO ' || new_settings.default,
+      SELECT 'PLEASE ADD THE SETTING VALUE CHANGE TO SettingsChangesHistory.cpp: ' || name || ' WAS CHANGED FROM ' || old_settings.default || ' TO ' || new_settings.default,
       FROM new_settings
       LEFT JOIN old_settings ON new_settings.name = old_settings.name
       WHERE (new_settings.default != old_settings.default) AND (name NOT IN (
         SELECT arrayJoin(tupleElement(changes, 'name'))
         FROM system.settings_changes
-        WHERE splitByChar('.', version())[1] >= '24'
+        WHERE splitByChar('.', version)[1]::UInt64 >= 24 AND splitByChar('.', version)[2]::UInt64 > 7
       ))
     )
   )
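The version gate in this hunk changes from a string comparison on the whole major number to a numeric check that only exempts settings whose change is recorded for releases after 24.7. A hedged Python equivalent of the new predicate (assuming a ClickHouse-style `major.minor.patch.build` version string) reads:

```python
def is_after_24_7(version: str) -> bool:
    """Mirror of the SQL predicate added in this diff:
    splitByChar('.', version)[1]::UInt64 >= 24 AND splitByChar('.', version)[2]::UInt64 > 7
    (SQL arrays are 1-indexed, so [1] is the major and [2] the minor component.)
    """
    parts = version.split(".")
    major, minor = int(parts[0]), int(parts[1])
    # Note: as written, the predicate requires minor > 7 even for majors
    # beyond 24; a tuple comparison such as (major, minor) > (24, 7) would
    # also cover e.g. 25.1, but this sketch follows the SQL literally.
    return major >= 24 and minor > 7
```

So `24.8.x` passes while `24.7.2.13` (the baseline release) does not, which is exactly the boundary this baseline update needs.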


@@ -1,4 +1,5 @@
--- Tags: long
+-- Tags: long, no-random-merge-tree-settings
+-- no-random-merge-tree-settings - times out in private
 DROP TABLE IF EXISTS build;
 DROP TABLE IF EXISTS skewed_probe;


@@ -1733,6 +1733,7 @@ groupBitmap
 groupBitmapAnd
 groupBitmapOr
 groupBitmapXor
+groupConcat
 groupUniqArray
 grouparray
 grouparrayinsertat
@@ -1749,6 +1750,7 @@ groupbitmapor
 groupbitmapxor
 groupbitor
 groupbitxor
+groupconcat
 groupuniqarray
 grpc
 grpcio


@@ -6,9 +6,11 @@ v24.5.4.49-stable 2024-07-01
 v24.5.3.5-stable 2024-06-13
 v24.5.2.34-stable 2024-06-13
 v24.5.1.1763-stable 2024-06-01
+v24.4.4.113-stable 2024-08-02
 v24.4.3.25-stable 2024-06-14
 v24.4.2.141-stable 2024-06-07
 v24.4.1.2088-stable 2024-05-01
+v24.3.6.48-lts 2024-08-02
 v24.3.5.46-lts 2024-07-03
 v24.3.4.147-lts 2024-06-13
 v24.3.3.102-lts 2024-05-01

v24.7.2.13-stable 2024-08-01