Commit 884af52e09: Merge remote-tracking branch 'upstream/master' into HEAD

CHANGELOG.md

### Table of Contents
**[ClickHouse release v24.6, 2024-07-01](#246)**<br/>
**[ClickHouse release v24.5, 2024-05-30](#245)**<br/>
**[ClickHouse release v24.4, 2024-04-30](#244)**<br/>
**[ClickHouse release v24.3 LTS, 2024-03-26](#243)**<br/>

# 2024 Changelog

### <a id="246"></a> ClickHouse release 24.6, 2024-07-01

#### Backward Incompatible Change
* Enable asynchronous load of databases and tables by default. See the `async_load_databases` setting in config.xml. While this change is fully compatible, it can introduce a difference in behavior. When `async_load_databases` is false, as in the previous versions, the server will not accept connections until all tables are loaded. When `async_load_databases` is true, as in the new version, the server can accept connections before all the tables are loaded. If a query is made to a table that is not yet loaded, it will wait for the table's loading, which can take considerable time. It can change the behavior of the server if it is part of a large distributed system under a load balancer. In the first case, the load balancer can get a connection refusal and quickly fail over to another server. In the second case, the load balancer can connect to a server that is still loading the tables, and the query will have a higher latency. Moreover, if many queries accumulate in the waiting state, it can lead to a "thundering herd" problem when they start processing simultaneously. This can make a difference only for highly loaded distributed backends. You can set the value of `async_load_databases` to false to avoid this problem. [#57695](https://github.com/ClickHouse/ClickHouse/pull/57695) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Setting `replace_long_file_name_to_hash` is enabled by default for `MergeTree` tables. [#64457](https://github.com/ClickHouse/ClickHouse/pull/64457) ([Anton Popov](https://github.com/CurtizJ)). This setting is fully compatible, and no actions are needed during upgrade. The new data format is supported by all versions starting from 23.9. After enabling this setting, you can no longer downgrade to a version 23.8 or older.
* Some invalid queries will fail earlier during parsing. Note: disabled the support for inline KQL expressions (the experimental Kusto language) when they are put into a `kql` table function without a string literal, e.g. `kql(garbage | trash)` instead of `kql('garbage | trash')` or `kql($$garbage | trash$$)`. This feature was introduced unintentionally and should not exist. [#61500](https://github.com/ClickHouse/ClickHouse/pull/61500) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Rework parallel processing in `Ordered` mode of storage `S3Queue`. This change is backward incompatible for `Ordered` mode if you used the settings `s3queue_processing_threads_num` or `s3queue_total_shards_num`. The setting `s3queue_total_shards_num` is deleted; previously it was allowed only under `s3queue_allow_experimental_sharded_mode`, which is now deprecated. A new setting, `s3queue_buckets`, is added. [#64349](https://github.com/ClickHouse/ClickHouse/pull/64349) ([Kseniia Sumarokova](https://github.com/kssenii)).
* New functions `snowflakeIDToDateTime`, `snowflakeIDToDateTime64`, `dateTimeToSnowflakeID`, and `dateTime64ToSnowflakeID` were added. Unlike the existing functions `snowflakeToDateTime`, `snowflakeToDateTime64`, `dateTimeToSnowflake`, and `dateTime64ToSnowflake`, the new functions are compatible with the function `generateSnowflakeID`, i.e. they accept the snowflake IDs generated by `generateSnowflakeID` and produce snowflake IDs of the same type as `generateSnowflakeID` (i.e. `UInt64`). Furthermore, the new functions default to the UNIX epoch (aka 1970-01-01), just like `generateSnowflakeID`. If necessary, a different epoch, e.g. Twitter's/X's epoch 2010-11-04 (1288834974657 msec since the UNIX epoch), can be passed. The old conversion functions are deprecated and will be removed after a transition period; to use them regardless, enable the setting `allow_deprecated_snowflake_conversion_functions`. [#64948](https://github.com/ClickHouse/ClickHouse/pull/64948) ([Robert Schulze](https://github.com/rschu1ze)). A short conversion sketch follows this list.
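
As an illustration of the new conversion functions in the last item above, here is a minimal sketch (not part of the original changelog) that converts an ID produced by `generateSnowflakeID` to a timestamp and back; `snowflakeIDToDateTime` returns a `DateTime`, so the reverse conversion only keeps second precision:

```sql
-- generateSnowflakeID returns a UInt64 snowflake ID;
-- snowflakeIDToDateTime interprets it relative to the UNIX epoch by default.
SELECT
    generateSnowflakeID()     AS id,
    snowflakeIDToDateTime(id) AS ts,
    dateTimeToSnowflakeID(ts) AS id_from_ts;
```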

#### New Feature
* Allow to store named collections in ClickHouse Keeper. [#64574](https://github.com/ClickHouse/ClickHouse/pull/64574) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Support empty tuples. [#55061](https://github.com/ClickHouse/ClickHouse/pull/55061) ([Amos Bird](https://github.com/amosbird)).
* Add Hilbert Curve encode and decode functions. [#60156](https://github.com/ClickHouse/ClickHouse/pull/60156) ([Artem Mustafin](https://github.com/Artemmm91)).
* Add support for index analysis over `hilbertEncode`. [#64662](https://github.com/ClickHouse/ClickHouse/pull/64662) ([Artem Mustafin](https://github.com/Artemmm91)).
* Added support for reading `LINESTRING` geometry in the WKT format using function `readWKTLineString`. [#62519](https://github.com/ClickHouse/ClickHouse/pull/62519) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Allow to attach parts from a different disk. [#63087](https://github.com/ClickHouse/ClickHouse/pull/63087) ([Unalian](https://github.com/Unalian)).
* Added a new SQL function `generateSnowflakeID` for generating Twitter-style Snowflake IDs. [#63577](https://github.com/ClickHouse/ClickHouse/pull/63577) ([Danila Puzov](https://github.com/kazalika)).
* Added `merge_workload` and `mutation_workload` settings to regulate how resources are utilized and shared between merges, mutations and other workloads. [#64061](https://github.com/ClickHouse/ClickHouse/pull/64061) ([Sergei Trifonov](https://github.com/serxa)).
* Add support for comparing `IPv4` and `IPv6` types using the `=` operator. [#64292](https://github.com/ClickHouse/ClickHouse/pull/64292) ([Francisco J. Jurado Moreno](https://github.com/Beetelbrox)).
* Support decimal arguments in binary math functions (`pow`, `atan2`, `max2`, `min2`, `hypot`). [#64582](https://github.com/ClickHouse/ClickHouse/pull/64582) ([Mikhail Gorshkov](https://github.com/mgorshkov)).
* Added SQL functions `parseReadableSize` (along with `OrNull` and `OrZero` variants). [#64742](https://github.com/ClickHouse/ClickHouse/pull/64742) ([Francisco J. Jurado Moreno](https://github.com/Beetelbrox)). A usage sketch follows this list.
* Add server settings `max_table_num_to_throw` and `max_database_num_to_throw` to limit the number of databases or tables on `CREATE` queries. [#64781](https://github.com/ClickHouse/ClickHouse/pull/64781) ([Xu Jia](https://github.com/XuJia0210)).
* Add the `_time` virtual column to file-like storages (s3/file/hdfs/url/azureBlobStorage). [#64947](https://github.com/ClickHouse/ClickHouse/pull/64947) ([Ilya Golshtein](https://github.com/ilejn)).
* Introduced new functions `base64URLEncode`, `base64URLDecode` and `tryBase64URLDecode`. [#64991](https://github.com/ClickHouse/ClickHouse/pull/64991) ([Mikhail Gorshkov](https://github.com/mgorshkov)).
* Add a new function `editDistanceUTF8`, which calculates the [edit distance](https://en.wikipedia.org/wiki/Edit_distance) between two UTF8 strings. [#65269](https://github.com/ClickHouse/ClickHouse/pull/65269) ([LiuNeng](https://github.com/liuneng1994)).
* Add the `http_response_headers` setting to support custom response headers in custom HTTP handlers. [#63562](https://github.com/ClickHouse/ClickHouse/pull/63562) ([Grigorii](https://github.com/GSokol)).
* Added a new table function `loop` to support returning query results in an infinite loop. [#63452](https://github.com/ClickHouse/ClickHouse/pull/63452) ([Sariel](https://github.com/sarielwxm)). This is useful for testing.
* Introduced two additional columns in the `system.query_log`: `used_privileges` and `missing_privileges`. `used_privileges` is populated with the privileges that were checked during query execution, and `missing_privileges` contains required privileges that are missing. [#64597](https://github.com/ClickHouse/ClickHouse/pull/64597) ([Alexey Katsman](https://github.com/alexkats)).
* Added a setting `output_format_pretty_display_footer_column_names` which, when enabled, displays column names at the end of the table for long tables (50 rows by default), with the threshold value for the minimum number of rows controlled by `output_format_pretty_display_footer_column_names_min_rows`. [#65144](https://github.com/ClickHouse/ClickHouse/pull/65144) ([Shaun Struwig](https://github.com/Blargian)).
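
A usage sketch for the `parseReadableSize` family mentioned in this list; the string inputs are made up, and the commented results assume the function returns the size as a byte count, as the name suggests:

```sql
SELECT
    parseReadableSize('1 KiB')      AS bytes,     -- expected: 1024
    parseReadableSizeOrZero('oops') AS bad_zero,  -- malformed input -> 0
    parseReadableSizeOrNull('oops') AS bad_null;  -- malformed input -> NULL
```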

#### Experimental Feature
* Introduce statistics of type "number of distinct values". [#59357](https://github.com/ClickHouse/ClickHouse/pull/59357) ([Han Fei](https://github.com/hanfei1991)).
* Support statistics with ReplicatedMergeTree. [#64934](https://github.com/ClickHouse/ClickHouse/pull/64934) ([Han Fei](https://github.com/hanfei1991)).
* If a "replica group" is configured for a `Replicated` database, automatically create a cluster that includes replicas from all groups. [#64312](https://github.com/ClickHouse/ClickHouse/pull/64312) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Add settings `parallel_replicas_custom_key_range_lower` and `parallel_replicas_custom_key_range_upper` to control how parallel replicas with dynamic shards parallelize queries when using a range filter. [#64604](https://github.com/ClickHouse/ClickHouse/pull/64604) ([josh-hildred](https://github.com/josh-hildred)). An illustrative query follows this list.
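
An illustrative query for the range settings in the last item above. This is a sketch only: the `events` table and `user_id` key are hypothetical, and the other parallel-replica settings shown are the pre-existing ones this feature builds on:

```sql
SELECT count()
FROM events
SETTINGS
    max_parallel_replicas = 3,
    parallel_replicas_custom_key = 'user_id',
    parallel_replicas_custom_key_filter_type = 'range',
    parallel_replicas_custom_key_range_lower = 0,
    parallel_replicas_custom_key_range_upper = 1000000;
```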

#### Performance Improvement
* Add the ability to reshuffle rows during insert to optimize for size without violating the order set by `PRIMARY KEY`. It's controlled by the setting `optimize_row_order` (off by default). [#63578](https://github.com/ClickHouse/ClickHouse/pull/63578) ([Igor Markelov](https://github.com/ElderlyPassionFruit)).
* Add a native parquet reader, which can read parquet binary to ClickHouse Columns directly. It's controlled by the setting `input_format_parquet_use_native_reader` (disabled by default). [#60361](https://github.com/ClickHouse/ClickHouse/pull/60361) ([ZhiHong Zhang](https://github.com/copperybean)). A usage sketch follows this list.
* Support partial trivial count optimization when the query filter is able to select exact ranges from merge tree tables. [#60463](https://github.com/ClickHouse/ClickHouse/pull/60463) ([Amos Bird](https://github.com/amosbird)).
* Reduce max memory usage of multithreaded `INSERT`s by collecting chunks of multiple threads in a single transform. [#61047](https://github.com/ClickHouse/ClickHouse/pull/61047) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
* Reduce the memory usage when using Azure object storage by using fixed memory allocation, avoiding the allocation of an extra buffer. [#63160](https://github.com/ClickHouse/ClickHouse/pull/63160) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
* Reduce the number of virtual function calls in `ColumnNullable::size`. [#60556](https://github.com/ClickHouse/ClickHouse/pull/60556) ([HappenLee](https://github.com/HappenLee)).
* Speed up `splitByRegexp` when the regular expression argument is a single character. [#62696](https://github.com/ClickHouse/ClickHouse/pull/62696) ([Robert Schulze](https://github.com/rschu1ze)).
* Speed up aggregation by 8-bit and 16-bit keys by keeping track of the min and max keys used. This reduces the number of cells that need to be verified. [#62746](https://github.com/ClickHouse/ClickHouse/pull/62746) ([Jiebin Sun](https://github.com/jiebinn)).
* Optimize operator `IN` when the left hand side is `LowCardinality` and the right is a set of constants. [#64060](https://github.com/ClickHouse/ClickHouse/pull/64060) ([Zhiguo Zhou](https://github.com/ZhiguoZh)).
* Use a thread pool to initialize and destroy hash tables inside `ConcurrentHashJoin`. [#64241](https://github.com/ClickHouse/ClickHouse/pull/64241) ([Nikita Taranov](https://github.com/nickitat)).
* Optimized vertical merges in tables with sparse columns. [#64311](https://github.com/ClickHouse/ClickHouse/pull/64311) ([Anton Popov](https://github.com/CurtizJ)).
* Enabled prefetches of data from remote filesystem during vertical merges. It improves the latency of vertical merges in tables with data stored on a remote filesystem. [#64314](https://github.com/ClickHouse/ClickHouse/pull/64314) ([Anton Popov](https://github.com/CurtizJ)).
* Reduce redundant calls to `isDefault` of `ColumnSparse::filter` to improve performance. [#64426](https://github.com/ClickHouse/ClickHouse/pull/64426) ([Jiebin Sun](https://github.com/jiebinn)).
* Speed up the `find_super_nodes` and `find_big_family` keeper-client commands by making multiple asynchronous getChildren requests. [#64628](https://github.com/ClickHouse/ClickHouse/pull/64628) ([Alexander Gololobov](https://github.com/davenger)).
* Improve functions `least`/`greatest` for nullable numeric type arguments. [#64668](https://github.com/ClickHouse/ClickHouse/pull/64668) ([KevinyhZou](https://github.com/KevinyhZou)).
* Allow merging two consequent filtering steps of a query plan. This improves filter-push-down optimization if the filter condition can be pushed down from the parent step. [#64760](https://github.com/ClickHouse/ClickHouse/pull/64760) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Remove a bad optimization in the vertical final implementation and re-enable the vertical final algorithm by default. [#64783](https://github.com/ClickHouse/ClickHouse/pull/64783) ([Duc Canh Le](https://github.com/canhld94)).
* Remove ALIAS nodes from the filter expression. This slightly improves performance for queries with `PREWHERE` (with the new analyzer). [#64793](https://github.com/ClickHouse/ClickHouse/pull/64793) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Re-enable OpenSSL session caching. [#65111](https://github.com/ClickHouse/ClickHouse/pull/65111) ([Robert Schulze](https://github.com/rschu1ze)).
* Added settings to disable materialization of skip indexes and statistics on inserts (`materialize_skip_indexes_on_insert` and `materialize_statistics_on_insert`). [#64391](https://github.com/ClickHouse/ClickHouse/pull/64391) ([Anton Popov](https://github.com/CurtizJ)).
* Use the allocated memory size to calculate the row group size and reduce the peak memory of the parquet writer in the single-threaded mode. [#64424](https://github.com/ClickHouse/ClickHouse/pull/64424) ([LiuNeng](https://github.com/liuneng1994)).
* Improve the iterator of sparse column to reduce calls of `size`. [#64497](https://github.com/ClickHouse/ClickHouse/pull/64497) ([Jiebin Sun](https://github.com/jiebinn)).
* Update condition to use server-side copy for backups to Azure blob storage. [#64518](https://github.com/ClickHouse/ClickHouse/pull/64518) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
* Optimized memory usage of vertical merges for tables with a high number of skip indexes. [#64580](https://github.com/ClickHouse/ClickHouse/pull/64580) ([Anton Popov](https://github.com/CurtizJ)).
* Fix performance regression in cross join introduced in [#60459](https://github.com/ClickHouse/ClickHouse/issues/60459) (24.5). [#65243](https://github.com/ClickHouse/ClickHouse/pull/65243) ([Nikita Taranov](https://github.com/nickitat)).
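
A minimal sketch of enabling the native Parquet reader mentioned earlier in this list; `data.parquet` is a placeholder path:

```sql
-- Read a local Parquet file through the file() table function
-- with the native reader enabled for this query.
SELECT count()
FROM file('data.parquet', 'Parquet')
SETTINGS input_format_parquet_use_native_reader = 1;
```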

#### Improvement
* `SHOW CREATE TABLE` executed on top of system tables will now show a super handy comment, unique for each table, which explains why this table is needed. [#63788](https://github.com/ClickHouse/ClickHouse/pull/63788) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* The second argument (scale) of functions `round()`, `roundBankers()`, `floor()`, `ceil()` and `trunc()` can now be non-const. [#64798](https://github.com/ClickHouse/ClickHouse/pull/64798) ([Mikhail Gorshkov](https://github.com/mgorshkov)). An example follows this list.
* Hot reload storage policy for `Distributed` tables when adding a new disk. [#58285](https://github.com/ClickHouse/ClickHouse/pull/58285) ([Duc Canh Le](https://github.com/canhld94)).
* Avoid possible deadlock during MergeTree index analysis when scheduling threads in a saturated service. [#59427](https://github.com/ClickHouse/ClickHouse/pull/59427) ([Sean Haynes](https://github.com/seandhaynes)).
* Several minor corner case fixes to S3 proxy support & tunneling. [#63427](https://github.com/ClickHouse/ClickHouse/pull/63427) ([Arthur Passos](https://github.com/arthurpassos)).
* Improve io_uring resubmit visibility. Rename profile event `IOUringSQEsResubmits` -> `IOUringSQEsResubmitsAsync` and add a new one `IOUringSQEsResubmitsSync`. [#63699](https://github.com/ClickHouse/ClickHouse/pull/63699) ([Tomer Shafir](https://github.com/tomershafir)).
* Added a new setting, `metadata_keep_free_space_bytes`, to keep free space on the metadata storage disk. [#64128](https://github.com/ClickHouse/ClickHouse/pull/64128) ([MikhailBurdukov](https://github.com/MikhailBurdukov)).
* Add metrics to track the number of directories created and removed by the `plain_rewritable` metadata storage, and the number of entries in the local-to-remote in-memory map. [#64175](https://github.com/ClickHouse/ClickHouse/pull/64175) ([Julia Kartseva](https://github.com/jkartseva)).
* The query cache now considers identical queries with different settings as different. This increases robustness in cases where different settings (e.g. `limit` or `additional_table_filters`) would affect the query result. [#64205](https://github.com/ClickHouse/ClickHouse/pull/64205) ([Robert Schulze](https://github.com/rschu1ze)).
* Better exception message in Delete Table with Projection, so that users can understand the error and the steps that should be taken. [#64212](https://github.com/ClickHouse/ClickHouse/pull/64212) ([jsc0218](https://github.com/jsc0218)).
* Support the non-standard error code `QpsLimitExceeded` in object storage as a retryable error. [#64225](https://github.com/ClickHouse/ClickHouse/pull/64225) ([Sema Checherinda](https://github.com/CheSema)).
* Forbid converting a MergeTree table to replicated if the zookeeper path for this table already exists. [#64244](https://github.com/ClickHouse/ClickHouse/pull/64244) ([Kirill](https://github.com/kirillgarbar)).
* Added a new setting `input_format_parquet_prefer_block_bytes` to control the average output block bytes, and modified the default value of `input_format_parquet_max_block_size` to 65409. [#64427](https://github.com/ClickHouse/ClickHouse/pull/64427) ([LiuNeng](https://github.com/liuneng1994)).
* Allow proxy to be bypassed for hosts specified in the `no_proxy` env variable and ClickHouse proxy configuration. [#63314](https://github.com/ClickHouse/ClickHouse/pull/63314) ([Arthur Passos](https://github.com/arthurpassos)).
* Always start Keeper with a sufficient amount of threads in the global thread pool. [#64444](https://github.com/ClickHouse/ClickHouse/pull/64444) ([Duc Canh Le](https://github.com/canhld94)).
* Settings from the user's config don't affect merges and mutations for `MergeTree` on top of object storage. [#64456](https://github.com/ClickHouse/ClickHouse/pull/64456) ([alesapin](https://github.com/alesapin)).
* Support the non-standard error code `TotalQpsLimitExceeded` in object storage as a retryable error. [#64520](https://github.com/ClickHouse/ClickHouse/pull/64520) ([Sema Checherinda](https://github.com/CheSema)).
* Updated the Advanced Dashboard for both open-source and ClickHouse Cloud versions to include a chart for 'Maximum concurrent network connections'. [#64610](https://github.com/ClickHouse/ClickHouse/pull/64610) ([Thom O'Connor](https://github.com/thomoco)).
* Improve progress report on `zeros_mt` and `generateRandom`. [#64804](https://github.com/ClickHouse/ClickHouse/pull/64804) ([Raúl Marín](https://github.com/Algunenano)).
* Add an asynchronous metric `jemalloc.profile.active` to show whether sampling is currently active. This is an activation mechanism in addition to `prof.active`; both must be active for the calling thread to sample. [#64842](https://github.com/ClickHouse/ClickHouse/pull/64842) ([Unalian](https://github.com/Unalian)).
* Remove the mark of `allow_experimental_join_condition` as important. This mark may have prevented distributed queries in a mixed-versions cluster from being executed successfully. [#65008](https://github.com/ClickHouse/ClickHouse/pull/65008) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Added server asynchronous metrics `DiskGetObjectThrottler*` and `DiskPutObjectThrottler*` reflecting the requests-per-second rate limit defined with the `s3_max_get_rps` and `s3_max_put_rps` disk settings and the currently available number of requests that could be sent without hitting the throttling limit on the disk. Metrics are defined for every disk that has a configured limit. [#65050](https://github.com/ClickHouse/ClickHouse/pull/65050) ([Sergei Trifonov](https://github.com/serxa)).
* Initialize global trace collector for `Poco::ThreadPool` (needed for Keeper, etc). [#65239](https://github.com/ClickHouse/ClickHouse/pull/65239) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Add a validation when creating a user with `bcrypt_hash`. [#65242](https://github.com/ClickHouse/ClickHouse/pull/65242) ([Raúl Marín](https://github.com/Algunenano)).
* Add profile events for the number of rows read during/after `PREWHERE`. [#64198](https://github.com/ClickHouse/ClickHouse/pull/64198) ([Nikita Taranov](https://github.com/nickitat)).
* Print the query in `EXPLAIN PLAN` with parallel replicas. [#64298](https://github.com/ClickHouse/ClickHouse/pull/64298) ([vdimir](https://github.com/vdimir)).
* Rename `allow_deprecated_functions` to `allow_deprecated_error_prone_window_functions`. [#64358](https://github.com/ClickHouse/ClickHouse/pull/64358) ([Raúl Marín](https://github.com/Algunenano)).
* Respect the `max_read_buffer_size` setting for file descriptors as well in the `file` table function. [#64532](https://github.com/ClickHouse/ClickHouse/pull/64532) ([Azat Khuzhin](https://github.com/azat)).
* Disable transactions for unsupported storages even for materialized views. [#64918](https://github.com/ClickHouse/ClickHouse/pull/64918) ([alesapin](https://github.com/alesapin)).
* Forbid the `QUALIFY` clause in the old analyzer. The old analyzer ignored `QUALIFY`, so it could lead to unexpected data removal in mutations. [#65356](https://github.com/ClickHouse/ClickHouse/pull/65356) ([Dmitry Novik](https://github.com/novikd)).
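
An example (not from the original changelog) of the non-constant scale argument for `round` mentioned in this list, using the built-in `numbers` table function:

```sql
SELECT
    number / 7           AS x,
    round(x, number % 3) AS rounded  -- scale is 0, 1 or 2 depending on the row
FROM numbers(5);
```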

#### Bug Fix (user-visible misbehavior in an official stable release)
* A bug in the Apache ORC library was fixed: fixed ORC statistics calculation, when writing, for unsigned types on all platforms and Int8 on ARM. [#64563](https://github.com/ClickHouse/ClickHouse/pull/64563) ([Michael Kolupaev](https://github.com/al13n321)).
* Returned back the behaviour of how ClickHouse works and interprets Tuples in CSV format. This change effectively reverts https://github.com/ClickHouse/ClickHouse/pull/60994 and makes it available only under a few settings: `output_format_csv_serialize_tuple_into_separate_columns`, `input_format_csv_deserialize_separate_columns_into_tuple` and `input_format_csv_try_infer_strings_from_quoted_tuples`. [#65170](https://github.com/ClickHouse/ClickHouse/pull/65170) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix a permission error where a user in a specific situation can escalate their privileges on the default database without the necessary grants. [#64769](https://github.com/ClickHouse/ClickHouse/pull/64769) ([pufit](https://github.com/pufit)).
* Fix crash with `UniqInjectiveFunctionsEliminationPass` and `uniqCombined`. [#65188](https://github.com/ClickHouse/ClickHouse/pull/65188) ([Raúl Marín](https://github.com/Algunenano)).
* Fix a bug in ClickHouse Keeper that causes digest mismatch during closing session. [#65198](https://github.com/ClickHouse/ClickHouse/pull/65198) ([Aleksei Filatov](https://github.com/aalexfvk)).
* Use correct memory alignment for the Distinct combinator. Previously, a crash could happen because of invalid memory allocation when the combinator was used. [#65379](https://github.com/ClickHouse/ClickHouse/pull/65379) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix crash with `DISTINCT` and window functions. [#64767](https://github.com/ClickHouse/ClickHouse/pull/64767) ([Igor Nikonov](https://github.com/devcrafter)).
* Fixed 'set' skip index not working with IN and indexHint(). [#62083](https://github.com/ClickHouse/ClickHouse/pull/62083) ([Michael Kolupaev](https://github.com/al13n321)).
* Fixed `optimize_read_in_order` behaviour for ORDER BY ... NULLS FIRST / LAST on tables with nullable keys. [#64483](https://github.com/ClickHouse/ClickHouse/pull/64483) ([Eduard Karacharov](https://github.com/korowa)).
* Fix the `Expression nodes list expected 1 projection names` and `Unknown expression or identifier` errors for queries with aliases to `GLOBAL IN`. [#64517](https://github.com/ClickHouse/ClickHouse/pull/64517) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix an error `Cannot find column` in distributed queries with a constant CTE in the `GROUP BY` key. [#64519](https://github.com/ClickHouse/ClickHouse/pull/64519) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix the crash loop when restoring from backup is blocked by creating an MV with a definer that hasn't been restored yet. [#64595](https://github.com/ClickHouse/ClickHouse/pull/64595) ([pufit](https://github.com/pufit)).
* Fix the output of function `formatDateTimeInJodaSyntax` when a formatter generates an uneven number of characters and the last character is `0`. For example, `SELECT formatDateTimeInJodaSyntax(toDate('2012-05-29'), 'D')` now correctly returns `150` instead of previously `15`. [#64614](https://github.com/ClickHouse/ClickHouse/pull/64614) ([LiuNeng](https://github.com/liuneng1994)).
* Do not rewrite aggregation if the `-If` combinator is already used. [#64638](https://github.com/ClickHouse/ClickHouse/pull/64638) ([Dmitry Novik](https://github.com/novikd)).
* Ensure that the type of the constant (the second parameter of the IN operator) is always visible during the IN operator's type conversion process. Otherwise, losing type information may cause some conversions to fail, such as the conversion from DateTime to Date. This fixes [#64487](https://github.com/ClickHouse/ClickHouse/issues/64487). [#65315](https://github.com/ClickHouse/ClickHouse/pull/65315) ([pn](https://github.com/chloro-pn)).

#### Build/Testing/Packaging Improvement
* Add support for LLVM XRay. [#64592](https://github.com/ClickHouse/ClickHouse/pull/64592) [#64837](https://github.com/ClickHouse/ClickHouse/pull/64837) ([Tomer Shafir](https://github.com/tomershafir)).
* Unite s3/hdfs/azure storage implementations into a single class working with `IObjectStorage`. Same for *Cluster, data lakes and Queue storages. [#59767](https://github.com/ClickHouse/ClickHouse/pull/59767) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Refactor data part writer to remove dependencies on MergeTreeData and DataPart. [#63620](https://github.com/ClickHouse/ClickHouse/pull/63620) ([Alexander Gololobov](https://github.com/davenger)).
* Refactor `KeyCondition` and key analysis to improve PartitionPruner and trivial count optimization. This is separated from [#60463](https://github.com/ClickHouse/ClickHouse/issues/60463). [#61459](https://github.com/ClickHouse/ClickHouse/pull/61459) ([Amos Bird](https://github.com/amosbird)).
* Introduce assertions to verify all functions are called with columns of the right size. [#63723](https://github.com/ClickHouse/ClickHouse/pull/63723) ([Raúl Marín](https://github.com/Algunenano)).
* Make the `network` service required when using the `rc` init script to start the ClickHouse server daemon. [#60650](https://github.com/ClickHouse/ClickHouse/pull/60650) ([Chun-Sheng, Li](https://github.com/peter279k)).
* Reduce the size of some slow tests. [#64387](https://github.com/ClickHouse/ClickHouse/pull/64387) [#64452](https://github.com/ClickHouse/ClickHouse/pull/64452) ([Raúl Marín](https://github.com/Algunenano)).
* Replay ZooKeeper logs using keeper-bench. [#62481](https://github.com/ClickHouse/ClickHouse/pull/62481) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix typo in test_hdfsCluster_unset_skip_unavailable_shards. The test writes data to unskip_unavailable_shards, but uses skip_unavailable_shards from the previous test. [#64243](https://github.com/ClickHouse/ClickHouse/pull/64243) ([Mikhail Artemenko](https://github.com/Michicosun)).
* Fix test_lost_part_other_replica. [#64512](https://github.com/ClickHouse/ClickHouse/pull/64512) ([Raúl Marín](https://github.com/Algunenano)).
* Add tests for experimental unequal joins and randomize new settings in clickhouse-test. [#64535](https://github.com/ClickHouse/ClickHouse/pull/64535) ([Nikita Fomichev](https://github.com/fm4v)).
* Upgrade tests: update config and work with release candidates. [#64542](https://github.com/ClickHouse/ClickHouse/pull/64542) ([Raúl Marín](https://github.com/Algunenano)).
* Speed up 02995_forget_partition. [#64761](https://github.com/ClickHouse/ClickHouse/pull/64761) ([Raúl Marín](https://github.com/Algunenano)).
* Fix 02790_async_queries_in_query_log. [#64764](https://github.com/ClickHouse/ClickHouse/pull/64764) ([Raúl Marín](https://github.com/Algunenano)).
* Get rid of custom code in `tests/ci/download_release_packages.py` and `tests/ci/get_previous_release_tag.py` to avoid issues after https://github.com/ClickHouse/ClickHouse/pull/64759 is merged. [#64848](https://github.com/ClickHouse/ClickHouse/pull/64848) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Decrease the `unit-test` image a few times. [#65102](https://github.com/ClickHouse/ClickHouse/pull/65102) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
### <a id="245"></a> ClickHouse release 24.5, 2024-05-30
|
### <a id="245"></a> ClickHouse release 24.5, 2024-05-30
|
||||||
|
|
||||||
|
@@ -56,6 +56,15 @@ SELECT * FROM test_table;

- `_size` — Size of the file in bytes. Type: `Nullable(UInt64)`. If the size is unknown, the value is `NULL`.
- `_time` — Last modified time of the file. Type: `Nullable(DateTime)`. If the time is unknown, the value is `NULL`.

## Authentication

Currently there are 3 ways to authenticate:

- `Managed Identity` - Can be used by providing an `endpoint`, `connection_string` or `storage_account_url`.
- `SAS Token` - Can be used by providing an `endpoint`, `connection_string` or `storage_account_url`. It is identified by the presence of '?' in the URL.
- `Workload Identity` - Can be used by providing an `endpoint` or `storage_account_url`. If the `use_workload_identity` parameter is set in the config, [workload identity](https://github.com/Azure/azure-sdk-for-cpp/tree/main/sdk/identity/azure-identity#authenticate-azure-hosted-applications) is used for authentication.
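For orientation only, a minimal sketch of the connection-string form using the `azureBlobStorage` table function referenced below; the account, key, container, and path are placeholders, and the exact parameter list should be checked against the linked table-function documentation:

```sql
-- Hypothetical credentials and paths, shown only to illustrate the connection_string form.
SELECT count()
FROM azureBlobStorage(
    'DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=<key>;EndpointSuffix=core.windows.net',
    'my-container',
    'data/*.csv');
```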

## See also

[Azure Blob Storage Table Function](/docs/en/sql-reference/table-functions/azureBlobStorage)
Binary file not shown.
394 docs/en/getting-started/example-datasets/stackoverflow.md Normal file
@@ -0,0 +1,394 @@
---
slug: /en/getting-started/example-datasets/stackoverflow
sidebar_label: Stack Overflow
sidebar_position: 1
description: Analyzing Stack Overflow data with ClickHouse
---

# Analyzing Stack Overflow data with ClickHouse

This dataset contains every `Post`, `User`, `Vote`, `Comment`, `Badge`, `PostHistory`, and `PostLink` that has occurred on Stack Overflow.

Users can either download pre-prepared Parquet versions of the data, containing every post up to April 2024, or download the latest data in XML format and load this. Stack Overflow provides updates to this data periodically - historically every 3 months.

The following diagram shows the schema for the available tables assuming Parquet format.

![Stack Overflow schema](./images/stackoverflow.png)

A description of the schema of this data can be found [here](https://meta.stackexchange.com/questions/2677/database-schema-documentation-for-the-public-data-dump-and-sede).

## Pre-prepared data

We provide a copy of this data in Parquet format, up to date as of April 2024. While small for ClickHouse with respect to the number of rows (60 million posts), this dataset contains significant volumes of text and large String columns.

```sql
CREATE DATABASE stackoverflow
```

The following timings are for a 96 GiB, 24 vCPU ClickHouse Cloud cluster located in `eu-west-2`. The dataset is located in `eu-west-3`.

### Posts

```sql
CREATE TABLE stackoverflow.posts
(
    `Id` Int32 CODEC(Delta(4), ZSTD(1)),
    `PostTypeId` Enum8('Question' = 1, 'Answer' = 2, 'Wiki' = 3, 'TagWikiExcerpt' = 4, 'TagWiki' = 5, 'ModeratorNomination' = 6, 'WikiPlaceholder' = 7, 'PrivilegeWiki' = 8),
    `AcceptedAnswerId` UInt32,
    `CreationDate` DateTime64(3, 'UTC'),
    `Score` Int32,
    `ViewCount` UInt32 CODEC(Delta(4), ZSTD(1)),
    `Body` String,
    `OwnerUserId` Int32,
    `OwnerDisplayName` String,
    `LastEditorUserId` Int32,
    `LastEditorDisplayName` String,
    `LastEditDate` DateTime64(3, 'UTC') CODEC(Delta(8), ZSTD(1)),
    `LastActivityDate` DateTime64(3, 'UTC'),
    `Title` String,
    `Tags` String,
    `AnswerCount` UInt16 CODEC(Delta(2), ZSTD(1)),
    `CommentCount` UInt8,
    `FavoriteCount` UInt8,
    `ContentLicense` LowCardinality(String),
    `ParentId` String,
    `CommunityOwnedDate` DateTime64(3, 'UTC'),
    `ClosedDate` DateTime64(3, 'UTC')
)
ENGINE = MergeTree
PARTITION BY toYear(CreationDate)
ORDER BY (PostTypeId, toDate(CreationDate), CreationDate)

INSERT INTO stackoverflow.posts SELECT * FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/posts/*.parquet')

0 rows in set. Elapsed: 265.466 sec. Processed 59.82 million rows, 38.07 GB (225.34 thousand rows/s., 143.42 MB/s.)
```

Posts are also available by year, e.g. [https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/posts/2020.parquet](https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/posts/2020.parquet)
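If only a subset is needed, a single year can be loaded instead of the full glob; a sketch using the per-year file mentioned above (same target table and schema):

```sql
-- Loads only 2020 posts; the URL is the per-year Parquet file referenced above.
INSERT INTO stackoverflow.posts
SELECT * FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/posts/2020.parquet')
```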
### Votes

```sql
CREATE TABLE stackoverflow.votes
(
    `Id` UInt32,
    `PostId` Int32,
    `VoteTypeId` UInt8,
    `CreationDate` DateTime64(3, 'UTC'),
    `UserId` Int32,
    `BountyAmount` UInt8
)
ENGINE = MergeTree
ORDER BY (VoteTypeId, CreationDate, PostId, UserId)

INSERT INTO stackoverflow.votes SELECT * FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/votes/*.parquet')

0 rows in set. Elapsed: 21.605 sec. Processed 238.98 million rows, 2.13 GB (11.06 million rows/s., 98.46 MB/s.)
```

Votes are also available by year, e.g. [https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/votes/2020.parquet](https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/votes/2020.parquet)
### Comments

```sql
CREATE TABLE stackoverflow.comments
(
    `Id` UInt32,
    `PostId` UInt32,
    `Score` UInt16,
    `Text` String,
    `CreationDate` DateTime64(3, 'UTC'),
    `UserId` Int32,
    `UserDisplayName` LowCardinality(String)
)
ENGINE = MergeTree
ORDER BY CreationDate

INSERT INTO stackoverflow.comments SELECT * FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/comments/*.parquet')

0 rows in set. Elapsed: 56.593 sec. Processed 90.38 million rows, 11.14 GB (1.60 million rows/s., 196.78 MB/s.)
```

Comments are also available by year, e.g. [https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/comments/2020.parquet](https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/comments/2020.parquet)
### Users

```sql
CREATE TABLE stackoverflow.users
(
    `Id` Int32,
    `Reputation` LowCardinality(String),
    `CreationDate` DateTime64(3, 'UTC') CODEC(Delta(8), ZSTD(1)),
    `DisplayName` String,
    `LastAccessDate` DateTime64(3, 'UTC'),
    `AboutMe` String,
    `Views` UInt32,
    `UpVotes` UInt32,
    `DownVotes` UInt32,
    `WebsiteUrl` String,
    `Location` LowCardinality(String),
    `AccountId` Int32
)
ENGINE = MergeTree
ORDER BY (Id, CreationDate)

INSERT INTO stackoverflow.users SELECT * FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/users.parquet')

0 rows in set. Elapsed: 10.988 sec. Processed 22.48 million rows, 1.36 GB (2.05 million rows/s., 124.10 MB/s.)
```
### Badges

```sql
CREATE TABLE stackoverflow.badges
(
    `Id` UInt32,
    `UserId` Int32,
    `Name` LowCardinality(String),
    `Date` DateTime64(3, 'UTC'),
    `Class` Enum8('Gold' = 1, 'Silver' = 2, 'Bronze' = 3),
    `TagBased` Bool
)
ENGINE = MergeTree
ORDER BY UserId

INSERT INTO stackoverflow.badges SELECT * FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/badges.parquet')

0 rows in set. Elapsed: 6.635 sec. Processed 51.29 million rows, 797.05 MB (7.73 million rows/s., 120.13 MB/s.)
```
### `PostLinks`

```sql
CREATE TABLE stackoverflow.postlinks
(
    `Id` UInt64,
    `CreationDate` DateTime64(3, 'UTC'),
    `PostId` Int32,
    `RelatedPostId` Int32,
    `LinkTypeId` Enum8('Linked' = 1, 'Duplicate' = 3)
)
ENGINE = MergeTree
ORDER BY (PostId, RelatedPostId)

INSERT INTO stackoverflow.postlinks SELECT * FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/postlinks.parquet')

0 rows in set. Elapsed: 1.534 sec. Processed 6.55 million rows, 129.70 MB (4.27 million rows/s., 84.57 MB/s.)
```
### `PostHistory`

```sql
CREATE TABLE stackoverflow.posthistory
(
    `Id` UInt64,
    `PostHistoryTypeId` UInt8,
    `PostId` Int32,
    `RevisionGUID` String,
    `CreationDate` DateTime64(3, 'UTC'),
    `UserId` Int32,
    `Text` String,
    `ContentLicense` LowCardinality(String),
    `Comment` String,
    `UserDisplayName` String
)
ENGINE = MergeTree
ORDER BY (CreationDate, PostId)

INSERT INTO stackoverflow.posthistory SELECT * FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/posthistory/*.parquet')

0 rows in set. Elapsed: 422.795 sec. Processed 160.79 million rows, 67.08 GB (380.30 thousand rows/s., 158.67 MB/s.)
```
## Original dataset

The original dataset is available in compressed (7zip) XML format at [https://archive.org/download/stackexchange](https://archive.org/download/stackexchange) - files with the prefix `stackoverflow.com*`.

### Download

```bash
wget https://archive.org/download/stackexchange/stackoverflow.com-Badges.7z
wget https://archive.org/download/stackexchange/stackoverflow.com-Comments.7z
wget https://archive.org/download/stackexchange/stackoverflow.com-PostHistory.7z
wget https://archive.org/download/stackexchange/stackoverflow.com-PostLinks.7z
wget https://archive.org/download/stackexchange/stackoverflow.com-Posts.7z
wget https://archive.org/download/stackexchange/stackoverflow.com-Users.7z
wget https://archive.org/download/stackexchange/stackoverflow.com-Votes.7z
```

These files are up to 35GB and can take around 30 minutes to download, depending on your internet connection - the download server throttles at around 20MB/sec.

### Convert to JSON

At the time of writing, ClickHouse does not have native support for XML as an input format. To load the data into ClickHouse we therefore first convert it to NDJSON.

To convert XML to JSON we recommend the [`xq`](https://github.com/kislyuk/yq) Linux tool, a simple `jq` wrapper for XML documents.

Install xq and jq:

```bash
sudo apt install jq
pip install yq
```

The following steps apply to any of the above files. We use the `stackoverflow.com-Posts.7z` file as an example. Modify as required.

Extract the file using [p7zip](https://p7zip.sourceforge.net/). This will produce a single XML file - in this case `Posts.xml`.

> Files are compressed approximately 4.5x. At 22GB compressed, the posts file requires around 97GB uncompressed.

```bash
p7zip -d stackoverflow.com-Posts.7z
```

The following splits the XML file into smaller files, each containing 10000 rows.

```bash
mkdir posts
cd posts
# the following splits the input xml file into sub files of 10000 rows
tail +3 ../Posts.xml | head -n -1 | split -l 10000 --filter='{ printf "<rows>\n"; cat - ; printf "</rows>\n"; } > $FILE' -
```

After running the above, users will have a set of files, each with 10000 lines. This ensures the memory overhead of the next command is not excessive (the XML to JSON conversion is done in memory).

```bash
find . -maxdepth 1 -type f -exec xq -c '.rows.row[]' {} \; | sed -e 's:"@:":g' > posts.json
```

The above command will produce a single `posts.json` file.

Load into ClickHouse with the following command. Note the schema is specified for the `posts.json` file. This will need to be adjusted per data type to align with the target table.

```bash
clickhouse local --query "SELECT * FROM file('posts.json', JSONEachRow, 'Id Int32, PostTypeId UInt8, AcceptedAnswerId UInt32, CreationDate DateTime64(3, \'UTC\'), Score Int32, ViewCount UInt32, Body String, OwnerUserId Int32, OwnerDisplayName String, LastEditorUserId Int32, LastEditorDisplayName String, LastEditDate DateTime64(3, \'UTC\'), LastActivityDate DateTime64(3, \'UTC\'), Title String, Tags String, AnswerCount UInt16, CommentCount UInt8, FavoriteCount UInt8, ContentLicense String, ParentId String, CommunityOwnedDate DateTime64(3, \'UTC\'), ClosedDate DateTime64(3, \'UTC\')') FORMAT Native" | clickhouse client --host <host> --secure --password <password> --query "INSERT INTO stackoverflow.posts_v2 FORMAT Native"
```
## Example queries

A few simple questions to get you started.

### Most popular tags on Stack Overflow

```sql
SELECT
    arrayJoin(arrayFilter(t -> (t != ''), splitByChar('|', Tags))) AS Tags,
    count() AS c
FROM stackoverflow.posts
GROUP BY Tags
ORDER BY c DESC
LIMIT 10

┌─Tags───────┬───────c─┐
│ javascript │ 2527130 │
│ python     │ 2189638 │
│ java       │ 1916156 │
│ c#         │ 1614236 │
│ php        │ 1463901 │
│ android    │ 1416442 │
│ html       │ 1186567 │
│ jquery     │ 1034621 │
│ c++        │  806202 │
│ css        │  803755 │
└────────────┴─────────┘

10 rows in set. Elapsed: 1.013 sec. Processed 59.82 million rows, 1.21 GB (59.07 million rows/s., 1.19 GB/s.)
Peak memory usage: 224.03 MiB.
```

### User with the most answers (active accounts)

Only accounts with a `UserId` are considered.

```sql
SELECT
    any(OwnerUserId) UserId,
    OwnerDisplayName,
    count() AS c
FROM stackoverflow.posts WHERE OwnerDisplayName != '' AND PostTypeId = 'Answer' AND OwnerUserId != 0
GROUP BY OwnerDisplayName
ORDER BY c DESC
LIMIT 5

┌─UserId─┬─OwnerDisplayName─┬────c─┐
│  22656 │ Jon Skeet        │ 2727 │
│  23354 │ Marc Gravell     │ 2150 │
│  12950 │ tvanfosson       │ 1530 │
│   3043 │ Joel Coehoorn    │ 1438 │
│  10661 │ S.Lott           │ 1087 │
└────────┴──────────────────┴──────┘

5 rows in set. Elapsed: 0.154 sec. Processed 35.83 million rows, 193.39 MB (232.33 million rows/s., 1.25 GB/s.)
Peak memory usage: 206.45 MiB.
```

### ClickHouse related posts with the most views

```sql
SELECT
    Id,
    Title,
    ViewCount,
    AnswerCount
FROM stackoverflow.posts
WHERE Title ILIKE '%ClickHouse%'
ORDER BY ViewCount DESC
LIMIT 10

┌───────Id─┬─Title────────────────────────────────────────────────────────────────────────────┬─ViewCount─┬─AnswerCount─┐
│ 52355143 │ Is it possible to delete old records from clickhouse table?                       │     41462 │           3 │
│ 37954203 │ Clickhouse Data Import                                                            │     38735 │           3 │
│ 37901642 │ Updating data in Clickhouse                                                       │     36236 │           6 │
│ 58422110 │ Pandas: How to insert dataframe into Clickhouse                                   │     29731 │           4 │
│ 63621318 │ DBeaver - Clickhouse - SQL Error [159] .. Read timed out                          │     27350 │           1 │
│ 47591813 │ How to filter clickhouse table by array column contents?                          │     27078 │           2 │
│ 58728436 │ How to search the string in query with case insensitive on Clickhouse database?   │     26567 │           3 │
│ 65316905 │ Clickhouse: DB::Exception: Memory limit (for query) exceeded                      │     24899 │           2 │
│ 49944865 │ How to add a column in clickhouse                                                 │     24424 │           1 │
│ 59712399 │ How to cast date Strings to DateTime format with extended parsing in ClickHouse?  │     22620 │           1 │
└──────────┴────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────────┘

10 rows in set. Elapsed: 0.472 sec. Processed 59.82 million rows, 1.91 GB (126.63 million rows/s., 4.03 GB/s.)
Peak memory usage: 240.01 MiB.
```

### Most controversial posts

```sql
SELECT
    Id,
    Title,
    UpVotes,
    DownVotes,
    abs(UpVotes - DownVotes) AS Controversial_ratio
FROM stackoverflow.posts
INNER JOIN
(
    SELECT
        PostId,
        countIf(VoteTypeId = 2) AS UpVotes,
        countIf(VoteTypeId = 3) AS DownVotes
    FROM stackoverflow.votes
    GROUP BY PostId
    HAVING (UpVotes > 10) AND (DownVotes > 10)
) AS votes ON posts.Id = votes.PostId
WHERE Title != ''
ORDER BY Controversial_ratio ASC
LIMIT 3

┌───────Id─┬─Title─────────────────────────────────────────────┬─UpVotes─┬─DownVotes─┬─Controversial_ratio─┐
│   583177 │ VB.NET Infinite For Loop                          │      12 │        12 │                   0 │
│  9756797 │ Read console input as enumerable - one statement? │      16 │        16 │                   0 │
│ 13329132 │ What's the point of ARGV in Ruby?                 │      22 │        22 │                   0 │
└──────────┴───────────────────────────────────────────────────┴─────────┴───────────┴─────────────────────┘

3 rows in set. Elapsed: 4.779 sec. Processed 298.80 million rows, 3.16 GB (62.52 million rows/s., 661.05 MB/s.)
Peak memory usage: 6.05 GiB.
```

## Attribution

We thank Stack Overflow for providing this data under the `cc-by-sa 4.0` license, acknowledging their efforts and the original source of the data at [https://archive.org/details/stackexchange](https://archive.org/details/stackexchange).
@@ -201,18 +201,18 @@ ClickHouse does not require a unique primary key, so you can insert multiple rows with the same primary key.

There is no explicit limit on the number of columns in the primary key. Depending on the data structure, you can include more or fewer columns in the primary key. This may:

- Improve the performance of the index.

  If the current primary key is `(a, b)`, adding another column `c` will improve performance when the following conditions are met:

  - Queries use column `c` as a condition.
  - Long data ranges (several times longer than `index_granularity`) with identical values of `(a, b)` are common. In other words, adding another column lets your queries skip very long data ranges.

- Improve data compression.

  ClickHouse sorts data parts by primary key, so the more consistent the data, the better the compression.

- Provide additional logic when merging data parts in the [CollapsingMergeTree](collapsingmergetree.md#table_engine-collapsingmergetree) and [SummingMergeTree](summingmergetree.md) engines.

In this case it makes sense to specify a *sorting key* that is different from the primary key.
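A minimal sketch of the idea above (table and column names are illustrative): the sorting key carries the extra column while the primary key stays shorter.

```sql
-- Illustrative only: sort data by (a, b, c) but keep the in-memory primary index on (a, b).
CREATE TABLE example
(
    a UInt32,
    b UInt32,
    c UInt32,
    value String
)
ENGINE = MergeTree
ORDER BY (a, b, c)
PRIMARY KEY (a, b)
```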
@ -383,6 +383,9 @@ int KeeperClient::main(const std::vector<String> & /* args */)
|
|||||||
|
|
||||||
for (const auto & key : keys)
|
for (const auto & key : keys)
|
||||||
{
|
{
|
||||||
|
if (key != "node")
|
||||||
|
continue;
|
||||||
|
|
||||||
String prefix = "zookeeper." + key;
|
String prefix = "zookeeper." + key;
|
||||||
String host = clickhouse_config.configuration->getString(prefix + ".host");
|
String host = clickhouse_config.configuration->getString(prefix + ".host");
|
||||||
String port = clickhouse_config.configuration->getString(prefix + ".port");
|
String port = clickhouse_config.configuration->getString(prefix + ".port");
|
||||||
@ -401,6 +404,7 @@ int KeeperClient::main(const std::vector<String> & /* args */)
|
|||||||
zk_args.hosts.push_back(host + ":" + port);
|
zk_args.hosts.push_back(host + ":" + port);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
zk_args.availability_zones.resize(zk_args.hosts.size());
|
||||||
zk_args.connection_timeout_ms = config().getInt("connection-timeout", 10) * 1000;
|
zk_args.connection_timeout_ms = config().getInt("connection-timeout", 10) * 1000;
|
||||||
zk_args.session_timeout_ms = config().getInt("session-timeout", 10) * 1000;
|
zk_args.session_timeout_ms = config().getInt("session-timeout", 10) * 1000;
|
||||||
zk_args.operation_timeout_ms = config().getInt("operation-timeout", 10) * 1000;
|
zk_args.operation_timeout_ms = config().getInt("operation-timeout", 10) * 1000;
|
||||||
|
@ -355,10 +355,7 @@ try
|
|||||||
|
|
||||||
std::string include_from_path = config().getString("include_from", "/etc/metrika.xml");
|
std::string include_from_path = config().getString("include_from", "/etc/metrika.xml");
|
||||||
|
|
||||||
if (config().has(DB::PlacementInfo::PLACEMENT_CONFIG_PREFIX))
|
PlacementInfo::PlacementInfo::instance().initialize(config());
|
||||||
{
|
|
||||||
PlacementInfo::PlacementInfo::instance().initialize(config());
|
|
||||||
}
|
|
||||||
|
|
||||||
GlobalThreadPool::initialize(
|
GlobalThreadPool::initialize(
|
||||||
/// We need to have sufficient amount of threads for connections + nuraft workers + keeper workers, 1000 is an estimation
|
/// We need to have sufficient amount of threads for connections + nuraft workers + keeper workers, 1000 is an estimation
|
||||||
|
@ -32,6 +32,7 @@
|
|||||||
#include <Common/quoteString.h>
|
#include <Common/quoteString.h>
|
||||||
#include <Common/randomSeed.h>
|
#include <Common/randomSeed.h>
|
||||||
#include <Common/ThreadPool.h>
|
#include <Common/ThreadPool.h>
|
||||||
|
#include <Common/CurrentMetrics.h>
|
||||||
#include <Loggers/OwnFormattingChannel.h>
|
#include <Loggers/OwnFormattingChannel.h>
|
||||||
#include <Loggers/OwnPatternFormatter.h>
|
#include <Loggers/OwnPatternFormatter.h>
|
||||||
#include <IO/ReadBufferFromFile.h>
|
#include <IO/ReadBufferFromFile.h>
|
||||||
@ -59,8 +60,13 @@
|
|||||||
# include <azure/storage/common/internal/xml_wrapper.hpp>
|
# include <azure/storage/common/internal/xml_wrapper.hpp>
|
||||||
#endif
|
#endif
|
||||||
|
|
||||||
|
|
||||||
namespace fs = std::filesystem;
|
namespace fs = std::filesystem;
|
||||||
|
|
||||||
|
namespace CurrentMetrics
|
||||||
|
{
|
||||||
|
extern const Metric MemoryTracking;
|
||||||
|
}
|
||||||
|
|
||||||
namespace DB
|
namespace DB
|
||||||
{
|
{
|
||||||
@ -131,11 +137,12 @@ void LocalServer::initialize(Poco::Util::Application & self)
|
|||||||
getClientConfiguration().add(loaded_config.configuration.duplicate(), PRIO_DEFAULT, false);
|
getClientConfiguration().add(loaded_config.configuration.duplicate(), PRIO_DEFAULT, false);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
server_settings.loadSettingsFromConfig(config());
|
||||||
|
|
||||||
GlobalThreadPool::initialize(
|
GlobalThreadPool::initialize(
|
||||||
getClientConfiguration().getUInt("max_thread_pool_size", 10000),
|
server_settings.max_thread_pool_size,
|
||||||
getClientConfiguration().getUInt("max_thread_pool_free_size", 1000),
|
server_settings.max_thread_pool_free_size,
|
||||||
getClientConfiguration().getUInt("thread_pool_queue_size", 10000)
|
server_settings.thread_pool_queue_size);
|
||||||
);
|
|
||||||
|
|
||||||
#if USE_AZURE_BLOB_STORAGE
|
#if USE_AZURE_BLOB_STORAGE
|
||||||
/// See the explanation near the same line in Server.cpp
|
/// See the explanation near the same line in Server.cpp
|
||||||
@ -146,18 +153,17 @@ void LocalServer::initialize(Poco::Util::Application & self)
|
|||||||
#endif
|
#endif
|
||||||
|
|
||||||
getIOThreadPool().initialize(
|
getIOThreadPool().initialize(
|
||||||
getClientConfiguration().getUInt("max_io_thread_pool_size", 100),
|
server_settings.max_io_thread_pool_size,
|
||||||
getClientConfiguration().getUInt("max_io_thread_pool_free_size", 0),
|
server_settings.max_io_thread_pool_free_size,
|
||||||
getClientConfiguration().getUInt("io_thread_pool_queue_size", 10000));
|
server_settings.io_thread_pool_queue_size);
|
||||||
|
|
||||||
|
const size_t active_parts_loading_threads = server_settings.max_active_parts_loading_thread_pool_size;
|
||||||
const size_t active_parts_loading_threads = getClientConfiguration().getUInt("max_active_parts_loading_thread_pool_size", 64);
|
|
||||||
getActivePartsLoadingThreadPool().initialize(
|
getActivePartsLoadingThreadPool().initialize(
|
||||||
active_parts_loading_threads,
|
active_parts_loading_threads,
|
||||||
0, // We don't need any threads one all the parts will be loaded
|
0, // We don't need any threads one all the parts will be loaded
|
||||||
active_parts_loading_threads);
|
active_parts_loading_threads);
|
||||||
|
|
||||||
const size_t outdated_parts_loading_threads = getClientConfiguration().getUInt("max_outdated_parts_loading_thread_pool_size", 32);
|
const size_t outdated_parts_loading_threads = server_settings.max_outdated_parts_loading_thread_pool_size;
|
||||||
getOutdatedPartsLoadingThreadPool().initialize(
|
getOutdatedPartsLoadingThreadPool().initialize(
|
||||||
outdated_parts_loading_threads,
|
outdated_parts_loading_threads,
|
||||||
0, // We don't need any threads one all the parts will be loaded
|
0, // We don't need any threads one all the parts will be loaded
|
||||||
@ -165,7 +171,7 @@ void LocalServer::initialize(Poco::Util::Application & self)
|
|||||||
|
|
||||||
getOutdatedPartsLoadingThreadPool().setMaxTurboThreads(active_parts_loading_threads);
|
getOutdatedPartsLoadingThreadPool().setMaxTurboThreads(active_parts_loading_threads);
|
||||||
|
|
||||||
const size_t unexpected_parts_loading_threads = getClientConfiguration().getUInt("max_unexpected_parts_loading_thread_pool_size", 32);
|
const size_t unexpected_parts_loading_threads = server_settings.max_unexpected_parts_loading_thread_pool_size;
|
||||||
getUnexpectedPartsLoadingThreadPool().initialize(
|
getUnexpectedPartsLoadingThreadPool().initialize(
|
||||||
unexpected_parts_loading_threads,
|
unexpected_parts_loading_threads,
|
||||||
0, // We don't need any threads one all the parts will be loaded
|
0, // We don't need any threads one all the parts will be loaded
|
||||||
@ -173,7 +179,7 @@ void LocalServer::initialize(Poco::Util::Application & self)
|
|||||||
|
|
||||||
getUnexpectedPartsLoadingThreadPool().setMaxTurboThreads(active_parts_loading_threads);
|
getUnexpectedPartsLoadingThreadPool().setMaxTurboThreads(active_parts_loading_threads);
|
||||||
|
|
||||||
const size_t cleanup_threads = getClientConfiguration().getUInt("max_parts_cleaning_thread_pool_size", 128);
|
const size_t cleanup_threads = server_settings.max_parts_cleaning_thread_pool_size;
|
||||||
getPartsCleaningThreadPool().initialize(
|
getPartsCleaningThreadPool().initialize(
|
||||||
cleanup_threads,
|
cleanup_threads,
|
||||||
0, // We don't need any threads one all the parts will be deleted
|
0, // We don't need any threads one all the parts will be deleted
|
||||||
@ -438,7 +444,7 @@ try
|
|||||||
UseSSL use_ssl;
|
UseSSL use_ssl;
|
||||||
thread_status.emplace();
|
thread_status.emplace();
|
||||||
|
|
||||||
StackTrace::setShowAddresses(getClientConfiguration().getBool("show_addresses_in_stack_traces", true));
|
StackTrace::setShowAddresses(server_settings.show_addresses_in_stack_traces);
|
||||||
|
|
||||||
setupSignalHandler();
|
setupSignalHandler();
|
||||||
|
|
||||||
@ -624,12 +630,43 @@ void LocalServer::processConfig()
|
|||||||
global_context->getProcessList().setMaxSize(0);
|
global_context->getProcessList().setMaxSize(0);
|
||||||
|
|
||||||
const size_t physical_server_memory = getMemoryAmount();
|
const size_t physical_server_memory = getMemoryAmount();
|
||||||
const double cache_size_to_ram_max_ratio = getClientConfiguration().getDouble("cache_size_to_ram_max_ratio", 0.5);
|
|
||||||
|
size_t max_server_memory_usage = server_settings.max_server_memory_usage;
|
||||||
|
double max_server_memory_usage_to_ram_ratio = server_settings.max_server_memory_usage_to_ram_ratio;
|
||||||
|
|
||||||
|
size_t default_max_server_memory_usage = static_cast<size_t>(physical_server_memory * max_server_memory_usage_to_ram_ratio);
|
||||||
|
|
||||||
|
if (max_server_memory_usage == 0)
|
||||||
|
{
|
||||||
|
max_server_memory_usage = default_max_server_memory_usage;
|
||||||
|
LOG_INFO(log, "Setting max_server_memory_usage was set to {}"
|
||||||
|
" ({} available * {:.2f} max_server_memory_usage_to_ram_ratio)",
|
||||||
|
formatReadableSizeWithBinarySuffix(max_server_memory_usage),
|
||||||
|
formatReadableSizeWithBinarySuffix(physical_server_memory),
|
||||||
|
max_server_memory_usage_to_ram_ratio);
|
||||||
|
}
|
||||||
|
else if (max_server_memory_usage > default_max_server_memory_usage)
|
||||||
|
{
|
||||||
|
max_server_memory_usage = default_max_server_memory_usage;
|
||||||
|
LOG_INFO(log, "Setting max_server_memory_usage was lowered to {}"
|
||||||
|
" because the system has low amount of memory. The amount was"
|
||||||
|
" calculated as {} available"
|
||||||
|
" * {:.2f} max_server_memory_usage_to_ram_ratio",
|
||||||
|
formatReadableSizeWithBinarySuffix(max_server_memory_usage),
|
||||||
|
formatReadableSizeWithBinarySuffix(physical_server_memory),
|
||||||
|
max_server_memory_usage_to_ram_ratio);
|
||||||
|
}
|
||||||
|
|
||||||
|
total_memory_tracker.setHardLimit(max_server_memory_usage);
|
||||||
|
total_memory_tracker.setDescription("(total)");
|
||||||
|
total_memory_tracker.setMetric(CurrentMetrics::MemoryTracking);
|
||||||
|
|
||||||
|
const double cache_size_to_ram_max_ratio = server_settings.cache_size_to_ram_max_ratio;
|
||||||
const size_t max_cache_size = static_cast<size_t>(physical_server_memory * cache_size_to_ram_max_ratio);
|
const size_t max_cache_size = static_cast<size_t>(physical_server_memory * cache_size_to_ram_max_ratio);
|
||||||
|
|
||||||
String uncompressed_cache_policy = getClientConfiguration().getString("uncompressed_cache_policy", DEFAULT_UNCOMPRESSED_CACHE_POLICY);
|
String uncompressed_cache_policy = server_settings.uncompressed_cache_policy;
|
||||||
size_t uncompressed_cache_size = getClientConfiguration().getUInt64("uncompressed_cache_size", DEFAULT_UNCOMPRESSED_CACHE_MAX_SIZE);
|
size_t uncompressed_cache_size = server_settings.uncompressed_cache_size;
|
||||||
double uncompressed_cache_size_ratio = getClientConfiguration().getDouble("uncompressed_cache_size_ratio", DEFAULT_UNCOMPRESSED_CACHE_SIZE_RATIO);
|
double uncompressed_cache_size_ratio = server_settings.uncompressed_cache_size_ratio;
|
||||||
if (uncompressed_cache_size > max_cache_size)
|
if (uncompressed_cache_size > max_cache_size)
|
||||||
{
|
{
|
||||||
uncompressed_cache_size = max_cache_size;
|
uncompressed_cache_size = max_cache_size;
|
||||||
@ -637,9 +674,9 @@ void LocalServer::processConfig()
|
|||||||
}
|
}
|
||||||
global_context->setUncompressedCache(uncompressed_cache_policy, uncompressed_cache_size, uncompressed_cache_size_ratio);
|
global_context->setUncompressedCache(uncompressed_cache_policy, uncompressed_cache_size, uncompressed_cache_size_ratio);
|
||||||
|
|
||||||
String mark_cache_policy = getClientConfiguration().getString("mark_cache_policy", DEFAULT_MARK_CACHE_POLICY);
|
String mark_cache_policy = server_settings.mark_cache_policy;
|
||||||
size_t mark_cache_size = getClientConfiguration().getUInt64("mark_cache_size", DEFAULT_MARK_CACHE_MAX_SIZE);
|
size_t mark_cache_size = server_settings.mark_cache_size;
|
||||||
double mark_cache_size_ratio = getClientConfiguration().getDouble("mark_cache_size_ratio", DEFAULT_MARK_CACHE_SIZE_RATIO);
|
double mark_cache_size_ratio = server_settings.mark_cache_size_ratio;
|
||||||
if (!mark_cache_size)
|
if (!mark_cache_size)
|
||||||
LOG_ERROR(log, "Too low mark cache size will lead to severe performance degradation.");
|
LOG_ERROR(log, "Too low mark cache size will lead to severe performance degradation.");
|
||||||
if (mark_cache_size > max_cache_size)
|
if (mark_cache_size > max_cache_size)
|
||||||
@ -649,9 +686,9 @@ void LocalServer::processConfig()
|
|||||||
}
|
}
|
||||||
global_context->setMarkCache(mark_cache_policy, mark_cache_size, mark_cache_size_ratio);
|
global_context->setMarkCache(mark_cache_policy, mark_cache_size, mark_cache_size_ratio);
|
||||||
|
|
||||||
String index_uncompressed_cache_policy = getClientConfiguration().getString("index_uncompressed_cache_policy", DEFAULT_INDEX_UNCOMPRESSED_CACHE_POLICY);
|
String index_uncompressed_cache_policy = server_settings.index_uncompressed_cache_policy;
|
||||||
size_t index_uncompressed_cache_size = getClientConfiguration().getUInt64("index_uncompressed_cache_size", DEFAULT_INDEX_UNCOMPRESSED_CACHE_MAX_SIZE);
|
size_t index_uncompressed_cache_size = server_settings.index_uncompressed_cache_size;
|
||||||
double index_uncompressed_cache_size_ratio = getClientConfiguration().getDouble("index_uncompressed_cache_size_ratio", DEFAULT_INDEX_UNCOMPRESSED_CACHE_SIZE_RATIO);
|
double index_uncompressed_cache_size_ratio = server_settings.index_uncompressed_cache_size_ratio;
|
||||||
if (index_uncompressed_cache_size > max_cache_size)
|
if (index_uncompressed_cache_size > max_cache_size)
|
||||||
{
|
{
|
||||||
index_uncompressed_cache_size = max_cache_size;
|
index_uncompressed_cache_size = max_cache_size;
|
||||||
@ -659,9 +696,9 @@ void LocalServer::processConfig()
|
|||||||
}
|
}
|
||||||
global_context->setIndexUncompressedCache(index_uncompressed_cache_policy, index_uncompressed_cache_size, index_uncompressed_cache_size_ratio);
|
global_context->setIndexUncompressedCache(index_uncompressed_cache_policy, index_uncompressed_cache_size, index_uncompressed_cache_size_ratio);
|
||||||
|
|
||||||
String index_mark_cache_policy = getClientConfiguration().getString("index_mark_cache_policy", DEFAULT_INDEX_MARK_CACHE_POLICY);
|
String index_mark_cache_policy = server_settings.index_mark_cache_policy;
|
||||||
size_t index_mark_cache_size = getClientConfiguration().getUInt64("index_mark_cache_size", DEFAULT_INDEX_MARK_CACHE_MAX_SIZE);
|
size_t index_mark_cache_size = server_settings.index_mark_cache_size;
|
||||||
double index_mark_cache_size_ratio = getClientConfiguration().getDouble("index_mark_cache_size_ratio", DEFAULT_INDEX_MARK_CACHE_SIZE_RATIO);
|
double index_mark_cache_size_ratio = server_settings.index_mark_cache_size_ratio;
|
||||||
if (index_mark_cache_size > max_cache_size)
|
if (index_mark_cache_size > max_cache_size)
|
||||||
{
|
{
|
||||||
index_mark_cache_size = max_cache_size;
|
index_mark_cache_size = max_cache_size;
|
||||||
@ -669,7 +706,7 @@ void LocalServer::processConfig()
|
|||||||
}
|
}
|
||||||
global_context->setIndexMarkCache(index_mark_cache_policy, index_mark_cache_size, index_mark_cache_size_ratio);
|
global_context->setIndexMarkCache(index_mark_cache_policy, index_mark_cache_size, index_mark_cache_size_ratio);
|
||||||
|
|
||||||
size_t mmap_cache_size = getClientConfiguration().getUInt64("mmap_cache_size", DEFAULT_MMAP_CACHE_MAX_SIZE);
|
size_t mmap_cache_size = server_settings.mmap_cache_size;
|
||||||
if (mmap_cache_size > max_cache_size)
|
if (mmap_cache_size > max_cache_size)
|
||||||
{
|
{
|
||||||
mmap_cache_size = max_cache_size;
|
mmap_cache_size = max_cache_size;
|
||||||
@ -681,8 +718,8 @@ void LocalServer::processConfig()
|
|||||||
global_context->setQueryCache(0, 0, 0, 0);
|
global_context->setQueryCache(0, 0, 0, 0);
|
||||||
|
|
||||||
#if USE_EMBEDDED_COMPILER
|
#if USE_EMBEDDED_COMPILER
|
||||||
size_t compiled_expression_cache_max_size_in_bytes = getClientConfiguration().getUInt64("compiled_expression_cache_size", DEFAULT_COMPILED_EXPRESSION_CACHE_MAX_SIZE);
|
size_t compiled_expression_cache_max_size_in_bytes = server_settings.compiled_expression_cache_size;
|
||||||
size_t compiled_expression_cache_max_elements = getClientConfiguration().getUInt64("compiled_expression_cache_elements_size", DEFAULT_COMPILED_EXPRESSION_CACHE_MAX_ENTRIES);
|
size_t compiled_expression_cache_max_elements = server_settings.compiled_expression_cache_elements_size;
|
||||||
CompiledExpressionCacheFactory::instance().init(compiled_expression_cache_max_size_in_bytes, compiled_expression_cache_max_elements);
|
CompiledExpressionCacheFactory::instance().init(compiled_expression_cache_max_size_in_bytes, compiled_expression_cache_max_elements);
|
||||||
#endif
|
#endif
|
||||||
|
|
||||||
@ -699,7 +736,7 @@ void LocalServer::processConfig()
|
|||||||
/// We load temporary database first, because projections need it.
|
/// We load temporary database first, because projections need it.
|
||||||
DatabaseCatalog::instance().initializeAndLoadTemporaryDatabase();
|
DatabaseCatalog::instance().initializeAndLoadTemporaryDatabase();
|
||||||
|
|
||||||
std::string default_database = getClientConfiguration().getString("default_database", "default");
|
std::string default_database = server_settings.default_database;
|
||||||
DatabaseCatalog::instance().attachDatabase(default_database, createClickHouseLocalDatabaseOverlay(default_database, global_context));
|
DatabaseCatalog::instance().attachDatabase(default_database, createClickHouseLocalDatabaseOverlay(default_database, global_context));
|
||||||
global_context->setCurrentDatabase(default_database);
|
global_context->setCurrentDatabase(default_database);
|
||||||
|
|
||||||
|
@ -66,6 +66,8 @@ private:
|
|||||||
void applyCmdOptions(ContextMutablePtr context);
|
void applyCmdOptions(ContextMutablePtr context);
|
||||||
void applyCmdSettings(ContextMutablePtr context);
|
void applyCmdSettings(ContextMutablePtr context);
|
||||||
|
|
||||||
|
ServerSettings server_settings;
|
||||||
|
|
||||||
std::optional<StatusFile> status;
|
std::optional<StatusFile> status;
|
||||||
std::optional<std::filesystem::path> temporary_directory_to_delete;
|
std::optional<std::filesystem::path> temporary_directory_to_delete;
|
||||||
|
|
||||||
|
@ -13,6 +13,7 @@
|
|||||||
|
|
||||||
#include <fmt/format.h>
|
#include <fmt/format.h>
|
||||||
|
|
||||||
|
#include "config.h"
|
||||||
#include "config_tools.h"
|
#include "config_tools.h"
|
||||||
|
|
||||||
#include <Common/StringUtils.h>
|
#include <Common/StringUtils.h>
|
||||||
@ -439,6 +440,14 @@ extern "C"
|
|||||||
}
|
}
|
||||||
#endif
|
#endif
|
||||||
|
|
||||||
|
/// Prevent messages from JeMalloc in the release build.
|
||||||
|
/// Some of these messages are non-actionable for the users, such as:
|
||||||
|
/// <jemalloc>: Number of CPUs detected is not deterministic. Per-CPU arena disabled.
|
||||||
|
#if USE_JEMALLOC && defined(NDEBUG) && !defined(SANITIZER)
|
||||||
|
extern "C" void (*malloc_message)(void *, const char *s);
|
||||||
|
__attribute__((constructor(0))) void init_je_malloc_message() { malloc_message = [](void *, const char *){}; }
|
||||||
|
#endif
|
||||||
|
|
||||||
/// This allows to implement assert to forbid initialization of a class in static constructors.
|
/// This allows to implement assert to forbid initialization of a class in static constructors.
|
||||||
/// Usage:
|
/// Usage:
|
||||||
///
|
///
|
||||||
|
@ -1003,6 +1003,8 @@ try
|
|||||||
|
|
||||||
ServerUUID::load(path / "uuid", log);
|
ServerUUID::load(path / "uuid", log);
|
||||||
|
|
||||||
|
PlacementInfo::PlacementInfo::instance().initialize(config());
|
||||||
|
|
||||||
zkutil::validateZooKeeperConfig(config());
|
zkutil::validateZooKeeperConfig(config());
|
||||||
bool has_zookeeper = zkutil::hasZooKeeperConfig(config());
|
bool has_zookeeper = zkutil::hasZooKeeperConfig(config());
|
||||||
|
|
||||||
@ -1817,11 +1819,6 @@ try
|
|||||||
|
|
||||||
}
|
}
|
||||||
|
|
||||||
if (config().has(DB::PlacementInfo::PLACEMENT_CONFIG_PREFIX))
|
|
||||||
{
|
|
||||||
PlacementInfo::PlacementInfo::instance().initialize(config());
|
|
||||||
}
|
|
||||||
|
|
||||||
{
|
{
|
||||||
std::lock_guard lock(servers_lock);
|
std::lock_guard lock(servers_lock);
|
||||||
/// We should start interserver communications before (and more important shutdown after) tables.
|
/// We should start interserver communications before (and more important shutdown after) tables.
|
||||||
|
@ -358,22 +358,18 @@ bool LocalConnection::poll(size_t)
|
|||||||
|
|
||||||
if (!state->is_finished)
|
if (!state->is_finished)
|
||||||
{
|
{
|
||||||
if (send_progress && (state->after_send_progress.elapsedMicroseconds() >= query_context->getSettingsRef().interactive_delay))
|
if (needSendProgressOrMetrics())
|
||||||
{
|
|
||||||
state->after_send_progress.restart();
|
|
||||||
next_packet_type = Protocol::Server::Progress;
|
|
||||||
return true;
|
return true;
|
||||||
}
|
|
||||||
|
|
||||||
if (send_profile_events && (state->after_send_profile_events.elapsedMicroseconds() >= query_context->getSettingsRef().interactive_delay))
|
|
||||||
{
|
|
||||||
sendProfileEvents();
|
|
||||||
return true;
|
|
||||||
}
|
|
||||||
|
|
||||||
try
|
try
|
||||||
{
|
{
|
||||||
pollImpl();
|
while (pollImpl())
|
||||||
|
{
|
||||||
|
LOG_DEBUG(&Poco::Logger::get("LocalConnection"), "Executor timeout encountered, will retry");
|
||||||
|
|
||||||
|
if (needSendProgressOrMetrics())
|
||||||
|
return true;
|
||||||
|
}
|
||||||
}
|
}
|
||||||
catch (const Exception & e)
|
catch (const Exception & e)
|
||||||
{
|
{
|
||||||
@ -468,12 +464,34 @@ bool LocalConnection::poll(size_t)
|
|||||||
return false;
|
return false;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
bool LocalConnection::needSendProgressOrMetrics()
|
||||||
|
{
|
||||||
|
if (send_progress && (state->after_send_progress.elapsedMicroseconds() >= query_context->getSettingsRef().interactive_delay))
|
||||||
|
{
|
||||||
|
state->after_send_progress.restart();
|
||||||
|
next_packet_type = Protocol::Server::Progress;
|
||||||
|
return true;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (send_profile_events && (state->after_send_profile_events.elapsedMicroseconds() >= query_context->getSettingsRef().interactive_delay))
|
||||||
|
{
|
||||||
|
sendProfileEvents();
|
||||||
|
return true;
|
||||||
|
}
|
||||||
|
|
||||||
|
return false;
|
||||||
|
}
|
||||||
|
|
||||||
bool LocalConnection::pollImpl()
|
bool LocalConnection::pollImpl()
|
||||||
{
|
{
|
||||||
Block block;
|
Block block;
|
||||||
auto next_read = pullBlock(block);
|
auto next_read = pullBlock(block);
|
||||||
|
|
||||||
if (block && !state->io.null_format)
|
if (!block && next_read)
|
||||||
|
{
|
||||||
|
return true;
|
||||||
|
}
|
||||||
|
else if (block && !state->io.null_format)
|
||||||
{
|
{
|
||||||
state->block.emplace(block);
|
state->block.emplace(block);
|
||||||
}
|
}
|
||||||
@ -482,7 +500,7 @@ bool LocalConnection::pollImpl()
|
|||||||
state->is_finished = true;
|
state->is_finished = true;
|
||||||
}
|
}
|
||||||
|
|
||||||
return true;
|
return false;
|
||||||
}
|
}
|
||||||
|
|
||||||
Packet LocalConnection::receivePacket()
|
Packet LocalConnection::receivePacket()
|
||||||
|
@ -151,8 +151,11 @@ private:
|
|||||||
|
|
||||||
void sendProfileEvents();
|
void sendProfileEvents();
|
||||||
|
|
||||||
|
/// Returns true on executor timeout, meaning a retryable error.
|
||||||
bool pollImpl();
|
bool pollImpl();
|
||||||
|
|
||||||
|
bool needSendProgressOrMetrics();
|
||||||
|
|
||||||
ContextMutablePtr query_context;
|
ContextMutablePtr query_context;
|
||||||
Session session;
|
Session session;
|
||||||
|
|
||||||
|
@ -60,4 +60,26 @@ GetPriorityForLoadBalancing::getPriorityFunc(LoadBalancing load_balance, size_t
|
|||||||
return get_priority;
|
return get_priority;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/// Some load balancing strategies (such as "nearest hostname") have preferred nodes to connect to.
|
||||||
|
/// Usually it's a node in the same data center/availability zone.
|
||||||
|
/// For other strategies there's no difference between nodes.
|
||||||
|
bool GetPriorityForLoadBalancing::hasOptimalNode() const
|
||||||
|
{
|
||||||
|
switch (load_balancing)
|
||||||
|
{
|
||||||
|
case LoadBalancing::NEAREST_HOSTNAME:
|
||||||
|
return true;
|
||||||
|
case LoadBalancing::HOSTNAME_LEVENSHTEIN_DISTANCE:
|
||||||
|
return true;
|
||||||
|
case LoadBalancing::IN_ORDER:
|
||||||
|
return false;
|
||||||
|
case LoadBalancing::RANDOM:
|
||||||
|
return false;
|
||||||
|
case LoadBalancing::FIRST_OR_RANDOM:
|
||||||
|
return true;
|
||||||
|
case LoadBalancing::ROUND_ROBIN:
|
||||||
|
return false;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
}
|
}
|
||||||
|
@ -30,6 +30,8 @@ public:
|
|||||||
|
|
||||||
Func getPriorityFunc(LoadBalancing load_balance, size_t offset, size_t pool_size) const;
|
Func getPriorityFunc(LoadBalancing load_balance, size_t offset, size_t pool_size) const;
|
||||||
|
|
||||||
|
bool hasOptimalNode() const;
|
||||||
|
|
||||||
std::vector<size_t> hostname_prefix_distance; /// Prefix distances from name of this host to the names of hosts of pools.
|
std::vector<size_t> hostname_prefix_distance; /// Prefix distances from name of this host to the names of hosts of pools.
|
||||||
std::vector<size_t> hostname_levenshtein_distance; /// Levenshtein Distances from name of this host to the names of hosts of pools.
|
std::vector<size_t> hostname_levenshtein_distance; /// Levenshtein Distances from name of this host to the names of hosts of pools.
|
||||||
|
|
||||||
|
@ -559,6 +559,8 @@ public:
|
|||||||
/// Useful to check owner of ephemeral node.
|
/// Useful to check owner of ephemeral node.
|
||||||
virtual int64_t getSessionID() const = 0;
|
virtual int64_t getSessionID() const = 0;
|
||||||
|
|
||||||
|
virtual String tryGetAvailabilityZone() { return ""; }
|
||||||
|
|
||||||
/// If the method will throw an exception, callbacks won't be called.
|
/// If the method will throw an exception, callbacks won't be called.
|
||||||
///
|
///
|
||||||
/// After the method is executed successfully, you must wait for callbacks
|
/// After the method is executed successfully, you must wait for callbacks
|
||||||
@ -635,10 +637,6 @@ public:
|
|||||||
|
|
||||||
virtual const DB::KeeperFeatureFlags * getKeeperFeatureFlags() const { return nullptr; }
|
virtual const DB::KeeperFeatureFlags * getKeeperFeatureFlags() const { return nullptr; }
|
||||||
|
|
||||||
/// A ZooKeeper session can have an optional deadline set on it.
|
|
||||||
/// After it has been reached, the session needs to be finalized.
|
|
||||||
virtual bool hasReachedDeadline() const = 0;
|
|
||||||
|
|
||||||
/// Expire session and finish all pending requests
|
/// Expire session and finish all pending requests
|
||||||
virtual void finalize(const String & reason) = 0;
|
virtual void finalize(const String & reason) = 0;
|
||||||
};
|
};
|
||||||
|
@ -39,7 +39,6 @@ public:
|
|||||||
~TestKeeper() override;
|
~TestKeeper() override;
|
||||||
|
|
||||||
bool isExpired() const override { return expired; }
|
bool isExpired() const override { return expired; }
|
||||||
bool hasReachedDeadline() const override { return false; }
|
|
||||||
Int8 getConnectedNodeIdx() const override { return 0; }
|
Int8 getConnectedNodeIdx() const override { return 0; }
|
||||||
String getConnectedHostPort() const override { return "TestKeeper:0000"; }
|
String getConnectedHostPort() const override { return "TestKeeper:0000"; }
|
||||||
int32_t getConnectionXid() const override { return 0; }
|
int32_t getConnectionXid() const override { return 0; }
|
||||||
|
@ -8,6 +8,7 @@
|
|||||||
#include <functional>
|
#include <functional>
|
||||||
#include <ranges>
|
#include <ranges>
|
||||||
#include <vector>
|
#include <vector>
|
||||||
|
#include <chrono>
|
||||||
|
|
||||||
#include <Common/ZooKeeper/Types.h>
|
#include <Common/ZooKeeper/Types.h>
|
||||||
#include <Common/ZooKeeper/ZooKeeperCommon.h>
|
#include <Common/ZooKeeper/ZooKeeperCommon.h>
|
||||||
@ -16,10 +17,12 @@
|
|||||||
#include <base/sort.h>
|
#include <base/sort.h>
|
||||||
#include <base/getFQDNOrHostName.h>
|
#include <base/getFQDNOrHostName.h>
|
||||||
#include <Core/ServerUUID.h>
|
#include <Core/ServerUUID.h>
|
||||||
|
#include <Core/BackgroundSchedulePool.h>
|
||||||
#include "Common/ZooKeeper/IKeeper.h"
|
#include "Common/ZooKeeper/IKeeper.h"
|
||||||
#include <Common/DNSResolver.h>
|
#include <Common/DNSResolver.h>
|
||||||
#include <Common/StringUtils.h>
|
#include <Common/StringUtils.h>
|
||||||
#include <Common/Exception.h>
|
#include <Common/Exception.h>
|
||||||
|
#include <Interpreters/Context.h>
|
||||||
|
|
||||||
#include <Poco/Net/NetException.h>
|
#include <Poco/Net/NetException.h>
|
||||||
#include <Poco/Net/DNS.h>
|
#include <Poco/Net/DNS.h>
|
||||||
@ -55,70 +58,120 @@ static void check(Coordination::Error code, const std::string & path)
|
|||||||
throw KeeperException::fromPath(code, path);
|
throw KeeperException::fromPath(code, path);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
UInt64 getSecondsUntilReconnect(const ZooKeeperArgs & args)
|
||||||
|
{
|
||||||
|
std::uniform_int_distribution<UInt32> fallback_session_lifetime_distribution
|
||||||
|
{
|
||||||
|
args.fallback_session_lifetime.min_sec,
|
||||||
|
args.fallback_session_lifetime.max_sec,
|
||||||
|
};
|
||||||
|
UInt32 session_lifetime_seconds = fallback_session_lifetime_distribution(thread_local_rng);
|
||||||
|
return session_lifetime_seconds;
|
||||||
|
}
|
||||||
|
|
||||||
void ZooKeeper::init(ZooKeeperArgs args_)
|
|
||||||
|
|
||||||
|
void ZooKeeper::updateAvailabilityZones()
|
||||||
|
{
|
||||||
|
ShuffleHosts shuffled_hosts = shuffleHosts();
|
||||||
|
|
||||||
|
for (const auto & node : shuffled_hosts)
|
||||||
|
{
|
||||||
|
try
|
||||||
|
{
|
||||||
|
ShuffleHosts single_node{node};
|
||||||
|
auto tmp_impl = std::make_unique<Coordination::ZooKeeper>(single_node, args, zk_log);
|
||||||
|
auto idx = node.original_index;
|
||||||
|
availability_zones[idx] = tmp_impl->tryGetAvailabilityZone();
|
||||||
|
LOG_TEST(log, "Got availability zone for {}: {}", args.hosts[idx], availability_zones[idx]);
|
||||||
|
}
|
||||||
|
catch (...)
|
||||||
|
{
|
||||||
|
DB::tryLogCurrentException(log, "Failed to get availability zone for " + node.host);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
LOG_DEBUG(log, "Updated availability zones: [{}]", fmt::join(availability_zones, ", "));
|
||||||
|
}
|
||||||
|
|
-void ZooKeeper::init(ZooKeeperArgs args_)
+void ZooKeeper::init(ZooKeeperArgs args_, std::unique_ptr<Coordination::IKeeper> existing_impl)
 {
     args = std::move(args_);
     log = getLogger("ZooKeeper");

-    if (args.implementation == "zookeeper")
+    if (existing_impl)
+    {
+        chassert(args.implementation == "zookeeper");
+        impl = std::move(existing_impl);
+        LOG_INFO(log, "Switching to connection to a more optimal node {}", impl->getConnectedHostPort());
+    }
+    else if (args.implementation == "zookeeper")
     {
         if (args.hosts.empty())
             throw KeeperException::fromMessage(Coordination::Error::ZBADARGUMENTS, "No hosts passed to ZooKeeper constructor.");

-        Coordination::ZooKeeper::Nodes nodes;
-        nodes.reserve(args.hosts.size());
+        chassert(args.availability_zones.size() == args.hosts.size());
+        if (availability_zones.empty())
+        {
+            /// availability_zones is empty on server startup or after config reloading
+            /// We will keep the az info when starting new sessions
+            availability_zones = args.availability_zones;
+            LOG_TEST(log, "Availability zones from config: [{}], client: {}", fmt::join(availability_zones, ", "), args.client_availability_zone);
+            if (args.availability_zone_autodetect)
+                updateAvailabilityZones();
+        }
+        chassert(availability_zones.size() == args.hosts.size());

         /// Shuffle the hosts to distribute the load among ZooKeeper nodes.
-        std::vector<ShuffleHost> shuffled_hosts = shuffleHosts();
+        ShuffleHosts shuffled_hosts = shuffleHosts();

-        bool dns_error = false;
-        for (auto & host : shuffled_hosts)
-        {
-            auto & host_string = host.host;
-            try
-            {
-                const bool secure = startsWith(host_string, "secure://");
-
-                if (secure)
-                    host_string.erase(0, strlen("secure://"));
-
-                /// We want to resolve all hosts without DNS cache for keeper connection.
-                Coordination::DNSResolver::instance().removeHostFromCache(host_string);
-
-                const Poco::Net::SocketAddress host_socket_addr{host_string};
-                LOG_TEST(log, "Adding ZooKeeper host {} ({})", host_string, host_socket_addr.toString());
-                nodes.emplace_back(Coordination::ZooKeeper::Node{host_socket_addr, host.original_index, secure});
-            }
-            catch (const Poco::Net::HostNotFoundException & e)
-            {
-                /// Most likely it's misconfiguration and wrong hostname was specified
-                LOG_ERROR(log, "Cannot use ZooKeeper host {}, reason: {}", host_string, e.displayText());
-            }
-            catch (const Poco::Net::DNSException & e)
-            {
-                /// Most likely DNS is not available now
-                dns_error = true;
-                LOG_ERROR(log, "Cannot use ZooKeeper host {} due to DNS error: {}", host_string, e.displayText());
-            }
-        }
-
-        if (nodes.empty())
-        {
-            /// For DNS errors we throw exception with ZCONNECTIONLOSS code, so it will be considered as hardware error, not user error
-            if (dns_error)
-                throw KeeperException::fromMessage(Coordination::Error::ZCONNECTIONLOSS, "Cannot resolve any of provided ZooKeeper hosts due to DNS error");
-            else
-                throw KeeperException::fromMessage(Coordination::Error::ZCONNECTIONLOSS, "Cannot use any of provided ZooKeeper nodes");
-        }
-
-        impl = std::make_unique<Coordination::ZooKeeper>(nodes, args, zk_log);
+        impl = std::make_unique<Coordination::ZooKeeper>(shuffled_hosts, args, zk_log);
+        Int8 node_idx = impl->getConnectedNodeIdx();

         if (args.chroot.empty())
             LOG_TRACE(log, "Initialized, hosts: {}", fmt::join(args.hosts, ","));
         else
             LOG_TRACE(log, "Initialized, hosts: {}, chroot: {}", fmt::join(args.hosts, ","), args.chroot);

+        /// If the balancing strategy has an optimal node then it will be the first in the list
+        bool connected_to_suboptimal_node = node_idx != shuffled_hosts[0].original_index;
+        bool respect_az = args.prefer_local_availability_zone && !args.client_availability_zone.empty();
+        bool may_benefit_from_reconnecting = respect_az || args.get_priority_load_balancing.hasOptimalNode();
+        if (connected_to_suboptimal_node && may_benefit_from_reconnecting)
+        {
+            auto reconnect_timeout_sec = getSecondsUntilReconnect(args);
+            LOG_DEBUG(log, "Connected to a suboptimal ZooKeeper host ({}, index {})."
+                " To preserve balance in ZooKeeper usage, this ZooKeeper session will expire in {} seconds",
+                impl->getConnectedHostPort(), node_idx, reconnect_timeout_sec);
+
+            auto reconnect_task_holder = DB::Context::getGlobalContextInstance()->getSchedulePool().createTask("ZKReconnect", [this, optimal_host = shuffled_hosts[0]]()
+            {
+                try
+                {
+                    LOG_DEBUG(log, "Trying to connect to a more optimal node {}", optimal_host.host);
+                    ShuffleHosts node{optimal_host};
+                    std::unique_ptr<Coordination::IKeeper> new_impl = std::make_unique<Coordination::ZooKeeper>(node, args, zk_log);
+                    Int8 new_node_idx = new_impl->getConnectedNodeIdx();
+
+                    /// Maybe the node was unavailable when getting AZs first time, update just in case
+                    if (args.availability_zone_autodetect && availability_zones[new_node_idx].empty())
+                    {
+                        availability_zones[new_node_idx] = new_impl->tryGetAvailabilityZone();
+                        LOG_DEBUG(log, "Got availability zone for {}: {}", optimal_host.host, availability_zones[new_node_idx]);
+                    }
+
+                    optimal_impl = std::move(new_impl);
+                    impl->finalize("Connected to a more optimal node");
+                }
+                catch (...)
+                {
+                    LOG_WARNING(log, "Failed to connect to a more optimal ZooKeeper, will try again later: {}", DB::getCurrentExceptionMessage(/*with_stacktrace*/ false));
+                    (*reconnect_task)->scheduleAfter(getSecondsUntilReconnect(args) * 1000);
+                }
+            });
+            reconnect_task = std::make_unique<DB::BackgroundSchedulePoolTaskHolder>(std::move(reconnect_task_holder));
+            (*reconnect_task)->activate();
+            (*reconnect_task)->scheduleAfter(reconnect_timeout_sec * 1000);
+        }
     }
     else if (args.implementation == "testkeeper")
     {
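In short, the new branch only schedules a background reconnect when the session landed on a host other than the first entry of the shuffled list and reconnecting can actually help, either because a client availability zone is configured or because the load-balancing strategy designates an optimal node. Below is a minimal, self-contained sketch of just that decision; the type and function names (ReconnectPolicy, shouldReconnect) are illustrative and are not part of the patch.

    #include <cstdint>
    #include <string>

    /// Hypothetical, simplified view of the inputs the reconnect decision above depends on.
    struct ReconnectPolicy
    {
        bool prefer_local_availability_zone = false;  /// from the client config
        std::string client_availability_zone;         /// detected placement of this server, may be empty
        bool load_balancing_has_optimal_node = false; /// strategies with a designated best node
    };

    /// True when a background reconnect is worth scheduling: we are connected to a host that is
    /// not the first (best) one in the shuffled list, and switching can improve AZ locality or
    /// restore the load-balancing optimum.
    bool shouldReconnect(const ReconnectPolicy & policy, int8_t connected_index, uint8_t best_index)
    {
        const bool connected_to_suboptimal_node = connected_index != static_cast<int8_t>(best_index);
        const bool respect_az = policy.prefer_local_availability_zone && !policy.client_availability_zone.empty();
        const bool may_benefit = respect_az || policy.load_balancing_has_optimal_node;
        return connected_to_suboptimal_node && may_benefit;
    }

In the patch itself the delay comes from getSecondsUntilReconnect(args); the scheduled task opens a session to the preferred host, stores it in optimal_impl, and finalizes the current impl so the next session can take over the better connection.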
@@ -152,29 +205,53 @@ void ZooKeeper::init(ZooKeeperArgs args_)
     }
 }

+ZooKeeper::~ZooKeeper()
+{
+    if (reconnect_task)
+        (*reconnect_task)->deactivate();
+}
+
 ZooKeeper::ZooKeeper(const ZooKeeperArgs & args_, std::shared_ptr<DB::ZooKeeperLog> zk_log_)
     : zk_log(std::move(zk_log_))
 {
-    init(args_);
+    init(args_, /*existing_impl*/ {});
+}
+
+
+ZooKeeper::ZooKeeper(const ZooKeeperArgs & args_, std::shared_ptr<DB::ZooKeeperLog> zk_log_, Strings availability_zones_, std::unique_ptr<Coordination::IKeeper> existing_impl)
+    : availability_zones(std::move(availability_zones_)), zk_log(std::move(zk_log_))
+{
+    if (availability_zones.size() != args_.hosts.size())
+        throw DB::Exception(DB::ErrorCodes::LOGICAL_ERROR, "Argument sizes mismatch: availability_zones count {} and hosts count {}",
+            availability_zones.size(), args_.hosts.size());
+    init(args_, std::move(existing_impl));
 }


 ZooKeeper::ZooKeeper(const Poco::Util::AbstractConfiguration & config, const std::string & config_name, std::shared_ptr<DB::ZooKeeperLog> zk_log_)
     : zk_log(std::move(zk_log_))
 {
-    init(ZooKeeperArgs(config, config_name));
+    init(ZooKeeperArgs(config, config_name), /*existing_impl*/ {});
 }

-std::vector<ShuffleHost> ZooKeeper::shuffleHosts() const
+ShuffleHosts ZooKeeper::shuffleHosts() const
 {
-    std::function<Priority(size_t index)> get_priority = args.get_priority_load_balancing.getPriorityFunc(args.get_priority_load_balancing.load_balancing, 0, args.hosts.size());
-    std::vector<ShuffleHost> shuffle_hosts;
+    std::function<Priority(size_t index)> get_priority = args.get_priority_load_balancing.getPriorityFunc(
+        args.get_priority_load_balancing.load_balancing, /* offset for first_or_random */ 0, args.hosts.size());
+    ShuffleHosts shuffle_hosts;
     for (size_t i = 0; i < args.hosts.size(); ++i)
     {
         ShuffleHost shuffle_host;
         shuffle_host.host = args.hosts[i];
         shuffle_host.original_index = static_cast<UInt8>(i);

+        shuffle_host.secure = startsWith(shuffle_host.host, "secure://");
+        if (shuffle_host.secure)
+            shuffle_host.host.erase(0, strlen("secure://"));
+
+        if (!args.client_availability_zone.empty() && !availability_zones[i].empty())
+            shuffle_host.az_info = availability_zones[i] == args.client_availability_zone ? ShuffleHost::SAME : ShuffleHost::OTHER;
+
         if (get_priority)
             shuffle_host.priority = get_priority(i);
         shuffle_host.randomize();
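The zone label of each candidate host is reduced to a three-way flag before the hosts are ordered: SAME when it matches the client's zone, OTHER when both zones are known but differ, and UNKNOWN when either side is missing. A hedged sketch of that rule as a standalone helper (classifyAz is an illustrative name, not part of the patch):

    #include <string>

    enum class AzInfo { Same = 0, Unknown = 1, Other = 2 };

    /// Mirrors the rule in shuffleHosts() above: classify only when both zones are known.
    AzInfo classifyAz(const std::string & client_zone, const std::string & host_zone)
    {
        if (client_zone.empty() || host_zone.empty())
            return AzInfo::Unknown;
        return host_zone == client_zone ? AzInfo::Same : AzInfo::Other;
    }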
@@ -1023,7 +1100,10 @@ ZooKeeperPtr ZooKeeper::create(const Poco::Util::AbstractConfiguration & config,

 ZooKeeperPtr ZooKeeper::startNewSession() const
 {
-    auto res = std::shared_ptr<ZooKeeper>(new ZooKeeper(args, zk_log));
+    if (reconnect_task)
+        (*reconnect_task)->deactivate();
+
+    auto res = std::shared_ptr<ZooKeeper>(new ZooKeeper(args, zk_log, availability_zones, std::move(optimal_impl)));
     res->initSession();
     return res;
 }
@@ -1456,6 +1536,16 @@ int32_t ZooKeeper::getConnectionXid() const
     return impl->getConnectionXid();
 }

+String ZooKeeper::getConnectedHostAvailabilityZone() const
+{
+    if (args.implementation != "zookeeper" || !impl)
+        return "";
+    Int8 idx = impl->getConnectedNodeIdx();
+    if (idx < 0)
+        return ""; /// session expired
+    return availability_zones.at(idx);
+}
+
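A hedged usage sketch for the new accessor: once a session is established, a caller can report both the connected endpoint and the zone it resolved to. The helper below is illustrative only, is a fragment within the surrounding codebase rather than a standalone program, and assumes a Poco configuration with a <zookeeper> section as accepted by the constructor above.

    /// Illustrative helper (not part of the patch): log where a freshly created session landed.
    void logConnectedZone(const Poco::Util::AbstractConfiguration & config, LoggerPtr log)
    {
        auto zookeeper = std::make_shared<zkutil::ZooKeeper>(config, "zookeeper", /*zk_log_*/ nullptr);
        LOG_INFO(log, "Connected to {} in availability zone '{}'",
            zookeeper->getConnectedHostPort(), zookeeper->getConnectedHostAvailabilityZone());
    }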
 size_t getFailedOpIndex(Coordination::Error exception_code, const Coordination::Responses & responses)
 {
     if (responses.empty())
@@ -32,6 +32,7 @@ namespace DB
 {
 class ZooKeeperLog;
 class ZooKeeperWithFaultInjection;
+class BackgroundSchedulePoolTaskHolder;

 namespace ErrorCodes
 {
@@ -48,11 +49,23 @@ constexpr size_t MULTI_BATCH_SIZE = 100;

 struct ShuffleHost
 {
+    enum AvailabilityZoneInfo
+    {
+        SAME = 0,
+        UNKNOWN = 1,
+        OTHER = 2,
+    };
+
     String host;
+    bool secure = false;
     UInt8 original_index = 0;
+    AvailabilityZoneInfo az_info = UNKNOWN;
     Priority priority;
     UInt64 random = 0;

+    /// We should resolve it each time without caching
+    mutable std::optional<Poco::Net::SocketAddress> address;
+
     void randomize()
     {
         random = thread_local_rng();
@@ -60,11 +73,13 @@ struct ShuffleHost

     static bool compare(const ShuffleHost & lhs, const ShuffleHost & rhs)
     {
-        return std::forward_as_tuple(lhs.priority, lhs.random)
-            < std::forward_as_tuple(rhs.priority, rhs.random);
+        return std::forward_as_tuple(lhs.az_info, lhs.priority, lhs.random)
+            < std::forward_as_tuple(rhs.az_info, rhs.priority, rhs.random);
     }
 };

+using ShuffleHosts = std::vector<ShuffleHost>;
+
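Because az_info is the first element of the comparison tuple and SAME < UNKNOWN < OTHER, hosts in the client's own availability zone sort ahead of hosts with an unknown zone, which in turn sort ahead of hosts in another zone; ties fall back to the load-balancing priority and then to the per-session random value. A self-contained illustration of that ordering, using a stand-in MiniHost struct rather than the real ShuffleHost:

    #include <algorithm>
    #include <cstdint>
    #include <iostream>
    #include <string>
    #include <tuple>
    #include <vector>

    struct MiniHost
    {
        std::string name;
        int az_info = 1;       /// 0 = SAME, 1 = UNKNOWN, 2 = OTHER; lower sorts first
        int64_t priority = 0;  /// load-balancing priority, lower is better
        uint64_t random = 0;   /// per-session tie breaker
    };

    int main()
    {
        std::vector<MiniHost> hosts{
            {"keeper-other-az", 2, 0, 7},
            {"keeper-unknown-az", 1, 0, 1},
            {"keeper-same-az", 0, 0, 9},
        };

        /// Same ordering idea as ShuffleHost::compare: AZ class first, then priority, then random.
        std::sort(hosts.begin(), hosts.end(), [](const MiniHost & lhs, const MiniHost & rhs)
        {
            return std::tie(lhs.az_info, lhs.priority, lhs.random) < std::tie(rhs.az_info, rhs.priority, rhs.random);
        });

        for (const auto & host : hosts)
            std::cout << host.name << '\n';  /// prints same-az, then unknown-az, then other-az
    }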
 struct RemoveException
 {
     explicit RemoveException(std::string_view path_ = "", bool remove_subtree_ = true)
@ -197,6 +212,9 @@ class ZooKeeper
|
|||||||
|
|
||||||
explicit ZooKeeper(const ZooKeeperArgs & args_, std::shared_ptr<DB::ZooKeeperLog> zk_log_ = nullptr);
|
explicit ZooKeeper(const ZooKeeperArgs & args_, std::shared_ptr<DB::ZooKeeperLog> zk_log_ = nullptr);
|
||||||
|
|
||||||
|
/// Allows to keep info about availability zones when starting a new session
|
||||||
|
ZooKeeper(const ZooKeeperArgs & args_, std::shared_ptr<DB::ZooKeeperLog> zk_log_, Strings availability_zones_, std::unique_ptr<Coordination::IKeeper> existing_impl);
|
||||||
|
|
||||||
/** Config of the form:
|
/** Config of the form:
|
||||||
<zookeeper>
|
<zookeeper>
|
||||||
<node>
|
<node>
|
||||||
@ -228,7 +246,9 @@ public:
|
|||||||
using Ptr = std::shared_ptr<ZooKeeper>;
|
using Ptr = std::shared_ptr<ZooKeeper>;
|
||||||
using ErrorsList = std::initializer_list<Coordination::Error>;
|
using ErrorsList = std::initializer_list<Coordination::Error>;
|
||||||
|
|
||||||
std::vector<ShuffleHost> shuffleHosts() const;
|
~ZooKeeper();
|
||||||
|
|
||||||
|
ShuffleHosts shuffleHosts() const;
|
||||||
|
|
||||||
static Ptr create(const Poco::Util::AbstractConfiguration & config,
|
static Ptr create(const Poco::Util::AbstractConfiguration & config,
|
||||||
const std::string & config_name,
|
const std::string & config_name,
|
||||||
@ -596,8 +616,6 @@ public:
|
|||||||
|
|
||||||
UInt32 getSessionUptime() const { return static_cast<UInt32>(session_uptime.elapsedSeconds()); }
|
UInt32 getSessionUptime() const { return static_cast<UInt32>(session_uptime.elapsedSeconds()); }
|
||||||
|
|
||||||
bool hasReachedDeadline() const { return impl->hasReachedDeadline(); }
|
|
||||||
|
|
||||||
uint64_t getSessionTimeoutMS() const { return args.session_timeout_ms; }
|
uint64_t getSessionTimeoutMS() const { return args.session_timeout_ms; }
|
||||||
|
|
||||||
void setServerCompletelyStarted();
|
void setServerCompletelyStarted();
|
||||||
@ -606,6 +624,8 @@ public:
|
|||||||
String getConnectedHostPort() const;
|
String getConnectedHostPort() const;
|
||||||
int32_t getConnectionXid() const;
|
int32_t getConnectionXid() const;
|
||||||
|
|
||||||
|
String getConnectedHostAvailabilityZone() const;
|
||||||
|
|
||||||
const DB::KeeperFeatureFlags * getKeeperFeatureFlags() const { return impl->getKeeperFeatureFlags(); }
|
const DB::KeeperFeatureFlags * getKeeperFeatureFlags() const { return impl->getKeeperFeatureFlags(); }
|
||||||
|
|
||||||
/// Checks that our session was not killed, and allows to avoid applying a request from an old lost session.
|
/// Checks that our session was not killed, and allows to avoid applying a request from an old lost session.
|
||||||
@ -625,7 +645,8 @@ public:
|
|||||||
void addCheckSessionOp(Coordination::Requests & requests) const;
|
void addCheckSessionOp(Coordination::Requests & requests) const;
|
||||||
|
|
||||||
private:
|
private:
|
||||||
void init(ZooKeeperArgs args_);
|
void init(ZooKeeperArgs args_, std::unique_ptr<Coordination::IKeeper> existing_impl);
|
||||||
|
void updateAvailabilityZones();
|
||||||
|
|
||||||
/// The following methods don't any throw exceptions but return error codes.
|
/// The following methods don't any throw exceptions but return error codes.
|
||||||
Coordination::Error createImpl(const std::string & path, const std::string & data, int32_t mode, std::string & path_created);
|
Coordination::Error createImpl(const std::string & path, const std::string & data, int32_t mode, std::string & path_created);
|
||||||
@ -690,15 +711,20 @@ private:
|
|||||||
}
|
}
|
||||||
|
|
||||||
std::unique_ptr<Coordination::IKeeper> impl;
|
std::unique_ptr<Coordination::IKeeper> impl;
|
||||||
|
mutable std::unique_ptr<Coordination::IKeeper> optimal_impl;
|
||||||
|
|
||||||
ZooKeeperArgs args;
|
ZooKeeperArgs args;
|
||||||
|
|
||||||
|
Strings availability_zones;
|
||||||
|
|
||||||
LoggerPtr log = nullptr;
|
LoggerPtr log = nullptr;
|
||||||
std::shared_ptr<DB::ZooKeeperLog> zk_log;
|
std::shared_ptr<DB::ZooKeeperLog> zk_log;
|
||||||
|
|
||||||
AtomicStopwatch session_uptime;
|
AtomicStopwatch session_uptime;
|
||||||
|
|
||||||
int32_t session_node_version;
|
int32_t session_node_version;
|
||||||
|
|
||||||
|
std::unique_ptr<DB::BackgroundSchedulePoolTaskHolder> reconnect_task;
|
||||||
};
|
};
|
||||||
|
|
||||||
|
|
||||||
|
@ -5,6 +5,9 @@
|
|||||||
#include <Poco/Util/AbstractConfiguration.h>
|
#include <Poco/Util/AbstractConfiguration.h>
|
||||||
#include <Common/isLocalAddress.h>
|
#include <Common/isLocalAddress.h>
|
||||||
#include <Common/StringUtils.h>
|
#include <Common/StringUtils.h>
|
||||||
|
#include <Common/thread_local_rng.h>
|
||||||
|
#include <Server/CloudPlacementInfo.h>
|
||||||
|
#include <IO/S3/Credentials.h>
|
||||||
#include <Poco/String.h>
|
#include <Poco/String.h>
|
||||||
|
|
||||||
namespace DB
|
namespace DB
|
||||||
@ -53,6 +56,7 @@ ZooKeeperArgs::ZooKeeperArgs(const Poco::Util::AbstractConfiguration & config, c
|
|||||||
ZooKeeperArgs::ZooKeeperArgs(const String & hosts_string)
|
ZooKeeperArgs::ZooKeeperArgs(const String & hosts_string)
|
||||||
{
|
{
|
||||||
splitInto<','>(hosts, hosts_string);
|
splitInto<','>(hosts, hosts_string);
|
||||||
|
availability_zones.resize(hosts.size());
|
||||||
}
|
}
|
||||||
|
|
||||||
void ZooKeeperArgs::initFromKeeperServerSection(const Poco::Util::AbstractConfiguration & config)
|
void ZooKeeperArgs::initFromKeeperServerSection(const Poco::Util::AbstractConfiguration & config)
|
||||||
@ -103,8 +107,11 @@ void ZooKeeperArgs::initFromKeeperServerSection(const Poco::Util::AbstractConfig
|
|||||||
for (const auto & key : keys)
|
for (const auto & key : keys)
|
||||||
{
|
{
|
||||||
if (startsWith(key, "server"))
|
if (startsWith(key, "server"))
|
||||||
|
{
|
||||||
hosts.push_back(
|
hosts.push_back(
|
||||||
(secure ? "secure://" : "") + config.getString(raft_configuration_key + "." + key + ".hostname") + ":" + tcp_port);
|
(secure ? "secure://" : "") + config.getString(raft_configuration_key + "." + key + ".hostname") + ":" + tcp_port);
|
||||||
|
availability_zones.push_back(config.getString(raft_configuration_key + "." + key + ".availability_zone", ""));
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
static constexpr std::array load_balancing_keys
|
static constexpr std::array load_balancing_keys
|
||||||
@ -123,11 +130,15 @@ void ZooKeeperArgs::initFromKeeperServerSection(const Poco::Util::AbstractConfig
|
|||||||
auto load_balancing = magic_enum::enum_cast<DB::LoadBalancing>(Poco::toUpper(load_balancing_str));
|
auto load_balancing = magic_enum::enum_cast<DB::LoadBalancing>(Poco::toUpper(load_balancing_str));
|
||||||
if (!load_balancing)
|
if (!load_balancing)
|
||||||
throw DB::Exception(DB::ErrorCodes::BAD_ARGUMENTS, "Unknown load balancing: {}", load_balancing_str);
|
throw DB::Exception(DB::ErrorCodes::BAD_ARGUMENTS, "Unknown load balancing: {}", load_balancing_str);
|
||||||
get_priority_load_balancing.load_balancing = *load_balancing;
|
get_priority_load_balancing = DB::GetPriorityForLoadBalancing(*load_balancing, thread_local_rng() % hosts.size());
|
||||||
break;
|
break;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
availability_zone_autodetect = config.getBool(std::string{config_name} + ".availability_zone_autodetect", false);
|
||||||
|
prefer_local_availability_zone = config.getBool(std::string{config_name} + ".prefer_local_availability_zone", false);
|
||||||
|
if (prefer_local_availability_zone)
|
||||||
|
client_availability_zone = DB::PlacementInfo::PlacementInfo::instance().getAvailabilityZone();
|
||||||
}
|
}
|
||||||
|
|
||||||
void ZooKeeperArgs::initFromKeeperSection(const Poco::Util::AbstractConfiguration & config, const std::string & config_name)
|
void ZooKeeperArgs::initFromKeeperSection(const Poco::Util::AbstractConfiguration & config, const std::string & config_name)
|
||||||
@ -137,6 +148,8 @@ void ZooKeeperArgs::initFromKeeperSection(const Poco::Util::AbstractConfiguratio
|
|||||||
Poco::Util::AbstractConfiguration::Keys keys;
|
Poco::Util::AbstractConfiguration::Keys keys;
|
||||||
config.keys(config_name, keys);
|
config.keys(config_name, keys);
|
||||||
|
|
||||||
|
std::optional<DB::LoadBalancing> load_balancing;
|
||||||
|
|
||||||
for (const auto & key : keys)
|
for (const auto & key : keys)
|
||||||
{
|
{
|
||||||
if (key.starts_with("node"))
|
if (key.starts_with("node"))
|
||||||
@ -144,6 +157,7 @@ void ZooKeeperArgs::initFromKeeperSection(const Poco::Util::AbstractConfiguratio
|
|||||||
hosts.push_back(
|
hosts.push_back(
|
||||||
(config.getBool(config_name + "." + key + ".secure", false) ? "secure://" : "")
|
(config.getBool(config_name + "." + key + ".secure", false) ? "secure://" : "")
|
||||||
+ config.getString(config_name + "." + key + ".host") + ":" + config.getString(config_name + "." + key + ".port", "2181"));
|
+ config.getString(config_name + "." + key + ".host") + ":" + config.getString(config_name + "." + key + ".port", "2181"));
|
||||||
|
availability_zones.push_back(config.getString(config_name + "." + key + ".availability_zone", ""));
|
||||||
}
|
}
|
||||||
else if (key == "session_timeout_ms")
|
else if (key == "session_timeout_ms")
|
||||||
{
|
{
|
||||||
@ -199,6 +213,10 @@ void ZooKeeperArgs::initFromKeeperSection(const Poco::Util::AbstractConfiguratio
|
|||||||
{
|
{
|
||||||
sessions_path = config.getString(config_name + "." + key);
|
sessions_path = config.getString(config_name + "." + key);
|
||||||
}
|
}
|
||||||
|
else if (key == "prefer_local_availability_zone")
|
||||||
|
{
|
||||||
|
prefer_local_availability_zone = config.getBool(config_name + "." + key);
|
||||||
|
}
|
||||||
else if (key == "implementation")
|
else if (key == "implementation")
|
||||||
{
|
{
|
||||||
implementation = config.getString(config_name + "." + key);
|
implementation = config.getString(config_name + "." + key);
|
||||||
@ -207,10 +225,9 @@ void ZooKeeperArgs::initFromKeeperSection(const Poco::Util::AbstractConfiguratio
|
|||||||
{
|
{
|
||||||
String load_balancing_str = config.getString(config_name + "." + key);
|
String load_balancing_str = config.getString(config_name + "." + key);
|
||||||
/// Use magic_enum to avoid dependency from dbms (`SettingFieldLoadBalancingTraits::fromString(...)`)
|
/// Use magic_enum to avoid dependency from dbms (`SettingFieldLoadBalancingTraits::fromString(...)`)
|
||||||
auto load_balancing = magic_enum::enum_cast<DB::LoadBalancing>(Poco::toUpper(load_balancing_str));
|
load_balancing = magic_enum::enum_cast<DB::LoadBalancing>(Poco::toUpper(load_balancing_str));
|
||||||
if (!load_balancing)
|
if (!load_balancing)
|
||||||
throw DB::Exception(DB::ErrorCodes::BAD_ARGUMENTS, "Unknown load balancing: {}", load_balancing_str);
|
throw DB::Exception(DB::ErrorCodes::BAD_ARGUMENTS, "Unknown load balancing: {}", load_balancing_str);
|
||||||
get_priority_load_balancing.load_balancing = *load_balancing;
|
|
||||||
}
|
}
|
||||||
else if (key == "fallback_session_lifetime")
|
else if (key == "fallback_session_lifetime")
|
||||||
{
|
{
|
||||||
@ -224,9 +241,19 @@ void ZooKeeperArgs::initFromKeeperSection(const Poco::Util::AbstractConfiguratio
|
|||||||
{
|
{
|
||||||
use_compression = config.getBool(config_name + "." + key);
|
use_compression = config.getBool(config_name + "." + key);
|
||||||
}
|
}
|
||||||
|
else if (key == "availability_zone_autodetect")
|
||||||
|
{
|
||||||
|
availability_zone_autodetect = config.getBool(config_name + "." + key);
|
||||||
|
}
|
||||||
else
|
else
|
||||||
throw KeeperException(Coordination::Error::ZBADARGUMENTS, "Unknown key {} in config file", key);
|
throw KeeperException(Coordination::Error::ZBADARGUMENTS, "Unknown key {} in config file", key);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
if (load_balancing)
|
||||||
|
get_priority_load_balancing = DB::GetPriorityForLoadBalancing(*load_balancing, thread_local_rng() % hosts.size());
|
||||||
|
|
||||||
|
if (prefer_local_availability_zone)
|
||||||
|
client_availability_zone = DB::PlacementInfo::PlacementInfo::instance().getAvailabilityZone();
|
||||||
}
|
}
|
||||||
|
|
||||||
}
|
}
|
||||||
|
@ -32,10 +32,12 @@ struct ZooKeeperArgs
|
|||||||
String zookeeper_name = "zookeeper";
|
String zookeeper_name = "zookeeper";
|
||||||
String implementation = "zookeeper";
|
String implementation = "zookeeper";
|
||||||
Strings hosts;
|
Strings hosts;
|
||||||
|
Strings availability_zones;
|
||||||
String auth_scheme;
|
String auth_scheme;
|
||||||
String identity;
|
String identity;
|
||||||
String chroot;
|
String chroot;
|
||||||
String sessions_path = "/clickhouse/sessions";
|
String sessions_path = "/clickhouse/sessions";
|
||||||
|
String client_availability_zone;
|
||||||
int32_t connection_timeout_ms = Coordination::DEFAULT_CONNECTION_TIMEOUT_MS;
|
int32_t connection_timeout_ms = Coordination::DEFAULT_CONNECTION_TIMEOUT_MS;
|
||||||
int32_t session_timeout_ms = Coordination::DEFAULT_SESSION_TIMEOUT_MS;
|
int32_t session_timeout_ms = Coordination::DEFAULT_SESSION_TIMEOUT_MS;
|
||||||
int32_t operation_timeout_ms = Coordination::DEFAULT_OPERATION_TIMEOUT_MS;
|
int32_t operation_timeout_ms = Coordination::DEFAULT_OPERATION_TIMEOUT_MS;
|
||||||
@ -47,6 +49,8 @@ struct ZooKeeperArgs
|
|||||||
UInt64 send_sleep_ms = 0;
|
UInt64 send_sleep_ms = 0;
|
||||||
UInt64 recv_sleep_ms = 0;
|
UInt64 recv_sleep_ms = 0;
|
||||||
bool use_compression = false;
|
bool use_compression = false;
|
||||||
|
bool prefer_local_availability_zone = false;
|
||||||
|
bool availability_zone_autodetect = false;
|
||||||
|
|
||||||
SessionLifetimeConfiguration fallback_session_lifetime = {};
|
SessionLifetimeConfiguration fallback_session_lifetime = {};
|
||||||
DB::GetPriorityForLoadBalancing get_priority_load_balancing;
|
DB::GetPriorityForLoadBalancing get_priority_load_balancing;
|
||||||
|
@ -23,6 +23,9 @@
|
|||||||
#include <Common/setThreadName.h>
|
#include <Common/setThreadName.h>
|
||||||
#include <Common/thread_local_rng.h>
|
#include <Common/thread_local_rng.h>
|
||||||
|
|
||||||
|
#include <Poco/Net/NetException.h>
|
||||||
|
#include <Poco/Net/DNS.h>
|
||||||
|
|
||||||
#include "Coordination/KeeperConstants.h"
|
#include "Coordination/KeeperConstants.h"
|
||||||
#include "config.h"
|
#include "config.h"
|
||||||
|
|
||||||
@ -338,7 +341,7 @@ ZooKeeper::~ZooKeeper()
|
|||||||
|
|
||||||
|
|
||||||
ZooKeeper::ZooKeeper(
|
ZooKeeper::ZooKeeper(
|
||||||
const Nodes & nodes,
|
const zkutil::ShuffleHosts & nodes,
|
||||||
const zkutil::ZooKeeperArgs & args_,
|
const zkutil::ZooKeeperArgs & args_,
|
||||||
std::shared_ptr<ZooKeeperLog> zk_log_)
|
std::shared_ptr<ZooKeeperLog> zk_log_)
|
||||||
: args(args_)
|
: args(args_)
|
||||||
@ -426,7 +429,7 @@ ZooKeeper::ZooKeeper(
|
|||||||
|
|
||||||
|
|
||||||
void ZooKeeper::connect(
|
void ZooKeeper::connect(
|
||||||
const Nodes & nodes,
|
const zkutil::ShuffleHosts & nodes,
|
||||||
Poco::Timespan connection_timeout)
|
Poco::Timespan connection_timeout)
|
||||||
{
|
{
|
||||||
if (nodes.empty())
|
if (nodes.empty())
|
||||||
@ -434,15 +437,51 @@ void ZooKeeper::connect(
|
|||||||
|
|
||||||
static constexpr size_t num_tries = 3;
|
static constexpr size_t num_tries = 3;
|
||||||
bool connected = false;
|
bool connected = false;
|
||||||
|
bool dns_error = false;
|
||||||
|
|
||||||
|
size_t resolved_count = 0;
|
||||||
|
for (const auto & node : nodes)
|
||||||
|
{
|
||||||
|
try
|
||||||
|
{
|
||||||
|
const Poco::Net::SocketAddress host_socket_addr{node.host};
|
||||||
|
LOG_TRACE(log, "Adding ZooKeeper host {} ({}), az: {}, priority: {}", node.host, host_socket_addr.toString(), node.az_info, node.priority);
|
||||||
|
node.address = host_socket_addr;
|
||||||
|
++resolved_count;
|
||||||
|
}
|
||||||
|
catch (const Poco::Net::HostNotFoundException & e)
|
||||||
|
{
|
||||||
|
/// Most likely it's misconfiguration and wrong hostname was specified
|
||||||
|
LOG_ERROR(log, "Cannot use ZooKeeper host {}, reason: {}", node.host, e.displayText());
|
||||||
|
}
|
||||||
|
catch (const Poco::Net::DNSException & e)
|
||||||
|
{
|
||||||
|
/// Most likely DNS is not available now
|
||||||
|
dns_error = true;
|
||||||
|
LOG_ERROR(log, "Cannot use ZooKeeper host {} due to DNS error: {}", node.host, e.displayText());
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if (resolved_count == 0)
|
||||||
|
{
|
||||||
|
/// For DNS errors we throw exception with ZCONNECTIONLOSS code, so it will be considered as hardware error, not user error
|
||||||
|
if (dns_error)
|
||||||
|
throw zkutil::KeeperException::fromMessage(
|
||||||
|
Coordination::Error::ZCONNECTIONLOSS, "Cannot resolve any of provided ZooKeeper hosts due to DNS error");
|
||||||
|
else
|
||||||
|
throw zkutil::KeeperException::fromMessage(Coordination::Error::ZCONNECTIONLOSS, "Cannot use any of provided ZooKeeper nodes");
|
||||||
|
}
|
||||||
|
|
||||||
WriteBufferFromOwnString fail_reasons;
|
WriteBufferFromOwnString fail_reasons;
|
||||||
for (size_t try_no = 0; try_no < num_tries; ++try_no)
|
for (size_t try_no = 0; try_no < num_tries; ++try_no)
|
||||||
{
|
{
|
||||||
for (size_t i = 0; i < nodes.size(); ++i)
|
for (const auto & node : nodes)
|
||||||
{
|
{
|
||||||
const auto & node = nodes[i];
|
|
||||||
try
|
try
|
||||||
{
|
{
|
||||||
|
if (!node.address)
|
||||||
|
continue;
|
||||||
|
|
||||||
/// Reset the state of previous attempt.
|
/// Reset the state of previous attempt.
|
||||||
if (node.secure)
|
if (node.secure)
|
||||||
{
|
{
|
||||||
@ -458,7 +497,7 @@ void ZooKeeper::connect(
|
|||||||
socket = Poco::Net::StreamSocket();
|
socket = Poco::Net::StreamSocket();
|
||||||
}
|
}
|
||||||
|
|
||||||
socket.connect(node.address, connection_timeout);
|
socket.connect(*node.address, connection_timeout);
|
||||||
socket_address = socket.peerAddress();
|
socket_address = socket.peerAddress();
|
||||||
|
|
||||||
socket.setReceiveTimeout(args.operation_timeout_ms * 1000);
|
socket.setReceiveTimeout(args.operation_timeout_ms * 1000);
|
||||||
@ -498,27 +537,11 @@ void ZooKeeper::connect(
|
|||||||
}
|
}
|
||||||
|
|
||||||
original_index = static_cast<Int8>(node.original_index);
|
original_index = static_cast<Int8>(node.original_index);
|
||||||
|
|
||||||
if (i != 0)
|
|
||||||
{
|
|
||||||
std::uniform_int_distribution<UInt32> fallback_session_lifetime_distribution
|
|
||||||
{
|
|
||||||
args.fallback_session_lifetime.min_sec,
|
|
||||||
args.fallback_session_lifetime.max_sec,
|
|
||||||
};
|
|
||||||
UInt32 session_lifetime_seconds = fallback_session_lifetime_distribution(thread_local_rng);
|
|
||||||
client_session_deadline = clock::now() + std::chrono::seconds(session_lifetime_seconds);
|
|
||||||
|
|
||||||
LOG_DEBUG(log, "Connected to a suboptimal ZooKeeper host ({}, index {})."
|
|
||||||
" To preserve balance in ZooKeeper usage, this ZooKeeper session will expire in {} seconds",
|
|
||||||
node.address.toString(), i, session_lifetime_seconds);
|
|
||||||
}
|
|
||||||
|
|
||||||
break;
|
break;
|
||||||
}
|
}
|
||||||
catch (...)
|
catch (...)
|
||||||
{
|
{
|
||||||
fail_reasons << "\n" << getCurrentExceptionMessage(false) << ", " << node.address.toString();
|
fail_reasons << "\n" << getCurrentExceptionMessage(false) << ", " << node.address->toString();
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@ -532,6 +555,9 @@ void ZooKeeper::connect(
|
|||||||
bool first = true;
|
bool first = true;
|
||||||
for (const auto & node : nodes)
|
for (const auto & node : nodes)
|
||||||
{
|
{
|
||||||
|
if (!node.address)
|
||||||
|
continue;
|
||||||
|
|
||||||
if (first)
|
if (first)
|
||||||
first = false;
|
first = false;
|
||||||
else
|
else
|
||||||
@ -540,7 +566,7 @@ void ZooKeeper::connect(
|
|||||||
if (node.secure)
|
if (node.secure)
|
||||||
message << "secure://";
|
message << "secure://";
|
||||||
|
|
||||||
message << node.address.toString();
|
message << node.address->toString();
|
||||||
}
|
}
|
||||||
|
|
||||||
message << fail_reasons.str() << "\n";
|
message << fail_reasons.str() << "\n";
|
||||||
@ -1153,7 +1179,6 @@ void ZooKeeper::pushRequest(RequestInfo && info)
|
|||||||
{
|
{
|
||||||
try
|
try
|
||||||
{
|
{
|
||||||
checkSessionDeadline();
|
|
||||||
info.time = clock::now();
|
info.time = clock::now();
|
||||||
auto maybe_zk_log = std::atomic_load(&zk_log);
|
auto maybe_zk_log = std::atomic_load(&zk_log);
|
||||||
if (maybe_zk_log)
|
if (maybe_zk_log)
|
||||||
@ -1201,44 +1226,44 @@ bool ZooKeeper::isFeatureEnabled(KeeperFeatureFlag feature_flag) const
|
|||||||
return keeper_feature_flags.isEnabled(feature_flag);
|
return keeper_feature_flags.isEnabled(feature_flag);
|
||||||
}
|
}
|
||||||
|
|
||||||
void ZooKeeper::initFeatureFlags()
|
std::optional<String> ZooKeeper::tryGetSystemZnode(const std::string & path, const std::string & description)
|
||||||
{
|
{
|
||||||
const auto try_get = [&](const std::string & path, const std::string & description) -> std::optional<std::string>
|
auto promise = std::make_shared<std::promise<Coordination::GetResponse>>();
|
||||||
|
auto future = promise->get_future();
|
||||||
|
|
||||||
|
auto callback = [promise](const Coordination::GetResponse & response) mutable
|
||||||
{
|
{
|
||||||
auto promise = std::make_shared<std::promise<Coordination::GetResponse>>();
|
promise->set_value(response);
|
||||||
auto future = promise->get_future();
|
|
||||||
|
|
||||||
auto callback = [promise](const Coordination::GetResponse & response) mutable
|
|
||||||
{
|
|
||||||
promise->set_value(response);
|
|
||||||
};
|
|
||||||
|
|
||||||
get(path, std::move(callback), {});
|
|
||||||
if (future.wait_for(std::chrono::milliseconds(args.operation_timeout_ms)) != std::future_status::ready)
|
|
||||||
throw Exception(Error::ZOPERATIONTIMEOUT, "Failed to get {}: timeout", description);
|
|
||||||
|
|
||||||
auto response = future.get();
|
|
||||||
|
|
||||||
if (response.error == Coordination::Error::ZNONODE)
|
|
||||||
{
|
|
||||||
LOG_TRACE(log, "Failed to get {}", description);
|
|
||||||
return std::nullopt;
|
|
||||||
}
|
|
||||||
else if (response.error != Coordination::Error::ZOK)
|
|
||||||
{
|
|
||||||
throw Exception(response.error, "Failed to get {}", description);
|
|
||||||
}
|
|
||||||
|
|
||||||
return std::move(response.data);
|
|
||||||
};
|
};
|
||||||
|
|
||||||
if (auto feature_flags = try_get(keeper_api_feature_flags_path, "feature flags"); feature_flags.has_value())
|
get(path, std::move(callback), {});
|
||||||
|
if (future.wait_for(std::chrono::milliseconds(args.operation_timeout_ms)) != std::future_status::ready)
|
||||||
|
throw Exception(Error::ZOPERATIONTIMEOUT, "Failed to get {}: timeout", description);
|
||||||
|
|
||||||
|
auto response = future.get();
|
||||||
|
|
||||||
|
if (response.error == Coordination::Error::ZNONODE)
|
||||||
|
{
|
||||||
|
LOG_TRACE(log, "Failed to get {}", description);
|
||||||
|
return std::nullopt;
|
||||||
|
}
|
||||||
|
else if (response.error != Coordination::Error::ZOK)
|
||||||
|
{
|
||||||
|
throw Exception(response.error, "Failed to get {}", description);
|
||||||
|
}
|
||||||
|
|
||||||
|
return std::move(response.data);
|
||||||
|
}
|
||||||
|
|
||||||
|
void ZooKeeper::initFeatureFlags()
|
||||||
|
{
|
||||||
|
if (auto feature_flags = tryGetSystemZnode(keeper_api_feature_flags_path, "feature flags"); feature_flags.has_value())
|
||||||
{
|
{
|
||||||
keeper_feature_flags.setFeatureFlags(std::move(*feature_flags));
|
keeper_feature_flags.setFeatureFlags(std::move(*feature_flags));
|
||||||
return;
|
return;
|
||||||
}
|
}
|
||||||
|
|
||||||
auto keeper_api_version_string = try_get(keeper_api_version_path, "API version");
|
auto keeper_api_version_string = tryGetSystemZnode(keeper_api_version_path, "API version");
|
||||||
|
|
||||||
DB::KeeperApiVersion keeper_api_version{DB::KeeperApiVersion::ZOOKEEPER_COMPATIBLE};
|
DB::KeeperApiVersion keeper_api_version{DB::KeeperApiVersion::ZOOKEEPER_COMPATIBLE};
|
||||||
|
|
||||||
@ -1256,6 +1281,17 @@ void ZooKeeper::initFeatureFlags()
|
|||||||
keeper_feature_flags.fromApiVersion(keeper_api_version);
|
keeper_feature_flags.fromApiVersion(keeper_api_version);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
String ZooKeeper::tryGetAvailabilityZone()
|
||||||
|
{
|
||||||
|
auto res = tryGetSystemZnode(keeper_availability_zone_path, "availability zone");
|
||||||
|
if (res)
|
||||||
|
{
|
||||||
|
LOG_TRACE(log, "Availability zone for ZooKeeper at {}: {}", getConnectedHostPort(), *res);
|
||||||
|
return *res;
|
||||||
|
}
|
||||||
|
return "";
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
void ZooKeeper::executeGenericRequest(
|
void ZooKeeper::executeGenericRequest(
|
||||||
const ZooKeeperRequestPtr & request,
|
const ZooKeeperRequestPtr & request,
|
||||||
@ -1587,17 +1623,6 @@ void ZooKeeper::setupFaultDistributions()
|
|||||||
inject_setup.test_and_set();
|
inject_setup.test_and_set();
|
||||||
}
|
}
|
||||||
|
|
||||||
void ZooKeeper::checkSessionDeadline() const
|
|
||||||
{
|
|
||||||
if (unlikely(hasReachedDeadline()))
|
|
||||||
throw Exception::fromMessage(Error::ZSESSIONEXPIRED, "Session expired (force expiry client-side)");
|
|
||||||
}
|
|
||||||
|
|
||||||
bool ZooKeeper::hasReachedDeadline() const
|
|
||||||
{
|
|
||||||
return client_session_deadline.has_value() && clock::now() >= client_session_deadline.value();
|
|
||||||
}
|
|
||||||
|
|
||||||
void ZooKeeper::maybeInjectSendFault()
|
void ZooKeeper::maybeInjectSendFault()
|
||||||
{
|
{
|
||||||
if (unlikely(inject_setup.test() && send_inject_fault && send_inject_fault.value()(thread_local_rng)))
|
if (unlikely(inject_setup.test() && send_inject_fault && send_inject_fault.value()(thread_local_rng)))
|
||||||
|
@ -8,6 +8,7 @@
|
|||||||
#include <Common/ZooKeeper/IKeeper.h>
|
#include <Common/ZooKeeper/IKeeper.h>
|
||||||
#include <Common/ZooKeeper/ZooKeeperCommon.h>
|
#include <Common/ZooKeeper/ZooKeeperCommon.h>
|
||||||
#include <Common/ZooKeeper/ZooKeeperArgs.h>
|
#include <Common/ZooKeeper/ZooKeeperArgs.h>
|
||||||
|
#include <Common/ZooKeeper/ZooKeeper.h>
|
||||||
#include <Coordination/KeeperConstants.h>
|
#include <Coordination/KeeperConstants.h>
|
||||||
#include <Coordination/KeeperFeatureFlags.h>
|
#include <Coordination/KeeperFeatureFlags.h>
|
||||||
|
|
||||||
@ -102,21 +103,12 @@ using namespace DB;
|
|||||||
class ZooKeeper final : public IKeeper
|
class ZooKeeper final : public IKeeper
|
||||||
{
|
{
|
||||||
public:
|
public:
|
||||||
struct Node
|
|
||||||
{
|
|
||||||
Poco::Net::SocketAddress address;
|
|
||||||
UInt8 original_index;
|
|
||||||
bool secure;
|
|
||||||
};
|
|
||||||
|
|
||||||
using Nodes = std::vector<Node>;
|
|
||||||
|
|
||||||
/** Connection to nodes is performed in order. If you want, shuffle them manually.
|
/** Connection to nodes is performed in order. If you want, shuffle them manually.
|
||||||
* Operation timeout couldn't be greater than session timeout.
|
* Operation timeout couldn't be greater than session timeout.
|
||||||
* Operation timeout applies independently for network read, network write, waiting for events and synchronization.
|
* Operation timeout applies independently for network read, network write, waiting for events and synchronization.
|
||||||
*/
|
*/
|
||||||
ZooKeeper(
|
ZooKeeper(
|
||||||
const Nodes & nodes,
|
const zkutil::ShuffleHosts & nodes,
|
||||||
const zkutil::ZooKeeperArgs & args_,
|
const zkutil::ZooKeeperArgs & args_,
|
||||||
std::shared_ptr<ZooKeeperLog> zk_log_);
|
std::shared_ptr<ZooKeeperLog> zk_log_);
|
||||||
|
|
||||||
@ -130,9 +122,7 @@ public:
|
|||||||
String getConnectedHostPort() const override { return (original_index == -1) ? "" : args.hosts[original_index]; }
|
String getConnectedHostPort() const override { return (original_index == -1) ? "" : args.hosts[original_index]; }
|
||||||
int32_t getConnectionXid() const override { return next_xid.load(); }
|
int32_t getConnectionXid() const override { return next_xid.load(); }
|
||||||
|
|
||||||
/// A ZooKeeper session can have an optional deadline set on it.
|
String tryGetAvailabilityZone() override;
|
||||||
/// After it has been reached, the session needs to be finalized.
|
|
||||||
bool hasReachedDeadline() const override;
|
|
||||||
|
|
||||||
/// Useful to check owner of ephemeral node.
|
/// Useful to check owner of ephemeral node.
|
||||||
int64_t getSessionID() const override { return session_id; }
|
int64_t getSessionID() const override { return session_id; }
|
||||||
@ -271,7 +261,6 @@ private:
|
|||||||
clock::time_point time;
|
clock::time_point time;
|
||||||
};
|
};
|
||||||
|
|
||||||
std::optional<clock::time_point> client_session_deadline {};
|
|
||||||
using RequestsQueue = ConcurrentBoundedQueue<RequestInfo>;
|
using RequestsQueue = ConcurrentBoundedQueue<RequestInfo>;
|
||||||
|
|
||||||
RequestsQueue requests_queue{1024};
|
RequestsQueue requests_queue{1024};
|
||||||
@ -316,7 +305,7 @@ private:
|
|||||||
LoggerPtr log;
|
LoggerPtr log;
|
||||||
|
|
||||||
void connect(
|
void connect(
|
||||||
const Nodes & node,
|
const zkutil::ShuffleHosts & node,
|
||||||
Poco::Timespan connection_timeout);
|
Poco::Timespan connection_timeout);
|
||||||
|
|
||||||
void sendHandshake();
|
void sendHandshake();
|
||||||
@ -346,9 +335,10 @@ private:
|
|||||||
|
|
||||||
void logOperationIfNeeded(const ZooKeeperRequestPtr & request, const ZooKeeperResponsePtr & response = nullptr, bool finalize = false, UInt64 elapsed_microseconds = 0);
|
void logOperationIfNeeded(const ZooKeeperRequestPtr & request, const ZooKeeperResponsePtr & response = nullptr, bool finalize = false, UInt64 elapsed_microseconds = 0);
|
||||||
|
|
||||||
|
std::optional<String> tryGetSystemZnode(const std::string & path, const std::string & description);
|
||||||
|
|
||||||
void initFeatureFlags();
|
void initFeatureFlags();
|
||||||
|
|
||||||
void checkSessionDeadline() const;
|
|
||||||
|
|
||||||
CurrentMetrics::Increment active_session_metric_increment{CurrentMetrics::ZooKeeperSession};
|
CurrentMetrics::Increment active_session_metric_increment{CurrentMetrics::ZooKeeperSession};
|
||||||
std::shared_ptr<ZooKeeperLog> zk_log;
|
std::shared_ptr<ZooKeeperLog> zk_log;
|
||||||
|
@ -25,24 +25,24 @@ try
|
|||||||
Poco::Logger::root().setChannel(channel);
|
Poco::Logger::root().setChannel(channel);
|
||||||
Poco::Logger::root().setLevel("trace");
|
Poco::Logger::root().setLevel("trace");
|
||||||
|
|
||||||
std::string hosts_arg = argv[1];
|
zkutil::ZooKeeperArgs args{argv[1]};
|
||||||
std::vector<std::string> hosts_strings;
|
zkutil::ShuffleHosts nodes;
|
||||||
splitInto<','>(hosts_strings, hosts_arg);
|
nodes.reserve(args.hosts.size());
|
||||||
ZooKeeper::Nodes nodes;
|
for (size_t i = 0; i < args.hosts.size(); ++i)
|
||||||
nodes.reserve(hosts_strings.size());
|
|
||||||
for (size_t i = 0; i < hosts_strings.size(); ++i)
|
|
||||||
{
|
{
|
||||||
std::string host_string = hosts_strings[i];
|
zkutil::ShuffleHost node;
|
||||||
bool secure = startsWith(host_string, "secure://");
|
std::string host_string = args.hosts[i];
|
||||||
|
node.secure = startsWith(host_string, "secure://");
|
||||||
|
|
||||||
if (secure)
|
if (node.secure)
|
||||||
host_string.erase(0, strlen("secure://"));
|
host_string.erase(0, strlen("secure://"));
|
||||||
|
|
||||||
nodes.emplace_back(ZooKeeper::Node{Poco::Net::SocketAddress{host_string}, static_cast<UInt8>(i) , secure});
|
node.host = host_string;
|
||||||
|
node.original_index = i;
|
||||||
|
|
||||||
|
nodes.emplace_back(node);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
||||||
zkutil::ZooKeeperArgs args;
|
|
||||||
ZooKeeper zk(nodes, args, nullptr);
|
ZooKeeper zk(nodes, args, nullptr);
|
||||||
|
|
||||||
Poco::Event event(true);
|
Poco::Event event(true);
|
||||||
|
@ -808,7 +808,11 @@ void LogEntryStorage::startCommitLogsPrefetch(uint64_t last_committed_index) con
|
|||||||
|
|
||||||
for (; current_index <= max_index_for_prefetch; ++current_index)
|
for (; current_index <= max_index_for_prefetch; ++current_index)
|
||||||
{
|
{
|
||||||
const auto & [changelog_description, position, size] = logs_location.at(current_index);
|
auto location_it = logs_location.find(current_index);
|
||||||
|
if (location_it == logs_location.end())
|
||||||
|
throw Exception(ErrorCodes::LOGICAL_ERROR, "Location of log entry with index {} is missing", current_index);
|
||||||
|
|
||||||
|
const auto & [changelog_description, position, size] = location_it->second;
|
||||||
if (total_size == 0)
|
if (total_size == 0)
|
||||||
current_file_info = &file_infos.emplace_back(changelog_description, position, /* count */ 1);
|
current_file_info = &file_infos.emplace_back(changelog_description, position, /* count */ 1);
|
||||||
else if (total_size + size > commit_logs_cache.size_threshold)
|
else if (total_size + size > commit_logs_cache.size_threshold)
|
||||||
@ -1416,7 +1420,11 @@ LogEntriesPtr LogEntryStorage::getLogEntriesBetween(uint64_t start, uint64_t end
|
|||||||
}
|
}
|
||||||
else
|
else
|
||||||
{
|
{
|
||||||
const auto & log_location = logs_location.at(i);
|
auto location_it = logs_location.find(i);
|
||||||
|
if (location_it == logs_location.end())
|
||||||
|
throw Exception(ErrorCodes::LOGICAL_ERROR, "Location of log entry with index {} is missing", i);
|
||||||
|
|
||||||
|
const auto & log_location = location_it->second;
|
||||||
|
|
||||||
if (!read_info)
|
if (!read_info)
|
||||||
set_new_file(log_location);
|
set_new_file(log_location);
|
||||||
|
@ -7,11 +7,12 @@
|
|||||||
#include <mutex>
|
#include <mutex>
|
||||||
#include <string>
|
#include <string>
|
||||||
#include <Coordination/KeeperLogStore.h>
|
#include <Coordination/KeeperLogStore.h>
|
||||||
|
#include <Coordination/KeeperSnapshotManagerS3.h>
|
||||||
#include <Coordination/KeeperStateMachine.h>
|
#include <Coordination/KeeperStateMachine.h>
|
||||||
#include <Coordination/KeeperStateManager.h>
|
#include <Coordination/KeeperStateManager.h>
|
||||||
#include <Coordination/KeeperSnapshotManagerS3.h>
|
|
||||||
#include <Coordination/LoggerWrapper.h>
|
#include <Coordination/LoggerWrapper.h>
|
||||||
#include <Coordination/WriteBufferFromNuraftBuffer.h>
|
#include <Coordination/WriteBufferFromNuraftBuffer.h>
|
||||||
|
#include <Disks/DiskLocal.h>
|
||||||
#include <IO/ReadHelpers.h>
|
#include <IO/ReadHelpers.h>
|
||||||
#include <IO/WriteHelpers.h>
|
#include <IO/WriteHelpers.h>
|
||||||
#include <boost/algorithm/string.hpp>
|
#include <boost/algorithm/string.hpp>
|
||||||
@ -27,7 +28,7 @@
|
|||||||
#include <Common/LockMemoryExceptionInThread.h>
|
#include <Common/LockMemoryExceptionInThread.h>
|
||||||
#include <Common/Stopwatch.h>
|
#include <Common/Stopwatch.h>
|
||||||
#include <Common/getMultipleKeysFromConfig.h>
|
#include <Common/getMultipleKeysFromConfig.h>
|
||||||
#include <Disks/DiskLocal.h>
|
#include <Common/getNumberOfPhysicalCPUCores.h>
|
||||||
|
|
||||||
#pragma clang diagnostic ignored "-Wdeprecated-declarations"
|
#pragma clang diagnostic ignored "-Wdeprecated-declarations"
|
||||||
#include <fmt/chrono.h>
|
#include <fmt/chrono.h>
|
||||||
@ -365,6 +366,8 @@ void KeeperServer::launchRaftServer(const Poco::Util::AbstractConfiguration & co
|
|||||||
LockMemoryExceptionInThread::removeUniqueLock();
|
LockMemoryExceptionInThread::removeUniqueLock();
|
||||||
};
|
};
|
||||||
|
|
||||||
|
asio_opts.thread_pool_size_ = getNumberOfPhysicalCPUCores();
|
||||||
|
|
||||||
if (state_manager->isSecure())
|
if (state_manager->isSecure())
|
||||||
{
|
{
|
||||||
#if USE_SSL
|
#if USE_SSL
|
||||||
|
@ -534,6 +534,10 @@ bool KeeperStorage::UncommittedState::hasACL(int64_t session_id, bool is_local,
|
|||||||
if (is_local)
|
if (is_local)
|
||||||
return check_auth(storage.session_and_auth[session_id]);
|
return check_auth(storage.session_and_auth[session_id]);
|
||||||
|
|
||||||
|
/// we want to close the session and with that we will remove all the auth related to the session
|
||||||
|
if (closed_sessions.contains(session_id))
|
||||||
|
return false;
|
||||||
|
|
||||||
if (check_auth(storage.session_and_auth[session_id]))
|
if (check_auth(storage.session_and_auth[session_id]))
|
||||||
return true;
|
return true;
|
||||||
|
|
||||||
@ -559,6 +563,10 @@ void KeeperStorage::UncommittedState::addDelta(Delta new_delta)
|
|||||||
auto & uncommitted_auth = session_and_auth[auth_delta->session_id];
|
auto & uncommitted_auth = session_and_auth[auth_delta->session_id];
|
||||||
uncommitted_auth.emplace_back(&auth_delta->auth_id);
|
uncommitted_auth.emplace_back(&auth_delta->auth_id);
|
||||||
}
|
}
|
||||||
|
else if (const auto * close_session_delta = std::get_if<CloseSessionDelta>(&added_delta.operation))
|
||||||
|
{
|
||||||
|
closed_sessions.insert(close_session_delta->session_id);
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
void KeeperStorage::UncommittedState::addDeltas(std::vector<Delta> new_deltas)
|
void KeeperStorage::UncommittedState::addDeltas(std::vector<Delta> new_deltas)
|
||||||
@ -1013,9 +1021,11 @@ struct KeeperStorageHeartbeatRequestProcessor final : public KeeperStorageReques
|
|||||||
{
|
{
|
||||||
using KeeperStorageRequestProcessor::KeeperStorageRequestProcessor;
|
using KeeperStorageRequestProcessor::KeeperStorageRequestProcessor;
|
||||||
Coordination::ZooKeeperResponsePtr
|
Coordination::ZooKeeperResponsePtr
|
||||||
process(KeeperStorage & /* storage */, int64_t /* zxid */) const override
|
process(KeeperStorage & storage, int64_t zxid) const override
|
||||||
{
|
{
|
||||||
return zk_request->makeResponse();
|
Coordination::ZooKeeperResponsePtr response_ptr = zk_request->makeResponse();
|
||||||
|
response_ptr->error = storage.commit(zxid);
|
||||||
|
return response_ptr;
|
||||||
}
|
}
|
||||||
};
|
};
|
||||||
|
|
||||||
@ -2377,15 +2387,13 @@ void KeeperStorage::preprocessRequest(
|
|||||||
|
|
||||||
ephemerals.erase(session_ephemerals);
|
ephemerals.erase(session_ephemerals);
|
||||||
}
|
}
|
||||||
new_deltas.emplace_back(transaction.zxid, CloseSessionDelta{session_id});
|
|
||||||
uncommitted_state.closed_sessions.insert(session_id);
|
|
||||||
|
|
||||||
|
new_deltas.emplace_back(transaction.zxid, CloseSessionDelta{session_id});
|
||||||
new_digest = calculateNodesDigest(new_digest, new_deltas);
|
new_digest = calculateNodesDigest(new_digest, new_deltas);
|
||||||
return;
|
return;
|
||||||
}
|
}
|
||||||
|
|
||||||
if ((check_acl && !request_processor->checkAuth(*this, session_id, false)) ||
|
if (check_acl && !request_processor->checkAuth(*this, session_id, false))
|
||||||
uncommitted_state.closed_sessions.contains(session_id)) // Is session closed but not committed yet
|
|
||||||
{
|
{
|
||||||
uncommitted_state.deltas.emplace_back(new_last_zxid, Coordination::Error::ZNOAUTH);
|
uncommitted_state.deltas.emplace_back(new_last_zxid, Coordination::Error::ZNOAUTH);
|
||||||
return;
|
return;
|
||||||
@ -2442,8 +2450,6 @@ KeeperStorage::ResponsesForSessions KeeperStorage::processRequest(
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
uncommitted_state.commit(zxid);
|
|
||||||
|
|
||||||
clearDeadWatches(session_id);
|
clearDeadWatches(session_id);
|
||||||
auto auth_it = session_and_auth.find(session_id);
|
auto auth_it = session_and_auth.find(session_id);
|
||||||
if (auth_it != session_and_auth.end())
|
if (auth_it != session_and_auth.end())
|
||||||
@ -2488,7 +2494,6 @@ KeeperStorage::ResponsesForSessions KeeperStorage::processRequest(
|
|||||||
else
|
else
|
||||||
{
|
{
|
||||||
response = request_processor->process(*this, zxid);
|
response = request_processor->process(*this, zxid);
|
||||||
uncommitted_state.commit(zxid);
|
|
||||||
}
|
}
|
||||||
|
|
||||||
/// Watches for this requests are added to the watches lists
|
/// Watches for this requests are added to the watches lists
|
||||||
@ -2528,6 +2533,7 @@ KeeperStorage::ResponsesForSessions KeeperStorage::processRequest(
|
|||||||
results.push_back(ResponseForSession{session_id, response});
|
results.push_back(ResponseForSession{session_id, response});
|
||||||
}
|
}
|
||||||
|
|
||||||
|
uncommitted_state.commit(zxid);
|
||||||
return results;
|
return results;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@ -2028,56 +2028,175 @@ TEST_P(CoordinationTest, TestPreprocessWhenCloseSessionIsPrecommitted)
|
|||||||
setSnapshotDirectory("./snapshots");
|
setSnapshotDirectory("./snapshots");
|
||||||
ResponsesQueue queue(std::numeric_limits<size_t>::max());
|
ResponsesQueue queue(std::numeric_limits<size_t>::max());
|
||||||
SnapshotsQueue snapshots_queue{1};
|
SnapshotsQueue snapshots_queue{1};
|
||||||
int64_t session_id = 1;
|
int64_t session_without_auth = 1;
|
||||||
|
int64_t session_with_auth = 2;
|
||||||
size_t term = 0;
|
size_t term = 0;
|
||||||
|
|
||||||
auto state_machine = std::make_shared<KeeperStateMachine>(queue, snapshots_queue, keeper_context, nullptr);
|
auto state_machine = std::make_shared<KeeperStateMachine>(queue, snapshots_queue, keeper_context, nullptr);
|
||||||
state_machine->init();
|
state_machine->init();
|
||||||
|
|
||||||
auto & storage = state_machine->getStorageUnsafe();
|
auto & storage = state_machine->getStorageUnsafe();
|
||||||
const auto & uncommitted_state = storage.uncommitted_state;
|
const auto & uncommitted_state = storage.uncommitted_state;
|
||||||
|
|
||||||
// Create first node for the session
|
auto auth_req = std::make_shared<ZooKeeperAuthRequest>();
|
||||||
String node_path_1 = "/node_1";
|
auth_req->scheme = "digest";
|
||||||
std::shared_ptr<ZooKeeperCreateRequest> create_req_1 = std::make_shared<ZooKeeperCreateRequest>();
|
auth_req->data = "test_user:test_password";
|
||||||
create_req_1->path = node_path_1;
|
|
||||||
auto create_entry_1 = getLogEntryFromZKRequest(term, session_id, state_machine->getNextZxid(), create_req_1);
|
|
||||||
|
|
||||||
state_machine->pre_commit(1, create_entry_1->get_buf());
|
// Add auth data to the session
|
||||||
EXPECT_TRUE(uncommitted_state.nodes.contains(node_path_1));
|
auto auth_entry = getLogEntryFromZKRequest(term, session_with_auth, state_machine->getNextZxid(), auth_req);
|
||||||
|
state_machine->pre_commit(1, auth_entry->get_buf());
|
||||||
|
state_machine->commit(1, auth_entry->get_buf());
|
||||||
|
|
||||||
state_machine->commit(1, create_entry_1->get_buf());
|
std::string node_without_acl = "/node_without_acl";
|
||||||
EXPECT_TRUE(storage.container.contains(node_path_1));
|
{
|
||||||
|
auto create_req = std::make_shared<ZooKeeperCreateRequest>();
|
||||||
|
create_req->path = node_without_acl;
|
||||||
|
create_req->data = "notmodified";
|
||||||
|
auto create_entry = getLogEntryFromZKRequest(term, session_with_auth, state_machine->getNextZxid(), create_req);
|
||||||
|
state_machine->pre_commit(2, create_entry->get_buf());
|
||||||
|
state_machine->commit(2, create_entry->get_buf());
|
||||||
|
ASSERT_TRUE(storage.container.contains(node_without_acl));
|
||||||
|
}
    std::string node_with_acl = "/node_with_acl";
    {
        auto create_req = std::make_shared<ZooKeeperCreateRequest>();
        create_req->path = node_with_acl;
        create_req->data = "notmodified";
        create_req->acls = {{.permissions = ACL::All, .scheme = "auth", .id = ""}};

        auto create_entry = getLogEntryFromZKRequest(term, session_with_auth, state_machine->getNextZxid(), create_req);
        state_machine->pre_commit(3, create_entry->get_buf());
        state_machine->commit(3, create_entry->get_buf());
        ASSERT_TRUE(storage.container.contains(node_with_acl));
    }

    auto set_req_with_acl = std::make_shared<ZooKeeperSetRequest>();
    set_req_with_acl->path = node_with_acl;
    set_req_with_acl->data = "modified";

    auto set_req_without_acl = std::make_shared<ZooKeeperSetRequest>();
    set_req_without_acl->path = node_without_acl;
    set_req_without_acl->data = "modified";

    const auto reset_node_value
        = [&](const auto & path) { storage.container.updateValue(path, [](auto & node) { node.setData("notmodified"); }); };

    auto close_req = std::make_shared<ZooKeeperCloseRequest>();

    {
        SCOPED_TRACE("Session with Auth");

        // test we can modify both nodes
        auto set_entry = getLogEntryFromZKRequest(term, session_with_auth, state_machine->getNextZxid(), set_req_with_acl);
        state_machine->pre_commit(5, set_entry->get_buf());
        state_machine->commit(5, set_entry->get_buf());
        ASSERT_TRUE(storage.container.find(node_with_acl)->value.getData() == "modified");
        reset_node_value(node_with_acl);

        set_entry = getLogEntryFromZKRequest(term, session_with_auth, state_machine->getNextZxid(), set_req_without_acl);
        state_machine->pre_commit(6, set_entry->get_buf());
        state_machine->commit(6, set_entry->get_buf());
        ASSERT_TRUE(storage.container.find(node_without_acl)->value.getData() == "modified");
        reset_node_value(node_without_acl);

        auto close_entry = getLogEntryFromZKRequest(term, session_with_auth, state_machine->getNextZxid(), close_req);

        // Pre-commit close session
        state_machine->pre_commit(7, close_entry->get_buf());

        /// will be rejected because we don't have required auth
        auto set_entry_with_acl = getLogEntryFromZKRequest(term, session_with_auth, state_machine->getNextZxid(), set_req_with_acl);
        state_machine->pre_commit(8, set_entry_with_acl->get_buf());

        /// will be accepted because no ACL
        auto set_entry_without_acl = getLogEntryFromZKRequest(term, session_with_auth, state_machine->getNextZxid(), set_req_without_acl);
        state_machine->pre_commit(9, set_entry_without_acl->get_buf());

        ASSERT_TRUE(uncommitted_state.getNode(node_with_acl)->getData() == "notmodified");
        ASSERT_TRUE(uncommitted_state.getNode(node_without_acl)->getData() == "modified");

        state_machine->rollback(9, set_entry_without_acl->get_buf());
        state_machine->rollback(8, set_entry_with_acl->get_buf());

        // let's commit close and verify we get same outcome
        state_machine->commit(7, close_entry->get_buf());

        /// will be rejected because we don't have required auth
        set_entry_with_acl = getLogEntryFromZKRequest(term, session_with_auth, state_machine->getNextZxid(), set_req_with_acl);
        state_machine->pre_commit(8, set_entry_with_acl->get_buf());

        /// will be accepted because no ACL
        set_entry_without_acl = getLogEntryFromZKRequest(term, session_with_auth, state_machine->getNextZxid(), set_req_without_acl);
        state_machine->pre_commit(9, set_entry_without_acl->get_buf());

        ASSERT_TRUE(uncommitted_state.getNode(node_with_acl)->getData() == "notmodified");
        ASSERT_TRUE(uncommitted_state.getNode(node_without_acl)->getData() == "modified");

        state_machine->commit(8, set_entry_with_acl->get_buf());
        state_machine->commit(9, set_entry_without_acl->get_buf());

        ASSERT_TRUE(storage.container.find(node_with_acl)->value.getData() == "notmodified");
        ASSERT_TRUE(storage.container.find(node_without_acl)->value.getData() == "modified");

        reset_node_value(node_without_acl);
    }

    {
        SCOPED_TRACE("Session without Auth");

        // test we can modify only node without acl
        auto set_entry = getLogEntryFromZKRequest(term, session_without_auth, state_machine->getNextZxid(), set_req_with_acl);
        state_machine->pre_commit(10, set_entry->get_buf());
        state_machine->commit(10, set_entry->get_buf());
        ASSERT_TRUE(storage.container.find(node_with_acl)->value.getData() == "notmodified");

        set_entry = getLogEntryFromZKRequest(term, session_without_auth, state_machine->getNextZxid(), set_req_without_acl);
        state_machine->pre_commit(11, set_entry->get_buf());
        state_machine->commit(11, set_entry->get_buf());
        ASSERT_TRUE(storage.container.find(node_without_acl)->value.getData() == "modified");
        reset_node_value(node_without_acl);

        auto close_entry = getLogEntryFromZKRequest(term, session_without_auth, state_machine->getNextZxid(), close_req);

        // Pre-commit close session
        state_machine->pre_commit(12, close_entry->get_buf());

        /// will be rejected because we don't have required auth
        auto set_entry_with_acl = getLogEntryFromZKRequest(term, session_without_auth, state_machine->getNextZxid(), set_req_with_acl);
        state_machine->pre_commit(13, set_entry_with_acl->get_buf());

        /// will be accepted because no ACL
        auto set_entry_without_acl = getLogEntryFromZKRequest(term, session_without_auth, state_machine->getNextZxid(), set_req_without_acl);
        state_machine->pre_commit(14, set_entry_without_acl->get_buf());

        ASSERT_TRUE(uncommitted_state.getNode(node_with_acl)->getData() == "notmodified");
        ASSERT_TRUE(uncommitted_state.getNode(node_without_acl)->getData() == "modified");

        state_machine->rollback(14, set_entry_without_acl->get_buf());
        state_machine->rollback(13, set_entry_with_acl->get_buf());

        // let's commit close and verify we get same outcome
        state_machine->commit(12, close_entry->get_buf());

        /// will be rejected because we don't have required auth
        set_entry_with_acl = getLogEntryFromZKRequest(term, session_without_auth, state_machine->getNextZxid(), set_req_with_acl);
        state_machine->pre_commit(13, set_entry_with_acl->get_buf());

        /// will be accepted because no ACL
        set_entry_without_acl = getLogEntryFromZKRequest(term, session_without_auth, state_machine->getNextZxid(), set_req_without_acl);
        state_machine->pre_commit(14, set_entry_without_acl->get_buf());

        ASSERT_TRUE(uncommitted_state.getNode(node_with_acl)->getData() == "notmodified");
        ASSERT_TRUE(uncommitted_state.getNode(node_without_acl)->getData() == "modified");

        state_machine->commit(13, set_entry_with_acl->get_buf());
        state_machine->commit(14, set_entry_without_acl->get_buf());

        ASSERT_TRUE(storage.container.find(node_with_acl)->value.getData() == "notmodified");
        ASSERT_TRUE(storage.container.find(node_without_acl)->value.getData() == "modified");

        reset_node_value(node_without_acl);
    }
}
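The blocks above repeat one pattern: build a ZooKeeper request, wrap it into a log entry, pre-commit it at a log index, and then check that a rollback followed by a fresh commit leaves storage in the expected state. A condensed sketch of that pattern follows; the helper itself is hypothetical and not part of this commit, but it uses only the fixtures already visible above (term, state_machine, getLogEntryFromZKRequest).

    /// Hypothetical helper, for illustration only: pre-commit a request, roll it back,
    /// then pre-commit and commit it for real at the same log index.
    auto precommit_rollback_then_commit = [&](auto session, uint64_t log_idx, const auto & request)
    {
        auto entry = getLogEntryFromZKRequest(term, session, state_machine->getNextZxid(), request);
        state_machine->pre_commit(log_idx, entry->get_buf());
        state_machine->rollback(log_idx, entry->get_buf());

        entry = getLogEntryFromZKRequest(term, session, state_machine->getNextZxid(), request);
        state_machine->pre_commit(log_idx, entry->get_buf());
        state_machine->commit(log_idx, entry->get_buf());
    };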
TEST_P(CoordinationTest, TestSetACLWithAuthSchemeForAclWhenAuthIsPrecommitted)

@@ -142,6 +142,7 @@ void Settings::applyCompatibilitySetting(const String & compatibility_value)
        return;

    ClickHouseVersion version(compatibility_value);
    const auto & settings_changes_history = getSettingsChangesHistory();
    /// Iterate through ClickHouse versions in descending order and apply reversed
    /// changes for each version that is higher than the version from the compatibility setting
    for (auto it = settings_changes_history.rbegin(); it != settings_changes_history.rend(); ++it)
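The hunk above only shows the head of this loop. A rough sketch of what its body does is below; it assumes the change records carry {name, previous_value, new_value, reason} as documented in the new file further down, and an isChanged()-style guard for values the user set explicitly. Neither detail is shown in this diff, so treat this as illustration, not as the actual implementation.

    for (auto it = settings_changes_history.rbegin(); it != settings_changes_history.rend(); ++it)
    {
        /// Releases not newer than the requested compatibility level keep their current defaults.
        if (version >= it->first)
            break;
        /// For every newer release, re-apply the old default of each changed setting.
        for (const auto & change : it->second)
            if (!isChanged(change.name))            /// assumed guard, not shown in the hunk
                set(change.name, change.previous_value);
    }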
src/Core/SettingsChangesHistory.cpp (new file, 324 lines)
@@ -0,0 +1,324 @@
#include <Core/SettingsChangesHistory.h>
#include <Core/Defines.h>
#include <IO/ReadBufferFromString.h>
#include <IO/ReadHelpers.h>
#include <boost/algorithm/string.hpp>


namespace DB
{

namespace ErrorCodes
{
    extern const int BAD_ARGUMENTS;
    extern const int LOGICAL_ERROR;
}

ClickHouseVersion::ClickHouseVersion(const String & version)
{
    Strings split;
    boost::split(split, version, [](char c){ return c == '.'; });
    components.reserve(split.size());
    if (split.empty())
        throw Exception{ErrorCodes::BAD_ARGUMENTS, "Cannot parse ClickHouse version here: {}", version};

    for (const auto & split_element : split)
    {
        size_t component;
        ReadBufferFromString buf(split_element);
        if (!tryReadIntText(component, buf) || !buf.eof())
            throw Exception{ErrorCodes::BAD_ARGUMENTS, "Cannot parse ClickHouse version here: {}", version};
        components.push_back(component);
    }
}

ClickHouseVersion::ClickHouseVersion(const char * version)
    : ClickHouseVersion(String(version))
{
}

String ClickHouseVersion::toString() const
{
    String version = std::to_string(components[0]);
    for (size_t i = 1; i < components.size(); ++i)
        version += "." + std::to_string(components[i]);

    return version;
}

// clang-format off
/// History of settings changes that controls some backward incompatible changes
/// across all ClickHouse versions. It maps ClickHouse version to settings changes that were done
/// in this version. This history contains both changes to existing settings and newly added settings.
/// Settings changes is a vector of structs
///     {setting_name, previous_value, new_value, reason}.
/// For newly added setting choose the most appropriate previous_value (for example, if new setting
/// controls new feature and it's 'true' by default, use 'false' as previous_value).
/// It's used to implement `compatibility` setting (see https://github.com/ClickHouse/ClickHouse/issues/35972)
/// Note: please check if the key already exists to prevent duplicate entries.
static std::initializer_list<std::pair<ClickHouseVersion, SettingsChangesHistory::SettingsChanges>> settings_changes_history_initializer =
{
{"24.7", {{"output_format_parquet_write_page_index", false, true, "Add a possibility to write page index into parquet files."},
|
||||||
|
{"optimize_functions_to_subcolumns", false, true, "Enable optimization by default"},
|
||||||
|
}},
|
||||||
|
{"24.6", {{"materialize_skip_indexes_on_insert", true, true, "Added new setting to allow to disable materialization of skip indexes on insert"},
|
||||||
|
{"materialize_statistics_on_insert", true, true, "Added new setting to allow to disable materialization of statistics on insert"},
|
||||||
|
{"input_format_parquet_use_native_reader", false, false, "When reading Parquet files, to use native reader instead of arrow reader."},
|
||||||
|
{"hdfs_throw_on_zero_files_match", false, false, "Allow to throw an error when ListObjects request cannot match any files in HDFS engine instead of empty query result"},
|
||||||
|
{"azure_throw_on_zero_files_match", false, false, "Allow to throw an error when ListObjects request cannot match any files in AzureBlobStorage engine instead of empty query result"},
|
||||||
|
{"s3_validate_request_settings", true, true, "Allow to disable S3 request settings validation"},
|
||||||
|
{"allow_experimental_full_text_index", false, false, "Enable experimental full-text index"},
|
||||||
|
{"azure_skip_empty_files", false, false, "Allow to skip empty files in azure table engine"},
|
||||||
|
{"hdfs_ignore_file_doesnt_exist", false, false, "Allow to return 0 rows when the requested files don't exist instead of throwing an exception in HDFS table engine"},
|
||||||
|
{"azure_ignore_file_doesnt_exist", false, false, "Allow to return 0 rows when the requested files don't exist instead of throwing an exception in AzureBlobStorage table engine"},
|
||||||
|
{"s3_ignore_file_doesnt_exist", false, false, "Allow to return 0 rows when the requested files don't exist instead of throwing an exception in S3 table engine"},
|
||||||
|
{"s3_max_part_number", 10000, 10000, "Maximum part number number for s3 upload part"},
|
||||||
|
{"s3_max_single_operation_copy_size", 32 * 1024 * 1024, 32 * 1024 * 1024, "Maximum size for a single copy operation in s3"},
|
||||||
|
{"input_format_parquet_max_block_size", 8192, DEFAULT_BLOCK_SIZE, "Increase block size for parquet reader."},
|
||||||
|
{"input_format_parquet_prefer_block_bytes", 0, DEFAULT_BLOCK_SIZE * 256, "Average block bytes output by parquet reader."},
|
||||||
|
{"enable_blob_storage_log", true, true, "Write information about blob storage operations to system.blob_storage_log table"},
|
||||||
|
{"allow_deprecated_snowflake_conversion_functions", true, false, "Disabled deprecated functions snowflakeToDateTime[64] and dateTime[64]ToSnowflake."},
|
||||||
|
{"allow_statistic_optimize", false, false, "Old setting which popped up here being renamed."},
|
||||||
|
{"allow_experimental_statistic", false, false, "Old setting which popped up here being renamed."},
|
||||||
|
{"allow_statistics_optimize", false, false, "The setting was renamed. The previous name is `allow_statistic_optimize`."},
|
||||||
|
{"allow_experimental_statistics", false, false, "The setting was renamed. The previous name is `allow_experimental_statistic`."},
|
||||||
|
{"enable_vertical_final", false, true, "Enable vertical final by default again after fixing bug"},
|
||||||
|
{"parallel_replicas_custom_key_range_lower", 0, 0, "Add settings to control the range filter when using parallel replicas with dynamic shards"},
|
||||||
|
{"parallel_replicas_custom_key_range_upper", 0, 0, "Add settings to control the range filter when using parallel replicas with dynamic shards. A value of 0 disables the upper limit"},
|
||||||
|
{"output_format_pretty_display_footer_column_names", 0, 1, "Add a setting to display column names in the footer if there are many rows. Threshold value is controlled by output_format_pretty_display_footer_column_names_min_rows."},
|
||||||
|
{"output_format_pretty_display_footer_column_names_min_rows", 0, 50, "Add a setting to control the threshold value for setting output_format_pretty_display_footer_column_names_min_rows. Default 50."},
|
||||||
|
{"output_format_csv_serialize_tuple_into_separate_columns", true, true, "A new way of how interpret tuples in CSV format was added."},
|
||||||
|
{"input_format_csv_deserialize_separate_columns_into_tuple", true, true, "A new way of how interpret tuples in CSV format was added."},
|
||||||
|
{"input_format_csv_try_infer_strings_from_quoted_tuples", true, true, "A new way of how interpret tuples in CSV format was added."},
|
||||||
|
{"input_format_json_ignore_key_case", false, false, "Ignore json key case while read json field from string."},
|
||||||
|
}},
|
||||||
|
{"24.5", {{"allow_deprecated_error_prone_window_functions", true, false, "Allow usage of deprecated error prone window functions (neighbor, runningAccumulate, runningDifferenceStartingWithFirstValue, runningDifference)"},
|
||||||
|
{"allow_experimental_join_condition", false, false, "Support join with inequal conditions which involve columns from both left and right table. e.g. t1.y < t2.y."},
|
||||||
|
{"input_format_tsv_crlf_end_of_line", false, false, "Enables reading of CRLF line endings with TSV formats"},
|
||||||
|
{"output_format_parquet_use_custom_encoder", false, true, "Enable custom Parquet encoder."},
|
||||||
|
{"cross_join_min_rows_to_compress", 0, 10000000, "Minimal count of rows to compress block in CROSS JOIN. Zero value means - disable this threshold. This block is compressed when any of the two thresholds (by rows or by bytes) are reached."},
|
||||||
|
{"cross_join_min_bytes_to_compress", 0, 1_GiB, "Minimal size of block to compress in CROSS JOIN. Zero value means - disable this threshold. This block is compressed when any of the two thresholds (by rows or by bytes) are reached."},
|
||||||
|
{"http_max_chunk_size", 0, 0, "Internal limitation"},
|
||||||
|
{"prefer_external_sort_block_bytes", 0, DEFAULT_BLOCK_SIZE * 256, "Prefer maximum block bytes for external sort, reduce the memory usage during merging."},
|
||||||
|
{"input_format_force_null_for_omitted_fields", false, false, "Disable type-defaults for omitted fields when needed"},
|
||||||
|
{"cast_string_to_dynamic_use_inference", false, false, "Add setting to allow converting String to Dynamic through parsing"},
|
||||||
|
{"allow_experimental_dynamic_type", false, false, "Add new experimental Dynamic type"},
|
||||||
|
{"azure_max_blocks_in_multipart_upload", 50000, 50000, "Maximum number of blocks in multipart upload for Azure."},
|
||||||
|
}},
|
||||||
|
{"24.4", {{"input_format_json_throw_on_bad_escape_sequence", true, true, "Allow to save JSON strings with bad escape sequences"},
|
||||||
|
{"max_parsing_threads", 0, 0, "Add a separate setting to control number of threads in parallel parsing from files"},
|
||||||
|
{"ignore_drop_queries_probability", 0, 0, "Allow to ignore drop queries in server with specified probability for testing purposes"},
|
||||||
|
{"lightweight_deletes_sync", 2, 2, "The same as 'mutation_sync', but controls only execution of lightweight deletes"},
|
||||||
|
{"query_cache_system_table_handling", "save", "throw", "The query cache no longer caches results of queries against system tables"},
|
||||||
|
{"input_format_json_ignore_unnecessary_fields", false, true, "Ignore unnecessary fields and not parse them. Enabling this may not throw exceptions on json strings of invalid format or with duplicated fields"},
|
||||||
|
{"input_format_hive_text_allow_variable_number_of_columns", false, true, "Ignore extra columns in Hive Text input (if file has more columns than expected) and treat missing fields in Hive Text input as default values."},
|
||||||
|
{"allow_experimental_database_replicated", false, true, "Database engine Replicated is now in Beta stage"},
|
||||||
|
{"temporary_data_in_cache_reserve_space_wait_lock_timeout_milliseconds", (10 * 60 * 1000), (10 * 60 * 1000), "Wait time to lock cache for sapce reservation in temporary data in filesystem cache"},
|
||||||
|
{"optimize_rewrite_sum_if_to_count_if", false, true, "Only available for the analyzer, where it works correctly"},
|
||||||
|
{"azure_allow_parallel_part_upload", "true", "true", "Use multiple threads for azure multipart upload."},
|
||||||
|
{"max_recursive_cte_evaluation_depth", DBMS_RECURSIVE_CTE_MAX_EVALUATION_DEPTH, DBMS_RECURSIVE_CTE_MAX_EVALUATION_DEPTH, "Maximum limit on recursive CTE evaluation depth"},
|
||||||
|
{"query_plan_convert_outer_join_to_inner_join", false, true, "Allow to convert OUTER JOIN to INNER JOIN if filter after JOIN always filters default values"},
|
||||||
|
}},
|
||||||
|
{"24.3", {{"s3_connect_timeout_ms", 1000, 1000, "Introduce new dedicated setting for s3 connection timeout"},
|
||||||
|
{"allow_experimental_shared_merge_tree", false, true, "The setting is obsolete"},
|
||||||
|
{"use_page_cache_for_disks_without_file_cache", false, false, "Added userspace page cache"},
|
||||||
|
{"read_from_page_cache_if_exists_otherwise_bypass_cache", false, false, "Added userspace page cache"},
|
||||||
|
{"page_cache_inject_eviction", false, false, "Added userspace page cache"},
|
||||||
|
{"default_table_engine", "None", "MergeTree", "Set default table engine to MergeTree for better usability"},
|
||||||
|
{"input_format_json_use_string_type_for_ambiguous_paths_in_named_tuples_inference_from_objects", false, false, "Allow to use String type for ambiguous paths during named tuple inference from JSON objects"},
|
||||||
|
{"traverse_shadow_remote_data_paths", false, false, "Traverse shadow directory when query system.remote_data_paths."},
|
||||||
|
{"throw_if_deduplication_in_dependent_materialized_views_enabled_with_async_insert", false, true, "Deduplication is dependent materialized view cannot work together with async inserts."},
|
||||||
|
{"parallel_replicas_allow_in_with_subquery", false, true, "If true, subquery for IN will be executed on every follower replica"},
|
||||||
|
{"log_processors_profiles", false, true, "Enable by default"},
|
||||||
|
{"function_locate_has_mysql_compatible_argument_order", false, true, "Increase compatibility with MySQL's locate function."},
|
||||||
|
{"allow_suspicious_primary_key", true, false, "Forbid suspicious PRIMARY KEY/ORDER BY for MergeTree (i.e. SimpleAggregateFunction)"},
|
||||||
|
{"filesystem_cache_reserve_space_wait_lock_timeout_milliseconds", 1000, 1000, "Wait time to lock cache for sapce reservation in filesystem cache"},
|
||||||
|
{"max_parser_backtracks", 0, 1000000, "Limiting the complexity of parsing"},
|
||||||
|
{"analyzer_compatibility_join_using_top_level_identifier", false, false, "Force to resolve identifier in JOIN USING from projection"},
|
||||||
|
{"distributed_insert_skip_read_only_replicas", false, false, "If true, INSERT into Distributed will skip read-only replicas"},
|
||||||
|
{"keeper_max_retries", 10, 10, "Max retries for general keeper operations"},
|
||||||
|
{"keeper_retry_initial_backoff_ms", 100, 100, "Initial backoff timeout for general keeper operations"},
|
||||||
|
{"keeper_retry_max_backoff_ms", 5000, 5000, "Max backoff timeout for general keeper operations"},
|
||||||
|
{"s3queue_allow_experimental_sharded_mode", false, false, "Enable experimental sharded mode of S3Queue table engine. It is experimental because it will be rewritten"},
|
||||||
|
{"allow_experimental_analyzer", false, true, "Enable analyzer and planner by default."},
|
||||||
|
{"merge_tree_read_split_ranges_into_intersecting_and_non_intersecting_injection_probability", 0.0, 0.0, "For testing of `PartsSplitter` - split read ranges into intersecting and non intersecting every time you read from MergeTree with the specified probability."},
|
||||||
|
{"allow_get_client_http_header", false, false, "Introduced a new function."},
|
||||||
|
{"output_format_pretty_row_numbers", false, true, "It is better for usability."},
|
||||||
|
{"output_format_pretty_max_value_width_apply_for_single_value", true, false, "Single values in Pretty formats won't be cut."},
|
||||||
|
{"output_format_parquet_string_as_string", false, true, "ClickHouse allows arbitrary binary data in the String data type, which is typically UTF-8. Parquet/ORC/Arrow Strings only support UTF-8. That's why you can choose which Arrow's data type to use for the ClickHouse String data type - String or Binary. While Binary would be more correct and compatible, using String by default will correspond to user expectations in most cases."},
|
||||||
|
{"output_format_orc_string_as_string", false, true, "ClickHouse allows arbitrary binary data in the String data type, which is typically UTF-8. Parquet/ORC/Arrow Strings only support UTF-8. That's why you can choose which Arrow's data type to use for the ClickHouse String data type - String or Binary. While Binary would be more correct and compatible, using String by default will correspond to user expectations in most cases."},
|
||||||
|
{"output_format_arrow_string_as_string", false, true, "ClickHouse allows arbitrary binary data in the String data type, which is typically UTF-8. Parquet/ORC/Arrow Strings only support UTF-8. That's why you can choose which Arrow's data type to use for the ClickHouse String data type - String or Binary. While Binary would be more correct and compatible, using String by default will correspond to user expectations in most cases."},
|
||||||
|
{"output_format_parquet_compression_method", "lz4", "zstd", "Parquet/ORC/Arrow support many compression methods, including lz4 and zstd. ClickHouse supports each and every compression method. Some inferior tools, such as 'duckdb', lack support for the faster `lz4` compression method, that's why we set zstd by default."},
|
||||||
|
{"output_format_orc_compression_method", "lz4", "zstd", "Parquet/ORC/Arrow support many compression methods, including lz4 and zstd. ClickHouse supports each and every compression method. Some inferior tools, such as 'duckdb', lack support for the faster `lz4` compression method, that's why we set zstd by default."},
|
||||||
|
{"output_format_pretty_highlight_digit_groups", false, true, "If enabled and if output is a terminal, highlight every digit corresponding to the number of thousands, millions, etc. with underline."},
|
||||||
|
{"geo_distance_returns_float64_on_float64_arguments", false, true, "Increase the default precision."},
|
||||||
|
{"azure_max_inflight_parts_for_one_file", 20, 20, "The maximum number of a concurrent loaded parts in multipart upload request. 0 means unlimited."},
|
||||||
|
{"azure_strict_upload_part_size", 0, 0, "The exact size of part to upload during multipart upload to Azure blob storage."},
|
||||||
|
{"azure_min_upload_part_size", 16*1024*1024, 16*1024*1024, "The minimum size of part to upload during multipart upload to Azure blob storage."},
|
||||||
|
{"azure_max_upload_part_size", 5ull*1024*1024*1024, 5ull*1024*1024*1024, "The maximum size of part to upload during multipart upload to Azure blob storage."},
|
||||||
|
{"azure_upload_part_size_multiply_factor", 2, 2, "Multiply azure_min_upload_part_size by this factor each time azure_multiply_parts_count_threshold parts were uploaded from a single write to Azure blob storage."},
|
||||||
|
{"azure_upload_part_size_multiply_parts_count_threshold", 500, 500, "Each time this number of parts was uploaded to Azure blob storage, azure_min_upload_part_size is multiplied by azure_upload_part_size_multiply_factor."},
|
||||||
|
{"output_format_csv_serialize_tuple_into_separate_columns", true, true, "A new way of how interpret tuples in CSV format was added."},
|
||||||
|
{"input_format_csv_deserialize_separate_columns_into_tuple", true, true, "A new way of how interpret tuples in CSV format was added."},
|
||||||
|
{"input_format_csv_try_infer_strings_from_quoted_tuples", true, true, "A new way of how interpret tuples in CSV format was added."},
|
||||||
|
}},
|
||||||
|
{"24.2", {{"allow_suspicious_variant_types", true, false, "Don't allow creating Variant type with suspicious variants by default"},
|
||||||
|
{"validate_experimental_and_suspicious_types_inside_nested_types", false, true, "Validate usage of experimental and suspicious types inside nested types"},
|
||||||
|
{"output_format_values_escape_quote_with_quote", false, false, "If true escape ' with '', otherwise quoted with \\'"},
|
||||||
|
{"output_format_pretty_single_large_number_tip_threshold", 0, 1'000'000, "Print a readable number tip on the right side of the table if the block consists of a single number which exceeds this value (except 0)"},
|
||||||
|
{"input_format_try_infer_exponent_floats", true, false, "Don't infer floats in exponential notation by default"},
|
||||||
|
{"query_plan_optimize_prewhere", true, true, "Allow to push down filter to PREWHERE expression for supported storages"},
|
||||||
|
{"async_insert_max_data_size", 1000000, 10485760, "The previous value appeared to be too small."},
|
||||||
|
{"async_insert_poll_timeout_ms", 10, 10, "Timeout in milliseconds for polling data from asynchronous insert queue"},
|
||||||
|
{"async_insert_use_adaptive_busy_timeout", false, true, "Use adaptive asynchronous insert timeout"},
|
||||||
|
{"async_insert_busy_timeout_min_ms", 50, 50, "The minimum value of the asynchronous insert timeout in milliseconds; it also serves as the initial value, which may be increased later by the adaptive algorithm"},
|
||||||
|
{"async_insert_busy_timeout_max_ms", 200, 200, "The minimum value of the asynchronous insert timeout in milliseconds; async_insert_busy_timeout_ms is aliased to async_insert_busy_timeout_max_ms"},
|
||||||
|
{"async_insert_busy_timeout_increase_rate", 0.2, 0.2, "The exponential growth rate at which the adaptive asynchronous insert timeout increases"},
|
||||||
|
{"async_insert_busy_timeout_decrease_rate", 0.2, 0.2, "The exponential growth rate at which the adaptive asynchronous insert timeout decreases"},
|
||||||
|
{"format_template_row_format", "", "", "Template row format string can be set directly in query"},
|
||||||
|
{"format_template_resultset_format", "", "", "Template result set format string can be set in query"},
|
||||||
|
{"split_parts_ranges_into_intersecting_and_non_intersecting_final", true, true, "Allow to split parts ranges into intersecting and non intersecting during FINAL optimization"},
|
||||||
|
{"split_intersecting_parts_ranges_into_layers_final", true, true, "Allow to split intersecting parts ranges into layers during FINAL optimization"},
|
||||||
|
{"azure_max_single_part_copy_size", 256*1024*1024, 256*1024*1024, "The maximum size of object to copy using single part copy to Azure blob storage."},
|
||||||
|
{"min_external_table_block_size_rows", DEFAULT_INSERT_BLOCK_SIZE, DEFAULT_INSERT_BLOCK_SIZE, "Squash blocks passed to external table to specified size in rows, if blocks are not big enough"},
|
||||||
|
{"min_external_table_block_size_bytes", DEFAULT_INSERT_BLOCK_SIZE * 256, DEFAULT_INSERT_BLOCK_SIZE * 256, "Squash blocks passed to external table to specified size in bytes, if blocks are not big enough."},
|
||||||
|
{"parallel_replicas_prefer_local_join", true, true, "If true, and JOIN can be executed with parallel replicas algorithm, and all storages of right JOIN part are *MergeTree, local JOIN will be used instead of GLOBAL JOIN."},
|
||||||
|
{"optimize_time_filter_with_preimage", true, true, "Optimize Date and DateTime predicates by converting functions into equivalent comparisons without conversions (e.g. toYear(col) = 2023 -> col >= '2023-01-01' AND col <= '2023-12-31')"},
|
||||||
|
{"extract_key_value_pairs_max_pairs_per_row", 0, 0, "Max number of pairs that can be produced by the `extractKeyValuePairs` function. Used as a safeguard against consuming too much memory."},
|
||||||
|
{"default_view_definer", "CURRENT_USER", "CURRENT_USER", "Allows to set default `DEFINER` option while creating a view"},
|
||||||
|
{"default_materialized_view_sql_security", "DEFINER", "DEFINER", "Allows to set a default value for SQL SECURITY option when creating a materialized view"},
|
||||||
|
{"default_normal_view_sql_security", "INVOKER", "INVOKER", "Allows to set default `SQL SECURITY` option while creating a normal view"},
|
||||||
|
{"mysql_map_string_to_text_in_show_columns", false, true, "Reduce the configuration effort to connect ClickHouse with BI tools."},
|
||||||
|
{"mysql_map_fixed_string_to_text_in_show_columns", false, true, "Reduce the configuration effort to connect ClickHouse with BI tools."},
|
||||||
|
}},
|
||||||
|
{"24.1", {{"print_pretty_type_names", false, true, "Better user experience."},
|
||||||
|
{"input_format_json_read_bools_as_strings", false, true, "Allow to read bools as strings in JSON formats by default"},
|
||||||
|
{"output_format_arrow_use_signed_indexes_for_dictionary", false, true, "Use signed indexes type for Arrow dictionaries by default as it's recommended"},
|
||||||
|
{"allow_experimental_variant_type", false, false, "Add new experimental Variant type"},
|
||||||
|
{"use_variant_as_common_type", false, false, "Allow to use Variant in if/multiIf if there is no common type"},
|
||||||
|
{"output_format_arrow_use_64_bit_indexes_for_dictionary", false, false, "Allow to use 64 bit indexes type in Arrow dictionaries"},
|
||||||
|
{"parallel_replicas_mark_segment_size", 128, 128, "Add new setting to control segment size in new parallel replicas coordinator implementation"},
|
||||||
|
{"ignore_materialized_views_with_dropped_target_table", false, false, "Add new setting to allow to ignore materialized views with dropped target table"},
|
||||||
|
{"output_format_compression_level", 3, 3, "Allow to change compression level in the query output"},
|
||||||
|
{"output_format_compression_zstd_window_log", 0, 0, "Allow to change zstd window log in the query output when zstd compression is used"},
|
||||||
|
{"enable_zstd_qat_codec", false, false, "Add new ZSTD_QAT codec"},
|
||||||
|
{"enable_vertical_final", false, true, "Use vertical final by default"},
|
||||||
|
{"output_format_arrow_use_64_bit_indexes_for_dictionary", false, false, "Allow to use 64 bit indexes type in Arrow dictionaries"},
|
||||||
|
{"max_rows_in_set_to_optimize_join", 100000, 0, "Disable join optimization as it prevents from read in order optimization"},
|
||||||
|
{"output_format_pretty_color", true, "auto", "Setting is changed to allow also for auto value, disabling ANSI escapes if output is not a tty"},
|
||||||
|
{"function_visible_width_behavior", 0, 1, "We changed the default behavior of `visibleWidth` to be more precise"},
|
||||||
|
{"max_estimated_execution_time", 0, 0, "Separate max_execution_time and max_estimated_execution_time"},
|
||||||
|
{"iceberg_engine_ignore_schema_evolution", false, false, "Allow to ignore schema evolution in Iceberg table engine"},
|
||||||
|
{"optimize_injective_functions_in_group_by", false, true, "Replace injective functions by it's arguments in GROUP BY section in analyzer"},
|
||||||
|
{"update_insert_deduplication_token_in_dependent_materialized_views", false, false, "Allow to update insert deduplication token with table identifier during insert in dependent materialized views"},
|
||||||
|
{"azure_max_unexpected_write_error_retries", 4, 4, "The maximum number of retries in case of unexpected errors during Azure blob storage write"},
|
||||||
|
{"split_parts_ranges_into_intersecting_and_non_intersecting_final", false, true, "Allow to split parts ranges into intersecting and non intersecting during FINAL optimization"},
|
||||||
|
{"split_intersecting_parts_ranges_into_layers_final", true, true, "Allow to split intersecting parts ranges into layers during FINAL optimization"}}},
|
||||||
|
{"23.12", {{"allow_suspicious_ttl_expressions", true, false, "It is a new setting, and in previous versions the behavior was equivalent to allowing."},
|
||||||
|
{"input_format_parquet_allow_missing_columns", false, true, "Allow missing columns in Parquet files by default"},
|
||||||
|
{"input_format_orc_allow_missing_columns", false, true, "Allow missing columns in ORC files by default"},
|
||||||
|
{"input_format_arrow_allow_missing_columns", false, true, "Allow missing columns in Arrow files by default"}}},
|
||||||
|
{"23.11", {{"parsedatetime_parse_without_leading_zeros", false, true, "Improved compatibility with MySQL DATE_FORMAT/STR_TO_DATE"}}},
|
||||||
|
{"23.9", {{"optimize_group_by_constant_keys", false, true, "Optimize group by constant keys by default"},
|
||||||
|
{"input_format_json_try_infer_named_tuples_from_objects", false, true, "Try to infer named Tuples from JSON objects by default"},
|
||||||
|
{"input_format_json_read_numbers_as_strings", false, true, "Allow to read numbers as strings in JSON formats by default"},
|
||||||
|
{"input_format_json_read_arrays_as_strings", false, true, "Allow to read arrays as strings in JSON formats by default"},
|
||||||
|
{"input_format_json_infer_incomplete_types_as_strings", false, true, "Allow to infer incomplete types as Strings in JSON formats by default"},
|
||||||
|
{"input_format_json_try_infer_numbers_from_strings", true, false, "Don't infer numbers from strings in JSON formats by default to prevent possible parsing errors"},
|
||||||
|
{"http_write_exception_in_output_format", false, true, "Output valid JSON/XML on exception in HTTP streaming."}}},
|
||||||
|
{"23.8", {{"rewrite_count_distinct_if_with_count_distinct_implementation", false, true, "Rewrite countDistinctIf with count_distinct_implementation configuration"}}},
|
||||||
|
{"23.7", {{"function_sleep_max_microseconds_per_block", 0, 3000000, "In previous versions, the maximum sleep time of 3 seconds was applied only for `sleep`, but not for `sleepEachRow` function. In the new version, we introduce this setting. If you set compatibility with the previous versions, we will disable the limit altogether."}}},
|
||||||
|
{"23.6", {{"http_send_timeout", 180, 30, "3 minutes seems crazy long. Note that this is timeout for a single network write call, not for the whole upload operation."},
|
||||||
|
{"http_receive_timeout", 180, 30, "See http_send_timeout."}}},
|
||||||
|
{"23.5", {{"input_format_parquet_preserve_order", true, false, "Allow Parquet reader to reorder rows for better parallelism."},
|
||||||
|
{"parallelize_output_from_storages", false, true, "Allow parallelism when executing queries that read from file/url/s3/etc. This may reorder rows."},
|
||||||
|
{"use_with_fill_by_sorting_prefix", false, true, "Columns preceding WITH FILL columns in ORDER BY clause form sorting prefix. Rows with different values in sorting prefix are filled independently"},
|
||||||
|
{"output_format_parquet_compliant_nested_types", false, true, "Change an internal field name in output Parquet file schema."}}},
|
||||||
|
{"23.4", {{"allow_suspicious_indices", true, false, "If true, index can defined with identical expressions"},
|
||||||
|
{"allow_nonconst_timezone_arguments", true, false, "Allow non-const timezone arguments in certain time-related functions like toTimeZone(), fromUnixTimestamp*(), snowflakeToDateTime*()."},
|
||||||
|
{"connect_timeout_with_failover_ms", 50, 1000, "Increase default connect timeout because of async connect"},
|
||||||
|
{"connect_timeout_with_failover_secure_ms", 100, 1000, "Increase default secure connect timeout because of async connect"},
|
||||||
|
{"hedged_connection_timeout_ms", 100, 50, "Start new connection in hedged requests after 50 ms instead of 100 to correspond with previous connect timeout"},
|
||||||
|
{"formatdatetime_f_prints_single_zero", true, false, "Improved compatibility with MySQL DATE_FORMAT()/STR_TO_DATE()"},
|
||||||
|
{"formatdatetime_parsedatetime_m_is_month_name", false, true, "Improved compatibility with MySQL DATE_FORMAT/STR_TO_DATE"}}},
|
||||||
|
{"23.3", {{"output_format_parquet_version", "1.0", "2.latest", "Use latest Parquet format version for output format"},
|
||||||
|
{"input_format_json_ignore_unknown_keys_in_named_tuple", false, true, "Improve parsing JSON objects as named tuples"},
|
||||||
|
{"input_format_native_allow_types_conversion", false, true, "Allow types conversion in Native input forma"},
|
||||||
|
{"output_format_arrow_compression_method", "none", "lz4_frame", "Use lz4 compression in Arrow output format by default"},
|
||||||
|
{"output_format_parquet_compression_method", "snappy", "lz4", "Use lz4 compression in Parquet output format by default"},
|
||||||
|
{"output_format_orc_compression_method", "none", "lz4_frame", "Use lz4 compression in ORC output format by default"},
|
||||||
|
{"async_query_sending_for_remote", false, true, "Create connections and send query async across shards"}}},
|
||||||
|
{"23.2", {{"output_format_parquet_fixed_string_as_fixed_byte_array", false, true, "Use Parquet FIXED_LENGTH_BYTE_ARRAY type for FixedString by default"},
|
||||||
|
{"output_format_arrow_fixed_string_as_fixed_byte_array", false, true, "Use Arrow FIXED_SIZE_BINARY type for FixedString by default"},
|
||||||
|
{"query_plan_remove_redundant_distinct", false, true, "Remove redundant Distinct step in query plan"},
|
||||||
|
{"optimize_duplicate_order_by_and_distinct", true, false, "Remove duplicate ORDER BY and DISTINCT if it's possible"},
|
||||||
|
{"insert_keeper_max_retries", 0, 20, "Enable reconnections to Keeper on INSERT, improve reliability"}}},
|
||||||
|
{"23.1", {{"input_format_json_read_objects_as_strings", 0, 1, "Enable reading nested json objects as strings while object type is experimental"},
|
||||||
|
{"input_format_json_defaults_for_missing_elements_in_named_tuple", false, true, "Allow missing elements in JSON objects while reading named tuples by default"},
|
||||||
|
{"input_format_csv_detect_header", false, true, "Detect header in CSV format by default"},
|
||||||
|
{"input_format_tsv_detect_header", false, true, "Detect header in TSV format by default"},
|
||||||
|
{"input_format_custom_detect_header", false, true, "Detect header in CustomSeparated format by default"},
|
||||||
|
{"query_plan_remove_redundant_sorting", false, true, "Remove redundant sorting in query plan. For example, sorting steps related to ORDER BY clauses in subqueries"}}},
|
||||||
|
{"22.12", {{"max_size_to_preallocate_for_aggregation", 10'000'000, 100'000'000, "This optimizes performance"},
|
||||||
|
{"query_plan_aggregation_in_order", 0, 1, "Enable some refactoring around query plan"},
|
||||||
|
{"format_binary_max_string_size", 0, 1_GiB, "Prevent allocating large amount of memory"}}},
|
||||||
|
{"22.11", {{"use_structure_from_insertion_table_in_table_functions", 0, 2, "Improve using structure from insertion table in table functions"}}},
|
||||||
|
{"22.9", {{"force_grouping_standard_compatibility", false, true, "Make GROUPING function output the same as in SQL standard and other DBMS"}}},
|
||||||
|
{"22.7", {{"cross_to_inner_join_rewrite", 1, 2, "Force rewrite comma join to inner"},
|
||||||
|
{"enable_positional_arguments", false, true, "Enable positional arguments feature by default"},
|
||||||
|
{"format_csv_allow_single_quotes", true, false, "Most tools don't treat single quote in CSV specially, don't do it by default too"}}},
|
||||||
|
{"22.6", {{"output_format_json_named_tuples_as_objects", false, true, "Allow to serialize named tuples as JSON objects in JSON formats by default"},
|
||||||
|
{"input_format_skip_unknown_fields", false, true, "Optimize reading subset of columns for some input formats"}}},
|
||||||
|
{"22.5", {{"memory_overcommit_ratio_denominator", 0, 1073741824, "Enable memory overcommit feature by default"},
|
||||||
|
{"memory_overcommit_ratio_denominator_for_user", 0, 1073741824, "Enable memory overcommit feature by default"}}},
|
||||||
|
{"22.4", {{"allow_settings_after_format_in_insert", true, false, "Do not allow SETTINGS after FORMAT for INSERT queries because ClickHouse interpret SETTINGS as some values, which is misleading"}}},
|
||||||
|
{"22.3", {{"cast_ipv4_ipv6_default_on_conversion_error", true, false, "Make functions cast(value, 'IPv4') and cast(value, 'IPv6') behave same as toIPv4 and toIPv6 functions"}}},
|
||||||
|
{"21.12", {{"stream_like_engine_allow_direct_select", true, false, "Do not allow direct select for Kafka/RabbitMQ/FileLog by default"}}},
|
||||||
|
{"21.9", {{"output_format_decimal_trailing_zeros", true, false, "Do not output trailing zeros in text representation of Decimal types by default for better looking output"},
|
||||||
|
{"use_hedged_requests", false, true, "Enable Hedged Requests feature by default"}}},
|
||||||
|
{"21.7", {{"legacy_column_name_of_tuple_literal", true, false, "Add this setting only for compatibility reasons. It makes sense to set to 'true', while doing rolling update of cluster from version lower than 21.7 to higher"}}},
|
||||||
|
{"21.5", {{"async_socket_for_remote", false, true, "Fix all problems and turn on asynchronous reads from socket for remote queries by default again"}}},
|
||||||
|
{"21.3", {{"async_socket_for_remote", true, false, "Turn off asynchronous reads from socket for remote queries because of some problems"},
|
||||||
|
{"optimize_normalize_count_variants", false, true, "Rewrite aggregate functions that semantically equals to count() as count() by default"},
|
||||||
|
{"normalize_function_names", false, true, "Normalize function names to their canonical names, this was needed for projection query routing"}}},
|
||||||
|
{"21.2", {{"enable_global_with_statement", false, true, "Propagate WITH statements to UNION queries and all subqueries by default"}}},
|
||||||
|
{"21.1", {{"insert_quorum_parallel", false, true, "Use parallel quorum inserts by default. It is significantly more convenient to use than sequential quorum inserts"},
|
||||||
|
{"input_format_null_as_default", false, true, "Allow to insert NULL as default for input formats by default"},
|
||||||
|
{"optimize_on_insert", false, true, "Enable data optimization on INSERT by default for better user experience"},
|
||||||
|
{"use_compact_format_in_distributed_parts_names", false, true, "Use compact format for async INSERT into Distributed tables by default"}}},
|
||||||
|
{"20.10", {{"format_regexp_escaping_rule", "Escaped", "Raw", "Use Raw as default escaping rule for Regexp format to male the behaviour more like to what users expect"}}},
|
||||||
|
{"20.7", {{"show_table_uuid_in_table_create_query_if_not_nil", true, false, "Stop showing UID of the table in its CREATE query for Engine=Atomic"}}},
|
||||||
|
{"20.5", {{"input_format_with_names_use_header", false, true, "Enable using header with names for formats with WithNames/WithNamesAndTypes suffixes"},
|
||||||
|
{"allow_suspicious_codecs", true, false, "Don't allow to specify meaningless compression codecs"}}},
|
||||||
|
{"20.4", {{"validate_polygons", false, true, "Throw exception if polygon is invalid in function pointInPolygon by default instead of returning possibly wrong results"}}},
|
||||||
|
{"19.18", {{"enable_scalar_subquery_optimization", false, true, "Prevent scalar subqueries from (de)serializing large scalar values and possibly avoid running the same subquery more than once"}}},
|
||||||
|
{"19.14", {{"any_join_distinct_right_table_keys", true, false, "Disable ANY RIGHT and ANY FULL JOINs by default to avoid inconsistency"}}},
|
||||||
|
{"19.12", {{"input_format_defaults_for_omitted_fields", false, true, "Enable calculation of complex default expressions for omitted fields for some input formats, because it should be the expected behaviour"}}},
|
||||||
|
{"19.5", {{"max_partitions_per_insert_block", 0, 100, "Add a limit for the number of partitions in one block"}}},
|
||||||
|
{"18.12.17", {{"enable_optimize_predicate_expression", 0, 1, "Optimize predicates to subqueries by default"}}},
|
||||||
|
};

const std::map<ClickHouseVersion, SettingsChangesHistory::SettingsChanges> & getSettingsChangesHistory()
{
    static std::map<ClickHouseVersion, SettingsChangesHistory::SettingsChanges> settings_changes_history;

    static std::once_flag initialized_flag;
    std::call_once(initialized_flag, []()
    {
        for (const auto & setting_change : settings_changes_history_initializer)
        {
            /// Disallow duplicate keys in the settings changes history. Example:
            ///     {"21.2", {{"some_setting_1", false, true, "[...]"}}},
            ///     [...]
            ///     {"21.2", {{"some_setting_2", false, true, "[...]"}}},
            /// As std::set has unique keys, one of the entries would be overwritten.
            if (settings_changes_history.contains(setting_change.first))
                throw Exception{ErrorCodes::LOGICAL_ERROR, "Detected duplicate version '{}'", setting_change.first.toString()};

            settings_changes_history[setting_change.first] = setting_change.second;
        }
    });

    return settings_changes_history;
}

}
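A minimal usage sketch for the accessor above (hypothetical call site, not part of this commit; process() stands in for whatever the caller does with each record):

    #include <Core/SettingsChangesHistory.h>

    const auto & history = DB::getSettingsChangesHistory();
    if (auto it = history.find(DB::ClickHouseVersion("24.5")); it != history.end())
    {
        /// it->second is the vector of {name, previous_value, new_value, reason} records for 24.5.
        for (const auto & change : it->second)
            process(change);
    }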
@@ -1,62 +1,25 @@
#pragma once

#include <Core/Field.h>
#include <map>
#include <vector>


namespace DB
{

class ClickHouseVersion
{
public:
    /// NOLINTBEGIN(google-explicit-constructor)
    ClickHouseVersion(const String & version);
    ClickHouseVersion(const char * version);
    /// NOLINTEND(google-explicit-constructor)

    String toString() const;

    bool operator<(const ClickHouseVersion & other) const { return components < other.components; }
    bool operator>=(const ClickHouseVersion & other) const { return components >= other.components; }

private:
    std::vector<size_t> components;
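Since operator< and operator>= compare the parsed component vectors rather than the raw version strings, multi-digit components order correctly. A small illustration (for exposition only, not part of the diff):

    #include <cassert>
    #include <Core/SettingsChangesHistory.h>

    assert(DB::ClickHouseVersion("24.9") < DB::ClickHouseVersion("24.10"));   /// a plain string comparison would get this wrong
    assert(DB::ClickHouseVersion("24.3.1") >= DB::ClickHouseVersion("24.3"));
    assert(DB::ClickHouseVersion("24.4").toString() == "24.4");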
@@ -75,255 +38,6 @@ namespace SettingsChangesHistory
    using SettingsChanges = std::vector<SettingChange>;
}

const std::map<ClickHouseVersion, SettingsChangesHistory::SettingsChanges> & getSettingsChangesHistory();
/// History of settings changes that controls some backward incompatible changes
|
|
||||||
/// across all ClickHouse versions. It maps ClickHouse version to settings changes that were done
|
|
||||||
/// in this version. This history contains both changes to existing settings and newly added settings.
|
|
||||||
/// Settings changes is a vector of structs
|
|
||||||
/// {setting_name, previous_value, new_value, reason}.
|
|
||||||
/// For newly added setting choose the most appropriate previous_value (for example, if new setting
|
|
||||||
/// controls new feature and it's 'true' by default, use 'false' as previous_value).
|
|
||||||
/// It's used to implement `compatibility` setting (see https://github.com/ClickHouse/ClickHouse/issues/35972)
|
|
||||||
static const std::map<ClickHouseVersion, SettingsChangesHistory::SettingsChanges> settings_changes_history =
|
|
||||||
{
|
|
||||||
{"24.7", {{"output_format_parquet_write_page_index", false, true, "Add a possibility to write page index into parquet files."},
|
|
||||||
{"optimize_functions_to_subcolumns", false, true, "Enable optimization by default"},
|
|
||||||
}},
|
|
||||||
{"24.6", {{"materialize_skip_indexes_on_insert", true, true, "Added new setting to allow to disable materialization of skip indexes on insert"},
|
|
||||||
{"materialize_statistics_on_insert", true, true, "Added new setting to allow to disable materialization of statistics on insert"},
|
|
||||||
{"input_format_parquet_use_native_reader", false, false, "When reading Parquet files, to use native reader instead of arrow reader."},
|
|
||||||
{"hdfs_throw_on_zero_files_match", false, false, "Allow to throw an error when ListObjects request cannot match any files in HDFS engine instead of empty query result"},
|
|
||||||
{"azure_throw_on_zero_files_match", false, false, "Allow to throw an error when ListObjects request cannot match any files in AzureBlobStorage engine instead of empty query result"},
|
|
||||||
{"s3_validate_request_settings", true, true, "Allow to disable S3 request settings validation"},
|
|
||||||
{"allow_experimental_full_text_index", false, false, "Enable experimental full-text index"},
|
|
||||||
{"azure_skip_empty_files", false, false, "Allow to skip empty files in azure table engine"},
|
|
||||||
{"hdfs_ignore_file_doesnt_exist", false, false, "Allow to return 0 rows when the requested files don't exist instead of throwing an exception in HDFS table engine"},
|
|
||||||
{"azure_ignore_file_doesnt_exist", false, false, "Allow to return 0 rows when the requested files don't exist instead of throwing an exception in AzureBlobStorage table engine"},
|
|
||||||
{"s3_ignore_file_doesnt_exist", false, false, "Allow to return 0 rows when the requested files don't exist instead of throwing an exception in S3 table engine"},
|
|
||||||
{"s3_max_part_number", 10000, 10000, "Maximum part number number for s3 upload part"},
|
|
||||||
{"s3_max_single_operation_copy_size", 32 * 1024 * 1024, 32 * 1024 * 1024, "Maximum size for a single copy operation in s3"},
|
|
||||||
{"input_format_parquet_max_block_size", 8192, DEFAULT_BLOCK_SIZE, "Increase block size for parquet reader."},
|
|
||||||
{"input_format_parquet_prefer_block_bytes", 0, DEFAULT_BLOCK_SIZE * 256, "Average block bytes output by parquet reader."},
|
|
||||||
{"enable_blob_storage_log", true, true, "Write information about blob storage operations to system.blob_storage_log table"},
|
|
||||||
{"allow_deprecated_snowflake_conversion_functions", true, false, "Disabled deprecated functions snowflakeToDateTime[64] and dateTime[64]ToSnowflake."},
|
|
||||||
{"allow_statistic_optimize", false, false, "Old setting which popped up here being renamed."},
|
|
||||||
{"allow_experimental_statistic", false, false, "Old setting which popped up here being renamed."},
|
|
||||||
{"allow_statistics_optimize", false, false, "The setting was renamed. The previous name is `allow_statistic_optimize`."},
|
|
||||||
{"allow_experimental_statistics", false, false, "The setting was renamed. The previous name is `allow_experimental_statistic`."},
|
|
||||||
{"enable_vertical_final", false, true, "Enable vertical final by default again after fixing bug"},
|
|
||||||
{"parallel_replicas_custom_key_range_lower", 0, 0, "Add settings to control the range filter when using parallel replicas with dynamic shards"},
|
|
||||||
{"parallel_replicas_custom_key_range_upper", 0, 0, "Add settings to control the range filter when using parallel replicas with dynamic shards. A value of 0 disables the upper limit"},
|
|
||||||
{"output_format_pretty_display_footer_column_names", 0, 1, "Add a setting to display column names in the footer if there are many rows. Threshold value is controlled by output_format_pretty_display_footer_column_names_min_rows."},
|
|
||||||
{"output_format_pretty_display_footer_column_names_min_rows", 0, 50, "Add a setting to control the threshold value for setting output_format_pretty_display_footer_column_names_min_rows. Default 50."},
|
|
||||||
{"output_format_csv_serialize_tuple_into_separate_columns", true, true, "A new way of how interpret tuples in CSV format was added."},
|
|
||||||
{"input_format_csv_deserialize_separate_columns_into_tuple", true, true, "A new way of how interpret tuples in CSV format was added."},
|
|
||||||
{"input_format_csv_try_infer_strings_from_quoted_tuples", true, true, "A new way of how interpret tuples in CSV format was added."},
|
|
||||||
{"input_format_json_ignore_key_case", false, false, "Ignore json key case while read json field from string."},
|
|
||||||
}},
|
|
||||||
{"24.5", {{"allow_deprecated_error_prone_window_functions", true, false, "Allow usage of deprecated error prone window functions (neighbor, runningAccumulate, runningDifferenceStartingWithFirstValue, runningDifference)"},
|
|
||||||
{"allow_experimental_join_condition", false, false, "Support join with inequal conditions which involve columns from both left and right table. e.g. t1.y < t2.y."},
|
|
||||||
{"input_format_tsv_crlf_end_of_line", false, false, "Enables reading of CRLF line endings with TSV formats"},
|
|
||||||
{"output_format_parquet_use_custom_encoder", false, true, "Enable custom Parquet encoder."},
|
|
||||||
{"cross_join_min_rows_to_compress", 0, 10000000, "Minimal count of rows to compress block in CROSS JOIN. Zero value means - disable this threshold. This block is compressed when any of the two thresholds (by rows or by bytes) are reached."},
|
|
||||||
{"cross_join_min_bytes_to_compress", 0, 1_GiB, "Minimal size of block to compress in CROSS JOIN. Zero value means - disable this threshold. This block is compressed when any of the two thresholds (by rows or by bytes) are reached."},
|
|
||||||
{"http_max_chunk_size", 0, 0, "Internal limitation"},
|
|
||||||
{"prefer_external_sort_block_bytes", 0, DEFAULT_BLOCK_SIZE * 256, "Prefer maximum block bytes for external sort, reduce the memory usage during merging."},
|
|
||||||
{"input_format_force_null_for_omitted_fields", false, false, "Disable type-defaults for omitted fields when needed"},
|
|
||||||
{"cast_string_to_dynamic_use_inference", false, false, "Add setting to allow converting String to Dynamic through parsing"},
|
|
||||||
{"allow_experimental_dynamic_type", false, false, "Add new experimental Dynamic type"},
|
|
||||||
{"azure_max_blocks_in_multipart_upload", 50000, 50000, "Maximum number of blocks in multipart upload for Azure."},
|
|
||||||
}},
|
|
||||||
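The two `cross_join_min_*_to_compress` entries describe paired thresholds where reaching either one triggers compression and a zero disables that particular check. A hedged sketch of that predicate (an illustrative helper, not the real CROSS JOIN code):

```cpp
#include <cstddef>
#include <cstdint>
#include <iostream>

// Sketch: a block accumulated by CROSS JOIN is compressed once it reaches
// either the row threshold or the byte threshold; a zero threshold disables
// that particular check. Mirrors the semantics described above, not the code.
bool shouldCompressCrossJoinBlock(
    size_t block_rows, size_t block_bytes,
    uint64_t min_rows_to_compress, uint64_t min_bytes_to_compress)
{
    const bool rows_reached = min_rows_to_compress != 0 && block_rows >= min_rows_to_compress;
    const bool bytes_reached = min_bytes_to_compress != 0 && block_bytes >= min_bytes_to_compress;
    return rows_reached || bytes_reached;
}

int main()
{
    // 20M rows trips the 10M-row threshold even though bytes stay small.
    std::cout << shouldCompressCrossJoinBlock(20'000'000, 1024, 10'000'000, 1ull << 30) << "\n";
}
```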
{"24.4", {{"input_format_json_throw_on_bad_escape_sequence", true, true, "Allow to save JSON strings with bad escape sequences"},
|
|
||||||
{"max_parsing_threads", 0, 0, "Add a separate setting to control number of threads in parallel parsing from files"},
|
|
||||||
{"ignore_drop_queries_probability", 0, 0, "Allow to ignore drop queries in server with specified probability for testing purposes"},
|
|
||||||
{"lightweight_deletes_sync", 2, 2, "The same as 'mutation_sync', but controls only execution of lightweight deletes"},
|
|
||||||
{"query_cache_system_table_handling", "save", "throw", "The query cache no longer caches results of queries against system tables"},
|
|
||||||
{"input_format_json_ignore_unnecessary_fields", false, true, "Ignore unnecessary fields and not parse them. Enabling this may not throw exceptions on json strings of invalid format or with duplicated fields"},
|
|
||||||
{"input_format_hive_text_allow_variable_number_of_columns", false, true, "Ignore extra columns in Hive Text input (if file has more columns than expected) and treat missing fields in Hive Text input as default values."},
|
|
||||||
{"allow_experimental_database_replicated", false, true, "Database engine Replicated is now in Beta stage"},
|
|
||||||
{"temporary_data_in_cache_reserve_space_wait_lock_timeout_milliseconds", (10 * 60 * 1000), (10 * 60 * 1000), "Wait time to lock cache for sapce reservation in temporary data in filesystem cache"},
|
|
||||||
{"optimize_rewrite_sum_if_to_count_if", false, true, "Only available for the analyzer, where it works correctly"},
|
|
||||||
{"azure_allow_parallel_part_upload", "true", "true", "Use multiple threads for azure multipart upload."},
|
|
||||||
{"max_recursive_cte_evaluation_depth", DBMS_RECURSIVE_CTE_MAX_EVALUATION_DEPTH, DBMS_RECURSIVE_CTE_MAX_EVALUATION_DEPTH, "Maximum limit on recursive CTE evaluation depth"},
|
|
||||||
{"query_plan_convert_outer_join_to_inner_join", false, true, "Allow to convert OUTER JOIN to INNER JOIN if filter after JOIN always filters default values"},
|
|
||||||
}},
|
|
||||||
{"24.3", {{"s3_connect_timeout_ms", 1000, 1000, "Introduce new dedicated setting for s3 connection timeout"},
|
|
||||||
{"allow_experimental_shared_merge_tree", false, true, "The setting is obsolete"},
|
|
||||||
{"use_page_cache_for_disks_without_file_cache", false, false, "Added userspace page cache"},
|
|
||||||
{"read_from_page_cache_if_exists_otherwise_bypass_cache", false, false, "Added userspace page cache"},
|
|
||||||
{"page_cache_inject_eviction", false, false, "Added userspace page cache"},
|
|
||||||
{"default_table_engine", "None", "MergeTree", "Set default table engine to MergeTree for better usability"},
|
|
||||||
{"input_format_json_use_string_type_for_ambiguous_paths_in_named_tuples_inference_from_objects", false, false, "Allow to use String type for ambiguous paths during named tuple inference from JSON objects"},
|
|
||||||
{"traverse_shadow_remote_data_paths", false, false, "Traverse shadow directory when query system.remote_data_paths."},
|
|
||||||
{"throw_if_deduplication_in_dependent_materialized_views_enabled_with_async_insert", false, true, "Deduplication is dependent materialized view cannot work together with async inserts."},
|
|
||||||
{"parallel_replicas_allow_in_with_subquery", false, true, "If true, subquery for IN will be executed on every follower replica"},
|
|
||||||
{"log_processors_profiles", false, true, "Enable by default"},
|
|
||||||
{"function_locate_has_mysql_compatible_argument_order", false, true, "Increase compatibility with MySQL's locate function."},
|
|
||||||
{"allow_suspicious_primary_key", true, false, "Forbid suspicious PRIMARY KEY/ORDER BY for MergeTree (i.e. SimpleAggregateFunction)"},
|
|
||||||
{"filesystem_cache_reserve_space_wait_lock_timeout_milliseconds", 1000, 1000, "Wait time to lock cache for sapce reservation in filesystem cache"},
|
|
||||||
{"max_parser_backtracks", 0, 1000000, "Limiting the complexity of parsing"},
|
|
||||||
{"analyzer_compatibility_join_using_top_level_identifier", false, false, "Force to resolve identifier in JOIN USING from projection"},
|
|
||||||
{"distributed_insert_skip_read_only_replicas", false, false, "If true, INSERT into Distributed will skip read-only replicas"},
|
|
||||||
{"keeper_max_retries", 10, 10, "Max retries for general keeper operations"},
|
|
||||||
{"keeper_retry_initial_backoff_ms", 100, 100, "Initial backoff timeout for general keeper operations"},
|
|
||||||
{"keeper_retry_max_backoff_ms", 5000, 5000, "Max backoff timeout for general keeper operations"},
|
|
||||||
{"s3queue_allow_experimental_sharded_mode", false, false, "Enable experimental sharded mode of S3Queue table engine. It is experimental because it will be rewritten"},
|
|
||||||
{"allow_experimental_analyzer", false, true, "Enable analyzer and planner by default."},
|
|
||||||
{"merge_tree_read_split_ranges_into_intersecting_and_non_intersecting_injection_probability", 0.0, 0.0, "For testing of `PartsSplitter` - split read ranges into intersecting and non intersecting every time you read from MergeTree with the specified probability."},
|
|
||||||
{"allow_get_client_http_header", false, false, "Introduced a new function."},
|
|
||||||
{"output_format_pretty_row_numbers", false, true, "It is better for usability."},
|
|
||||||
{"output_format_pretty_max_value_width_apply_for_single_value", true, false, "Single values in Pretty formats won't be cut."},
|
|
||||||
{"output_format_parquet_string_as_string", false, true, "ClickHouse allows arbitrary binary data in the String data type, which is typically UTF-8. Parquet/ORC/Arrow Strings only support UTF-8. That's why you can choose which Arrow's data type to use for the ClickHouse String data type - String or Binary. While Binary would be more correct and compatible, using String by default will correspond to user expectations in most cases."},
|
|
||||||
{"output_format_orc_string_as_string", false, true, "ClickHouse allows arbitrary binary data in the String data type, which is typically UTF-8. Parquet/ORC/Arrow Strings only support UTF-8. That's why you can choose which Arrow's data type to use for the ClickHouse String data type - String or Binary. While Binary would be more correct and compatible, using String by default will correspond to user expectations in most cases."},
|
|
||||||
{"output_format_arrow_string_as_string", false, true, "ClickHouse allows arbitrary binary data in the String data type, which is typically UTF-8. Parquet/ORC/Arrow Strings only support UTF-8. That's why you can choose which Arrow's data type to use for the ClickHouse String data type - String or Binary. While Binary would be more correct and compatible, using String by default will correspond to user expectations in most cases."},
|
|
||||||
{"output_format_parquet_compression_method", "lz4", "zstd", "Parquet/ORC/Arrow support many compression methods, including lz4 and zstd. ClickHouse supports each and every compression method. Some inferior tools, such as 'duckdb', lack support for the faster `lz4` compression method, that's why we set zstd by default."},
|
|
||||||
{"output_format_orc_compression_method", "lz4", "zstd", "Parquet/ORC/Arrow support many compression methods, including lz4 and zstd. ClickHouse supports each and every compression method. Some inferior tools, such as 'duckdb', lack support for the faster `lz4` compression method, that's why we set zstd by default."},
|
|
||||||
{"output_format_pretty_highlight_digit_groups", false, true, "If enabled and if output is a terminal, highlight every digit corresponding to the number of thousands, millions, etc. with underline."},
|
|
||||||
{"geo_distance_returns_float64_on_float64_arguments", false, true, "Increase the default precision."},
|
|
||||||
{"azure_max_inflight_parts_for_one_file", 20, 20, "The maximum number of a concurrent loaded parts in multipart upload request. 0 means unlimited."},
|
|
||||||
{"azure_strict_upload_part_size", 0, 0, "The exact size of part to upload during multipart upload to Azure blob storage."},
|
|
||||||
{"azure_min_upload_part_size", 16*1024*1024, 16*1024*1024, "The minimum size of part to upload during multipart upload to Azure blob storage."},
|
|
||||||
{"azure_max_upload_part_size", 5ull*1024*1024*1024, 5ull*1024*1024*1024, "The maximum size of part to upload during multipart upload to Azure blob storage."},
|
|
||||||
{"azure_upload_part_size_multiply_factor", 2, 2, "Multiply azure_min_upload_part_size by this factor each time azure_multiply_parts_count_threshold parts were uploaded from a single write to Azure blob storage."},
|
|
||||||
{"azure_upload_part_size_multiply_parts_count_threshold", 500, 500, "Each time this number of parts was uploaded to Azure blob storage, azure_min_upload_part_size is multiplied by azure_upload_part_size_multiply_factor."},
|
|
||||||
{"output_format_csv_serialize_tuple_into_separate_columns", true, true, "A new way of how interpret tuples in CSV format was added."},
|
|
||||||
{"input_format_csv_deserialize_separate_columns_into_tuple", true, true, "A new way of how interpret tuples in CSV format was added."},
|
|
||||||
{"input_format_csv_try_infer_strings_from_quoted_tuples", true, true, "A new way of how interpret tuples in CSV format was added."},
|
|
||||||
}},
|
|
||||||
{"24.2", {{"allow_suspicious_variant_types", true, false, "Don't allow creating Variant type with suspicious variants by default"},
|
|
||||||
{"validate_experimental_and_suspicious_types_inside_nested_types", false, true, "Validate usage of experimental and suspicious types inside nested types"},
|
|
||||||
{"output_format_values_escape_quote_with_quote", false, false, "If true escape ' with '', otherwise quoted with \\'"},
|
|
||||||
{"output_format_pretty_single_large_number_tip_threshold", 0, 1'000'000, "Print a readable number tip on the right side of the table if the block consists of a single number which exceeds this value (except 0)"},
|
|
||||||
{"input_format_try_infer_exponent_floats", true, false, "Don't infer floats in exponential notation by default"},
|
|
||||||
{"query_plan_optimize_prewhere", true, true, "Allow to push down filter to PREWHERE expression for supported storages"},
|
|
||||||
{"async_insert_max_data_size", 1000000, 10485760, "The previous value appeared to be too small."},
|
|
||||||
{"async_insert_poll_timeout_ms", 10, 10, "Timeout in milliseconds for polling data from asynchronous insert queue"},
|
|
||||||
{"async_insert_use_adaptive_busy_timeout", false, true, "Use adaptive asynchronous insert timeout"},
|
|
||||||
{"async_insert_busy_timeout_min_ms", 50, 50, "The minimum value of the asynchronous insert timeout in milliseconds; it also serves as the initial value, which may be increased later by the adaptive algorithm"},
|
|
||||||
{"async_insert_busy_timeout_max_ms", 200, 200, "The minimum value of the asynchronous insert timeout in milliseconds; async_insert_busy_timeout_ms is aliased to async_insert_busy_timeout_max_ms"},
|
|
||||||
{"async_insert_busy_timeout_increase_rate", 0.2, 0.2, "The exponential growth rate at which the adaptive asynchronous insert timeout increases"},
|
|
||||||
{"async_insert_busy_timeout_decrease_rate", 0.2, 0.2, "The exponential growth rate at which the adaptive asynchronous insert timeout decreases"},
|
|
||||||
{"format_template_row_format", "", "", "Template row format string can be set directly in query"},
|
|
||||||
{"format_template_resultset_format", "", "", "Template result set format string can be set in query"},
|
|
||||||
{"split_parts_ranges_into_intersecting_and_non_intersecting_final", true, true, "Allow to split parts ranges into intersecting and non intersecting during FINAL optimization"},
|
|
||||||
{"split_intersecting_parts_ranges_into_layers_final", true, true, "Allow to split intersecting parts ranges into layers during FINAL optimization"},
|
|
||||||
{"azure_max_single_part_copy_size", 256*1024*1024, 256*1024*1024, "The maximum size of object to copy using single part copy to Azure blob storage."},
|
|
||||||
{"min_external_table_block_size_rows", DEFAULT_INSERT_BLOCK_SIZE, DEFAULT_INSERT_BLOCK_SIZE, "Squash blocks passed to external table to specified size in rows, if blocks are not big enough"},
|
|
||||||
{"min_external_table_block_size_bytes", DEFAULT_INSERT_BLOCK_SIZE * 256, DEFAULT_INSERT_BLOCK_SIZE * 256, "Squash blocks passed to external table to specified size in bytes, if blocks are not big enough."},
|
|
||||||
{"parallel_replicas_prefer_local_join", true, true, "If true, and JOIN can be executed with parallel replicas algorithm, and all storages of right JOIN part are *MergeTree, local JOIN will be used instead of GLOBAL JOIN."},
|
|
||||||
{"optimize_time_filter_with_preimage", true, true, "Optimize Date and DateTime predicates by converting functions into equivalent comparisons without conversions (e.g. toYear(col) = 2023 -> col >= '2023-01-01' AND col <= '2023-12-31')"},
|
|
||||||
{"extract_key_value_pairs_max_pairs_per_row", 0, 0, "Max number of pairs that can be produced by the `extractKeyValuePairs` function. Used as a safeguard against consuming too much memory."},
|
|
||||||
{"default_view_definer", "CURRENT_USER", "CURRENT_USER", "Allows to set default `DEFINER` option while creating a view"},
|
|
||||||
{"default_materialized_view_sql_security", "DEFINER", "DEFINER", "Allows to set a default value for SQL SECURITY option when creating a materialized view"},
|
|
||||||
{"default_normal_view_sql_security", "INVOKER", "INVOKER", "Allows to set default `SQL SECURITY` option while creating a normal view"},
|
|
||||||
{"mysql_map_string_to_text_in_show_columns", false, true, "Reduce the configuration effort to connect ClickHouse with BI tools."},
|
|
||||||
{"mysql_map_fixed_string_to_text_in_show_columns", false, true, "Reduce the configuration effort to connect ClickHouse with BI tools."},
|
|
||||||
}},
|
|
||||||
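The `async_insert_busy_timeout_*` family above describes an adaptive flush timeout that moves exponentially between a minimum and a maximum. A rough illustration of such an adjustment loop (the struct, method names, and the exact update rule are assumptions for the example, not the ClickHouse implementation):

```cpp
#include <algorithm>
#include <cstdio>

// Sketch of an adaptive busy timeout: grow when the queue keeps filling before
// the timeout fires, shrink when flushes are triggered by the timer instead.
struct AdaptiveTimeout
{
    double min_ms = 50.0;       // cf. async_insert_busy_timeout_min_ms
    double max_ms = 200.0;      // cf. async_insert_busy_timeout_max_ms
    double increase_rate = 0.2; // cf. async_insert_busy_timeout_increase_rate
    double decrease_rate = 0.2; // cf. async_insert_busy_timeout_decrease_rate
    double current_ms = 50.0;   // starts at the minimum

    void onQueueFilledEarly() { current_ms = std::min(max_ms, current_ms * (1.0 + increase_rate)); }
    void onTimerFired()       { current_ms = std::max(min_ms, current_ms / (1.0 + decrease_rate)); }
};

int main()
{
    AdaptiveTimeout t;
    for (int i = 0; i < 5; ++i)
        t.onQueueFilledEarly(); // sustained load pushes the timeout up
    std::printf("after load spike: %.1f ms\n", t.current_ms);
    for (int i = 0; i < 5; ++i)
        t.onTimerFired();       // quiet periods pull it back down
    std::printf("after quiet period: %.1f ms\n", t.current_ms);
}
```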
{"24.1", {{"print_pretty_type_names", false, true, "Better user experience."},
|
|
||||||
{"input_format_json_read_bools_as_strings", false, true, "Allow to read bools as strings in JSON formats by default"},
|
|
||||||
{"output_format_arrow_use_signed_indexes_for_dictionary", false, true, "Use signed indexes type for Arrow dictionaries by default as it's recommended"},
|
|
||||||
{"allow_experimental_variant_type", false, false, "Add new experimental Variant type"},
|
|
||||||
{"use_variant_as_common_type", false, false, "Allow to use Variant in if/multiIf if there is no common type"},
|
|
||||||
{"output_format_arrow_use_64_bit_indexes_for_dictionary", false, false, "Allow to use 64 bit indexes type in Arrow dictionaries"},
|
|
||||||
{"parallel_replicas_mark_segment_size", 128, 128, "Add new setting to control segment size in new parallel replicas coordinator implementation"},
|
|
||||||
{"ignore_materialized_views_with_dropped_target_table", false, false, "Add new setting to allow to ignore materialized views with dropped target table"},
|
|
||||||
{"output_format_compression_level", 3, 3, "Allow to change compression level in the query output"},
|
|
||||||
{"output_format_compression_zstd_window_log", 0, 0, "Allow to change zstd window log in the query output when zstd compression is used"},
|
|
||||||
{"enable_zstd_qat_codec", false, false, "Add new ZSTD_QAT codec"},
|
|
||||||
{"enable_vertical_final", false, true, "Use vertical final by default"},
|
|
||||||
{"output_format_arrow_use_64_bit_indexes_for_dictionary", false, false, "Allow to use 64 bit indexes type in Arrow dictionaries"},
|
|
||||||
{"max_rows_in_set_to_optimize_join", 100000, 0, "Disable join optimization as it prevents from read in order optimization"},
|
|
||||||
{"output_format_pretty_color", true, "auto", "Setting is changed to allow also for auto value, disabling ANSI escapes if output is not a tty"},
|
|
||||||
{"function_visible_width_behavior", 0, 1, "We changed the default behavior of `visibleWidth` to be more precise"},
|
|
||||||
{"max_estimated_execution_time", 0, 0, "Separate max_execution_time and max_estimated_execution_time"},
|
|
||||||
{"iceberg_engine_ignore_schema_evolution", false, false, "Allow to ignore schema evolution in Iceberg table engine"},
|
|
||||||
{"optimize_injective_functions_in_group_by", false, true, "Replace injective functions by it's arguments in GROUP BY section in analyzer"},
|
|
||||||
{"update_insert_deduplication_token_in_dependent_materialized_views", false, false, "Allow to update insert deduplication token with table identifier during insert in dependent materialized views"},
|
|
||||||
{"azure_max_unexpected_write_error_retries", 4, 4, "The maximum number of retries in case of unexpected errors during Azure blob storage write"},
|
|
||||||
{"split_parts_ranges_into_intersecting_and_non_intersecting_final", false, true, "Allow to split parts ranges into intersecting and non intersecting during FINAL optimization"},
|
|
||||||
{"split_intersecting_parts_ranges_into_layers_final", true, true, "Allow to split intersecting parts ranges into layers during FINAL optimization"}}},
|
|
||||||
{"23.12", {{"allow_suspicious_ttl_expressions", true, false, "It is a new setting, and in previous versions the behavior was equivalent to allowing."},
|
|
||||||
{"input_format_parquet_allow_missing_columns", false, true, "Allow missing columns in Parquet files by default"},
|
|
||||||
{"input_format_orc_allow_missing_columns", false, true, "Allow missing columns in ORC files by default"},
|
|
||||||
{"input_format_arrow_allow_missing_columns", false, true, "Allow missing columns in Arrow files by default"}}},
|
|
||||||
{"23.9", {{"optimize_group_by_constant_keys", false, true, "Optimize group by constant keys by default"},
|
|
||||||
{"input_format_json_try_infer_named_tuples_from_objects", false, true, "Try to infer named Tuples from JSON objects by default"},
|
|
||||||
{"input_format_json_read_numbers_as_strings", false, true, "Allow to read numbers as strings in JSON formats by default"},
|
|
||||||
{"input_format_json_read_arrays_as_strings", false, true, "Allow to read arrays as strings in JSON formats by default"},
|
|
||||||
{"input_format_json_infer_incomplete_types_as_strings", false, true, "Allow to infer incomplete types as Strings in JSON formats by default"},
|
|
||||||
{"input_format_json_try_infer_numbers_from_strings", true, false, "Don't infer numbers from strings in JSON formats by default to prevent possible parsing errors"},
|
|
||||||
{"http_write_exception_in_output_format", false, true, "Output valid JSON/XML on exception in HTTP streaming."}}},
|
|
||||||
{"23.8", {{"rewrite_count_distinct_if_with_count_distinct_implementation", false, true, "Rewrite countDistinctIf with count_distinct_implementation configuration"}}},
|
|
||||||
{"23.7", {{"function_sleep_max_microseconds_per_block", 0, 3000000, "In previous versions, the maximum sleep time of 3 seconds was applied only for `sleep`, but not for `sleepEachRow` function. In the new version, we introduce this setting. If you set compatibility with the previous versions, we will disable the limit altogether."}}},
|
|
||||||
{"23.6", {{"http_send_timeout", 180, 30, "3 minutes seems crazy long. Note that this is timeout for a single network write call, not for the whole upload operation."},
|
|
||||||
{"http_receive_timeout", 180, 30, "See http_send_timeout."}}},
|
|
||||||
{"23.5", {{"input_format_parquet_preserve_order", true, false, "Allow Parquet reader to reorder rows for better parallelism."},
|
|
||||||
{"parallelize_output_from_storages", false, true, "Allow parallelism when executing queries that read from file/url/s3/etc. This may reorder rows."},
|
|
||||||
{"use_with_fill_by_sorting_prefix", false, true, "Columns preceding WITH FILL columns in ORDER BY clause form sorting prefix. Rows with different values in sorting prefix are filled independently"},
|
|
||||||
{"output_format_parquet_compliant_nested_types", false, true, "Change an internal field name in output Parquet file schema."}}},
|
|
||||||
{"23.4", {{"allow_suspicious_indices", true, false, "If true, index can defined with identical expressions"},
|
|
||||||
{"allow_nonconst_timezone_arguments", true, false, "Allow non-const timezone arguments in certain time-related functions like toTimeZone(), fromUnixTimestamp*(), snowflakeToDateTime*()."},
|
|
||||||
{"connect_timeout_with_failover_ms", 50, 1000, "Increase default connect timeout because of async connect"},
|
|
||||||
{"connect_timeout_with_failover_secure_ms", 100, 1000, "Increase default secure connect timeout because of async connect"},
|
|
||||||
{"hedged_connection_timeout_ms", 100, 50, "Start new connection in hedged requests after 50 ms instead of 100 to correspond with previous connect timeout"}}},
|
|
||||||
{"23.3", {{"output_format_parquet_version", "1.0", "2.latest", "Use latest Parquet format version for output format"},
|
|
||||||
{"input_format_json_ignore_unknown_keys_in_named_tuple", false, true, "Improve parsing JSON objects as named tuples"},
|
|
||||||
{"input_format_native_allow_types_conversion", false, true, "Allow types conversion in Native input forma"},
|
|
||||||
{"output_format_arrow_compression_method", "none", "lz4_frame", "Use lz4 compression in Arrow output format by default"},
|
|
||||||
{"output_format_parquet_compression_method", "snappy", "lz4", "Use lz4 compression in Parquet output format by default"},
|
|
||||||
{"output_format_orc_compression_method", "none", "lz4_frame", "Use lz4 compression in ORC output format by default"},
|
|
||||||
{"async_query_sending_for_remote", false, true, "Create connections and send query async across shards"}}},
|
|
||||||
{"23.2", {{"output_format_parquet_fixed_string_as_fixed_byte_array", false, true, "Use Parquet FIXED_LENGTH_BYTE_ARRAY type for FixedString by default"},
|
|
||||||
{"output_format_arrow_fixed_string_as_fixed_byte_array", false, true, "Use Arrow FIXED_SIZE_BINARY type for FixedString by default"},
|
|
||||||
{"query_plan_remove_redundant_distinct", false, true, "Remove redundant Distinct step in query plan"},
|
|
||||||
{"optimize_duplicate_order_by_and_distinct", true, false, "Remove duplicate ORDER BY and DISTINCT if it's possible"},
|
|
||||||
{"insert_keeper_max_retries", 0, 20, "Enable reconnections to Keeper on INSERT, improve reliability"}}},
|
|
||||||
{"23.1", {{"input_format_json_read_objects_as_strings", 0, 1, "Enable reading nested json objects as strings while object type is experimental"},
|
|
||||||
{"input_format_json_defaults_for_missing_elements_in_named_tuple", false, true, "Allow missing elements in JSON objects while reading named tuples by default"},
|
|
||||||
{"input_format_csv_detect_header", false, true, "Detect header in CSV format by default"},
|
|
||||||
{"input_format_tsv_detect_header", false, true, "Detect header in TSV format by default"},
|
|
||||||
{"input_format_custom_detect_header", false, true, "Detect header in CustomSeparated format by default"},
|
|
||||||
{"query_plan_remove_redundant_sorting", false, true, "Remove redundant sorting in query plan. For example, sorting steps related to ORDER BY clauses in subqueries"}}},
|
|
||||||
{"22.12", {{"max_size_to_preallocate_for_aggregation", 10'000'000, 100'000'000, "This optimizes performance"},
|
|
||||||
{"query_plan_aggregation_in_order", 0, 1, "Enable some refactoring around query plan"},
|
|
||||||
{"format_binary_max_string_size", 0, 1_GiB, "Prevent allocating large amount of memory"}}},
|
|
||||||
{"22.11", {{"use_structure_from_insertion_table_in_table_functions", 0, 2, "Improve using structure from insertion table in table functions"}}},
|
|
||||||
{"23.4", {{"formatdatetime_f_prints_single_zero", true, false, "Improved compatibility with MySQL DATE_FORMAT()/STR_TO_DATE()"}}},
|
|
||||||
{"23.4", {{"formatdatetime_parsedatetime_m_is_month_name", false, true, "Improved compatibility with MySQL DATE_FORMAT/STR_TO_DATE"}}},
|
|
||||||
{"23.11", {{"parsedatetime_parse_without_leading_zeros", false, true, "Improved compatibility with MySQL DATE_FORMAT/STR_TO_DATE"}}},
|
|
||||||
{"22.9", {{"force_grouping_standard_compatibility", false, true, "Make GROUPING function output the same as in SQL standard and other DBMS"}}},
|
|
||||||
{"22.7", {{"cross_to_inner_join_rewrite", 1, 2, "Force rewrite comma join to inner"},
|
|
||||||
{"enable_positional_arguments", false, true, "Enable positional arguments feature by default"},
|
|
||||||
{"format_csv_allow_single_quotes", true, false, "Most tools don't treat single quote in CSV specially, don't do it by default too"}}},
|
|
||||||
{"22.6", {{"output_format_json_named_tuples_as_objects", false, true, "Allow to serialize named tuples as JSON objects in JSON formats by default"},
|
|
||||||
{"input_format_skip_unknown_fields", false, true, "Optimize reading subset of columns for some input formats"}}},
|
|
||||||
{"22.5", {{"memory_overcommit_ratio_denominator", 0, 1073741824, "Enable memory overcommit feature by default"},
|
|
||||||
{"memory_overcommit_ratio_denominator_for_user", 0, 1073741824, "Enable memory overcommit feature by default"}}},
|
|
||||||
{"22.4", {{"allow_settings_after_format_in_insert", true, false, "Do not allow SETTINGS after FORMAT for INSERT queries because ClickHouse interpret SETTINGS as some values, which is misleading"}}},
|
|
||||||
{"22.3", {{"cast_ipv4_ipv6_default_on_conversion_error", true, false, "Make functions cast(value, 'IPv4') and cast(value, 'IPv6') behave same as toIPv4 and toIPv6 functions"}}},
|
|
||||||
{"21.12", {{"stream_like_engine_allow_direct_select", true, false, "Do not allow direct select for Kafka/RabbitMQ/FileLog by default"}}},
|
|
||||||
{"21.9", {{"output_format_decimal_trailing_zeros", true, false, "Do not output trailing zeros in text representation of Decimal types by default for better looking output"},
|
|
||||||
{"use_hedged_requests", false, true, "Enable Hedged Requests feature by default"}}},
|
|
||||||
{"21.7", {{"legacy_column_name_of_tuple_literal", true, false, "Add this setting only for compatibility reasons. It makes sense to set to 'true', while doing rolling update of cluster from version lower than 21.7 to higher"}}},
|
|
||||||
{"21.5", {{"async_socket_for_remote", false, true, "Fix all problems and turn on asynchronous reads from socket for remote queries by default again"}}},
|
|
||||||
{"21.3", {{"async_socket_for_remote", true, false, "Turn off asynchronous reads from socket for remote queries because of some problems"},
|
|
||||||
{"optimize_normalize_count_variants", false, true, "Rewrite aggregate functions that semantically equals to count() as count() by default"},
|
|
||||||
{"normalize_function_names", false, true, "Normalize function names to their canonical names, this was needed for projection query routing"}}},
|
|
||||||
{"21.2", {{"enable_global_with_statement", false, true, "Propagate WITH statements to UNION queries and all subqueries by default"}}},
|
|
||||||
{"21.1", {{"insert_quorum_parallel", false, true, "Use parallel quorum inserts by default. It is significantly more convenient to use than sequential quorum inserts"},
|
|
||||||
{"input_format_null_as_default", false, true, "Allow to insert NULL as default for input formats by default"},
|
|
||||||
{"optimize_on_insert", false, true, "Enable data optimization on INSERT by default for better user experience"},
|
|
||||||
{"use_compact_format_in_distributed_parts_names", false, true, "Use compact format for async INSERT into Distributed tables by default"}}},
|
|
||||||
{"20.10", {{"format_regexp_escaping_rule", "Escaped", "Raw", "Use Raw as default escaping rule for Regexp format to male the behaviour more like to what users expect"}}},
|
|
||||||
{"20.7", {{"show_table_uuid_in_table_create_query_if_not_nil", true, false, "Stop showing UID of the table in its CREATE query for Engine=Atomic"}}},
|
|
||||||
{"20.5", {{"input_format_with_names_use_header", false, true, "Enable using header with names for formats with WithNames/WithNamesAndTypes suffixes"},
|
|
||||||
{"allow_suspicious_codecs", true, false, "Don't allow to specify meaningless compression codecs"}}},
|
|
||||||
{"20.4", {{"validate_polygons", false, true, "Throw exception if polygon is invalid in function pointInPolygon by default instead of returning possibly wrong results"}}},
|
|
||||||
{"19.18", {{"enable_scalar_subquery_optimization", false, true, "Prevent scalar subqueries from (de)serializing large scalar values and possibly avoid running the same subquery more than once"}}},
|
|
||||||
{"19.14", {{"any_join_distinct_right_table_keys", true, false, "Disable ANY RIGHT and ANY FULL JOINs by default to avoid inconsistency"}}},
|
|
||||||
{"19.12", {{"input_format_defaults_for_omitted_fields", false, true, "Enable calculation of complex default expressions for omitted fields for some input formats, because it should be the expected behaviour"}}},
|
|
||||||
{"19.5", {{"max_partitions_per_insert_block", 0, 100, "Add a limit for the number of partitions in one block"}}},
|
|
||||||
{"18.12.17", {{"enable_optimize_predicate_expression", 0, 1, "Optimize predicates to subqueries by default"}}},
|
|
||||||
};

}
@@ -9,6 +9,21 @@ namespace ErrorCodes
     extern const int UNSUPPORTED_METHOD;
 }

+namespace S3
+{
+std::string tryGetRunningAvailabilityZone()
+{
+    try
+    {
+        return getRunningAvailabilityZone();
+    }
+    catch (...)
+    {
+        tryLogCurrentException("tryGetRunningAvailabilityZone");
+        return "";
+    }
+}
+}
 }
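The new `tryGetRunningAvailabilityZone` follows the usual `try*` convention: on failure it logs the current exception and degrades to an empty string instead of throwing, so callers on hosts without a metadata service can proceed. The same pattern in a standalone sketch (the metadata probe here is a stub, not the AWS client above):

```cpp
#include <iostream>
#include <stdexcept>
#include <string>

// Stub standing in for a metadata-service probe that may throw (e.g. no network).
std::string getRunningAvailabilityZoneStub()
{
    throw std::runtime_error("metadata service not reachable");
}

// The try* wrapper: swallow and report the error, return a neutral value,
// so callers need no exception handling on non-cloud hosts.
std::string tryGetRunningAvailabilityZoneStub()
{
    try
    {
        return getRunningAvailabilityZoneStub();
    }
    catch (const std::exception & e)
    {
        std::cerr << "tryGetRunningAvailabilityZone: " << e.what() << "\n";
        return "";
    }
}

int main()
{
    std::cout << "zone: '" << tryGetRunningAvailabilityZoneStub() << "'\n";
}
```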
 #if USE_AWS_S3

@@ -24,6 +24,7 @@ static inline constexpr char GCP_METADATA_SERVICE_ENDPOINT[] = "http://metadata.
 /// getRunningAvailabilityZone returns the availability zone of the underlying compute resources where the current process runs.
 std::string getRunningAvailabilityZone();
+std::string tryGetRunningAvailabilityZone();

 class AWSEC2MetadataClient : public Aws::Internal::AWSHttpResourceClient
 {

@@ -195,6 +196,7 @@ namespace DB
 namespace S3
 {
 std::string getRunningAvailabilityZone();
+std::string tryGetRunningAvailabilityZone();
 }

 }
@@ -187,13 +187,6 @@ size_t FileSegment::getDownloadedSize() const
     return downloaded_size;
 }

-void FileSegment::setDownloadedSize(size_t delta)
-{
-    auto lk = lock();
-    downloaded_size += delta;
-    assert(downloaded_size == std::filesystem::file_size(getPath()));
-}
-
 bool FileSegment::isDownloaded() const
 {
     auto lk = lock();
@@ -311,6 +304,11 @@ FileSegment::RemoteFileReaderPtr FileSegment::getRemoteFileReader()
     return remote_file_reader;
 }

+FileSegment::LocalCacheWriterPtr FileSegment::getLocalCacheWriter()
+{
+    return cache_writer;
+}
+
 void FileSegment::resetRemoteFileReader()
 {
     auto lk = lock();
@@ -340,33 +338,31 @@ void FileSegment::setRemoteFileReader(RemoteFileReaderPtr remote_file_reader_)
     remote_file_reader = remote_file_reader_;
 }

-void FileSegment::write(char * from, size_t size, size_t offset)
+void FileSegment::write(char * from, size_t size, size_t offset_in_file)
 {
     ProfileEventTimeIncrement<Microseconds> watch(ProfileEvents::FileSegmentWriteMicroseconds);
+    auto file_segment_path = getPath();

-    if (!size)
-        throw Exception(ErrorCodes::LOGICAL_ERROR, "Writing zero size is not allowed");
-
     {
-        auto lk = lock();
-        assertIsDownloaderUnlocked("write", lk);
-        assertNotDetachedUnlocked(lk);
-    }
-
-    const auto file_segment_path = getPath();
-
-    {
+        if (!size)
+            throw Exception(ErrorCodes::LOGICAL_ERROR, "Writing zero size is not allowed");
+
+        {
+            auto lk = lock();
+            assertIsDownloaderUnlocked("write", lk);
+            assertNotDetachedUnlocked(lk);
+        }
+
         if (download_state != State::DOWNLOADING)
             throw Exception(
                 ErrorCodes::LOGICAL_ERROR,
                 "Expected DOWNLOADING state, got {}", stateToString(download_state));

         const size_t first_non_downloaded_offset = getCurrentWriteOffset();
-        if (offset != first_non_downloaded_offset)
+        if (offset_in_file != first_non_downloaded_offset)
             throw Exception(
                 ErrorCodes::LOGICAL_ERROR,
                 "Attempt to write {} bytes to offset: {}, but current write offset is {}",
-                size, offset, first_non_downloaded_offset);
+                size, offset_in_file, first_non_downloaded_offset);

         const size_t current_downloaded_size = getDownloadedSize();
         chassert(reserved_size >= current_downloaded_size);
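The checks above enforce an append-only contract: every `write` must begin exactly at the segment's current write offset, and zero-size writes are rejected. A reduced model of that invariant (a sketch, not the FileSegment class itself):

```cpp
#include <stdexcept>
#include <string>
#include <vector>

// Sketch: an append-only segment that rejects out-of-order writes, mirroring
// the "offset must equal current write offset" check in FileSegment::write.
class AppendOnlySegment
{
public:
    void write(const char * from, size_t size, size_t offset_in_file)
    {
        if (size == 0)
            throw std::logic_error("Writing zero size is not allowed");
        if (offset_in_file != data.size())
            throw std::logic_error(
                "Attempt to write at offset " + std::to_string(offset_in_file)
                + ", but current write offset is " + std::to_string(data.size()));
        data.insert(data.end(), from, from + size);
    }

    size_t currentWriteOffset() const { return data.size(); }

private:
    std::vector<char> data;
};

int main()
{
    AppendOnlySegment segment;
    segment.write("abc", 3, 0);
    segment.write("def", 3, segment.currentWriteOffset()); // ok: sequential
    try
    {
        segment.write("x", 1, 0); // rejected: not at the current write offset
    }
    catch (const std::logic_error &)
    {
        // expected
    }
}
```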
@@ -396,10 +392,10 @@ void FileSegment::write(char * from, size_t size, size_t offset)
 #endif

         if (!cache_writer)
-            cache_writer = std::make_unique<WriteBufferFromFile>(file_segment_path, /* buf_size */0);
+            cache_writer = std::make_unique<WriteBufferFromFile>(getPath(), /* buf_size */0);

         /// Size is equal to offset as offset for write buffer points to data end.
-        cache_writer->set(from, size, /* offset */size);
+        cache_writer->set(from, /* size */size, /* offset */size);
         /// Reset the buffer when finished.
         SCOPE_EXIT({ cache_writer->set(nullptr, 0); });
         /// Flush the buffer.

@@ -435,7 +431,6 @@ void FileSegment::write(char * from, size_t size, size_t offset)
         }

             throw;
-
         }
         catch (Exception & e)
         {
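The `set(from, /* size */size, /* offset */size)` call points the writer's working buffer directly at the caller's data and marks it fully filled, so the subsequent flush writes it out without an intermediate copy; the `SCOPE_EXIT` then detaches the buffer. A loose sketch of the same zero-copy flush idiom over plain stdio (the WriteBuffer API itself is only paraphrased in the comments):

```cpp
#include <cstddef>
#include <cstdio>

// Sketch of the zero-copy flush idiom: instead of copying the caller's data
// into an internal buffer, point the buffer at the data, mark it as already
// filled (position == size), and flush. Loosely mirrors
// cache_writer->set(from, size, /* offset */ size) followed by a flush.
class FileFlusher
{
public:
    explicit FileFlusher(const char * path) { file = std::fopen(path, "wb"); }
    ~FileFlusher() { if (file) std::fclose(file); }

    void set(const char * from, size_t size) { data = from; filled = size; }

    void next() // flush whatever the working buffer currently points at
    {
        if (file && data && filled)
            std::fwrite(data, 1, filled, file);
        data = nullptr; // like SCOPE_EXIT({ cache_writer->set(nullptr, 0); })
        filled = 0;
    }

private:
    std::FILE * file = nullptr;
    const char * data = nullptr; // borrowed, never owned
    size_t filled = 0;
};

int main()
{
    char payload[] = "cached bytes";
    FileFlusher flusher("/tmp/segment.bin");
    flusher.set(payload, sizeof(payload) - 1);
    flusher.next();
}
```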
@@ -445,7 +440,7 @@ void FileSegment::write(char * from, size_t size, size_t offset)
         throw;
     }

-    chassert(getCurrentWriteOffset() == offset + size);
+    chassert(getCurrentWriteOffset() == offset_in_file + size);
 }

 FileSegment::State FileSegment::wait(size_t offset)

@@ -828,7 +823,7 @@ bool FileSegment::assertCorrectnessUnlocked(const FileSegmentGuard::Lock & lock)
     };

     const auto file_path = getPath();
-    if (segment_kind != FileSegmentKind::Temporary)
     {
         std::lock_guard lk(write_mutex);
         if (downloaded_size == 0)
@@ -48,7 +48,7 @@ friend class FileCache; /// Because of reserved_size in tryReserve().
 public:
     using Key = FileCacheKey;
     using RemoteFileReaderPtr = std::shared_ptr<ReadBufferFromFileBase>;
-    using LocalCacheWriterPtr = std::unique_ptr<WriteBufferFromFile>;
+    using LocalCacheWriterPtr = std::shared_ptr<WriteBufferFromFile>;
     using Downloader = std::string;
     using DownloaderId = std::string;
     using Priority = IFileCachePriority;

@@ -204,7 +204,7 @@ public:
     bool reserve(size_t size_to_reserve, size_t lock_wait_timeout_milliseconds, FileCacheReserveStat * reserve_stat = nullptr);

     /// Write data into reserved space.
-    void write(char * from, size_t size, size_t offset);
+    void write(char * from, size_t size, size_t offset_in_file);

     // Invariant: if state() != DOWNLOADING and remote file reader is present, the reader's
     // available() == 0, and getFileOffsetOfBufferEnd() == our getCurrentWriteOffset().

@@ -212,6 +212,7 @@ public:
     // The reader typically requires its internal_buffer to be assigned from the outside before
     // calling next().
     RemoteFileReaderPtr getRemoteFileReader();
+    LocalCacheWriterPtr getLocalCacheWriter();

     RemoteFileReaderPtr extractRemoteFileReader();

@@ -219,8 +220,6 @@ public:

     void setRemoteFileReader(RemoteFileReaderPtr remote_file_reader_);

-    void setDownloadedSize(size_t delta);
-
     void setDownloadFailed();

 private:
@@ -944,14 +944,7 @@ KeyMetadata::iterator LockedKey::removeFileSegmentImpl(
     try
     {
         const auto path = key_metadata->getFileSegmentPath(*file_segment);
-        if (file_segment->segment_kind == FileSegmentKind::Temporary)
-        {
-            /// FIXME: For temporary file segment the requirement is not as strong because
-            /// the implementation of "temporary data in cache" creates files in advance.
-            if (fs::exists(path))
-                fs::remove(path);
-        }
-        else if (file_segment->downloaded_size == 0)
+        if (file_segment->downloaded_size == 0)
         {
             chassert(!fs::exists(path));
         }
@@ -4,6 +4,7 @@
 #include <Interpreters/Context.h>
 #include <IO/SwapHelper.h>
 #include <IO/ReadBufferFromFile.h>
+#include <IO/EmptyReadBuffer.h>

 #include <base/scope_guard.h>

@@ -33,21 +34,20 @@ namespace
 }

 WriteBufferToFileSegment::WriteBufferToFileSegment(FileSegment * file_segment_)
-    : WriteBufferFromFileDecorator(std::make_unique<WriteBufferFromFile>(file_segment_->getPath()))
+    : WriteBufferFromFileBase(DBMS_DEFAULT_BUFFER_SIZE, nullptr, 0)
     , file_segment(file_segment_)
     , reserve_space_lock_wait_timeout_milliseconds(getCacheLockWaitTimeout())
 {
 }

 WriteBufferToFileSegment::WriteBufferToFileSegment(FileSegmentsHolderPtr segment_holder_)
-    : WriteBufferFromFileDecorator(
-        segment_holder_->size() == 1
-        ? std::make_unique<WriteBufferFromFile>(segment_holder_->front().getPath())
-        : throw Exception(ErrorCodes::LOGICAL_ERROR, "WriteBufferToFileSegment can be created only from single segment"))
+    : WriteBufferFromFileBase(DBMS_DEFAULT_BUFFER_SIZE, nullptr, 0)
     , file_segment(&segment_holder_->front())
     , segment_holder(std::move(segment_holder_))
     , reserve_space_lock_wait_timeout_milliseconds(getCacheLockWaitTimeout())
 {
+    if (segment_holder->size() != 1)
+        throw Exception(ErrorCodes::LOGICAL_ERROR, "WriteBufferToFileSegment can be created only from single segment");
 }

 /// If it throws an exception, the file segment will be incomplete, so you should not use it in the future.

@@ -82,9 +82,6 @@ void WriteBufferToFileSegment::nextImpl()
             reserve_stat_msg += fmt::format("{} hold {}, can release {}; ",
                 toString(kind), ReadableSize(stat.non_releasable_size), ReadableSize(stat.releasable_size));

-        if (std::filesystem::exists(file_segment->getPath()))
-            std::filesystem::remove(file_segment->getPath());
-
         throw Exception(ErrorCodes::NOT_ENOUGH_SPACE, "Failed to reserve {} bytes for {}: {}(segment info: {})",
             bytes_to_write,
             file_segment->getKind() == FileSegmentKind::Temporary ? "temporary file" : "the file in cache",
|
|||||||
|
|
||||||
try
|
try
|
||||||
{
|
{
|
||||||
SwapHelper swap(*this, *impl);
|
|
||||||
/// Write data to the underlying buffer.
|
/// Write data to the underlying buffer.
|
||||||
impl->next();
|
file_segment->write(working_buffer.begin(), bytes_to_write, written_bytes);
|
||||||
|
written_bytes += bytes_to_write;
|
||||||
}
|
}
|
||||||
catch (...)
|
catch (...)
|
||||||
{
|
{
|
||||||
LOG_WARNING(getLogger("WriteBufferToFileSegment"), "Failed to write to the underlying buffer ({})", file_segment->getInfoForLog());
|
LOG_WARNING(getLogger("WriteBufferToFileSegment"), "Failed to write to the underlying buffer ({})", file_segment->getInfoForLog());
|
||||||
throw;
|
throw;
|
||||||
}
|
}
|
||||||
|
}
|
||||||
|
|
||||||
file_segment->setDownloadedSize(bytes_to_write);
|
void WriteBufferToFileSegment::finalizeImpl()
|
||||||
|
{
|
||||||
|
next();
|
||||||
|
auto cache_writer = file_segment->getLocalCacheWriter();
|
||||||
|
if (cache_writer)
|
||||||
|
{
|
||||||
|
SwapHelper swap(*this, *cache_writer);
|
||||||
|
cache_writer->finalize();
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
void WriteBufferToFileSegment::sync()
|
||||||
|
{
|
||||||
|
next();
|
||||||
|
auto cache_writer = file_segment->getLocalCacheWriter();
|
||||||
|
if (cache_writer)
|
||||||
|
{
|
||||||
|
SwapHelper swap(*this, *cache_writer);
|
||||||
|
cache_writer->sync();
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
std::unique_ptr<ReadBuffer> WriteBufferToFileSegment::getReadBufferImpl()
|
std::unique_ptr<ReadBuffer> WriteBufferToFileSegment::getReadBufferImpl()
|
||||||
@ -114,7 +131,10 @@ std::unique_ptr<ReadBuffer> WriteBufferToFileSegment::getReadBufferImpl()
|
|||||||
* because in case destructor called without `getReadBufferImpl` called, data won't be read.
|
* because in case destructor called without `getReadBufferImpl` called, data won't be read.
|
||||||
*/
|
*/
|
||||||
finalize();
|
finalize();
|
||||||
return std::make_unique<ReadBufferFromFile>(file_segment->getPath());
|
if (file_segment->getDownloadedSize() > 0)
|
||||||
|
return std::make_unique<ReadBufferFromFile>(file_segment->getPath());
|
||||||
|
else
|
||||||
|
return std::make_unique<EmptyReadBuffer>();
|
||||||
}
|
}
|
||||||
|
|
||||||
}
|
}
|
||||||
|
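Returning an `EmptyReadBuffer` when nothing was downloaded avoids opening a file that may never have been created. The same null-object fallback in a standalone sketch (standard streams stand in for ClickHouse's buffers):

```cpp
#include <fstream>
#include <iostream>
#include <memory>
#include <sstream>
#include <string>

// Sketch of the fallback: hand back an empty stream instead of opening a file
// that may not exist, so readers need no special "no data" branch.
std::unique_ptr<std::istream> openSegmentForRead(const std::string & path, size_t downloaded_size)
{
    if (downloaded_size > 0)
        return std::make_unique<std::ifstream>(path, std::ios::binary);
    return std::make_unique<std::istringstream>(std::string{}); // empty "file"
}

int main()
{
    auto in = openSegmentForRead("/nonexistent/segment.bin", 0);
    char c;
    std::cout << (in->get(c) ? "got data\n" : "empty segment\n");
}
```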
@@ -9,7 +9,7 @@ namespace DB

 class FileSegment;

-class WriteBufferToFileSegment : public WriteBufferFromFileDecorator, public IReadableWriteBuffer
+class WriteBufferToFileSegment : public WriteBufferFromFileBase, public IReadableWriteBuffer
 {
 public:
     explicit WriteBufferToFileSegment(FileSegment * file_segment_);

@@ -17,6 +17,13 @@ public:

     void nextImpl() override;

+    std::string getFileName() const override { return file_segment->getPath(); }
+
+    void sync() override;
+
+protected:
+    void finalizeImpl() override;
+
 private:

     std::unique_ptr<ReadBuffer> getReadBufferImpl() override;
@@ -29,6 +36,7 @@ private:
     FileSegmentsHolderPtr segment_holder;

     const size_t reserve_space_lock_wait_timeout_milliseconds;
+    size_t written_bytes = 0;
 };
@@ -3402,8 +3402,6 @@ zkutil::ZooKeeperPtr Context::getZooKeeper() const
     const auto & config = shared->zookeeper_config ? *shared->zookeeper_config : getConfigRef();
     if (!shared->zookeeper)
         shared->zookeeper = zkutil::ZooKeeper::create(config, zkutil::getZooKeeperConfigName(config), getZooKeeperLog());
-    else if (shared->zookeeper->hasReachedDeadline())
-        shared->zookeeper->finalize("ZooKeeper session has reached its deadline");

     if (shared->zookeeper->expired())
     {
@@ -910,7 +910,7 @@ bool InterpreterSelectQuery::adjustParallelReplicasAfterAnalysis()
     UInt64 max_rows = maxBlockSizeByLimit();
     if (settings.max_rows_to_read)
         max_rows = max_rows ? std::min(max_rows, settings.max_rows_to_read.value) : settings.max_rows_to_read;
-    query_info_copy.limit = max_rows;
+    query_info_copy.trivial_limit = max_rows;

     /// Apply filters to prewhere and add them to the query_info so we can filter out parts efficiently during row estimation
     applyFiltersToPrewhereInAnalysis(analysis_copy);

@@ -2445,13 +2445,13 @@ void InterpreterSelectQuery::executeFetchColumns(QueryProcessingStage::Enum proc
         if (local_limits.local_limits.size_limits.max_rows != 0)
         {
             if (max_block_limited < local_limits.local_limits.size_limits.max_rows)
-                query_info.limit = max_block_limited;
+                query_info.trivial_limit = max_block_limited;
             else if (local_limits.local_limits.size_limits.max_rows < std::numeric_limits<UInt64>::max()) /// Ask to read just enough rows to make the max_rows limit effective (so it has a chance to be triggered).
-                query_info.limit = 1 + local_limits.local_limits.size_limits.max_rows;
+                query_info.trivial_limit = 1 + local_limits.local_limits.size_limits.max_rows;
         }
         else
         {
-            query_info.limit = max_block_limited;
+            query_info.trivial_limit = max_block_limited;
         }
     }
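The rename from `limit` to `trivial_limit` makes explicit that this field is only a read-size hint for the trivial-LIMIT optimization, separate from the hard `max_rows_to_read` check. The selection logic above, condensed into one hedged helper (an illustrative free function, not the actual interface):

```cpp
#include <cstdint>
#include <iostream>
#include <limits>

// Sketch of how the trivial read limit is derived: read only as many rows as a
// plain LIMIT needs, or just enough to let the max_rows safety check trigger.
uint64_t chooseTrivialLimit(uint64_t max_block_limited, uint64_t max_rows)
{
    if (max_rows != 0)
    {
        if (max_block_limited < max_rows)
            return max_block_limited;
        if (max_rows < std::numeric_limits<uint64_t>::max())
            return 1 + max_rows; // one row past the limit so the check can fire
    }
    return max_block_limited;
}

int main()
{
    // LIMIT 10 with max_rows_to_read = 1000: reading 10 rows suffices.
    std::cout << chooseTrivialLimit(10, 1000) << "\n";   // 10
    // LIMIT 5000 with max_rows_to_read = 1000: read 1001 so the limit triggers.
    std::cout << chooseTrivialLimit(5000, 1000) << "\n"; // 1001
}
```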
@@ -3,6 +3,8 @@
 #include <Interpreters/TemporaryDataOnDisk.h>

 #include <IO/WriteBufferFromFile.h>
+#include <IO/ReadBufferFromFile.h>
+#include <IO/ReadBufferFromEmptyFile.h>
 #include <Compression/CompressedWriteBuffer.h>
 #include <Interpreters/Cache/FileCache.h>
 #include <Formats/NativeWriter.h>
@@ -224,25 +226,37 @@ struct TemporaryFileStream::OutputWriter
     bool finalized = false;
 };

-TemporaryFileStream::Reader::Reader(const String & path, const Block & header_, size_t size)
-    : in_file_buf(path, size ? std::min<size_t>(DBMS_DEFAULT_BUFFER_SIZE, size) : DBMS_DEFAULT_BUFFER_SIZE)
-    , in_compressed_buf(in_file_buf)
-    , in_reader(in_compressed_buf, header_, DBMS_TCP_PROTOCOL_VERSION)
+TemporaryFileStream::Reader::Reader(const String & path_, const Block & header_, size_t size_)
+    : path(path_)
+    , size(size_ ? std::min<size_t>(size_, DBMS_DEFAULT_BUFFER_SIZE) : DBMS_DEFAULT_BUFFER_SIZE)
+    , header(header_)
 {
     LOG_TEST(getLogger("TemporaryFileStream"), "Reading {} from {}", header_.dumpStructure(), path);
 }

-TemporaryFileStream::Reader::Reader(const String & path, size_t size)
-    : in_file_buf(path, size ? std::min<size_t>(DBMS_DEFAULT_BUFFER_SIZE, size) : DBMS_DEFAULT_BUFFER_SIZE)
-    , in_compressed_buf(in_file_buf)
-    , in_reader(in_compressed_buf, DBMS_TCP_PROTOCOL_VERSION)
+TemporaryFileStream::Reader::Reader(const String & path_, size_t size_)
+    : path(path_)
+    , size(size_ ? std::min<size_t>(size_, DBMS_DEFAULT_BUFFER_SIZE) : DBMS_DEFAULT_BUFFER_SIZE)
 {
     LOG_TEST(getLogger("TemporaryFileStream"), "Reading from {}", path);
 }

 Block TemporaryFileStream::Reader::read()
 {
-    return in_reader.read();
+    if (!in_reader)
+    {
+        if (fs::exists(path))
+            in_file_buf = std::make_unique<ReadBufferFromFile>(path, size);
+        else
+            in_file_buf = std::make_unique<ReadBufferFromEmptyFile>();
+
+        in_compressed_buf = std::make_unique<CompressedReadBuffer>(*in_file_buf);
+        if (header.has_value())
+            in_reader = std::make_unique<NativeReader>(*in_compressed_buf, header.value(), DBMS_TCP_PROTOCOL_VERSION);
+        else
+            in_reader = std::make_unique<NativeReader>(*in_compressed_buf, DBMS_TCP_PROTOCOL_VERSION);
+    }
+    return in_reader->read();
 }

 TemporaryFileStream::TemporaryFileStream(TemporaryFileOnDiskHolder file_, const Block & header_, TemporaryDataOnDisk * parent_)
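The reworked `Reader` defers building its input stack until the first `read()`, so a stream whose backing file was never materialized degrades to an empty source instead of failing to open. The lazy-open shape in isolation (simplified, standard-library types):

```cpp
#include <filesystem>
#include <fstream>
#include <memory>
#include <sstream>
#include <string>

// Sketch of the lazy-open pattern: remember only the path in the constructor,
// build the input stack on first read, fall back to an empty source if the
// file was never created.
class LazyReader
{
public:
    explicit LazyReader(std::string path_) : path(std::move(path_)) {}

    std::string read()
    {
        if (!in)
        {
            if (std::filesystem::exists(path))
                in = std::make_unique<std::ifstream>(path, std::ios::binary);
            else
                in = std::make_unique<std::istringstream>(std::string{});
        }
        std::string line;
        std::getline(*in, line);
        return line;
    }

private:
    const std::string path;
    std::unique_ptr<std::istream> in; // built on first use
};

int main()
{
    LazyReader reader("/tmp/does-not-exist");
    return reader.read().empty() ? 0 : 1;
}
```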
@@ -151,9 +151,13 @@ public:

         Block read();

-        ReadBufferFromFile in_file_buf;
-        CompressedReadBuffer in_compressed_buf;
-        NativeReader in_reader;
+        const std::string path;
+        const size_t size;
+        const std::optional<Block> header;
+
+        std::unique_ptr<ReadBufferFromFileBase> in_file_buf;
+        std::unique_ptr<CompressedReadBuffer> in_compressed_buf;
+        std::unique_ptr<NativeReader> in_reader;
     };

     struct Stat
@@ -693,14 +693,14 @@ JoinTreeQueryPlan buildQueryPlanForTableExpression(QueryTreeNodePtr table_expres
         if (select_query_info.local_storage_limits.local_limits.size_limits.max_rows != 0)
         {
             if (max_block_size_limited < select_query_info.local_storage_limits.local_limits.size_limits.max_rows)
-                table_expression_query_info.limit = max_block_size_limited;
+                table_expression_query_info.trivial_limit = max_block_size_limited;
             /// Ask to read just enough rows to make the max_rows limit effective (so it has a chance to be triggered).
             else if (select_query_info.local_storage_limits.local_limits.size_limits.max_rows < std::numeric_limits<UInt64>::max())
-                table_expression_query_info.limit = 1 + select_query_info.local_storage_limits.local_limits.size_limits.max_rows;
+                table_expression_query_info.trivial_limit = 1 + select_query_info.local_storage_limits.local_limits.size_limits.max_rows;
         }
         else
         {
-            table_expression_query_info.limit = max_block_size_limited;
+            table_expression_query_info.trivial_limit = max_block_size_limited;
         }
     }

@@ -913,8 +913,8 @@ JoinTreeQueryPlan buildQueryPlanForTableExpression(QueryTreeNodePtr table_expres
         auto result_ptr = reading->selectRangesToRead();

         UInt64 rows_to_read = result_ptr->selected_rows;
-        if (table_expression_query_info.limit > 0 && table_expression_query_info.limit < rows_to_read)
-            rows_to_read = table_expression_query_info.limit;
+        if (table_expression_query_info.trivial_limit > 0 && table_expression_query_info.trivial_limit < rows_to_read)
+            rows_to_read = table_expression_query_info.trivial_limit;

         if (max_block_size_limited && (max_block_size_limited < rows_to_read))
             rows_to_read = max_block_size_limited;
@@ -913,8 +913,8 @@ JoinTreeQueryPlan buildQueryPlanForTableExpression(QueryTreeNodePtr table_expres
         auto result_ptr = reading->selectRangesToRead();
 
         UInt64 rows_to_read = result_ptr->selected_rows;
-        if (table_expression_query_info.limit > 0 && table_expression_query_info.limit < rows_to_read)
-            rows_to_read = table_expression_query_info.limit;
+        if (table_expression_query_info.trivial_limit > 0 && table_expression_query_info.trivial_limit < rows_to_read)
+            rows_to_read = table_expression_query_info.trivial_limit;
 
         if (max_block_size_limited && (max_block_size_limited < rows_to_read))
             rows_to_read = max_block_size_limited;
@@ -250,9 +250,9 @@ void ReadFromMergeTree::AnalysisResult::checkLimits(const Settings & settings, c
 {
     /// Fail fast if estimated number of rows to read exceeds the limit
     size_t total_rows_estimate = selected_rows;
-    if (query_info_.limit > 0 && total_rows_estimate > query_info_.limit)
+    if (query_info_.trivial_limit > 0 && total_rows_estimate > query_info_.trivial_limit)
     {
-        total_rows_estimate = query_info_.limit;
+        total_rows_estimate = query_info_.trivial_limit;
     }
     limits.check(total_rows_estimate, 0, "rows (controlled by 'max_rows_to_read' setting)", ErrorCodes::TOO_MANY_ROWS);
     leaf_limits.check(
@@ -398,8 +398,8 @@ Pipe ReadFromMergeTree::readFromPool(
 {
     size_t total_rows = parts_with_range.getRowsCountAllParts();
 
-    if (query_info.limit > 0 && query_info.limit < total_rows)
-        total_rows = query_info.limit;
+    if (query_info.trivial_limit > 0 && query_info.trivial_limit < total_rows)
+        total_rows = query_info.trivial_limit;
 
     const auto & settings = context->getSettingsRef();
 
@@ -436,7 +436,7 @@ Pipe ReadFromMergeTree::readFromPool(
      * Because time spend during filling per thread tasks can be greater than whole query
      * execution for big tables with small limit.
      */
-    bool use_prefetched_read_pool = query_info.limit == 0 && (allow_prefetched_remote || allow_prefetched_local);
+    bool use_prefetched_read_pool = query_info.trivial_limit == 0 && (allow_prefetched_remote || allow_prefetched_local);
 
     if (use_prefetched_read_pool)
     {
@@ -563,9 +563,8 @@ Pipe ReadFromMergeTree::readInOrder(
     /// Actually it means that parallel reading from replicas enabled
     /// and we have to collaborate with initiator.
     /// In this case we won't set approximate rows, because it will be accounted multiple times.
-    /// Also do not count amount of read rows if we read in order of sorting key,
-    /// because we don't know actual amount of read rows in case when limit is set.
-    bool set_rows_approx = !is_parallel_reading_from_replicas && !reader_settings.read_in_order;
+    const auto in_order_limit = query_info.input_order_info ? query_info.input_order_info->limit : 0;
+    const bool set_total_rows_approx = !is_parallel_reading_from_replicas;
 
     Pipes pipes;
     for (size_t i = 0; i < parts_with_ranges.size(); ++i)
@@ -573,8 +572,10 @@ Pipe ReadFromMergeTree::readInOrder(
         const auto & part_with_ranges = parts_with_ranges[i];
 
         UInt64 total_rows = part_with_ranges.getRowsCount();
-        if (query_info.limit > 0 && query_info.limit < total_rows)
-            total_rows = query_info.limit;
+        if (query_info.trivial_limit > 0 && query_info.trivial_limit < total_rows)
+            total_rows = query_info.trivial_limit;
+        else if (in_order_limit > 0 && in_order_limit < total_rows)
+            total_rows = in_order_limit;
 
         LOG_TRACE(log, "Reading {} ranges in{}order from part {}, approx. {} rows starting from {}",
             part_with_ranges.ranges.size(),
@@ -595,7 +596,7 @@ Pipe ReadFromMergeTree::readInOrder(
         processor->addPartLevelToChunk(isQueryWithFinal());
 
         auto source = std::make_shared<MergeTreeSource>(std::move(processor));
-        if (set_rows_approx)
+        if (set_total_rows_approx)
             source->addTotalRowsApprox(total_rows);
 
         pipes.emplace_back(std::move(source));
@@ -393,7 +393,7 @@ ReadFromSystemNumbersStep::ReadFromSystemNumbersStep(
     , num_streams{num_streams_}
     , limit_length_and_offset(InterpreterSelectQuery::getLimitLengthAndOffset(query_info.query->as<ASTSelectQuery &>(), context))
     , should_pushdown_limit(shouldPushdownLimit(query_info, limit_length_and_offset.first))
-    , query_info_limit(query_info.limit)
+    , query_info_limit(query_info.trivial_limit)
    , storage_limits(query_info.storage_limits)
 {
     storage_snapshot->check(column_names);
@@ -191,6 +191,12 @@ PostgreSQLSource<T>::~PostgreSQLSource()
 {
     try
     {
+        if (stream)
+        {
+            tx->conn().cancel_query();
+            stream->close();
+        }
+
         stream.reset();
         tx.reset();
     }
@@ -2,7 +2,6 @@
 #include <Common/formatReadable.h>
 #include <Common/Exception.h>
 #include <Common/ProfileEvents.h>
-#include <string>
 
 
 namespace ProfileEvents
@@ -11,6 +11,11 @@
 namespace DB
 {
 
+namespace ErrorCodes
+{
+    extern const int LOGICAL_ERROR;
+}
+
 namespace PlacementInfo
 {
 
@@ -46,7 +51,15 @@ PlacementInfo & PlacementInfo::instance()
 }
 
 void PlacementInfo::initialize(const Poco::Util::AbstractConfiguration & config)
+try
 {
+    if (!config.has(DB::PlacementInfo::PLACEMENT_CONFIG_PREFIX))
+    {
+        availability_zone = "";
+        initialized = true;
+        return;
+    }
+
     use_imds = config.getBool(getConfigPath("use_imds"), false);
 
     if (use_imds)
@@ -67,14 +80,17 @@ void PlacementInfo::initialize(const Poco::Util::AbstractConfiguration & config)
     LOG_DEBUG(log, "Loaded info: availability_zone: {}", availability_zone);
     initialized = true;
 }
+catch (...)
+{
+    tryLogCurrentException("Failed to get availability zone");
+    availability_zone = "";
+    initialized = true;
+}
 
 std::string PlacementInfo::getAvailabilityZone() const
 {
     if (!initialized)
-    {
-        LOG_WARNING(log, "Placement info has not been loaded");
-        return "";
-    }
+        throw Exception(ErrorCodes::LOGICAL_ERROR, "Placement info has not been loaded");
 
     return availability_zone;
 }
@@ -1759,11 +1759,14 @@ void MergeTreeData::loadDataParts(bool skip_sanity_checks, std::optional<std::un
 
     ThreadPoolCallbackRunnerLocal<void> runner(getActivePartsLoadingThreadPool().get(), "ActiveParts");
 
+    bool all_disks_are_readonly = true;
     for (size_t i = 0; i < disks.size(); ++i)
     {
         const auto & disk_ptr = disks[i];
         if (disk_ptr->isBroken())
             continue;
+        if (!disk_ptr->isReadOnly())
+            all_disks_are_readonly = false;
 
         auto & disk_parts = parts_to_load_by_disk[i];
         auto & unexpected_disk_parts = unexpected_parts_to_load_by_disk[i];
@@ -1916,7 +1919,6 @@ void MergeTreeData::loadDataParts(bool skip_sanity_checks, std::optional<std::un
     if (suspicious_broken_unexpected_parts != 0)
         LOG_WARNING(log, "Found suspicious broken unexpected parts {} with total rows count {}", suspicious_broken_unexpected_parts, suspicious_broken_unexpected_parts_bytes);
 
-
     if (!is_static_storage)
         for (auto & part : broken_parts_to_detach)
             part->renameToDetached("broken-on-start"); /// detached parts must not have '_' in prefixes
@@ -1961,7 +1963,8 @@ void MergeTreeData::loadDataParts(bool skip_sanity_checks, std::optional<std::un
             unloaded_parts.push_back(node);
         });
 
-    if (!unloaded_parts.empty())
+    /// By the way, if all disks are readonly, it does not make sense to load outdated parts (we will not own them).
+    if (!unloaded_parts.empty() && !all_disks_are_readonly)
     {
         LOG_DEBUG(log, "Found {} outdated data parts. They will be loaded asynchronously", unloaded_parts.size());
 
@@ -7111,8 +7114,8 @@ UInt64 MergeTreeData::estimateNumberOfRowsToRead(
         query_context->getSettingsRef().max_threads);
 
     UInt64 total_rows = result_ptr->selected_rows;
-    if (query_info.limit > 0 && query_info.limit < total_rows)
-        total_rows = query_info.limit;
+    if (query_info.trivial_limit > 0 && query_info.trivial_limit < total_rows)
+        total_rows = query_info.trivial_limit;
     return total_rows;
 }
 
@@ -229,8 +229,8 @@ struct SelectQueryInfo
     bool is_parameterized_view = false;
     bool optimize_trivial_count = false;
 
-    // If limit is not 0, that means it's a trivial limit query.
-    UInt64 limit = 0;
+    // If not 0, that means it's a trivial limit query.
+    UInt64 trivial_limit = 0;
 
     /// For IStorageSystemOneBlock
     std::vector<UInt8> columns_mask;
@@ -705,7 +705,7 @@ Pipe StorageGenerateRandom::read(
         }
     }
 
-    UInt64 query_limit = query_info.limit;
+    UInt64 query_limit = query_info.trivial_limit;
     if (query_limit && num_streams * max_block_size > query_limit)
     {
         /// We want to avoid spawning more streams than necessary
@@ -717,7 +717,7 @@ Pipe StorageGenerateRandom::read(
     /// Will create more seed values for each source from initial seed.
     pcg64 generate(random_seed);
 
-    auto shared_state = std::make_shared<GenerateRandomState>(query_info.limit);
+    auto shared_state = std::make_shared<GenerateRandomState>(query_info.trivial_limit);
 
     for (UInt64 i = 0; i < num_streams; ++i)
     {
@@ -26,6 +26,7 @@ ColumnsDescription StorageSystemSettingsChanges::getColumnsDescription()
 
 void StorageSystemSettingsChanges::fillData(MutableColumns & res_columns, ContextPtr, const ActionsDAG::Node *, std::vector<UInt8>) const
 {
+    const auto & settings_changes_history = getSettingsChangesHistory();
     for (auto it = settings_changes_history.rbegin(); it != settings_changes_history.rend(); ++it)
     {
         res_columns[0]->insert(it->first.toString());
@@ -109,8 +109,8 @@ Pipe StorageSystemZeros::read(
     storage_snapshot->check(column_names);
 
     UInt64 query_limit = limit ? *limit : 0;
-    if (query_info.limit)
-        query_limit = query_limit ? std::min(query_limit, query_info.limit) : query_info.limit;
+    if (query_info.trivial_limit)
+        query_limit = query_limit ? std::min(query_limit, query_info.trivial_limit) : query_info.trivial_limit;
 
     if (query_limit && query_limit < max_block_size)
         max_block_size = query_limit;
@@ -36,7 +36,8 @@ ColumnsDescription StorageSystemZooKeeperConnection::getColumnsDescription()
         /* 9 */ {"xid", std::make_shared<DataTypeInt32>(), "XID of the current session."},
         /* 10*/ {"enabled_feature_flags", std::make_shared<DataTypeArray>(std::move(feature_flags_enum)),
             "Feature flags which are enabled. Only applicable to ClickHouse Keeper."
-        }
+        },
+        /* 11*/ {"availability_zone", std::make_shared<DataTypeString>(), "Availability zone"},
     };
 }
 
@@ -85,6 +86,7 @@ void StorageSystemZooKeeperConnection::fillData(MutableColumns & res_columns, Co
         columns[8]->insert(zookeeper->getClientID());
         columns[9]->insert(zookeeper->getConnectionXid());
         add_enabled_feature_flags(zookeeper);
+        columns[11]->insert(zookeeper->getConnectedHostAvailabilityZone());
     }
 };
 
@@ -9,7 +9,7 @@ set -xeuo pipefail
 
 echo "Running prepare script"
 export DEBIAN_FRONTEND=noninteractive
-export RUNNER_VERSION=2.316.1
+export RUNNER_VERSION=2.317.0
 export RUNNER_HOME=/home/ubuntu/actions-runner
 
 deb_arch() {
@@ -54,7 +54,8 @@ apt-get install --yes --no-install-recommends \
     python3-dev \
     python3-pip \
     qemu-user-static \
-    unzip
+    unzip \
+    gh
 
 # Install docker
 curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
@@ -101,7 +102,7 @@ sudo -u ubuntu docker buildx version
 sudo -u ubuntu docker buildx rm default-builder || : # if it's the second attempt
 sudo -u ubuntu docker buildx create --use --name default-builder
 
-pip install boto3 pygithub requests urllib3 unidiff dohq-artifactory
+pip install boto3 pygithub requests urllib3 unidiff dohq-artifactory jwt
 
 rm -rf $RUNNER_HOME # if it's the second attempt
 mkdir -p $RUNNER_HOME && cd $RUNNER_HOME
@@ -212,9 +213,9 @@ chmod +x /usr/local/share/scripts/init-network.sh
 touch /var/tmp/clickhouse-ci-ami.success
 # END OF THE SCRIPT
 
-# TOE description
+# TOE (Task Orchestrator and Executor) description
 # name: CIInfrastructurePrepare
-# description: instals the infrastructure for ClickHouse CI runners
+# description: installs the infrastructure for ClickHouse CI runners
 # schemaVersion: 1.0
 #
 # phases:
@@ -78,7 +78,7 @@ Upd: done differently: now everything is safe.
 
 ## LEFT ONLY JOIN
 
-## Functions makeDate, makeDateTime.
+## + Functions makeDate, makeDateTime.
 
 `makeDate(year, month, day)`
 `makeDateTime(year, month, day, hour, minute, second, [timezone])`
@@ -187,13 +187,13 @@ https://clickhouse.com/docs/en/operations/table_engines/external_data/
 
 Does not work if clickhouse-client is opened in interactive mode and several queries are run.
 
-## + A setting to allow getting a partial result on cancel.
+## A setting to allow getting a partial result on cancel.
 
 On Ctrl+C we want to receive the data that has already been processed.
 
 ## Unpacking tuples in higher-order functions.
 
-## Table function loop.
+## + Table function loop.
 
 `SELECT * FROM loop(database, table)`
 
@@ -15,7 +15,10 @@ services:
     ports:
       - ${LDAP_EXTERNAL_PORT:-1389}:${LDAP_INTERNAL_PORT:-1389}
     healthcheck:
-      test: "ldapsearch -x -b dc=example,dc=org cn > /dev/null"
+      test: >
+        ldapsearch -x -H ldap://localhost:$$LDAP_PORT_NUMBER -D $$LDAP_ADMIN_DN -w $$LDAP_ADMIN_PASSWORD -b $$LDAP_ROOT
+        | grep -c -E "member: cn=j(ohn|ane)doe"
+        | grep 2 >> /dev/null
       interval: 10s
       retries: 10
       timeout: 2s
@@ -2640,7 +2640,9 @@ class ClickHouseCluster:
                 [
                     "bash",
                     "-c",
-                    f"/opt/bitnami/openldap/bin/ldapsearch -x -H ldap://{self.ldap_host}:{self.ldap_port} -D cn=admin,dc=example,dc=org -w clickhouse -b dc=example,dc=org",
+                    f"/opt/bitnami/openldap/bin/ldapsearch -x -H ldap://{self.ldap_host}:{self.ldap_port} -D cn=admin,dc=example,dc=org -w clickhouse -b dc=example,dc=org"
+                    f'| grep -c -E "member: cn=j(ohn|ane)doe"'
+                    f"| grep 2 >> /dev/null",
                 ],
                 user="root",
             )
@@ -0,0 +1,35 @@
+<clickhouse>
+    <zookeeper>
+        <!--<zookeeper_load_balancing> random / in_order / nearest_hostname / first_or_random / round_robin </zookeeper_load_balancing>-->
+        <zookeeper_load_balancing>random</zookeeper_load_balancing>
+
+        <prefer_local_availability_zone>1</prefer_local_availability_zone>
+
+        <fallback_session_lifetime>
+            <min>0</min>
+            <max>1</max>
+        </fallback_session_lifetime>
+
+        <node index="1">
+            <host>zoo1</host>
+            <port>2181</port>
+            <availability_zone>az1</availability_zone>
+        </node>
+        <node index="2">
+            <host>zoo2</host>
+            <port>2181</port>
+            <availability_zone>az2</availability_zone>
+        </node>
+        <node index="3">
+            <host>zoo3</host>
+            <port>2181</port>
+            <availability_zone>az3</availability_zone>
+        </node>
+        <session_timeout_ms>3000</session_timeout_ms>
+    </zookeeper>
+
+    <placement>
+        <use_imds>0</use_imds>
+        <availability_zone>az2</availability_zone>
+    </placement>
+</clickhouse>
@@ -1,6 +1,8 @@
+import time
 import pytest
 from helpers.cluster import ClickHouseCluster
 from helpers.network import PartitionManager
+from helpers.test_tools import assert_eq_with_retry
 
 cluster = ClickHouseCluster(
     __file__, zookeeper_config_path="configs/zookeeper_load_balancing.xml"
@@ -17,6 +19,10 @@ node3 = cluster.add_instance(
     "nod3", with_zookeeper=True, main_configs=["configs/zookeeper_load_balancing.xml"]
 )
 
+node4 = cluster.add_instance(
+    "nod4", with_zookeeper=True, main_configs=["configs/zookeeper_load_balancing2.xml"]
+)
+
 
 def change_balancing(old, new, reload=True):
     line = "<zookeeper_load_balancing>{}<"
@@ -405,113 +411,57 @@ def test_hostname_levenshtein_distance(started_cluster):
 def test_round_robin(started_cluster):
     pm = PartitionManager()
     try:
-        pm._add_rule(
-            {
-                "source": node1.ip_address,
-                "destination": cluster.get_instance_ip("zoo1"),
-                "action": "REJECT --reject-with tcp-reset",
-            }
-        )
-        pm._add_rule(
-            {
-                "source": node2.ip_address,
-                "destination": cluster.get_instance_ip("zoo1"),
-                "action": "REJECT --reject-with tcp-reset",
-            }
-        )
-        pm._add_rule(
-            {
-                "source": node3.ip_address,
-                "destination": cluster.get_instance_ip("zoo1"),
-                "action": "REJECT --reject-with tcp-reset",
-            }
-        )
         change_balancing("random", "round_robin")
-        print(
-            str(
-                node1.exec_in_container(
-                    [
-                        "bash",
-                        "-c",
-                        "lsof -a -i4 -i6 -itcp -w | grep ':2181' | grep ESTABLISHED",
-                    ],
-                    privileged=True,
-                    user="root",
-                )
-            )
-        )
-        assert (
-            "1"
-            == str(
-                node1.exec_in_container(
-                    [
-                        "bash",
-                        "-c",
-                        "lsof -a -i4 -i6 -itcp -w | grep 'testzookeeperconfigloadbalancing_zoo2_1.*testzookeeperconfigloadbalancing_default:2181' | grep ESTABLISHED | wc -l",
-                    ],
-                    privileged=True,
-                    user="root",
-                )
-            ).strip()
-        )
+        for node in [node1, node2, node3]:
+            idx = int(
+                node.query("select index from system.zookeeper_connection").strip()
+            )
+            new_idx = (idx + 1) % 3
 
-        print(
-            str(
-                node2.exec_in_container(
-                    [
-                        "bash",
-                        "-c",
-                        "lsof -a -i4 -i6 -itcp -w | grep ':2181' | grep ESTABLISHED",
-                    ],
-                    privileged=True,
-                    user="root",
-                )
-            )
-        )
-        assert (
-            "1"
-            == str(
-                node2.exec_in_container(
-                    [
-                        "bash",
-                        "-c",
-                        "lsof -a -i4 -i6 -itcp -w | grep 'testzookeeperconfigloadbalancing_zoo2_1.*testzookeeperconfigloadbalancing_default:2181' | grep ESTABLISHED | wc -l",
-                    ],
-                    privileged=True,
-                    user="root",
-                )
-            ).strip()
-        )
+            pm._add_rule(
+                {
+                    "source": node.ip_address,
+                    "destination": cluster.get_instance_ip("zoo" + str(idx + 1)),
+                    "action": "REJECT --reject-with tcp-reset",
+                }
+            )
 
-        print(
-            str(
-                node3.exec_in_container(
-                    [
-                        "bash",
-                        "-c",
-                        "lsof -a -i4 -i6 -itcp -w | grep ':2181' | grep ESTABLISHED",
-                    ],
-                    privileged=True,
-                    user="root",
-                )
-            )
-        )
-        assert (
-            "1"
-            == str(
-                node3.exec_in_container(
-                    [
-                        "bash",
-                        "-c",
-                        "lsof -a -i4 -i6 -itcp -w | grep 'testzookeeperconfigloadbalancing_zoo2_1.*testzookeeperconfigloadbalancing_default:2181' | grep ESTABLISHED | wc -l",
-                    ],
-                    privileged=True,
-                    user="root",
-                )
-            ).strip()
-        )
+            assert_eq_with_retry(
+                node,
+                "select index from system.zookeeper_connection",
+                str(new_idx) + "\n",
+            )
+            pm.heal_all()
 
     finally:
         pm.heal_all()
         change_balancing("round_robin", "random", reload=False)
 
 
+def test_az(started_cluster):
+    pm = PartitionManager()
+    try:
+        # make sure it disconnects from the optimal node
+        pm._add_rule(
+            {
+                "source": node4.ip_address,
+                "destination": cluster.get_instance_ip("zoo2"),
+                "action": "REJECT --reject-with tcp-reset",
+            }
+        )
+
+        node4.query_with_retry("select * from system.zookeeper where path='/'")
+        assert "az2\n" != node4.query(
+            "select availability_zone from system.zookeeper_connection"
+        )
+
+        # fallback_session_lifetime.max is 1 second, but it shouldn't drop current session until the node becomes available
+
+        time.sleep(5)  # this is fine
+        assert 5 <= int(node4.query("select zookeeperSessionUptime()").strip())
+
+        pm.heal_all()
+        assert_eq_with_retry(
+            node4, "select availability_zone from system.zookeeper_connection", "az2\n"
+        )
+    finally:
+        pm.heal_all()
@@ -84,10 +84,28 @@ def test_fallback_session(started_cluster: ClickHouseCluster):
             )
 
     # at this point network partitioning has been reverted.
-    # the nodes should switch to zoo1 automatically because of `in_order` load-balancing.
+    # the nodes should switch to zoo1 because of `in_order` load-balancing.
     # otherwise they would connect to a random replica
 
+    # but there's no reason to reconnect because current session works
+    # and there's no "optimal" node with `in_order` load-balancing
+    # so we need to break the current session
+
     for node in [node1, node2, node3]:
-        assert_uses_zk_node(node, "zoo1")
+        assert_uses_zk_node(node, "zoo3")
+
+    with PartitionManager() as pm:
+        for node in started_cluster.instances.values():
+            pm._add_rule(
+                {
+                    "source": node.ip_address,
+                    "destination": cluster.get_instance_ip("zoo3"),
+                    "action": "REJECT --reject-with tcp-reset",
+                }
+            )
+
+        for node in [node1, node2, node3]:
+            assert_uses_zk_node(node, "zoo1")
 
     node1.query_with_retry("INSERT INTO simple VALUES ({0}, {0})".format(2))
     for node in [node2, node3]:
@@ -1,5 +1,5 @@
 #!/usr/bin/env bash
-# Tags: long, no-s3-storage
+# Tags: long, no-s3-storage, no-tsan
 # no-s3 because read FileOpen metric
 
 set -e
@@ -31,6 +31,6 @@ $CLICKHOUSE_CLIENT $settings -q "$touching_many_parts_query" &> /dev/null
 
 $CLICKHOUSE_CLIENT $settings -q "SYSTEM FLUSH LOGS"
 
-$CLICKHOUSE_CLIENT $settings -q "SELECT ProfileEvents['FileOpen'] as opened_files FROM system.query_log WHERE query='$touching_many_parts_query' and current_database = currentDatabase() ORDER BY event_time DESC, opened_files DESC LIMIT 1;"
+$CLICKHOUSE_CLIENT $settings -q "SELECT ProfileEvents['FileOpen'] as opened_files FROM system.query_log WHERE query = '$touching_many_parts_query' AND current_database = currentDatabase() AND event_date >= yesterday() ORDER BY event_time DESC, opened_files DESC LIMIT 1;"
 
 $CLICKHOUSE_CLIENT $settings -q "DROP TABLE IF EXISTS merge_tree_table;"
|
|||||||
#!/usr/bin/env bash
|
#!/usr/bin/env bash
|
||||||
# Tags: long
|
# Tags: long, no-s3-storage, no-msan, no-asan, no-tsan, no-debug
|
||||||
|
# Some kind of stress test, it doesn't make sense to test in a non-release build
|
||||||
|
|
||||||
set -e
|
set -e
|
||||||
|
|
||||||
@ -15,7 +16,7 @@ ${CLICKHOUSE_CLIENT} --query="CREATE TABLE buffer_00763_2 (s String) ENGINE = Bu
|
|||||||
|
|
||||||
function thread1()
|
function thread1()
|
||||||
{
|
{
|
||||||
seq 1 500 | sed -r -e 's/.+/DROP TABLE IF EXISTS mt_00763_2; CREATE TABLE mt_00763_2 (s String) ENGINE = MergeTree ORDER BY s; INSERT INTO mt_00763_2 SELECT toString(number) FROM numbers(10);/' | ${CLICKHOUSE_CLIENT} --multiquery --ignore-error ||:
|
seq 1 500 | sed -r -e 's/.+/DROP TABLE IF EXISTS mt_00763_2; CREATE TABLE mt_00763_2 (s String) ENGINE = MergeTree ORDER BY s; INSERT INTO mt_00763_2 SELECT toString(number) FROM numbers(10);/' | ${CLICKHOUSE_CLIENT} --fsync-metadata 0 --multiquery --ignore-error ||:
|
||||||
}
|
}
|
||||||
|
|
||||||
function thread2()
|
function thread2()
|
||||||
|
@ -2,6 +2,4 @@ Instruction check fail. The CPU does not support SSSE3 instruction set.
|
|||||||
Instruction check fail. The CPU does not support SSE4.1 instruction set.
|
Instruction check fail. The CPU does not support SSE4.1 instruction set.
|
||||||
Instruction check fail. The CPU does not support SSE4.2 instruction set.
|
Instruction check fail. The CPU does not support SSE4.2 instruction set.
|
||||||
Instruction check fail. The CPU does not support POPCNT instruction set.
|
Instruction check fail. The CPU does not support POPCNT instruction set.
|
||||||
<jemalloc>: MADV_DONTNEED does not work (memset will be used instead)
|
|
||||||
<jemalloc>: (This is the expected behaviour if you are running under QEMU)
|
|
||||||
1
|
1
|
||||||
|
@@ -1,6 +1,6 @@
 #!/usr/bin/env bash
 # Tags: no-tsan, no-asan, no-ubsan, no-msan, no-debug, no-fasttest, no-cpu-aarch64
-# Tag no-fasttest: avoid dependency on qemu -- invonvenient when running locally
+# Tag no-fasttest: avoid dependency on qemu -- inconvenient when running locally
 
 CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
 # shellcheck source=../shell_config.sh
@@ -1,5 +1,3 @@
-<jemalloc>: Number of CPUs detected is not deterministic. Per-CPU arena disabled.
 1
-<jemalloc>: Number of CPUs detected is not deterministic. Per-CPU arena disabled.
 100000000
 1
@@ -1,5 +1,5 @@
 #!/usr/bin/env bash
-# Tags: no-tsan, no-asan, no-msan, no-ubsan, no-fasttest
+# Tags: no-tsan, no-asan, no-msan, no-ubsan, no-fasttest, no-debug
 # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 # NOTE: jemalloc is disabled under sanitizers
 
@@ -14,7 +14,7 @@ for _ in {1..10}; do
     ${CLICKHOUSE_LOCAL} -q 'select * from numbers_mt(100000000) settings max_threads=100 FORMAT Null'
     # Binding to specific CPU is not required, but this makes the test more reliable.
     taskset --cpu-list 0 ${CLICKHOUSE_LOCAL} -q 'select * from numbers_mt(100000000) settings max_threads=100 FORMAT Null' 2>&1 | {
-        # build with santiziers does not have jemalloc
+        # build with sanitiziers does not have jemalloc
         # and for jemalloc we have separate test
         # 01502_jemalloc_percpu_arena
         grep -v '<jemalloc>: Number of CPUs detected is not deterministic. Per-CPU arena disabled.'
@@ -1,5 +1,5 @@
 #!/usr/bin/env bash
-# Tags: no-ordinary-database, use-rocksdb
+# Tags: no-ordinary-database, use-rocksdb, no-random-settings
 
 CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
 # shellcheck source=../shell_config.sh
@@ -45,4 +45,3 @@ ${CLICKHOUSE_CLIENT} --query "INSERT INTO rocksdb_worm SELECT number, number+1 F
 ${CLICKHOUSE_CLIENT} --query "INSERT INTO rocksdb_worm SELECT number, number+1 FROM numbers_mt(1000000)" &
 wait
 ${CLICKHOUSE_CLIENT} --query "SELECT count() FROM rocksdb_worm;"
@@ -22,7 +22,7 @@ create table test (a Int32) engine = MergeTree() order by tuple()
 settings disk=disk(name='test2',
                    type = object_storage,
                    object_storage_type = s3,
-                   metadata_storage_type = local,
+                   metadata_type = local,
                    endpoint = 'http://localhost:11111/test/common/',
                    access_key_id = clickhouse,
                    secret_access_key = clickhouse);
@@ -32,7 +32,7 @@ create table test (a Int32) engine = MergeTree() order by tuple()
 settings disk=disk(name='test3',
                    type = object_storage,
                    object_storage_type = s3,
-                   metadata_storage_type = local,
+                   metadata_type = local,
                    metadata_keep_free_space_bytes = 1024,
                    endpoint = 'http://localhost:11111/test/common/',
                    access_key_id = clickhouse,
@@ -43,7 +43,7 @@ create table test (a Int32) engine = MergeTree() order by tuple()
 settings disk=disk(name='test4',
                    type = object_storage,
                    object_storage_type = s3,
-                   metadata_storage_type = local,
+                   metadata_type = local,
                    metadata_keep_free_space_bytes = 0,
                    endpoint = 'http://localhost:11111/test/common/',
                    access_key_id = clickhouse,
@@ -0,0 +1 @@
+maximum: 95.37 MiB
tests/queries/0_stateless/03196_local_memory_limit.sh (new executable file, 7 lines)
@@ -0,0 +1,7 @@
+#!/usr/bin/env bash
+
+CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
+# shellcheck source=../shell_config.sh
+. "$CUR_DIR"/../shell_config.sh
+
+${CLICKHOUSE_LOCAL} --config-file <(echo "<clickhouse><max_server_memory_usage>100M</max_server_memory_usage></clickhouse>") --query "SELECT number FROM system.numbers GROUP BY number HAVING count() > 1" 2>&1 | grep -o -P 'maximum: [\d\.]+ MiB'
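As a rough sketch only (not part of this commit): the new stateless test would presumably be run through the repository's standard test runner, for example from the tests/ directory with a built clickhouse binary available:

    ./clickhouse-test 03196_local_memory_limit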
utils/backup/backup (new executable file, 47 lines)
@@ -0,0 +1,47 @@
+#!/bin/bash
+
+user="default"
+path="."
+
+usage() {
+    echo
+    echo "A trivial script to upload your files into ClickHouse."
+    echo "You might want to use something like Dropbox instead, but..."
+    echo
+    echo "Usage: $0 --host <hostname> [--user <username>] --password <password> <path>"
+    exit 1
+}
+
+while [[ "$#" -gt 0 ]]; do
+    case "$1" in
+        --host)
+            host="$2"
+            shift 2
+            ;;
+        --user)
+            user="$2"
+            shift 2
+            ;;
+        --password)
+            password="$2"
+            shift 2
+            ;;
+        --help)
+            usage
+            ;;
+        *)
+            path="$1"
+            shift 1
+            ;;
+    esac
+done
+
+if [ -z "$host" ] || [ -z "$password" ]; then
+    echo "Error: --host and --password are mandatory."
+    usage
+fi
+
+clickhouse-client --host "$host" --user "$user" --password "$password" --secure --query "CREATE TABLE IF NOT EXISTS default.files (time DEFAULT now(), path String, content String CODEC(ZSTD(6))) ENGINE = MergeTree ORDER BY (path, time)" &&
+find "$path" -type f | clickhouse-local --input-format LineAsString \
+    --max-block-size 1 --min-insert-block-size-rows 0 --min-insert-block-size-bytes '100M' --max-insert-threads 1 \
+    --query "INSERT INTO FUNCTION remoteSecure('$host', default.files, '$user', '$password') (path, content) SELECT line, file(line) FROM table" --progress
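For illustration only, a hypothetical invocation of the script above, based on its usage text; the hostname, password, and path are placeholders rather than values from this commit:

    utils/backup/backup --host my-clickhouse-host.example.com --user default --password 'secret' ~/Documents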
@@ -575,6 +575,7 @@ MySQLDump
 MySQLThreads
 NATS
 NCHAR
+NDJSON
 NEKUDOTAYIM
 NEWDATE
 NEWDECIMAL
@@ -717,6 +718,8 @@ PlantUML
 PointDistKm
 PointDistM
 PointDistRads
+PostHistory
+PostLink
 PostgreSQLConnection
 PostgreSQLThreads
 Postgres
@@ -2516,6 +2519,7 @@ sqlite
 sqrt
 src
 srcReplicas
+stackoverflow
 stacktrace
 stacktraces
 startsWith
@@ -2854,6 +2858,7 @@ userver
 utils
 uuid
 uuidv
+vCPU
 varPop
 varPopStable
 varSamp
@@ -1238,9 +1238,13 @@ void Runner::createConnections()
 
 std::shared_ptr<Coordination::ZooKeeper> Runner::getConnection(const ConnectionInfo & connection_info, size_t connection_info_idx)
 {
-    Coordination::ZooKeeper::Node node{Poco::Net::SocketAddress{connection_info.host}, static_cast<UInt8>(connection_info_idx), connection_info.secure};
-    std::vector<Coordination::ZooKeeper::Node> nodes;
-    nodes.push_back(node);
+    zkutil::ShuffleHost host;
+    host.host = connection_info.host;
+    host.secure = connection_info.secure;
+    host.original_index = static_cast<UInt8>(connection_info_idx);
+    host.address = Poco::Net::SocketAddress{connection_info.host};
+
+    zkutil::ShuffleHosts nodes{host};
     zkutil::ZooKeeperArgs args;
     args.session_timeout_ms = connection_info.session_timeout_ms;
     args.connection_timeout_ms = connection_info.connection_timeout_ms;