Mirror of https://github.com/ClickHouse/ClickHouse.git

Merge remote-tracking branch 'upstream/master' into evillique-patch-1

Commit: 60a0cefb03
Changed file: CHANGELOG.md

### Table of Contents
**[ClickHouse release v24.6, 2024-07-01](#246)**<br/>
**[ClickHouse release v24.5, 2024-05-30](#245)**<br/>
**[ClickHouse release v24.4, 2024-04-30](#244)**<br/>
**[ClickHouse release v24.3 LTS, 2024-03-26](#243)**<br/>

# 2024 Changelog

### <a id="246"></a> ClickHouse release 24.6, 2024-07-01

#### Backward Incompatible Change
* Enable asynchronous load of databases and tables by default. See the `async_load_databases` in config.xml. While this change is fully compatible, it can introduce a difference in behavior. When `async_load_databases` is false, as in the previous versions, the server will not accept connections until all tables are loaded. When `async_load_databases` is true, as in the new version, the server can accept connections before all the tables are loaded. If a query is made to a table that is not yet loaded, it will wait for the table's loading, which can take considerable time. It can change the behavior of the server if it is part of a large distributed system under a load balancer. In the first case, the load balancer can get a connection refusal and quickly failover to another server. In the second case, the load balancer can connect to a server that is still loading the tables, and the query will have a higher latency. Moreover, if many queries accumulate in the waiting state, it can lead to a "thundering herd" problem when they start processing simultaneously. This can make a difference only for highly loaded distributed backends. You can set the value of `async_load_databases` to false to avoid this problem. [#57695](https://github.com/ClickHouse/ClickHouse/pull/57695) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Setting `replace_long_file_name_to_hash` is enabled by default for `MergeTree` tables. [#64457](https://github.com/ClickHouse/ClickHouse/pull/64457) ([Anton Popov](https://github.com/CurtizJ)). This setting is fully compatible, and no action is needed during the upgrade. The new data format is supported by all versions starting from 23.9. After enabling this setting, you can no longer downgrade to version 23.8 or older.
* Some invalid queries will fail earlier during parsing. Note: disabled the support for inline KQL expressions (the experimental Kusto language) when they are put into a `kql` table function without a string literal, e.g. `kql(garbage | trash)` instead of `kql('garbage | trash')` or `kql($$garbage | trash$$)`. This feature was introduced unintentionally and should not exist. [#61500](https://github.com/ClickHouse/ClickHouse/pull/61500) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Rework parallel processing in `Ordered` mode of storage `S3Queue`. This PR is backward incompatible for Ordered mode if you used settings `s3queue_processing_threads_num` or `s3queue_total_shards_num`. Setting `s3queue_total_shards_num` is deleted, previously it was allowed to use only under `s3queue_allow_experimental_sharded_mode`, which is now deprecated. A new setting is added - `s3queue_buckets`. [#64349](https://github.com/ClickHouse/ClickHouse/pull/64349) ([Kseniia Sumarokova](https://github.com/kssenii)).
* New functions `snowflakeIDToDateTime`, `snowflakeIDToDateTime64`, `dateTimeToSnowflakeID`, and `dateTime64ToSnowflakeID` were added. Unlike the existing functions `snowflakeToDateTime`, `snowflakeToDateTime64`, `dateTimeToSnowflake`, and `dateTime64ToSnowflake`, the new functions are compatible with function `generateSnowflakeID`, i.e. they accept the snowflake IDs generated by `generateSnowflakeID` and produce snowflake IDs of the same type as `generateSnowflakeID` (i.e. `UInt64`). Furthermore, the new functions default to the UNIX epoch (aka. 1970-01-01), just like `generateSnowflakeID`. If necessary, a different epoch, e.g. Twitter's/X's epoch 2010-11-04 aka. 1288834974657 msec since UNIX epoch, can be passed. The old conversion functions are deprecated and will be removed after a transition period: to use them regardless, enable setting `allow_deprecated_snowflake_conversion_functions`. [#64948](https://github.com/ClickHouse/ClickHouse/pull/64948) ([Robert Schulze](https://github.com/rschu1ze)).
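
A minimal sketch of the new Snowflake ID functions described in the last entry above; the results depend on the moment of execution, so the values are illustrative only:

```sql
-- Generate a Snowflake ID (UInt64) and convert between IDs and timestamps.
WITH generateSnowflakeID() AS id
SELECT
    id,
    snowflakeIDToDateTime(id)    AS ts_from_id,
    dateTimeToSnowflakeID(now()) AS id_from_now;
```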

#### New Feature
* Allow to store named collections in ClickHouse Keeper. [#64574](https://github.com/ClickHouse/ClickHouse/pull/64574) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Support empty tuples. [#55061](https://github.com/ClickHouse/ClickHouse/pull/55061) ([Amos Bird](https://github.com/amosbird)).
* Add Hilbert Curve encode and decode functions. [#60156](https://github.com/ClickHouse/ClickHouse/pull/60156) ([Artem Mustafin](https://github.com/Artemmm91)).
* Add support for index analysis over `hilbertEncode`. [#64662](https://github.com/ClickHouse/ClickHouse/pull/64662) ([Artem Mustafin](https://github.com/Artemmm91)).
* Added support for reading `LINESTRING` geometry in the WKT format using function `readWKTLineString`. [#62519](https://github.com/ClickHouse/ClickHouse/pull/62519) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Allow to attach parts from a different disk. [#63087](https://github.com/ClickHouse/ClickHouse/pull/63087) ([Unalian](https://github.com/Unalian)).
* Added new SQL functions `generateSnowflakeID` for generating Twitter-style Snowflake IDs. [#63577](https://github.com/ClickHouse/ClickHouse/pull/63577) ([Danila Puzov](https://github.com/kazalika)).
* Added `merge_workload` and `mutation_workload` settings to regulate how resources are utilized and shared between merges, mutations and other workloads. [#64061](https://github.com/ClickHouse/ClickHouse/pull/64061) ([Sergei Trifonov](https://github.com/serxa)).
* Add support for comparing `IPv4` and `IPv6` types using the `=` operator. [#64292](https://github.com/ClickHouse/ClickHouse/pull/64292) ([Francisco J. Jurado Moreno](https://github.com/Beetelbrox)).
* Support decimal arguments in binary math functions (pow, atan2, max2, min2, hypot). [#64582](https://github.com/ClickHouse/ClickHouse/pull/64582) ([Mikhail Gorshkov](https://github.com/mgorshkov)).
* Added SQL functions `parseReadableSize` (along with `OrNull` and `OrZero` variants). [#64742](https://github.com/ClickHouse/ClickHouse/pull/64742) ([Francisco J. Jurado Moreno](https://github.com/Beetelbrox)).
* Add server settings `max_table_num_to_throw` and `max_database_num_to_throw` to limit the number of databases or tables on `CREATE` queries. [#64781](https://github.com/ClickHouse/ClickHouse/pull/64781) ([Xu Jia](https://github.com/XuJia0210)).
* Add a `_time` virtual column to file-like storages (s3/file/hdfs/url/azureBlobStorage). [#64947](https://github.com/ClickHouse/ClickHouse/pull/64947) ([Ilya Golshtein](https://github.com/ilejn)).
* Introduced new functions `base64URLEncode`, `base64URLDecode` and `tryBase64URLDecode`. [#64991](https://github.com/ClickHouse/ClickHouse/pull/64991) ([Mikhail Gorshkov](https://github.com/mgorshkov)).
* Add new function `editDistanceUTF8`, which calculates the [edit distance](https://en.wikipedia.org/wiki/Edit_distance) between two UTF8 strings. [#65269](https://github.com/ClickHouse/ClickHouse/pull/65269) ([LiuNeng](https://github.com/liuneng1994)).
* Add `http_response_headers` setting to support custom response headers in custom HTTP handlers. [#63562](https://github.com/ClickHouse/ClickHouse/pull/63562) ([Grigorii](https://github.com/GSokol)).
* Added a new table function `loop` to support returning query results in an infinite loop. [#63452](https://github.com/ClickHouse/ClickHouse/pull/63452) ([Sariel](https://github.com/sarielwxm)). This is useful for testing.
* Introduced two additional columns in the `system.query_log`: `used_privileges` and `missing_privileges`. `used_privileges` is populated with the privileges that were checked during query execution, and `missing_privileges` contains required privileges that are missing. [#64597](https://github.com/ClickHouse/ClickHouse/pull/64597) ([Alexey Katsman](https://github.com/alexkats)).
* Added a setting `output_format_pretty_display_footer_column_names` which when enabled displays column names at the end of the table for long tables (50 rows by default), with the threshold value for minimum number of rows controlled by `output_format_pretty_display_footer_column_names_min_rows`. [#65144](https://github.com/ClickHouse/ClickHouse/pull/65144) ([Shaun Struwig](https://github.com/Blargian)).
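
Several of the new functions listed above can be tried directly; a hedged sketch (the values in the comments are what the entries imply and may differ slightly by version):

```sql
SELECT
    parseReadableSize('1.5 MiB')              AS bytes,     -- 1572864
    editDistanceUTF8('клик', 'клики')         AS distance,  -- 1
    base64URLEncode('https://clickhouse.com') AS encoded;   -- URL-safe Base64
```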
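
The `loop` table function above repeats a query result indefinitely, so it should be bounded with `LIMIT`; a minimal sketch:

```sql
-- Emits 0, 1, 2, 0, 1, 2, 0 — the inner result is replayed until the LIMIT is reached.
SELECT * FROM loop(numbers(3)) LIMIT 7;
```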
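
A hedged sketch of the two observability additions above: the new `system.query_log` columns and the footer column names for long pretty-printed results (the query-log rows appear only after the log is flushed):

```sql
-- Which privileges recent queries checked or lacked.
SELECT query, used_privileges, missing_privileges
FROM system.query_log
WHERE event_date = today() AND type = 'QueryFinish'
ORDER BY event_time DESC
LIMIT 5;

-- Repeat the column names at the bottom of a long Pretty-formatted result
-- (interactive clickhouse-client uses a Pretty format by default).
SELECT number FROM numbers(100)
SETTINGS output_format_pretty_display_footer_column_names = 1;
```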

#### Experimental Feature
* Introduce statistics of type "number of distinct values". [#59357](https://github.com/ClickHouse/ClickHouse/pull/59357) ([Han Fei](https://github.com/hanfei1991)). See the sketch after this list.
* Support statistics with ReplicatedMergeTree. [#64934](https://github.com/ClickHouse/ClickHouse/pull/64934) ([Han Fei](https://github.com/hanfei1991)).
* If "replica group" is configured for a `Replicated` database, automatically create a cluster that includes replicas from all groups. [#64312](https://github.com/ClickHouse/ClickHouse/pull/64312) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Add settings `parallel_replicas_custom_key_range_lower` and `parallel_replicas_custom_key_range_upper` to control how parallel replicas with dynamic shards parallelize queries when using a range filter. [#64604](https://github.com/ClickHouse/ClickHouse/pull/64604) ([josh-hildred](https://github.com/josh-hildred)).
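
A hedged sketch of the experimental column statistics mentioned in the first entry of this list. `tbl` and its columns are hypothetical, and the exact setting and statistics names may vary between versions:

```sql
SET allow_experimental_statistics = 1;

CREATE TABLE tbl
(
    user_id UInt64 STATISTICS(uniq),  -- "number of distinct values" statistic
    msg     String
)
ENGINE = MergeTree
ORDER BY user_id;
```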

#### Performance Improvement
* Add the ability to reshuffle rows during insert to optimize for size without violating the order set by `PRIMARY KEY`. It's controlled by the setting `optimize_row_order` (off by default). [#63578](https://github.com/ClickHouse/ClickHouse/pull/63578) ([Igor Markelov](https://github.com/ElderlyPassionFruit)). See the sketch after this list.
* Add a native parquet reader, which can read parquet binary to ClickHouse Columns directly. It's controlled by the setting `input_format_parquet_use_native_reader` (disabled by default). [#60361](https://github.com/ClickHouse/ClickHouse/pull/60361) ([ZhiHong Zhang](https://github.com/copperybean)).
* Fix performance regression in cross join introduced in [#60459](https://github.com/ClickHouse/ClickHouse/issues/60459) (24.5). [#65243](https://github.com/ClickHouse/ClickHouse/pull/65243) ([Nikita Taranov](https://github.com/nickitat)).
* Support partial trivial count optimization when the query filter is able to select exact ranges from merge tree tables. [#60463](https://github.com/ClickHouse/ClickHouse/pull/60463) ([Amos Bird](https://github.com/amosbird)).
* Reduce max memory usage of multithreaded `INSERT`s by collecting chunks of multiple threads in a single transform. [#61047](https://github.com/ClickHouse/ClickHouse/pull/61047) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
* Reduce the memory usage when using Azure object storage by using fixed memory allocation, avoiding the allocation of an extra buffer. [#63160](https://github.com/ClickHouse/ClickHouse/pull/63160) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
* Reduce the number of virtual function calls in `ColumnNullable::size`. [#60556](https://github.com/ClickHouse/ClickHouse/pull/60556) ([HappenLee](https://github.com/HappenLee)).
* Speedup `splitByRegexp` when the regular expression argument is a single-character. [#62696](https://github.com/ClickHouse/ClickHouse/pull/62696) ([Robert Schulze](https://github.com/rschu1ze)).
* Speed up aggregation by 8-bit and 16-bit keys by keeping track of the min and max keys used. This allows to reduce the number of cells that need to be verified. [#62746](https://github.com/ClickHouse/ClickHouse/pull/62746) ([Jiebin Sun](https://github.com/jiebinn)).
* Optimize operator IN when the left hand side is `LowCardinality` and the right is a set of constants. [#64060](https://github.com/ClickHouse/ClickHouse/pull/64060) ([Zhiguo Zhou](https://github.com/ZhiguoZh)).
* Use a thread pool to initialize and destroy hash tables inside `ConcurrentHashJoin`. [#64241](https://github.com/ClickHouse/ClickHouse/pull/64241) ([Nikita Taranov](https://github.com/nickitat)).
* Optimized vertical merges in tables with sparse columns. [#64311](https://github.com/ClickHouse/ClickHouse/pull/64311) ([Anton Popov](https://github.com/CurtizJ)).
* Enabled prefetches of data from remote filesystem during vertical merges. It improves latency of vertical merges in tables with data stored on remote filesystem. [#64314](https://github.com/ClickHouse/ClickHouse/pull/64314) ([Anton Popov](https://github.com/CurtizJ)).
* Reduce redundant calls to `isDefault` of `ColumnSparse::filter` to improve performance. [#64426](https://github.com/ClickHouse/ClickHouse/pull/64426) ([Jiebin Sun](https://github.com/jiebinn)).
* Speedup `find_super_nodes` and `find_big_family` keeper-client commands by making multiple asynchronous getChildren requests. [#64628](https://github.com/ClickHouse/ClickHouse/pull/64628) ([Alexander Gololobov](https://github.com/davenger)).
* Improve function `least`/`greatest` for nullable numeric type arguments. [#64668](https://github.com/ClickHouse/ClickHouse/pull/64668) ([KevinyhZou](https://github.com/KevinyhZou)).
* Allow merging two consecutive filtering steps of a query plan. This improves filter-push-down optimization if the filter condition can be pushed down from the parent step. [#64760](https://github.com/ClickHouse/ClickHouse/pull/64760) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Remove bad optimization in the vertical final implementation and re-enable vertical final algorithm by default. [#64783](https://github.com/ClickHouse/ClickHouse/pull/64783) ([Duc Canh Le](https://github.com/canhld94)).
* Remove ALIAS nodes from the filter expression. This slightly improves performance for queries with `PREWHERE` (with the new analyzer). [#64793](https://github.com/ClickHouse/ClickHouse/pull/64793) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Re-enable OpenSSL session caching. [#65111](https://github.com/ClickHouse/ClickHouse/pull/65111) ([Robert Schulze](https://github.com/rschu1ze)).
* Added settings to disable materialization of skip indexes and statistics on inserts (`materialize_skip_indexes_on_insert` and `materialize_statistics_on_insert`). [#64391](https://github.com/ClickHouse/ClickHouse/pull/64391) ([Anton Popov](https://github.com/CurtizJ)).
* Use the allocated memory size to calculate the row group size and reduce the peak memory of the parquet writer in the single-threaded mode. [#64424](https://github.com/ClickHouse/ClickHouse/pull/64424) ([LiuNeng](https://github.com/liuneng1994)).
* Improve the iterator of sparse columns to reduce calls of `size`. [#64497](https://github.com/ClickHouse/ClickHouse/pull/64497) ([Jiebin Sun](https://github.com/jiebinn)).
* Update condition to use server-side copy for backups to Azure blob storage. [#64518](https://github.com/ClickHouse/ClickHouse/pull/64518) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
* Optimized memory usage of vertical merges for tables with high number of skip indexes. [#64580](https://github.com/ClickHouse/ClickHouse/pull/64580) ([Anton Popov](https://github.com/CurtizJ)).
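
Two of the opt-in performance features above (`optimize_row_order` and the native Parquet reader) can be enabled per query; a hedged sketch in which `target`, `source` and `data.parquet` are hypothetical:

```sql
-- Reshuffle rows inside each insert block to improve compression.
INSERT INTO target SELECT * FROM source
SETTINGS optimize_row_order = 1;

-- Read Parquet with the native reader (off by default).
SELECT count() FROM file('data.parquet', Parquet)
SETTINGS input_format_parquet_use_native_reader = 1;
```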

#### Improvement
* `SHOW CREATE TABLE` executed on top of system tables will now show a handy comment, unique to each table, that explains why the table is needed. See the examples after this list. [#63788](https://github.com/ClickHouse/ClickHouse/pull/63788) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* The second argument (scale) of functions `round()`, `roundBankers()`, `floor()`, `ceil()` and `trunc()` can now be non-const (see the examples after this list). [#64798](https://github.com/ClickHouse/ClickHouse/pull/64798) ([Mikhail Gorshkov](https://github.com/mgorshkov)).
* Hot reload storage policy for `Distributed` tables when adding a new disk. [#58285](https://github.com/ClickHouse/ClickHouse/pull/58285) ([Duc Canh Le](https://github.com/canhld94)).
* Avoid possible deadlock during MergeTree index analysis when scheduling threads in a saturated service. [#59427](https://github.com/ClickHouse/ClickHouse/pull/59427) ([Sean Haynes](https://github.com/seandhaynes)).
* Several minor corner case fixes to S3 proxy support & tunneling. [#63427](https://github.com/ClickHouse/ClickHouse/pull/63427) ([Arthur Passos](https://github.com/arthurpassos)).
* Improve io_uring resubmit visibility. Rename profile event `IOUringSQEsResubmits` -> `IOUringSQEsResubmitsAsync` and add a new one `IOUringSQEsResubmitsSync`. [#63699](https://github.com/ClickHouse/ClickHouse/pull/63699) ([Tomer Shafir](https://github.com/tomershafir)).
* Added a new setting, `metadata_keep_free_space_bytes` to keep free space on the metadata storage disk. [#64128](https://github.com/ClickHouse/ClickHouse/pull/64128) ([MikhailBurdukov](https://github.com/MikhailBurdukov)).
* Add metrics to track the number of directories created and removed by the `plain_rewritable` metadata storage, and the number of entries in the local-to-remote in-memory map. [#64175](https://github.com/ClickHouse/ClickHouse/pull/64175) ([Julia Kartseva](https://github.com/jkartseva)).
* The query cache now considers identical queries with different settings as different. This increases robustness in cases where different settings (e.g. `limit` or `additional_table_filters`) would affect the query result. See the example after this list. [#64205](https://github.com/ClickHouse/ClickHouse/pull/64205) ([Robert Schulze](https://github.com/rschu1ze)).
* Better exception message in Delete Table with Projection, so users can understand the error and the steps that should be taken. [#64212](https://github.com/ClickHouse/ClickHouse/pull/64212) ([jsc0218](https://github.com/jsc0218)).
* Support the non-standard error code `QpsLimitExceeded` in object storage as a retryable error. [#64225](https://github.com/ClickHouse/ClickHouse/pull/64225) ([Sema Checherinda](https://github.com/CheSema)).
* Forbid converting a MergeTree table to replicated if the zookeeper path for this table already exists. [#64244](https://github.com/ClickHouse/ClickHouse/pull/64244) ([Kirill](https://github.com/kirillgarbar)).
* Added a new setting `input_format_parquet_prefer_block_bytes` to control the average output block bytes, and modified the default value of `input_format_parquet_max_block_size` to 65409. [#64427](https://github.com/ClickHouse/ClickHouse/pull/64427) ([LiuNeng](https://github.com/liuneng1994)).
* Allow proxy to be bypassed for hosts specified in `no_proxy` env variable and ClickHouse proxy configuration. [#63314](https://github.com/ClickHouse/ClickHouse/pull/63314) ([Arthur Passos](https://github.com/arthurpassos)).
* Always start Keeper with a sufficient number of threads in the global thread pool. [#64444](https://github.com/ClickHouse/ClickHouse/pull/64444) ([Duc Canh Le](https://github.com/canhld94)).
* Settings from the user's config don't affect merges and mutations for `MergeTree` on top of object storage. [#64456](https://github.com/ClickHouse/ClickHouse/pull/64456) ([alesapin](https://github.com/alesapin)).
* Support the non-standard error code `TotalQpsLimitExceeded` in object storage as a retryable error. [#64520](https://github.com/ClickHouse/ClickHouse/pull/64520) ([Sema Checherinda](https://github.com/CheSema)).
* Updated Advanced Dashboard for both open-source and ClickHouse Cloud versions to include a chart for 'Maximum concurrent network connections'. [#64610](https://github.com/ClickHouse/ClickHouse/pull/64610) ([Thom O'Connor](https://github.com/thomoco)).
* Improve progress report on `zeros_mt` and `generateRandom`. [#64804](https://github.com/ClickHouse/ClickHouse/pull/64804) ([Raúl Marín](https://github.com/Algunenano)).
* Add an asynchronous metric `jemalloc.profile.active` to show whether sampling is currently active. This is an activation mechanism in addition to prof.active; both must be active for the calling thread to sample. [#64842](https://github.com/ClickHouse/ClickHouse/pull/64842) ([Unalian](https://github.com/Unalian)).
* Remove mark of `allow_experimental_join_condition` as important. This mark may have prevented distributed queries in a mixed versions cluster from being executed successfully. [#65008](https://github.com/ClickHouse/ClickHouse/pull/65008) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Added server asynchronous metrics `DiskGetObjectThrottler*` and `DiskPutObjectThrottler*` reflecting the request-per-second rate limit defined with the `s3_max_get_rps` and `s3_max_put_rps` disk settings, and the currently available number of requests that could be sent without hitting the throttling limit on the disk. Metrics are defined for every disk that has a configured limit. [#65050](https://github.com/ClickHouse/ClickHouse/pull/65050) ([Sergei Trifonov](https://github.com/serxa)).
* Initialize global trace collector for `Poco::ThreadPool` (needed for Keeper, etc). [#65239](https://github.com/ClickHouse/ClickHouse/pull/65239) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Add a validation when creating a user with `bcrypt_hash`. [#65242](https://github.com/ClickHouse/ClickHouse/pull/65242) ([Raúl Marín](https://github.com/Algunenano)).
* Add profile events for number of rows read during/after `PREWHERE`. [#64198](https://github.com/ClickHouse/ClickHouse/pull/64198) ([Nikita Taranov](https://github.com/nickitat)).
* Print query in `EXPLAIN PLAN` with parallel replicas. [#64298](https://github.com/ClickHouse/ClickHouse/pull/64298) ([vdimir](https://github.com/vdimir)).
* Rename `allow_deprecated_functions` to `allow_deprecated_error_prone_window_functions`. [#64358](https://github.com/ClickHouse/ClickHouse/pull/64358) ([Raúl Marín](https://github.com/Algunenano)).
* Respect `max_read_buffer_size` setting for file descriptors as well in the `file` table function. [#64532](https://github.com/ClickHouse/ClickHouse/pull/64532) ([Azat Khuzhin](https://github.com/azat)).
* Disable transactions for unsupported storages even for materialized views. [#64918](https://github.com/ClickHouse/ClickHouse/pull/64918) ([alesapin](https://github.com/alesapin)).
* Forbid `QUALIFY` clause in the old analyzer. The old analyzer ignored `QUALIFY`, so it could lead to unexpected data removal in mutations. [#65356](https://github.com/ClickHouse/ClickHouse/pull/65356) ([Dmitry Novik](https://github.com/novikd)).
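
Examples for two of the improvements referenced in the list above, as a hedged sketch:

```sql
-- System tables now carry an explanatory comment in their definition.
SHOW CREATE TABLE system.query_log;

-- The rounding scale may now be a non-constant expression.
SELECT number, round(number / 7, number % 3) AS rounded
FROM numbers(5);
```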
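
A hedged sketch of the query-cache behaviour described above: the same query text with different settings now produces separate cache entries.

```sql
-- Cached as two distinct entries because the `limit` setting differs.
SELECT number FROM numbers(10) SETTINGS use_query_cache = 1, limit = 3;
SELECT number FROM numbers(10) SETTINGS use_query_cache = 1, limit = 5;
```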

#### Bug Fix (user-visible misbehavior in an official stable release)
* A bug in Apache ORC library was fixed: Fixed ORC statistics calculation, when writing, for unsigned types on all platforms and Int8 on ARM. [#64563](https://github.com/ClickHouse/ClickHouse/pull/64563) ([Michael Kolupaev](https://github.com/al13n321)).
* Returned back the behaviour of how ClickHouse works and interprets Tuples in CSV format. This change effectively reverts https://github.com/ClickHouse/ClickHouse/pull/60994 and makes it available only under a few settings: `output_format_csv_serialize_tuple_into_separate_columns`, `input_format_csv_deserialize_separate_columns_into_tuple` and `input_format_csv_try_infer_strings_from_quoted_tuples`. [#65170](https://github.com/ClickHouse/ClickHouse/pull/65170) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix a permission error where a user in a specific situation can escalate their privileges on the default database without necessary grants. [#64769](https://github.com/ClickHouse/ClickHouse/pull/64769) ([pufit](https://github.com/pufit)).
* Fix crash with UniqInjectiveFunctionsEliminationPass and uniqCombined. [#65188](https://github.com/ClickHouse/ClickHouse/pull/65188) ([Raúl Marín](https://github.com/Algunenano)).
* Fix a bug in ClickHouse Keeper that causes digest mismatch during closing session. [#65198](https://github.com/ClickHouse/ClickHouse/pull/65198) ([Aleksei Filatov](https://github.com/aalexfvk)).
* Use correct memory alignment for Distinct combinator. Previously, crash could happen because of invalid memory allocation when the combinator was used. [#65379](https://github.com/ClickHouse/ClickHouse/pull/65379) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix crash with `DISTINCT` and window functions. [#64767](https://github.com/ClickHouse/ClickHouse/pull/64767) ([Igor Nikonov](https://github.com/devcrafter)).
* Fixed 'set' skip index not working with IN and indexHint(). [#62083](https://github.com/ClickHouse/ClickHouse/pull/62083) ([Michael Kolupaev](https://github.com/al13n321)).
* Fixed `optimize_read_in_order` behaviour for ORDER BY ... NULLS FIRST / LAST on tables with nullable keys. [#64483](https://github.com/ClickHouse/ClickHouse/pull/64483) ([Eduard Karacharov](https://github.com/korowa)).
* Fix the `Expression nodes list expected 1 projection names` and `Unknown expression or identifier` errors for queries with aliases to `GLOBAL IN`. [#64517](https://github.com/ClickHouse/ClickHouse/pull/64517) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix an error `Cannot find column` in distributed queries with constant CTE in the `GROUP BY` key. [#64519](https://github.com/ClickHouse/ClickHouse/pull/64519) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix the crash loop when restoring from backup is blocked by creating an MV with a definer that hasn't been restored yet. [#64595](https://github.com/ClickHouse/ClickHouse/pull/64595) ([pufit](https://github.com/pufit)).
* Fix the output of function `formatDateTimeInJodaSyntax` when a formatter generates an uneven number of characters and the last character is `0`. For example, `SELECT formatDateTimeInJodaSyntax(toDate('2012-05-29'), 'D')` now correctly returns `150` instead of previously `15`. [#64614](https://github.com/ClickHouse/ClickHouse/pull/64614) ([LiuNeng](https://github.com/liuneng1994)).
* Do not rewrite aggregation if `-If` combinator is already used. [#64638](https://github.com/ClickHouse/ClickHouse/pull/64638) ([Dmitry Novik](https://github.com/novikd)).
* Ensure that the type of the constant (the second argument of the `IN` operator) is always visible during the `IN` operator's type conversion. Otherwise, losing type information may cause some conversions to fail, such as the conversion from `DateTime` to `Date`. This fixes [#64487](https://github.com/ClickHouse/ClickHouse/issues/64487). [#65315](https://github.com/ClickHouse/ClickHouse/pull/65315) ([pn](https://github.com/chloro-pn)).
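
The `formatDateTimeInJodaSyntax` fix above can be checked directly; expected results are shown as comments:

```sql
SELECT
    formatDateTimeInJodaSyntax(toDate('2012-05-29'), 'D')                             AS day_of_year,  -- '150' (was '15' before the fix)
    formatDateTimeInJodaSyntax(toDateTime('2012-05-29 10:04:05'), 'yyyy-MM-dd HH:mm') AS formatted;    -- '2012-05-29 10:04'
```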

#### Build/Testing/Packaging Improvement
* Add support for LLVM XRay. [#64592](https://github.com/ClickHouse/ClickHouse/pull/64592) [#64837](https://github.com/ClickHouse/ClickHouse/pull/64837) ([Tomer Shafir](https://github.com/tomershafir)).
* Fix a typo in `test_hdfsCluster_unset_skip_unavailable_shards`. The test wrote data to `unskip_unavailable_shards` but used `skip_unavailable_shards` from the previous test. [#64243](https://github.com/ClickHouse/ClickHouse/pull/64243) ([Mikhail Artemenko](https://github.com/Michicosun)).
* Unite s3/hdfs/azure storage implementations into a single class working with IObjectStorage. Same for *Cluster, data lakes and Queue storages. [#59767](https://github.com/ClickHouse/ClickHouse/pull/59767) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Refactor data part writer to remove dependencies on MergeTreeData and DataPart. [#63620](https://github.com/ClickHouse/ClickHouse/pull/63620) ([Alexander Gololobov](https://github.com/davenger)).
* Refactor `KeyCondition` and key analysis to improve PartitionPruner and trivial count optimization. This is separated from [#60463](https://github.com/ClickHouse/ClickHouse/issues/60463) . [#61459](https://github.com/ClickHouse/ClickHouse/pull/61459) ([Amos Bird](https://github.com/amosbird)).
* Fix `test_lost_part_other_replica`. [#64512](https://github.com/ClickHouse/ClickHouse/pull/64512) ([Raúl Marín](https://github.com/Algunenano)).
* Introduce assertions to verify all functions are called with columns of the right size. [#63723](https://github.com/ClickHouse/ClickHouse/pull/63723) ([Raúl Marín](https://github.com/Algunenano)).
* Add tests for experimental unequal joins and randomize new settings in `clickhouse-test`. [#64535](https://github.com/ClickHouse/ClickHouse/pull/64535) ([Nikita Fomichev](https://github.com/fm4v)).
* Make `network` service be required when using the `rc` init script to start the ClickHouse server daemon. [#60650](https://github.com/ClickHouse/ClickHouse/pull/60650) ([Chun-Sheng, Li](https://github.com/peter279k)).
* Upgrade tests: Update config and work with release candidates. [#64542](https://github.com/ClickHouse/ClickHouse/pull/64542) ([Raúl Marín](https://github.com/Algunenano)).
* Reduce the size of some slow tests. [#64387](https://github.com/ClickHouse/ClickHouse/pull/64387) [#64452](https://github.com/ClickHouse/ClickHouse/pull/64452) ([Raúl Marín](https://github.com/Algunenano)).
|
||||||
* Add support for LLVM XRay. [#64592](https://github.com/ClickHouse/ClickHouse/pull/64592) ([Tomer Shafir](https://github.com/tomershafir)).
|
|
||||||
* Speed up 02995_forget_partition. [#64761](https://github.com/ClickHouse/ClickHouse/pull/64761) ([Raúl Marín](https://github.com/Algunenano)).
|
|
||||||
* Fix 02790_async_queries_in_query_log. [#64764](https://github.com/ClickHouse/ClickHouse/pull/64764) ([Raúl Marín](https://github.com/Algunenano)).
|
|
||||||
* Support LLVM XRay on Linux amd64 only. [#64837](https://github.com/ClickHouse/ClickHouse/pull/64837) ([Tomer Shafir](https://github.com/tomershafir)).
|
|
||||||
* Get rid of custom code in `tests/ci/download_release_packages.py` and `tests/ci/get_previous_release_tag.py` to avoid issues after the https://github.com/ClickHouse/ClickHouse/pull/64759 is merged. [#64848](https://github.com/ClickHouse/ClickHouse/pull/64848) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
|
|
||||||
* Decrease the `unit-test` image a few times. [#65102](https://github.com/ClickHouse/ClickHouse/pull/65102) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
|
|
||||||
* Replay ZooKeeper logs using keeper-bench. [#62481](https://github.com/ClickHouse/ClickHouse/pull/62481) ([Antonio Andelic](https://github.com/antonio2368)).
|
* Replay ZooKeeper logs using keeper-bench. [#62481](https://github.com/ClickHouse/ClickHouse/pull/62481) ([Antonio Andelic](https://github.com/antonio2368)).
|
||||||
* Re-enable OpenSSL session caching. [#65111](https://github.com/ClickHouse/ClickHouse/pull/65111) ([Robert Schulze](https://github.com/rschu1ze)).
|
|
||||||
|
|
||||||
### <a id="245"></a> ClickHouse release 24.5, 2024-05-30

@ -14,6 +14,7 @@ The following versions of ClickHouse server are currently supported with securit

| Version | Supported |
|:-|:-|
| 24.6 | ✔️ |
| 24.5 | ✔️ |
| 24.4 | ✔️ |
| 24.3 | ✔️ |

@ -6,6 +6,9 @@ namespace
{
std::string getFQDNOrHostNameImpl()
{
#if defined(OS_DARWIN)
    return Poco::Net::DNS::hostName();
#else
    try
    {
        return Poco::Net::DNS::thisHost().name();
@ -14,6 +17,7 @@ namespace
    {
        return Poco::Net::DNS::hostName();
    }
#endif
}
}

@ -34,7 +34,7 @@ if (OS_LINUX)
  # avoid spurious latencies and additional work associated with
  # MADV_DONTNEED. See
  # https://github.com/ClickHouse/ClickHouse/issues/11121 for motivation.
  set (JEMALLOC_CONFIG_MALLOC_CONF "percpu_arena:percpu,oversize_threshold:0,muzzy_decay_ms:0,dirty_decay_ms:5000")
  set (JEMALLOC_CONFIG_MALLOC_CONF "percpu_arena:percpu,oversize_threshold:0,muzzy_decay_ms:0,dirty_decay_ms:5000,prof:true,prof_active:false,background_thread:true")
else()
  set (JEMALLOC_CONFIG_MALLOC_CONF "oversize_threshold:0,muzzy_decay_ms:0,dirty_decay_ms:5000")
endif()

@ -34,7 +34,7 @@ RUN arch=${TARGETARCH:-amd64} \
# lts / testing / prestable / etc
ARG REPO_CHANNEL="stable"
ARG REPOSITORY="https://packages.clickhouse.com/tgz/${REPO_CHANNEL}"
ARG VERSION="24.5.3.5"
ARG VERSION="24.6.1.4423"
ARG PACKAGES="clickhouse-keeper"
ARG DIRECT_DOWNLOAD_URLS=""

@ -32,7 +32,7 @@ RUN arch=${TARGETARCH:-amd64} \
# lts / testing / prestable / etc
ARG REPO_CHANNEL="stable"
ARG REPOSITORY="https://packages.clickhouse.com/tgz/${REPO_CHANNEL}"
ARG VERSION="24.5.3.5"
ARG VERSION="24.6.1.4423"
ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static"
ARG DIRECT_DOWNLOAD_URLS=""

@ -28,7 +28,7 @@ RUN sed -i "s|http://archive.ubuntu.com|${apt_archive}|g" /etc/apt/sources.list
ARG REPO_CHANNEL="stable"
ARG REPOSITORY="deb [signed-by=/usr/share/keyrings/clickhouse-keyring.gpg] https://packages.clickhouse.com/deb ${REPO_CHANNEL} main"
ARG VERSION="24.5.3.5"
ARG VERSION="24.6.1.4423"
ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static"

#docker-official-library:off

docs/changelogs/v24.5.4.49-stable.md (new file, 41 lines)
@ -0,0 +1,41 @@
---
sidebar_position: 1
sidebar_label: 2024
---

# 2024 Changelog

### ClickHouse release v24.5.4.49-stable (63b760955a0) FIXME as compared to v24.5.3.5-stable (e0eb66f8e17)

#### Improvement
* Backported in [#65886](https://github.com/ClickHouse/ClickHouse/issues/65886): Always start Keeper with sufficient amount of threads in global thread pool. [#64444](https://github.com/ClickHouse/ClickHouse/pull/64444) ([Duc Canh Le](https://github.com/canhld94)).
* Backported in [#65304](https://github.com/ClickHouse/ClickHouse/issues/65304): Returned back the behaviour of how ClickHouse works and interprets Tuples in CSV format. This change effectively reverts https://github.com/ClickHouse/ClickHouse/pull/60994 and makes it available only under a few settings: `output_format_csv_serialize_tuple_into_separate_columns`, `input_format_csv_deserialize_separate_columns_into_tuple` and `input_format_csv_try_infer_strings_from_quoted_tuples`. [#65170](https://github.com/ClickHouse/ClickHouse/pull/65170) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Backported in [#65896](https://github.com/ClickHouse/ClickHouse/issues/65896): Respect cgroup CPU limit in Keeper. [#65819](https://github.com/ClickHouse/ClickHouse/pull/65819) ([Antonio Andelic](https://github.com/antonio2368)).

#### Critical Bug Fix (crash, LOGICAL_ERROR, data loss, RBAC)
* Backported in [#65287](https://github.com/ClickHouse/ClickHouse/issues/65287): Fix crash with UniqInjectiveFunctionsEliminationPass and uniqCombined. [#65188](https://github.com/ClickHouse/ClickHouse/pull/65188) ([Raúl Marín](https://github.com/Algunenano)).
* Backported in [#65374](https://github.com/ClickHouse/ClickHouse/issues/65374): Fix a bug in ClickHouse Keeper that causes digest mismatch during closing session. [#65198](https://github.com/ClickHouse/ClickHouse/pull/65198) ([Aleksei Filatov](https://github.com/aalexfvk)).
* Backported in [#65437](https://github.com/ClickHouse/ClickHouse/issues/65437): Forbid `QUALIFY` clause in the old analyzer. The old analyzer ignored `QUALIFY`, so it could lead to unexpected data removal in mutations. [#65356](https://github.com/ClickHouse/ClickHouse/pull/65356) ([Dmitry Novik](https://github.com/novikd)).
* Backported in [#65450](https://github.com/ClickHouse/ClickHouse/issues/65450): Use correct memory alignment for Distinct combinator. Previously, crash could happen because of invalid memory allocation when the combinator was used. [#65379](https://github.com/ClickHouse/ClickHouse/pull/65379) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#65712](https://github.com/ClickHouse/ClickHouse/issues/65712): Fix crash in maxIntersections. [#65689](https://github.com/ClickHouse/ClickHouse/pull/65689) ([Raúl Marín](https://github.com/Algunenano)).

#### Bug Fix (user-visible misbehavior in an official stable release)
* Backported in [#65681](https://github.com/ClickHouse/ClickHouse/issues/65681): Fix `duplicate alias` error for distributed queries with `ARRAY JOIN`. [#64226](https://github.com/ClickHouse/ClickHouse/pull/64226) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#65331](https://github.com/ClickHouse/ClickHouse/issues/65331): Fix the crash loop when restoring from backup is blocked by creating an MV with a definer that hasn't been restored yet. [#64595](https://github.com/ClickHouse/ClickHouse/pull/64595) ([pufit](https://github.com/pufit)).
* Backported in [#64835](https://github.com/ClickHouse/ClickHouse/issues/64835): Fix bug which could lead to non-working TTLs with expressions. [#64694](https://github.com/ClickHouse/ClickHouse/pull/64694) ([alesapin](https://github.com/alesapin)).
* Backported in [#65542](https://github.com/ClickHouse/ClickHouse/issues/65542): Fix crash for `ALTER TABLE ... ON CLUSTER ... MODIFY SQL SECURITY`. [#64957](https://github.com/ClickHouse/ClickHouse/pull/64957) ([pufit](https://github.com/pufit)).
* Backported in [#65580](https://github.com/ClickHouse/ClickHouse/issues/65580): Fix crash on destroying AccessControl: add explicit shutdown. [#64993](https://github.com/ClickHouse/ClickHouse/pull/64993) ([Vitaly Baranov](https://github.com/vitlibar)).
* Backported in [#65618](https://github.com/ClickHouse/ClickHouse/issues/65618): Fix possible infinite query duration in case of cyclic aliases. Fixes [#64849](https://github.com/ClickHouse/ClickHouse/issues/64849). [#65081](https://github.com/ClickHouse/ClickHouse/pull/65081) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#65617](https://github.com/ClickHouse/ClickHouse/issues/65617): Fix aggregate function name rewriting in the new analyzer. [#65110](https://github.com/ClickHouse/ClickHouse/pull/65110) ([Dmitry Novik](https://github.com/novikd)).
* Backported in [#65732](https://github.com/ClickHouse/ClickHouse/issues/65732): Eliminate injective function in argument of functions `uniq*` recursively. This used to work correctly but was broken in the new analyzer. [#65140](https://github.com/ClickHouse/ClickHouse/pull/65140) ([Duc Canh Le](https://github.com/canhld94)).
* Backported in [#65265](https://github.com/ClickHouse/ClickHouse/issues/65265): Fix the bug in Hashed and Hashed_Array dictionary short circuit evaluation, which may read uninitialized number, leading to various errors. [#65256](https://github.com/ClickHouse/ClickHouse/pull/65256) ([jsc0218](https://github.com/jsc0218)).
* Backported in [#65663](https://github.com/ClickHouse/ClickHouse/issues/65663): Disable `non-intersecting-parts` optimization for queries with `FINAL` in case of `read-in-order` optimization was enabled. This could lead to an incorrect query result. As a workaround, disable `do_not_merge_across_partitions_select_final` and `split_parts_ranges_into_intersecting_and_non_intersecting_final` before this fix is merged. [#65505](https://github.com/ClickHouse/ClickHouse/pull/65505) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#65788](https://github.com/ClickHouse/ClickHouse/issues/65788): Fixed bug in MergeJoin. Column in sparse serialisation might be treated as a column of its nested type though the required conversion wasn't performed. [#65632](https://github.com/ClickHouse/ClickHouse/pull/65632) ([Nikita Taranov](https://github.com/nickitat)).
* Backported in [#65812](https://github.com/ClickHouse/ClickHouse/issues/65812): Fix invalid exceptions in function `parseDateTime` with `%F` and `%D` placeholders. [#65768](https://github.com/ClickHouse/ClickHouse/pull/65768) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#65828](https://github.com/ClickHouse/ClickHouse/issues/65828): Fix a bug in short circuit logic when old analyzer and dictGetOrDefault is used. [#65802](https://github.com/ClickHouse/ClickHouse/pull/65802) ([jsc0218](https://github.com/jsc0218)).

#### NOT FOR CHANGELOG / INSIGNIFICANT
* Backported in [#65412](https://github.com/ClickHouse/ClickHouse/issues/65412): Re-enable OpenSSL session caching. [#65111](https://github.com/ClickHouse/ClickHouse/pull/65111) ([Robert Schulze](https://github.com/rschu1ze)).
* Backported in [#65905](https://github.com/ClickHouse/ClickHouse/issues/65905): Fix bug with session closing in Keeper. [#65735](https://github.com/ClickHouse/ClickHouse/pull/65735) ([Antonio Andelic](https://github.com/antonio2368)).

@ -56,6 +56,15 @@ SELECT * FROM test_table;
- `_size` — Size of the file in bytes. Type: `Nullable(UInt64)`. If the size is unknown, the value is `NULL`.
- `_time` — Last modified time of the file. Type: `Nullable(DateTime)`. If the time is unknown, the value is `NULL`.

## Authentication

Currently there are 3 ways to authenticate:
- `Managed Identity` - Can be used by providing an `endpoint`, `connection_string` or `storage_account_url`.
- `SAS Token` - Can be used by providing an `endpoint`, `connection_string` or `storage_account_url`. It is identified by the presence of `?` in the URL.
- `Workload Identity` - Can be used by providing an `endpoint` or `storage_account_url`. If the `use_workload_identity` parameter is set in the config, [workload identity](https://github.com/Azure/azure-sdk-for-cpp/tree/main/sdk/identity/azure-identity#authenticate-azure-hosted-applications) is used for authentication.
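A minimal sketch of the SAS token route follows. The account URL, container, blob path, and token below are placeholders, and the parameter order is assumed to match the `AzureBlobStorage` engine examples earlier on this page; adapt it to your own setup.

```sql
-- Hypothetical values: the SAS token travels inside the URL,
-- which is why the '?' in the URL marks this authentication mode.
CREATE TABLE azure_sas_example (name String, value UInt32)
ENGINE = AzureBlobStorage(
    'https://myaccount.blob.core.windows.net/?sv=2022-11-02&ss=b&sp=rl&sig=REDACTED', -- placeholder account URL with SAS token
    'my-container',   -- container name (placeholder)
    'data.csv',       -- blob path (placeholder)
    'CSV');
```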

## See also

[Azure Blob Storage Table Function](/docs/en/sql-reference/table-functions/azureBlobStorage)

@ -28,6 +28,8 @@ CREATE TABLE s3_queue_engine_table (name String, value UInt32)
    [s3queue_cleanup_interval_max_ms = 30000,]
```

Starting with `24.7`, settings without the `s3queue_` prefix are also supported.

**Engine parameters**

- `path` — Bucket URL with a path to the file. Supports the following wildcards in readonly mode: `*`, `**`, `?`, `{abc,def}` and `{N..M}` where `N`, `M` — numbers, `'abc'`, `'def'` — strings. For more information see [below](#wildcards-in-path).
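As a hedged illustration of the unprefixed settings mentioned above (table name, bucket URL, and setting values here are invented for the example; the prefixed spellings such as `s3queue_loading_retries` remain valid):

```sql
-- Sketch only: from 24.7 the same setting can be written with or without the s3queue_ prefix.
CREATE TABLE s3_queue_unprefixed (name String, value UInt32)
    ENGINE = S3Queue('https://example-bucket.s3.amazonaws.com/data/*', 'CSV')
    SETTINGS
        mode = 'unordered',
        loading_retries = 3,              -- equivalent to s3queue_loading_retries
        cleanup_interval_max_ms = 30000;  -- equivalent to s3queue_cleanup_interval_max_ms
```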

(Binary image file added, not shown. Size: 162 KiB)

docs/en/getting-started/example-datasets/stackoverflow.md (new file, 394 lines)
@ -0,0 +1,394 @@
---
slug: /en/getting-started/example-datasets/stackoverflow
sidebar_label: Stack Overflow
sidebar_position: 1
description: Analyzing Stack Overflow data with ClickHouse
---

# Analyzing Stack Overflow data with ClickHouse

This dataset contains every `Post`, `User`, `Vote`, `Comment`, `Badge`, `PostHistory`, and `PostLink` that has occurred on Stack Overflow.

Users can either download pre-prepared Parquet versions of the data, containing every post up to April 2024, or download the latest data in XML format and load this. Stack Overflow provide updates to this data periodically - historically every 3 months.

The following diagram shows the schema for the available tables assuming Parquet format.

![Stack Overflow schema](./images/stackoverflow.png)

A description of the schema of this data can be found [here](https://meta.stackexchange.com/questions/2677/database-schema-documentation-for-the-public-data-dump-and-sede).

## Pre-prepared data

We provide a copy of this data in Parquet format, up to date as of April 2024. While small for ClickHouse with respect to the number of rows (60 million posts), this dataset contains significant volumes of text and large String columns.

```sql
CREATE DATABASE stackoverflow
```

The following timings are for a 96 GiB, 24 vCPU ClickHouse Cloud cluster located in `eu-west-2`. The dataset is located in `eu-west-3`.

### Posts

```sql
CREATE TABLE stackoverflow.posts
(
    `Id` Int32 CODEC(Delta(4), ZSTD(1)),
    `PostTypeId` Enum8('Question' = 1, 'Answer' = 2, 'Wiki' = 3, 'TagWikiExcerpt' = 4, 'TagWiki' = 5, 'ModeratorNomination' = 6, 'WikiPlaceholder' = 7, 'PrivilegeWiki' = 8),
    `AcceptedAnswerId` UInt32,
    `CreationDate` DateTime64(3, 'UTC'),
    `Score` Int32,
    `ViewCount` UInt32 CODEC(Delta(4), ZSTD(1)),
    `Body` String,
    `OwnerUserId` Int32,
    `OwnerDisplayName` String,
    `LastEditorUserId` Int32,
    `LastEditorDisplayName` String,
    `LastEditDate` DateTime64(3, 'UTC') CODEC(Delta(8), ZSTD(1)),
    `LastActivityDate` DateTime64(3, 'UTC'),
    `Title` String,
    `Tags` String,
    `AnswerCount` UInt16 CODEC(Delta(2), ZSTD(1)),
    `CommentCount` UInt8,
    `FavoriteCount` UInt8,
    `ContentLicense` LowCardinality(String),
    `ParentId` String,
    `CommunityOwnedDate` DateTime64(3, 'UTC'),
    `ClosedDate` DateTime64(3, 'UTC')
)
ENGINE = MergeTree
PARTITION BY toYear(CreationDate)
ORDER BY (PostTypeId, toDate(CreationDate), CreationDate)

INSERT INTO stackoverflow.posts SELECT * FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/posts/*.parquet')

0 rows in set. Elapsed: 265.466 sec. Processed 59.82 million rows, 38.07 GB (225.34 thousand rows/s., 143.42 MB/s.)
```

Posts are also available by year, e.g. [https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/posts/2020.parquet](https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/posts/2020.parquet).
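If you only need a subset, a single per-year file can be loaded with the same pattern as the full INSERT above (shown here for 2020):

```sql
-- Load only one year of posts instead of the full dataset
INSERT INTO stackoverflow.posts
SELECT *
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/posts/2020.parquet')
```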

### Votes

```sql
CREATE TABLE stackoverflow.votes
(
    `Id` UInt32,
    `PostId` Int32,
    `VoteTypeId` UInt8,
    `CreationDate` DateTime64(3, 'UTC'),
    `UserId` Int32,
    `BountyAmount` UInt8
)
ENGINE = MergeTree
ORDER BY (VoteTypeId, CreationDate, PostId, UserId)

INSERT INTO stackoverflow.votes SELECT * FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/votes/*.parquet')

0 rows in set. Elapsed: 21.605 sec. Processed 238.98 million rows, 2.13 GB (11.06 million rows/s., 98.46 MB/s.)
```

Votes are also available by year, e.g. [https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/votes/2020.parquet](https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/votes/2020.parquet).

### Comments

```sql
CREATE TABLE stackoverflow.comments
(
    `Id` UInt32,
    `PostId` UInt32,
    `Score` UInt16,
    `Text` String,
    `CreationDate` DateTime64(3, 'UTC'),
    `UserId` Int32,
    `UserDisplayName` LowCardinality(String)
)
ENGINE = MergeTree
ORDER BY CreationDate

INSERT INTO stackoverflow.comments SELECT * FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/comments/*.parquet')

0 rows in set. Elapsed: 56.593 sec. Processed 90.38 million rows, 11.14 GB (1.60 million rows/s., 196.78 MB/s.)
```

Comments are also available by year, e.g. [https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/comments/2020.parquet](https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/comments/2020.parquet).

### Users

```sql
CREATE TABLE stackoverflow.users
(
    `Id` Int32,
    `Reputation` LowCardinality(String),
    `CreationDate` DateTime64(3, 'UTC') CODEC(Delta(8), ZSTD(1)),
    `DisplayName` String,
    `LastAccessDate` DateTime64(3, 'UTC'),
    `AboutMe` String,
    `Views` UInt32,
    `UpVotes` UInt32,
    `DownVotes` UInt32,
    `WebsiteUrl` String,
    `Location` LowCardinality(String),
    `AccountId` Int32
)
ENGINE = MergeTree
ORDER BY (Id, CreationDate)

INSERT INTO stackoverflow.users SELECT * FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/users.parquet')

0 rows in set. Elapsed: 10.988 sec. Processed 22.48 million rows, 1.36 GB (2.05 million rows/s., 124.10 MB/s.)
```

### Badges

```sql
CREATE TABLE stackoverflow.badges
(
    `Id` UInt32,
    `UserId` Int32,
    `Name` LowCardinality(String),
    `Date` DateTime64(3, 'UTC'),
    `Class` Enum8('Gold' = 1, 'Silver' = 2, 'Bronze' = 3),
    `TagBased` Bool
)
ENGINE = MergeTree
ORDER BY UserId

INSERT INTO stackoverflow.badges SELECT * FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/badges.parquet')

0 rows in set. Elapsed: 6.635 sec. Processed 51.29 million rows, 797.05 MB (7.73 million rows/s., 120.13 MB/s.)
```

### `PostLinks`

```sql
CREATE TABLE stackoverflow.postlinks
(
    `Id` UInt64,
    `CreationDate` DateTime64(3, 'UTC'),
    `PostId` Int32,
    `RelatedPostId` Int32,
    `LinkTypeId` Enum8('Linked' = 1, 'Duplicate' = 3)
)
ENGINE = MergeTree
ORDER BY (PostId, RelatedPostId)

INSERT INTO stackoverflow.postlinks SELECT * FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/postlinks.parquet')

0 rows in set. Elapsed: 1.534 sec. Processed 6.55 million rows, 129.70 MB (4.27 million rows/s., 84.57 MB/s.)
```

### `PostHistory`

```sql
CREATE TABLE stackoverflow.posthistory
(
    `Id` UInt64,
    `PostHistoryTypeId` UInt8,
    `PostId` Int32,
    `RevisionGUID` String,
    `CreationDate` DateTime64(3, 'UTC'),
    `UserId` Int32,
    `Text` String,
    `ContentLicense` LowCardinality(String),
    `Comment` String,
    `UserDisplayName` String
)
ENGINE = MergeTree
ORDER BY (CreationDate, PostId)

INSERT INTO stackoverflow.posthistory SELECT * FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/posthistory/*.parquet')

0 rows in set. Elapsed: 422.795 sec. Processed 160.79 million rows, 67.08 GB (380.30 thousand rows/s., 158.67 MB/s.)
```

## Original dataset

The original dataset is available in compressed (7zip) XML format at [https://archive.org/download/stackexchange](https://archive.org/download/stackexchange) - files with prefix `stackoverflow.com*`.

### Download

```bash
wget https://archive.org/download/stackexchange/stackoverflow.com-Badges.7z
wget https://archive.org/download/stackexchange/stackoverflow.com-Comments.7z
wget https://archive.org/download/stackexchange/stackoverflow.com-PostHistory.7z
wget https://archive.org/download/stackexchange/stackoverflow.com-PostLinks.7z
wget https://archive.org/download/stackexchange/stackoverflow.com-Posts.7z
wget https://archive.org/download/stackexchange/stackoverflow.com-Users.7z
wget https://archive.org/download/stackexchange/stackoverflow.com-Votes.7z
```

These files are up to 35GB and can take around 30 mins to download depending on your internet connection - the download server throttles at around 20MB/sec.

### Convert to JSON

At the time of writing, ClickHouse does not have native support for XML as an input format. To load the data into ClickHouse we first convert to NDJSON.

To convert XML to JSON we recommend the [`xq`](https://github.com/kislyuk/yq) Linux tool, a simple `jq` wrapper for XML documents.

Install xq and jq:

```bash
sudo apt install jq
pip install yq
```

The following steps apply to any of the above files. We use the `stackoverflow.com-Posts.7z` file as an example. Modify as required.

Extract the file using [p7zip](https://p7zip.sourceforge.net/). This will produce a single xml file - in this case `Posts.xml`.

> Files are compressed approximately 4.5x. At 22GB compressed, the posts file requires around 97G uncompressed.

```bash
p7zip -d stackoverflow.com-Posts.7z
```

The following splits the xml file into files, each containing 10000 rows.

```bash
mkdir posts
cd posts
# the following splits the input xml file into sub files of 10000 rows
tail +3 ../Posts.xml | head -n -1 | split -l 10000 --filter='{ printf "<rows>\n"; cat - ; printf "</rows>\n"; } > $FILE' -
```

After running the above, users will have a set of files, each with 10000 lines. This ensures the memory overhead of the next command is not excessive (xml to JSON conversion is done in memory).

```bash
find . -maxdepth 1 -type f -exec xq -c '.rows.row[]' {} \; | sed -e 's:"@:":g' > posts.json
```

The above command will produce a single `posts.json` file.

Load into ClickHouse with the following command. Note the schema is specified for the `posts.json` file. This will need to be adjusted per data type to align with the target table.

```bash
clickhouse local --query "SELECT * FROM file('posts.json', JSONEachRow, 'Id Int32, PostTypeId UInt8, AcceptedAnswerId UInt32, CreationDate DateTime64(3, \'UTC\'), Score Int32, ViewCount UInt32, Body String, OwnerUserId Int32, OwnerDisplayName String, LastEditorUserId Int32, LastEditorDisplayName String, LastEditDate DateTime64(3, \'UTC\'), LastActivityDate DateTime64(3, \'UTC\'), Title String, Tags String, AnswerCount UInt16, CommentCount UInt8, FavoriteCount UInt8, ContentLicense String, ParentId String, CommunityOwnedDate DateTime64(3, \'UTC\'), ClosedDate DateTime64(3, \'UTC\')') FORMAT Native" | clickhouse client --host <host> --secure --password <password> --query "INSERT INTO stackoverflow.posts_v2 FORMAT Native"
```

## Example queries

A few simple questions to get you started.

### Most popular tags on Stack Overflow

```sql
SELECT
    arrayJoin(arrayFilter(t -> (t != ''), splitByChar('|', Tags))) AS Tags,
    count() AS c
FROM stackoverflow.posts
GROUP BY Tags
ORDER BY c DESC
LIMIT 10

┌─Tags───────┬───────c─┐
│ javascript │ 2527130 │
│ python     │ 2189638 │
│ java       │ 1916156 │
│ c#         │ 1614236 │
│ php        │ 1463901 │
│ android    │ 1416442 │
│ html       │ 1186567 │
│ jquery     │ 1034621 │
│ c++        │  806202 │
│ css        │  803755 │
└────────────┴─────────┘

10 rows in set. Elapsed: 1.013 sec. Processed 59.82 million rows, 1.21 GB (59.07 million rows/s., 1.19 GB/s.)
Peak memory usage: 224.03 MiB.
```

### User with the most answers (active accounts)

Only accounts with a `UserId` are considered.

```sql
SELECT
    any(OwnerUserId) UserId,
    OwnerDisplayName,
    count() AS c
FROM stackoverflow.posts WHERE OwnerDisplayName != '' AND PostTypeId='Answer' AND OwnerUserId != 0
GROUP BY OwnerDisplayName
ORDER BY c DESC
LIMIT 5

┌─UserId─┬─OwnerDisplayName─┬────c─┐
│  22656 │ Jon Skeet        │ 2727 │
│  23354 │ Marc Gravell     │ 2150 │
│  12950 │ tvanfosson       │ 1530 │
│   3043 │ Joel Coehoorn    │ 1438 │
│  10661 │ S.Lott           │ 1087 │
└────────┴──────────────────┴──────┘

5 rows in set. Elapsed: 0.154 sec. Processed 35.83 million rows, 193.39 MB (232.33 million rows/s., 1.25 GB/s.)
Peak memory usage: 206.45 MiB.
```

### ClickHouse related posts with the most views

```sql
SELECT
    Id,
    Title,
    ViewCount,
    AnswerCount
FROM stackoverflow.posts
WHERE Title ILIKE '%ClickHouse%'
ORDER BY ViewCount DESC
LIMIT 10

┌───────Id─┬─Title────────────────────────────────────────────────────────────────────────────┬─ViewCount─┬─AnswerCount─┐
│ 52355143 │ Is it possible to delete old records from clickhouse table?                       │     41462 │           3 │
│ 37954203 │ Clickhouse Data Import                                                            │     38735 │           3 │
│ 37901642 │ Updating data in Clickhouse                                                       │     36236 │           6 │
│ 58422110 │ Pandas: How to insert dataframe into Clickhouse                                   │     29731 │           4 │
│ 63621318 │ DBeaver - Clickhouse - SQL Error [159] .. Read timed out                          │     27350 │           1 │
│ 47591813 │ How to filter clickhouse table by array column contents?                          │     27078 │           2 │
│ 58728436 │ How to search the string in query with case insensitive on Clickhouse database?   │     26567 │           3 │
│ 65316905 │ Clickhouse: DB::Exception: Memory limit (for query) exceeded                      │     24899 │           2 │
│ 49944865 │ How to add a column in clickhouse                                                 │     24424 │           1 │
│ 59712399 │ How to cast date Strings to DateTime format with extended parsing in ClickHouse?  │     22620 │           1 │
└──────────┴────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────────┘

10 rows in set. Elapsed: 0.472 sec. Processed 59.82 million rows, 1.91 GB (126.63 million rows/s., 4.03 GB/s.)
Peak memory usage: 240.01 MiB.
```

### Most controversial posts

```sql
SELECT
    Id,
    Title,
    UpVotes,
    DownVotes,
    abs(UpVotes - DownVotes) AS Controversial_ratio
FROM stackoverflow.posts
INNER JOIN
(
    SELECT
        PostId,
        countIf(VoteTypeId = 2) AS UpVotes,
        countIf(VoteTypeId = 3) AS DownVotes
    FROM stackoverflow.votes
    GROUP BY PostId
    HAVING (UpVotes > 10) AND (DownVotes > 10)
) AS votes ON posts.Id = votes.PostId
WHERE Title != ''
ORDER BY Controversial_ratio ASC
LIMIT 3

┌───────Id─┬─Title─────────────────────────────────────────────┬─UpVotes─┬─DownVotes─┬─Controversial_ratio─┐
│   583177 │ VB.NET Infinite For Loop                          │      12 │        12 │                   0 │
│  9756797 │ Read console input as enumerable - one statement? │      16 │        16 │                   0 │
│ 13329132 │ What's the point of ARGV in Ruby?                 │      22 │        22 │                   0 │
└──────────┴───────────────────────────────────────────────────┴─────────┴───────────┴─────────────────────┘

3 rows in set. Elapsed: 4.779 sec. Processed 298.80 million rows, 3.16 GB (62.52 million rows/s., 661.05 MB/s.)
Peak memory usage: 6.05 GiB.
```

## Attribution

We thank Stack Overflow for providing this data under the `cc-by-sa 4.0` license, acknowledging their efforts and the original source of the data at [https://archive.org/details/stackexchange](https://archive.org/details/stackexchange).

@ -5,6 +5,10 @@ sidebar_label: "Named collections"
title: "Named collections"
---

import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';

<CloudNotSupportedBadge />

Named collections provide a way to store collections of key-value pairs to be
used to configure integrations with external sources. You can use named collections with
dictionaries, tables, table functions, and object storage.

@ -3,6 +3,10 @@ slug: /en/sql-reference/statements/alter/named-collection
sidebar_label: NAMED COLLECTION
---

import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';

<CloudNotSupportedBadge />

# ALTER NAMED COLLECTION

This query intends to modify already existing named collections.

@ -3,6 +3,10 @@ slug: /en/sql-reference/statements/create/named-collection
sidebar_label: NAMED COLLECTION
---

import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';

<CloudNotSupportedBadge />

# CREATE NAMED COLLECTION

Creates a new named collection.
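For context, a minimal sketch of such a statement (the collection name and keys below are invented for illustration, not taken from this page):

```sql
-- Create a named collection holding connection settings; it can later be
-- referenced by name when configuring an integration (e.g. a MySQL table).
CREATE NAMED COLLECTION mymysql AS
    host = 'localhost',
    port = 3306,
    user = 'default',
    password = 'secret';
```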

@ -201,18 +201,18 @@ ClickHouse does not require a unique primary key, so you can insert multiple rows with the same primary key

There is no explicit limit on the number of columns in the primary key. Depending on the data structure, you can include more or fewer columns in the primary key. This may:

- Improve the performance of the index.

  If the current primary key is `(a, b)`, adding another column `c` will improve performance when the following conditions are met:

  - Queries use column `c` as a condition.
  - Long data ranges (several times longer than `index_granularity`) contain identical values for `(a, b)`, and this situation is quite common. In other words, adding another column lets your queries skip very long data ranges.

- Improve data compression.

  ClickHouse sorts data parts by the primary key, so the more consistent the data, the better the compression.

- Provide additional processing logic when merging data in the [CollapsingMergeTree](collapsingmergetree.md#table_engine-collapsingmergetree) and [SummingMergeTree](summingmergetree.md) engines.

  In this case it makes sense to specify a *sorting key* that differs from the primary key, as sketched below.
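A minimal sketch of a sorting key that is wider than the primary key (table and column names are invented for illustration):

```sql
-- Rows are ordered and merged by (a, b, c); the sparse primary index covers only the prefix (a, b).
CREATE TABLE summing_example
(
    a UInt32,
    b UInt32,
    c UInt32,
    value UInt64
)
ENGINE = SummingMergeTree
ORDER BY (a, b, c)
PRIMARY KEY (a, b);
```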

@ -248,6 +248,10 @@ std::vector<String> Client::loadWarningMessages()
    }
}

Poco::Util::LayeredConfiguration & Client::getClientConfiguration()
{
    return config();
}

void Client::initialize(Poco::Util::Application & self)
{
@ -697,9 +701,7 @@ bool Client::processWithFuzzing(const String & full_query)
        const char * begin = full_query.data();
        orig_ast = parseQuery(begin, begin + full_query.size(),
            global_context->getSettingsRef(),
            /*allow_multi_statements=*/ true,
            /*allow_multi_statements=*/ true);
            /*is_interactive=*/ is_interactive,
            /*ignore_error=*/ ignore_error);
    }
    catch (const Exception & e)
    {

@ -16,6 +16,9 @@ public:
    int main(const std::vector<String> & /*args*/) override;

protected:

    Poco::Util::LayeredConfiguration & getClientConfiguration() override;

    bool processWithFuzzing(const String & full_query) override;
    std::optional<bool> processFuzzingStep(const String & query_to_execute, const ASTPtr & parsed_query);

@ -383,6 +383,9 @@ int KeeperClient::main(const std::vector<String> & /* args */)

    for (const auto & key : keys)
    {
        if (key != "node")
            continue;

        String prefix = "zookeeper." + key;
        String host = clickhouse_config.configuration->getString(prefix + ".host");
        String port = clickhouse_config.configuration->getString(prefix + ".port");
@ -401,6 +404,7 @@ int KeeperClient::main(const std::vector<String> & /* args */)
        zk_args.hosts.push_back(host + ":" + port);
    }

    zk_args.availability_zones.resize(zk_args.hosts.size());
    zk_args.connection_timeout_ms = config().getInt("connection-timeout", 10) * 1000;
    zk_args.session_timeout_ms = config().getInt("session-timeout", 10) * 1000;
    zk_args.operation_timeout_ms = config().getInt("operation-timeout", 10) * 1000;

@ -355,10 +355,7 @@ try

    std::string include_from_path = config().getString("include_from", "/etc/metrika.xml");

    if (config().has(DB::PlacementInfo::PLACEMENT_CONFIG_PREFIX))
    {
        PlacementInfo::PlacementInfo::instance().initialize(config());
    }
    PlacementInfo::PlacementInfo::instance().initialize(config());

    GlobalThreadPool::initialize(
        /// We need to have sufficient amount of threads for connections + nuraft workers + keeper workers, 1000 is an estimation

@ -32,6 +32,7 @@
#include <Common/quoteString.h>
#include <Common/randomSeed.h>
#include <Common/ThreadPool.h>
#include <Common/CurrentMetrics.h>
#include <Loggers/OwnFormattingChannel.h>
#include <Loggers/OwnPatternFormatter.h>
#include <IO/ReadBufferFromFile.h>
@ -59,8 +60,13 @@
#    include <azure/storage/common/internal/xml_wrapper.hpp>
#endif

namespace fs = std::filesystem;

namespace CurrentMetrics
{
    extern const Metric MemoryTracking;
}

namespace DB
{
@ -82,6 +88,11 @@ void applySettingsOverridesForLocal(ContextMutablePtr context)
    context->setSettings(settings);
}

Poco::Util::LayeredConfiguration & LocalServer::getClientConfiguration()
{
    return config();
}

void LocalServer::processError(const String &) const
{
    if (ignore_error)
@ -117,20 +128,21 @@ void LocalServer::initialize(Poco::Util::Application & self)
|
|||||||
Poco::Util::Application::initialize(self);
|
Poco::Util::Application::initialize(self);
|
||||||
|
|
||||||
/// Load config files if exists
|
/// Load config files if exists
|
||||||
if (config().has("config-file") || fs::exists("config.xml"))
|
if (getClientConfiguration().has("config-file") || fs::exists("config.xml"))
|
||||||
{
|
{
|
||||||
const auto config_path = config().getString("config-file", "config.xml");
|
const auto config_path = getClientConfiguration().getString("config-file", "config.xml");
|
||||||
ConfigProcessor config_processor(config_path, false, true);
|
ConfigProcessor config_processor(config_path, false, true);
|
||||||
ConfigProcessor::setConfigPath(fs::path(config_path).parent_path());
|
ConfigProcessor::setConfigPath(fs::path(config_path).parent_path());
|
||||||
auto loaded_config = config_processor.loadConfig();
|
auto loaded_config = config_processor.loadConfig();
|
||||||
config().add(loaded_config.configuration.duplicate(), PRIO_DEFAULT, false);
|
getClientConfiguration().add(loaded_config.configuration.duplicate(), PRIO_DEFAULT, false);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
server_settings.loadSettingsFromConfig(config());
|
||||||
|
|
||||||
GlobalThreadPool::initialize(
|
GlobalThreadPool::initialize(
|
||||||
config().getUInt("max_thread_pool_size", 10000),
|
server_settings.max_thread_pool_size,
|
||||||
config().getUInt("max_thread_pool_free_size", 1000),
|
server_settings.max_thread_pool_free_size,
|
||||||
config().getUInt("thread_pool_queue_size", 10000)
|
server_settings.thread_pool_queue_size);
|
||||||
);
|
|
||||||
|
|
||||||
#if USE_AZURE_BLOB_STORAGE
|
#if USE_AZURE_BLOB_STORAGE
|
||||||
/// See the explanation near the same line in Server.cpp
|
/// See the explanation near the same line in Server.cpp
|
||||||
@ -141,18 +153,17 @@ void LocalServer::initialize(Poco::Util::Application & self)
|
|||||||
#endif
|
#endif
|
||||||
|
|
||||||
getIOThreadPool().initialize(
|
getIOThreadPool().initialize(
|
||||||
config().getUInt("max_io_thread_pool_size", 100),
|
server_settings.max_io_thread_pool_size,
|
||||||
config().getUInt("max_io_thread_pool_free_size", 0),
|
server_settings.max_io_thread_pool_free_size,
|
||||||
config().getUInt("io_thread_pool_queue_size", 10000));
|
server_settings.io_thread_pool_queue_size);
|
||||||
|
|
||||||
|
const size_t active_parts_loading_threads = server_settings.max_active_parts_loading_thread_pool_size;
|
||||||
const size_t active_parts_loading_threads = config().getUInt("max_active_parts_loading_thread_pool_size", 64);
|
|
||||||
getActivePartsLoadingThreadPool().initialize(
|
getActivePartsLoadingThreadPool().initialize(
|
||||||
active_parts_loading_threads,
|
active_parts_loading_threads,
|
||||||
0, // We don't need any threads one all the parts will be loaded
|
0, // We don't need any threads one all the parts will be loaded
|
||||||
active_parts_loading_threads);
|
active_parts_loading_threads);
|
||||||
|
|
||||||
const size_t outdated_parts_loading_threads = config().getUInt("max_outdated_parts_loading_thread_pool_size", 32);
|
const size_t outdated_parts_loading_threads = server_settings.max_outdated_parts_loading_thread_pool_size;
|
||||||
getOutdatedPartsLoadingThreadPool().initialize(
|
getOutdatedPartsLoadingThreadPool().initialize(
|
||||||
outdated_parts_loading_threads,
|
outdated_parts_loading_threads,
|
||||||
0, // We don't need any threads one all the parts will be loaded
|
0, // We don't need any threads one all the parts will be loaded
|
||||||
@ -160,7 +171,7 @@ void LocalServer::initialize(Poco::Util::Application & self)
|
|||||||
|
|
||||||
getOutdatedPartsLoadingThreadPool().setMaxTurboThreads(active_parts_loading_threads);
|
getOutdatedPartsLoadingThreadPool().setMaxTurboThreads(active_parts_loading_threads);
|
||||||
|
|
||||||
const size_t unexpected_parts_loading_threads = config().getUInt("max_unexpected_parts_loading_thread_pool_size", 32);
|
const size_t unexpected_parts_loading_threads = server_settings.max_unexpected_parts_loading_thread_pool_size;
|
||||||
getUnexpectedPartsLoadingThreadPool().initialize(
|
getUnexpectedPartsLoadingThreadPool().initialize(
|
||||||
unexpected_parts_loading_threads,
|
unexpected_parts_loading_threads,
|
||||||
0, // We don't need any threads one all the parts will be loaded
|
0, // We don't need any threads one all the parts will be loaded
|
||||||
@ -168,7 +179,7 @@ void LocalServer::initialize(Poco::Util::Application & self)
|
|||||||
|
|
||||||
getUnexpectedPartsLoadingThreadPool().setMaxTurboThreads(active_parts_loading_threads);
|
getUnexpectedPartsLoadingThreadPool().setMaxTurboThreads(active_parts_loading_threads);
|
||||||
|
|
||||||
const size_t cleanup_threads = config().getUInt("max_parts_cleaning_thread_pool_size", 128);
|
const size_t cleanup_threads = server_settings.max_parts_cleaning_thread_pool_size;
|
||||||
getPartsCleaningThreadPool().initialize(
|
getPartsCleaningThreadPool().initialize(
|
||||||
cleanup_threads,
|
cleanup_threads,
|
||||||
0, // We don't need any threads one all the parts will be deleted
|
0, // We don't need any threads one all the parts will be deleted
|
||||||
@ -201,10 +212,10 @@ void LocalServer::tryInitPath()
|
|||||||
{
|
{
|
||||||
std::string path;
|
std::string path;
|
||||||
|
|
||||||
if (config().has("path"))
|
if (getClientConfiguration().has("path"))
|
||||||
{
|
{
|
||||||
// User-supplied path.
|
// User-supplied path.
|
||||||
path = config().getString("path");
|
path = getClientConfiguration().getString("path");
|
||||||
Poco::trimInPlace(path);
|
Poco::trimInPlace(path);
|
||||||
|
|
||||||
if (path.empty())
|
if (path.empty())
|
||||||
@ -263,13 +274,13 @@ void LocalServer::tryInitPath()
|
|||||||
|
|
||||||
global_context->setUserFilesPath(""); /// user's files are everywhere
|
global_context->setUserFilesPath(""); /// user's files are everywhere
|
||||||
|
|
||||||
std::string user_scripts_path = config().getString("user_scripts_path", fs::path(path) / "user_scripts/");
|
std::string user_scripts_path = getClientConfiguration().getString("user_scripts_path", fs::path(path) / "user_scripts/");
|
||||||
global_context->setUserScriptsPath(user_scripts_path);
|
global_context->setUserScriptsPath(user_scripts_path);
|
||||||
|
|
||||||
/// top_level_domains_lists
|
/// top_level_domains_lists
|
||||||
const std::string & top_level_domains_path = config().getString("top_level_domains_path", fs::path(path) / "top_level_domains/");
|
const std::string & top_level_domains_path = getClientConfiguration().getString("top_level_domains_path", fs::path(path) / "top_level_domains/");
|
||||||
if (!top_level_domains_path.empty())
|
if (!top_level_domains_path.empty())
|
||||||
TLDListsHolder::getInstance().parseConfig(fs::path(top_level_domains_path) / "", config());
|
TLDListsHolder::getInstance().parseConfig(fs::path(top_level_domains_path) / "", getClientConfiguration());
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
||||||
@ -311,14 +322,14 @@ void LocalServer::cleanup()
|
|||||||
|
|
||||||
std::string LocalServer::getInitialCreateTableQuery()
|
std::string LocalServer::getInitialCreateTableQuery()
|
||||||
{
|
{
|
||||||
if (!config().has("table-structure") && !config().has("table-file") && !config().has("table-data-format") && (!isRegularFile(STDIN_FILENO) || queries.empty()))
|
if (!getClientConfiguration().has("table-structure") && !getClientConfiguration().has("table-file") && !getClientConfiguration().has("table-data-format") && (!isRegularFile(STDIN_FILENO) || queries.empty()))
|
||||||
return {};
|
return {};
|
||||||
|
|
||||||
auto table_name = backQuoteIfNeed(config().getString("table-name", "table"));
|
auto table_name = backQuoteIfNeed(getClientConfiguration().getString("table-name", "table"));
|
||||||
auto table_structure = config().getString("table-structure", "auto");
|
auto table_structure = getClientConfiguration().getString("table-structure", "auto");
|
||||||
|
|
||||||
String table_file;
|
String table_file;
|
||||||
if (!config().has("table-file") || config().getString("table-file") == "-")
|
if (!getClientConfiguration().has("table-file") || getClientConfiguration().getString("table-file") == "-")
|
||||||
{
|
{
|
||||||
/// Use Unix tools stdin naming convention
|
/// Use Unix tools stdin naming convention
|
||||||
table_file = "stdin";
|
table_file = "stdin";
|
||||||
@ -326,7 +337,7 @@ std::string LocalServer::getInitialCreateTableQuery()
|
|||||||
else
|
else
|
||||||
{
|
{
|
||||||
/// Use regular file
|
/// Use regular file
|
||||||
auto file_name = config().getString("table-file");
|
auto file_name = getClientConfiguration().getString("table-file");
|
||||||
table_file = quoteString(file_name);
|
table_file = quoteString(file_name);
|
||||||
}
|
}
|
||||||
|
|
||||||
@ -374,18 +385,18 @@ void LocalServer::setupUsers()
|
|||||||
|
|
||||||
ConfigurationPtr users_config;
|
ConfigurationPtr users_config;
|
||||||
auto & access_control = global_context->getAccessControl();
|
auto & access_control = global_context->getAccessControl();
|
||||||
access_control.setNoPasswordAllowed(config().getBool("allow_no_password", true));
|
access_control.setNoPasswordAllowed(getClientConfiguration().getBool("allow_no_password", true));
|
||||||
access_control.setPlaintextPasswordAllowed(config().getBool("allow_plaintext_password", true));
|
access_control.setPlaintextPasswordAllowed(getClientConfiguration().getBool("allow_plaintext_password", true));
|
||||||
if (config().has("config-file") || fs::exists("config.xml"))
|
if (getClientConfiguration().has("config-file") || fs::exists("config.xml"))
|
||||||
{
|
{
|
||||||
String config_path = config().getString("config-file", "");
|
String config_path = getClientConfiguration().getString("config-file", "");
|
||||||
bool has_user_directories = config().has("user_directories");
|
bool has_user_directories = getClientConfiguration().has("user_directories");
|
||||||
const auto config_dir = fs::path{config_path}.remove_filename().string();
|
const auto config_dir = fs::path{config_path}.remove_filename().string();
|
||||||
String users_config_path = config().getString("users_config", "");
|
String users_config_path = getClientConfiguration().getString("users_config", "");
|
||||||
|
|
||||||
if (users_config_path.empty() && has_user_directories)
|
if (users_config_path.empty() && has_user_directories)
|
||||||
{
|
{
|
||||||
users_config_path = config().getString("user_directories.users_xml.path");
|
users_config_path = getClientConfiguration().getString("user_directories.users_xml.path");
|
||||||
if (fs::path(users_config_path).is_relative() && fs::exists(fs::path(config_dir) / users_config_path))
|
if (fs::path(users_config_path).is_relative() && fs::exists(fs::path(config_dir) / users_config_path))
|
||||||
users_config_path = fs::path(config_dir) / users_config_path;
|
users_config_path = fs::path(config_dir) / users_config_path;
|
||||||
}
|
}
|
||||||
@@ -409,10 +420,10 @@ void LocalServer::setupUsers()

 void LocalServer::connect()
 {
-connection_parameters = ConnectionParameters(config(), "localhost");
+connection_parameters = ConnectionParameters(getClientConfiguration(), "localhost");

 ReadBuffer * in;
-auto table_file = config().getString("table-file", "-");
+auto table_file = getClientConfiguration().getString("table-file", "-");
 if (table_file == "-" || table_file == "stdin")
 {
 in = &std_in;
@@ -433,7 +444,7 @@ try
 UseSSL use_ssl;
 thread_status.emplace();

-StackTrace::setShowAddresses(config().getBool("show_addresses_in_stack_traces", true));
+StackTrace::setShowAddresses(server_settings.show_addresses_in_stack_traces);

 setupSignalHandler();

@@ -448,7 +459,7 @@ try

 if (rlim.rlim_cur < rlim.rlim_max)
 {
-rlim.rlim_cur = config().getUInt("max_open_files", static_cast<unsigned>(rlim.rlim_max));
+rlim.rlim_cur = getClientConfiguration().getUInt("max_open_files", static_cast<unsigned>(rlim.rlim_max));
 int rc = setrlimit(RLIMIT_NOFILE, &rlim);
 if (rc != 0)
 std::cerr << fmt::format("Cannot set max number of file descriptors to {}. Try to specify max_open_files according to your system limits. error: {}", rlim.rlim_cur, errnoToString()) << '\n';
@@ -456,8 +467,8 @@ try
 }

 is_interactive = stdin_is_a_tty
-&& (config().hasOption("interactive")
-|| (queries.empty() && !config().has("table-structure") && queries_files.empty() && !config().has("table-file")));
+&& (getClientConfiguration().hasOption("interactive")
+|| (queries.empty() && !getClientConfiguration().has("table-structure") && queries_files.empty() && !getClientConfiguration().has("table-file")));

 if (!is_interactive)
 {
@@ -481,7 +492,7 @@ try

 SCOPE_EXIT({ cleanup(); });

-initTTYBuffer(toProgressOption(config().getString("progress", "default")));
+initTTYBuffer(toProgressOption(getClientConfiguration().getString("progress", "default")));
 ASTAlterCommand::setFormatAlterCommandsWithParentheses(true);

 applyCmdSettings(global_context);
@@ -489,7 +500,7 @@ try
 /// try to load user defined executable functions, throw on error and die
 try
 {
-global_context->loadOrReloadUserDefinedExecutableFunctions(config());
+global_context->loadOrReloadUserDefinedExecutableFunctions(getClientConfiguration());
 }
 catch (...)
 {
@@ -530,7 +541,7 @@ try
 }
 catch (const DB::Exception & e)
 {
-bool need_print_stack_trace = config().getBool("stacktrace", false);
+bool need_print_stack_trace = getClientConfiguration().getBool("stacktrace", false);
 std::cerr << getExceptionMessage(e, need_print_stack_trace, true) << std::endl;
 return e.code() ? e.code() : -1;
 }
@@ -542,42 +553,42 @@ catch (...)

 void LocalServer::updateLoggerLevel(const String & logs_level)
 {
-config().setString("logger.level", logs_level);
-updateLevels(config(), logger());
+getClientConfiguration().setString("logger.level", logs_level);
+updateLevels(getClientConfiguration(), logger());
 }

 void LocalServer::processConfig()
 {
-if (!queries.empty() && config().has("queries-file"))
+if (!queries.empty() && getClientConfiguration().has("queries-file"))
 throw Exception(ErrorCodes::BAD_ARGUMENTS, "Options '--query' and '--queries-file' cannot be specified at the same time");

-if (config().has("multiquery"))
+if (getClientConfiguration().has("multiquery"))
 is_multiquery = true;

-pager = config().getString("pager", "");
+pager = getClientConfiguration().getString("pager", "");

-delayed_interactive = config().has("interactive") && (!queries.empty() || config().has("queries-file"));
+delayed_interactive = getClientConfiguration().has("interactive") && (!queries.empty() || getClientConfiguration().has("queries-file"));
 if (!is_interactive || delayed_interactive)
 {
-echo_queries = config().hasOption("echo") || config().hasOption("verbose");
-ignore_error = config().getBool("ignore-error", false);
+echo_queries = getClientConfiguration().hasOption("echo") || getClientConfiguration().hasOption("verbose");
+ignore_error = getClientConfiguration().getBool("ignore-error", false);
 }

-print_stack_trace = config().getBool("stacktrace", false);
+print_stack_trace = getClientConfiguration().getBool("stacktrace", false);
 const std::string clickhouse_dialect{"clickhouse"};
-load_suggestions = (is_interactive || delayed_interactive) && !config().getBool("disable_suggestion", false)
-&& config().getString("dialect", clickhouse_dialect) == clickhouse_dialect;
-wait_for_suggestions_to_load = config().getBool("wait_for_suggestions_to_load", false);
+load_suggestions = (is_interactive || delayed_interactive) && !getClientConfiguration().getBool("disable_suggestion", false)
+&& getClientConfiguration().getString("dialect", clickhouse_dialect) == clickhouse_dialect;
+wait_for_suggestions_to_load = getClientConfiguration().getBool("wait_for_suggestions_to_load", false);

-auto logging = (config().has("logger.console")
-|| config().has("logger.level")
-|| config().has("log-level")
-|| config().has("send_logs_level")
-|| config().has("logger.log"));
+auto logging = (getClientConfiguration().has("logger.console")
+|| getClientConfiguration().has("logger.level")
+|| getClientConfiguration().has("log-level")
+|| getClientConfiguration().has("send_logs_level")
+|| getClientConfiguration().has("logger.log"));

-auto level = config().getString("log-level", "trace");
+auto level = getClientConfiguration().getString("log-level", "trace");

-if (config().has("server_logs_file"))
+if (getClientConfiguration().has("server_logs_file"))
 {
 auto poco_logs_level = Poco::Logger::parseLevel(level);
 Poco::Logger::root().setLevel(poco_logs_level);
@@ -587,10 +598,10 @@ void LocalServer::processConfig()
 }
 else
 {
-config().setString("logger", "logger");
+getClientConfiguration().setString("logger", "logger");
 auto log_level_default = logging ? level : "fatal";
-config().setString("logger.level", config().getString("log-level", config().getString("send_logs_level", log_level_default)));
-buildLoggers(config(), logger(), "clickhouse-local");
+getClientConfiguration().setString("logger.level", getClientConfiguration().getString("log-level", getClientConfiguration().getString("send_logs_level", log_level_default)));
+buildLoggers(getClientConfiguration(), logger(), "clickhouse-local");
 }

 shared_context = Context::createShared();
@@ -604,13 +615,13 @@ void LocalServer::processConfig()
 LoggerRawPtr log = &logger();

 /// Maybe useless
-if (config().has("macros"))
-global_context->setMacros(std::make_unique<Macros>(config(), "macros", log));
+if (getClientConfiguration().has("macros"))
+global_context->setMacros(std::make_unique<Macros>(getClientConfiguration(), "macros", log));

 setDefaultFormatsAndCompressionFromConfiguration();

 /// Sets external authenticators config (LDAP, Kerberos).
-global_context->setExternalAuthenticatorsConfig(config());
+global_context->setExternalAuthenticatorsConfig(getClientConfiguration());

 setupUsers();

@@ -619,12 +630,43 @@ void LocalServer::processConfig()
 global_context->getProcessList().setMaxSize(0);

 const size_t physical_server_memory = getMemoryAmount();
-const double cache_size_to_ram_max_ratio = config().getDouble("cache_size_to_ram_max_ratio", 0.5);
+size_t max_server_memory_usage = server_settings.max_server_memory_usage;
+double max_server_memory_usage_to_ram_ratio = server_settings.max_server_memory_usage_to_ram_ratio;
+
+size_t default_max_server_memory_usage = static_cast<size_t>(physical_server_memory * max_server_memory_usage_to_ram_ratio);
+
+if (max_server_memory_usage == 0)
+{
+max_server_memory_usage = default_max_server_memory_usage;
+LOG_INFO(log, "Setting max_server_memory_usage was set to {}"
+" ({} available * {:.2f} max_server_memory_usage_to_ram_ratio)",
+formatReadableSizeWithBinarySuffix(max_server_memory_usage),
+formatReadableSizeWithBinarySuffix(physical_server_memory),
+max_server_memory_usage_to_ram_ratio);
+}
+else if (max_server_memory_usage > default_max_server_memory_usage)
+{
+max_server_memory_usage = default_max_server_memory_usage;
+LOG_INFO(log, "Setting max_server_memory_usage was lowered to {}"
+" because the system has low amount of memory. The amount was"
+" calculated as {} available"
+" * {:.2f} max_server_memory_usage_to_ram_ratio",
+formatReadableSizeWithBinarySuffix(max_server_memory_usage),
+formatReadableSizeWithBinarySuffix(physical_server_memory),
+max_server_memory_usage_to_ram_ratio);
+}
+
+total_memory_tracker.setHardLimit(max_server_memory_usage);
+total_memory_tracker.setDescription("(total)");
+total_memory_tracker.setMetric(CurrentMetrics::MemoryTracking);
+
+const double cache_size_to_ram_max_ratio = server_settings.cache_size_to_ram_max_ratio;
 const size_t max_cache_size = static_cast<size_t>(physical_server_memory * cache_size_to_ram_max_ratio);

-String uncompressed_cache_policy = config().getString("uncompressed_cache_policy", DEFAULT_UNCOMPRESSED_CACHE_POLICY);
-size_t uncompressed_cache_size = config().getUInt64("uncompressed_cache_size", DEFAULT_UNCOMPRESSED_CACHE_MAX_SIZE);
-double uncompressed_cache_size_ratio = config().getDouble("uncompressed_cache_size_ratio", DEFAULT_UNCOMPRESSED_CACHE_SIZE_RATIO);
+String uncompressed_cache_policy = server_settings.uncompressed_cache_policy;
+size_t uncompressed_cache_size = server_settings.uncompressed_cache_size;
+double uncompressed_cache_size_ratio = server_settings.uncompressed_cache_size_ratio;
 if (uncompressed_cache_size > max_cache_size)
 {
 uncompressed_cache_size = max_cache_size;
@@ -632,9 +674,9 @@ void LocalServer::processConfig()
 }
 global_context->setUncompressedCache(uncompressed_cache_policy, uncompressed_cache_size, uncompressed_cache_size_ratio);

-String mark_cache_policy = config().getString("mark_cache_policy", DEFAULT_MARK_CACHE_POLICY);
-size_t mark_cache_size = config().getUInt64("mark_cache_size", DEFAULT_MARK_CACHE_MAX_SIZE);
-double mark_cache_size_ratio = config().getDouble("mark_cache_size_ratio", DEFAULT_MARK_CACHE_SIZE_RATIO);
+String mark_cache_policy = server_settings.mark_cache_policy;
+size_t mark_cache_size = server_settings.mark_cache_size;
+double mark_cache_size_ratio = server_settings.mark_cache_size_ratio;
 if (!mark_cache_size)
 LOG_ERROR(log, "Too low mark cache size will lead to severe performance degradation.");
 if (mark_cache_size > max_cache_size)
@@ -644,9 +686,9 @@ void LocalServer::processConfig()
 }
 global_context->setMarkCache(mark_cache_policy, mark_cache_size, mark_cache_size_ratio);

-String index_uncompressed_cache_policy = config().getString("index_uncompressed_cache_policy", DEFAULT_INDEX_UNCOMPRESSED_CACHE_POLICY);
-size_t index_uncompressed_cache_size = config().getUInt64("index_uncompressed_cache_size", DEFAULT_INDEX_UNCOMPRESSED_CACHE_MAX_SIZE);
-double index_uncompressed_cache_size_ratio = config().getDouble("index_uncompressed_cache_size_ratio", DEFAULT_INDEX_UNCOMPRESSED_CACHE_SIZE_RATIO);
+String index_uncompressed_cache_policy = server_settings.index_uncompressed_cache_policy;
+size_t index_uncompressed_cache_size = server_settings.index_uncompressed_cache_size;
+double index_uncompressed_cache_size_ratio = server_settings.index_uncompressed_cache_size_ratio;
 if (index_uncompressed_cache_size > max_cache_size)
 {
 index_uncompressed_cache_size = max_cache_size;
@@ -654,9 +696,9 @@ void LocalServer::processConfig()
 }
 global_context->setIndexUncompressedCache(index_uncompressed_cache_policy, index_uncompressed_cache_size, index_uncompressed_cache_size_ratio);

-String index_mark_cache_policy = config().getString("index_mark_cache_policy", DEFAULT_INDEX_MARK_CACHE_POLICY);
-size_t index_mark_cache_size = config().getUInt64("index_mark_cache_size", DEFAULT_INDEX_MARK_CACHE_MAX_SIZE);
-double index_mark_cache_size_ratio = config().getDouble("index_mark_cache_size_ratio", DEFAULT_INDEX_MARK_CACHE_SIZE_RATIO);
+String index_mark_cache_policy = server_settings.index_mark_cache_policy;
+size_t index_mark_cache_size = server_settings.index_mark_cache_size;
+double index_mark_cache_size_ratio = server_settings.index_mark_cache_size_ratio;
 if (index_mark_cache_size > max_cache_size)
 {
 index_mark_cache_size = max_cache_size;
@@ -664,7 +706,7 @@ void LocalServer::processConfig()
 }
 global_context->setIndexMarkCache(index_mark_cache_policy, index_mark_cache_size, index_mark_cache_size_ratio);

-size_t mmap_cache_size = config().getUInt64("mmap_cache_size", DEFAULT_MMAP_CACHE_MAX_SIZE);
+size_t mmap_cache_size = server_settings.mmap_cache_size;
 if (mmap_cache_size > max_cache_size)
 {
 mmap_cache_size = max_cache_size;
@@ -676,8 +718,8 @@ void LocalServer::processConfig()
 global_context->setQueryCache(0, 0, 0, 0);

 #if USE_EMBEDDED_COMPILER
-size_t compiled_expression_cache_max_size_in_bytes = config().getUInt64("compiled_expression_cache_size", DEFAULT_COMPILED_EXPRESSION_CACHE_MAX_SIZE);
-size_t compiled_expression_cache_max_elements = config().getUInt64("compiled_expression_cache_elements_size", DEFAULT_COMPILED_EXPRESSION_CACHE_MAX_ENTRIES);
+size_t compiled_expression_cache_max_size_in_bytes = server_settings.compiled_expression_cache_size;
+size_t compiled_expression_cache_max_elements = server_settings.compiled_expression_cache_elements_size;
 CompiledExpressionCacheFactory::instance().init(compiled_expression_cache_max_size_in_bytes, compiled_expression_cache_max_elements);
 #endif

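For orientation, the new memory handling above amounts to two clamping rules: the server-wide hard limit defaults to `physical RAM * max_server_memory_usage_to_ram_ratio` and an explicit value is lowered to that default if it exceeds it, while each cache is additionally capped at `physical RAM * cache_size_to_ram_max_ratio`. A minimal sketch of that arithmetic (illustrative only; the function names are hypothetical, not part of the commit):

```cpp
#include <algorithm>
#include <cstddef>

/// Effective hard limit as in the hunk above: 0 means "use the RAM-based
/// default", and anything larger than the default is lowered to it.
size_t effectiveMemoryLimit(size_t configured_limit, size_t physical_memory, double ram_ratio)
{
    const auto default_limit = static_cast<size_t>(physical_memory * ram_ratio);
    if (configured_limit == 0 || configured_limit > default_limit)
        return default_limit;
    return configured_limit;
}

/// Each cache (uncompressed, mark, index, mmap, ...) is clamped to a share of RAM.
size_t clampCacheSize(size_t requested_size, size_t physical_memory, double cache_ratio)
{
    const auto max_cache_size = static_cast<size_t>(physical_memory * cache_ratio);
    return std::min(requested_size, max_cache_size);
}
```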
@@ -689,16 +731,16 @@ void LocalServer::processConfig()
 applyCmdOptions(global_context);

 /// Load global settings from default_profile and system_profile.
-global_context->setDefaultProfiles(config());
+global_context->setDefaultProfiles(getClientConfiguration());

 /// We load temporary database first, because projections need it.
 DatabaseCatalog::instance().initializeAndLoadTemporaryDatabase();

-std::string default_database = config().getString("default_database", "default");
+std::string default_database = server_settings.default_database;
 DatabaseCatalog::instance().attachDatabase(default_database, createClickHouseLocalDatabaseOverlay(default_database, global_context));
 global_context->setCurrentDatabase(default_database);

-if (config().has("path"))
+if (getClientConfiguration().has("path"))
 {
 String path = global_context->getPath();
 fs::create_directories(fs::path(path));
@@ -713,7 +755,7 @@ void LocalServer::processConfig()
 attachInformationSchema(global_context, *createMemoryDatabaseIfNotExists(global_context, DatabaseCatalog::INFORMATION_SCHEMA_UPPERCASE));
 waitLoad(TablesLoaderForegroundPoolId, startup_system_tasks);

-if (!config().has("only-system-tables"))
+if (!getClientConfiguration().has("only-system-tables"))
 {
 DatabaseCatalog::instance().createBackgroundTasks();
 waitLoad(loadMetadata(global_context));
@@ -725,15 +767,15 @@ void LocalServer::processConfig()

 LOG_DEBUG(log, "Loaded metadata.");
 }
-else if (!config().has("no-system-tables"))
+else if (!getClientConfiguration().has("no-system-tables"))
 {
 attachSystemTablesServer(global_context, *createMemoryDatabaseIfNotExists(global_context, DatabaseCatalog::SYSTEM_DATABASE), false);
 attachInformationSchema(global_context, *createMemoryDatabaseIfNotExists(global_context, DatabaseCatalog::INFORMATION_SCHEMA));
 attachInformationSchema(global_context, *createMemoryDatabaseIfNotExists(global_context, DatabaseCatalog::INFORMATION_SCHEMA_UPPERCASE));
 }

-server_display_name = config().getString("display_name", "");
-prompt_by_server_display_name = config().getRawString("prompt_by_server_display_name.default", ":) ");
+server_display_name = getClientConfiguration().getString("display_name", "");
+prompt_by_server_display_name = getClientConfiguration().getRawString("prompt_by_server_display_name.default", ":) ");

 global_context->setQueryKindInitial();
 global_context->setQueryKind(query_kind);
@@ -811,7 +853,7 @@ void LocalServer::applyCmdSettings(ContextMutablePtr context)

 void LocalServer::applyCmdOptions(ContextMutablePtr context)
 {
-context->setDefaultFormat(config().getString("output-format", config().getString("format", is_interactive ? "PrettyCompact" : "TSV")));
+context->setDefaultFormat(getClientConfiguration().getString("output-format", getClientConfiguration().getString("format", is_interactive ? "PrettyCompact" : "TSV")));
 applyCmdSettings(context);
 }

@@ -819,33 +861,33 @@ void LocalServer::applyCmdOptions(ContextMutablePtr context)
 void LocalServer::processOptions(const OptionsDescription &, const CommandLineOptions & options, const std::vector<Arguments> &, const std::vector<Arguments> &)
 {
 if (options.count("table"))
-config().setString("table-name", options["table"].as<std::string>());
+getClientConfiguration().setString("table-name", options["table"].as<std::string>());
 if (options.count("file"))
-config().setString("table-file", options["file"].as<std::string>());
+getClientConfiguration().setString("table-file", options["file"].as<std::string>());
 if (options.count("structure"))
-config().setString("table-structure", options["structure"].as<std::string>());
+getClientConfiguration().setString("table-structure", options["structure"].as<std::string>());
 if (options.count("no-system-tables"))
-config().setBool("no-system-tables", true);
+getClientConfiguration().setBool("no-system-tables", true);
 if (options.count("only-system-tables"))
-config().setBool("only-system-tables", true);
+getClientConfiguration().setBool("only-system-tables", true);
 if (options.count("database"))
-config().setString("default_database", options["database"].as<std::string>());
+getClientConfiguration().setString("default_database", options["database"].as<std::string>());

 if (options.count("input-format"))
-config().setString("table-data-format", options["input-format"].as<std::string>());
+getClientConfiguration().setString("table-data-format", options["input-format"].as<std::string>());
 if (options.count("output-format"))
-config().setString("output-format", options["output-format"].as<std::string>());
+getClientConfiguration().setString("output-format", options["output-format"].as<std::string>());

 if (options.count("logger.console"))
-config().setBool("logger.console", options["logger.console"].as<bool>());
+getClientConfiguration().setBool("logger.console", options["logger.console"].as<bool>());
 if (options.count("logger.log"))
-config().setString("logger.log", options["logger.log"].as<std::string>());
+getClientConfiguration().setString("logger.log", options["logger.log"].as<std::string>());
 if (options.count("logger.level"))
-config().setString("logger.level", options["logger.level"].as<std::string>());
+getClientConfiguration().setString("logger.level", options["logger.level"].as<std::string>());
 if (options.count("send_logs_level"))
-config().setString("send_logs_level", options["send_logs_level"].as<std::string>());
+getClientConfiguration().setString("send_logs_level", options["send_logs_level"].as<std::string>());
 if (options.count("wait_for_suggestions_to_load"))
-config().setBool("wait_for_suggestions_to_load", true);
+getClientConfiguration().setBool("wait_for_suggestions_to_load", true);
 }

 void LocalServer::readArguments(int argc, char ** argv, Arguments & common_arguments, std::vector<Arguments> &, std::vector<Arguments> &)
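All of the `processOptions` branches above follow one pattern: a parsed command-line option is written into the Poco configuration object returned by `getClientConfiguration()`, and later read back with the matching getter. A self-contained round-trip sketch using a plain `Poco::Util::MapConfiguration` (illustrative only; the real code uses the client's layered configuration):

```cpp
#include <iostream>
#include <Poco/AutoPtr.h>
#include <Poco/Util/MapConfiguration.h>

int main()
{
    // Stand-in for the configuration object behind getClientConfiguration().
    Poco::AutoPtr<Poco::Util::MapConfiguration> config = new Poco::Util::MapConfiguration;

    // e.g. "--table numbers --no-system-tables" becomes:
    config->setString("table-name", "numbers");
    config->setBool("no-system-tables", true);

    // ...and is read back later the way processConfig() reads it:
    std::cout << config->getString("table-name", "<unset>") << '\n';
    std::cout << config->getBool("no-system-tables", false) << '\n';
    return 0;
}
```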
@@ -30,6 +30,9 @@ public:
 int main(const std::vector<String> & /*args*/) override;

 protected:

+Poco::Util::LayeredConfiguration & getClientConfiguration() override;

 void connect() override;

 void processError(const String & query) const override;
@@ -63,6 +66,8 @@ private:
 void applyCmdOptions(ContextMutablePtr context);
 void applyCmdSettings(ContextMutablePtr context);

+ServerSettings server_settings;

 std::optional<StatusFile> status;
 std::optional<std::filesystem::path> temporary_directory_to_delete;

@@ -13,6 +13,7 @@

 #include <fmt/format.h>

+#include "config.h"
 #include "config_tools.h"

 #include <Common/StringUtils.h>
@@ -439,6 +440,14 @@ extern "C"
 }
 #endif

+/// Prevent messages from JeMalloc in the release build.
+/// Some of these messages are non-actionable for the users, such as:
+/// <jemalloc>: Number of CPUs detected is not deterministic. Per-CPU arena disabled.
+#if USE_JEMALLOC && defined(NDEBUG) && !defined(SANITIZER)
+extern "C" void (*malloc_message)(void *, const char *s);
+__attribute__((constructor(0))) void init_je_malloc_message() { malloc_message = [](void *, const char *){}; }
+#endif

 /// This allows to implement assert to forbid initialization of a class in static constructors.
 /// Usage:
 ///
@@ -1003,6 +1003,8 @@ try

 ServerUUID::load(path / "uuid", log);

+PlacementInfo::PlacementInfo::instance().initialize(config());
+
 zkutil::validateZooKeeperConfig(config());
 bool has_zookeeper = zkutil::hasZooKeeperConfig(config());

@@ -1817,11 +1819,6 @@ try

 }

-if (config().has(DB::PlacementInfo::PLACEMENT_CONFIG_PREFIX))
-{
-PlacementInfo::PlacementInfo::instance().initialize(config());
-}
-
 {
 std::lock_guard lock(servers_lock);
 /// We should start interserver communications before (and more important shutdown after) tables.
@@ -438,7 +438,7 @@ void RestorerFromBackup::findTableInBackupImpl(const QualifiedTableName & table_
 String create_table_query_str = serializeAST(*create_table_query);

 bool is_predefined_table = DatabaseCatalog::instance().isPredefinedTable(StorageID{table_name.database, table_name.table});
-auto table_dependencies = getDependenciesFromCreateQuery(context, table_name, create_table_query);
+auto table_dependencies = getDependenciesFromCreateQuery(context, table_name, create_table_query, context->getCurrentDatabase());
 bool table_has_data = backup->hasFiles(data_path_in_backup);

 std::lock_guard lock{mutex};
@@ -222,7 +222,7 @@ add_object_library(clickhouse_storages_mergetree Storages/MergeTree)
 add_object_library(clickhouse_storages_statistics Storages/Statistics)
 add_object_library(clickhouse_storages_liveview Storages/LiveView)
 add_object_library(clickhouse_storages_windowview Storages/WindowView)
-add_object_library(clickhouse_storages_s3queue Storages/S3Queue)
+add_object_library(clickhouse_storages_s3queue Storages/ObjectStorageQueue)
 add_object_library(clickhouse_storages_materializedview Storages/MaterializedView)
 add_object_library(clickhouse_client Client)
 add_object_library(clickhouse_bridge BridgeHelper)
@@ -302,8 +302,29 @@ public:


 ClientBase::~ClientBase() = default;
-ClientBase::ClientBase() = default;
+ClientBase::ClientBase(
+int in_fd_,
+int out_fd_,
+int err_fd_,
+std::istream & input_stream_,
+std::ostream & output_stream_,
+std::ostream & error_stream_
+)
+: std_in(in_fd_)
+, std_out(out_fd_)
+, progress_indication(output_stream_, in_fd_, err_fd_)
+, in_fd(in_fd_)
+, out_fd(out_fd_)
+, err_fd(err_fd_)
+, input_stream(input_stream_)
+, output_stream(output_stream_)
+, error_stream(error_stream_)
+{
+stdin_is_a_tty = isatty(in_fd);
+stdout_is_a_tty = isatty(out_fd);
+stderr_is_a_tty = isatty(err_fd);
+terminal_width = getTerminalWidth(in_fd, err_fd);
+}

 void ClientBase::setupSignalHandler()
 {
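The new constructor above injects the file descriptors and streams instead of hard-wiring `STDIN_FILENO`, `std::cout`, and `std::cerr`, so client output can be captured or redirected by embedding code and tests. A simplified stand-in class showing what this dependency injection buys (not the real ClientBase API, just the pattern):

```cpp
#include <iostream>
#include <sstream>
#include <string>
#include <unistd.h>

/// Simplified stand-in for the idea behind the injected-stream constructor:
/// everything the "client" prints goes to the streams it was given.
class EchoClient
{
public:
    EchoClient(int in_fd, std::istream & in, std::ostream & out, std::ostream & err)
        : in_(in), out_(out), err_(err), interactive_(isatty(in_fd)) {}

    void run()
    {
        std::string line;
        while (std::getline(in_, line))
            out_ << "echo: " << line << '\n';
        if (interactive_)
            err_ << "Bye.\n";
    }

private:
    std::istream & in_;
    std::ostream & out_;
    std::ostream & err_;
    bool interactive_;
};

int main()
{
    std::istringstream fake_input("SELECT 1\n");
    std::ostringstream captured_out, captured_err;
    EchoClient client(STDIN_FILENO, fake_input, captured_out, captured_err);
    client.run();
    std::cout << captured_out.str();   // prints "echo: SELECT 1"
    return 0;
}
```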
@@ -330,7 +351,7 @@ void ClientBase::setupSignalHandler()
 }


-ASTPtr ClientBase::parseQuery(const char *& pos, const char * end, const Settings & settings, bool allow_multi_statements, bool is_interactive, bool ignore_error)
+ASTPtr ClientBase::parseQuery(const char *& pos, const char * end, const Settings & settings, bool allow_multi_statements)
 {
 std::unique_ptr<IParserBase> parser;
 ASTPtr res;
@@ -359,7 +380,7 @@ ASTPtr ClientBase::parseQuery(const char *& pos, const char * end, const Setting

 if (!res)
 {
-std::cerr << std::endl << message << std::endl << std::endl;
+error_stream << std::endl << message << std::endl << std::endl;
 return nullptr;
 }
 }
@@ -373,11 +394,11 @@ ASTPtr ClientBase::parseQuery(const char *& pos, const char * end, const Setting

 if (is_interactive)
 {
-std::cout << std::endl;
-WriteBufferFromOStream res_buf(std::cout, 4096);
+output_stream << std::endl;
+WriteBufferFromOStream res_buf(output_stream, 4096);
 formatAST(*res, res_buf);
 res_buf.finalize();
-std::cout << std::endl << std::endl;
+output_stream << std::endl << std::endl;
 }

 return res;
@@ -481,7 +502,7 @@ void ClientBase::onData(Block & block, ASTPtr parsed_query)
 if (need_render_progress && tty_buf)
 {
 if (select_into_file && !select_into_file_and_stdout)
-std::cerr << "\r";
+error_stream << "\r";
 progress_indication.writeProgress(*tty_buf);
 }
 }
@@ -741,17 +762,17 @@ bool ClientBase::isRegularFile(int fd)

 void ClientBase::setDefaultFormatsAndCompressionFromConfiguration()
 {
-if (config().has("output-format"))
+if (getClientConfiguration().has("output-format"))
 {
-default_output_format = config().getString("output-format");
+default_output_format = getClientConfiguration().getString("output-format");
 is_default_format = false;
 }
-else if (config().has("format"))
+else if (getClientConfiguration().has("format"))
 {
-default_output_format = config().getString("format");
+default_output_format = getClientConfiguration().getString("format");
 is_default_format = false;
 }
-else if (config().has("vertical"))
+else if (getClientConfiguration().has("vertical"))
 {
 default_output_format = "Vertical";
 is_default_format = false;
@@ -777,17 +798,17 @@ void ClientBase::setDefaultFormatsAndCompressionFromConfiguration()
 default_output_format = "TSV";
 }

-if (config().has("input-format"))
+if (getClientConfiguration().has("input-format"))
 {
-default_input_format = config().getString("input-format");
+default_input_format = getClientConfiguration().getString("input-format");
 }
-else if (config().has("format"))
+else if (getClientConfiguration().has("format"))
 {
-default_input_format = config().getString("format");
+default_input_format = getClientConfiguration().getString("format");
 }
-else if (config().getString("table-file", "-") != "-")
+else if (getClientConfiguration().getString("table-file", "-") != "-")
 {
-auto file_name = config().getString("table-file");
+auto file_name = getClientConfiguration().getString("table-file");
 std::optional<String> format_from_file_name = FormatFactory::instance().tryGetFormatFromFileName(file_name);
 if (format_from_file_name)
 default_input_format = *format_from_file_name;
@@ -803,7 +824,7 @@ void ClientBase::setDefaultFormatsAndCompressionFromConfiguration()
 default_input_format = "TSV";
 }

-format_max_block_size = config().getUInt64("format_max_block_size",
+format_max_block_size = getClientConfiguration().getUInt64("format_max_block_size",
 global_context->getSettingsRef().max_block_size);

 /// Setting value from cmd arg overrides one from config
@@ -813,7 +834,7 @@ void ClientBase::setDefaultFormatsAndCompressionFromConfiguration()
 }
 else
 {
-insert_format_max_block_size = config().getUInt64("insert_format_max_block_size",
+insert_format_max_block_size = getClientConfiguration().getUInt64("insert_format_max_block_size",
 global_context->getSettingsRef().max_insert_block_size);
 }
 }
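The hunks above only swap `config()` for `getClientConfiguration()`; the selection order itself is unchanged: for output, `--output-format` wins over `--format`, then `--vertical`, then the fallback, and for input, `--input-format` wins over `--format`, then the `--table-file` extension. A compact sketch of the output-format precedence (simplified; the real code also considers interactive mode and terminal settings):

```cpp
#include <optional>
#include <string>

struct FormatOptions
{
    std::optional<std::string> output_format;  // --output-format
    std::optional<std::string> format;         // --format
    bool vertical = false;                     // --vertical
};

/// Illustrative precedence only; the fallback in the real code depends on
/// whether the session is interactive.
std::string resolveDefaultOutputFormat(const FormatOptions & opts)
{
    if (opts.output_format)
        return *opts.output_format;
    if (opts.format)
        return *opts.format;
    if (opts.vertical)
        return "Vertical";
    return "TSV";
}
```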
@@ -924,9 +945,7 @@ void ClientBase::processTextAsSingleQuery(const String & full_query)
 const char * begin = full_query.data();
 auto parsed_query = parseQuery(begin, begin + full_query.size(),
 global_context->getSettingsRef(),
-/*allow_multi_statements=*/ false,
-is_interactive,
-ignore_error);
+/*allow_multi_statements=*/ false);

 if (!parsed_query)
 return;
@@ -1100,7 +1119,7 @@ void ClientBase::processOrdinaryQuery(const String & query_to_execute, ASTPtr pa
 /// has been received yet.
 if (processed_rows == 0 && e.code() == ErrorCodes::DEADLOCK_AVOIDED && --retries_left)
 {
-std::cerr << "Got a transient error from the server, will"
+error_stream << "Got a transient error from the server, will"
 << " retry (" << retries_left << " retries left)";
 }
 else
@@ -1154,7 +1173,7 @@ void ClientBase::receiveResult(ASTPtr parsed_query, Int32 signals_before_stop, b
 double elapsed = receive_watch.elapsedSeconds();
 if (break_on_timeout && elapsed > receive_timeout.totalSeconds())
 {
-std::cout << "Timeout exceeded while receiving data from server."
+output_stream << "Timeout exceeded while receiving data from server."
 << " Waited for " << static_cast<size_t>(elapsed) << " seconds,"
 << " timeout is " << receive_timeout.totalSeconds() << " seconds." << std::endl;

@@ -1189,7 +1208,7 @@ void ClientBase::receiveResult(ASTPtr parsed_query, Int32 signals_before_stop, b

 if (cancelled && is_interactive)
 {
-std::cout << "Query was cancelled." << std::endl;
+output_stream << "Query was cancelled." << std::endl;
 cancelled_printed = true;
 }
 }
@@ -1308,9 +1327,9 @@ void ClientBase::onEndOfStream()
 if (is_interactive)
 {
 if (cancelled && !cancelled_printed)
-std::cout << "Query was cancelled." << std::endl;
+output_stream << "Query was cancelled." << std::endl;
 else if (!written_first_block)
-std::cout << "Ok." << std::endl;
+output_stream << "Ok." << std::endl;
 }
 }

@@ -1863,7 +1882,7 @@ void ClientBase::cancelQuery()
 progress_indication.clearProgressOutput(*tty_buf);

 if (is_interactive)
-std::cout << "Cancelling query." << std::endl;
+output_stream << "Cancelling query." << std::endl;

 cancelled = true;
 }
@@ -2026,7 +2045,7 @@ void ClientBase::processParsedSingleQuery(const String & full_query, const Strin
 {
 const String & new_database = use_query->getDatabase();
 /// If the client initiates the reconnection, it takes the settings from the config.
-config().setString("database", new_database);
+getClientConfiguration().setString("database", new_database);
 /// If the connection initiates the reconnection, it uses its variable.
 connection->setDefaultDatabase(new_database);
 }
@@ -2046,21 +2065,21 @@ void ClientBase::processParsedSingleQuery(const String & full_query, const Strin

 if (is_interactive)
 {
-std::cout << std::endl;
+output_stream << std::endl;
 if (!server_exception || processed_rows != 0)
-std::cout << processed_rows << " row" << (processed_rows == 1 ? "" : "s") << " in set. ";
-std::cout << "Elapsed: " << progress_indication.elapsedSeconds() << " sec. ";
+output_stream << processed_rows << " row" << (processed_rows == 1 ? "" : "s") << " in set. ";
+output_stream << "Elapsed: " << progress_indication.elapsedSeconds() << " sec. ";
 progress_indication.writeFinalProgress();
-std::cout << std::endl << std::endl;
+output_stream << std::endl << std::endl;
 }
-else if (print_time_to_stderr)
+else if (getClientConfiguration().getBool("print-time-to-stderr", false))
 {
-std::cerr << progress_indication.elapsedSeconds() << "\n";
+error_stream << progress_indication.elapsedSeconds() << "\n";
 }

-if (!is_interactive && print_num_processed_rows)
+if (!is_interactive && getClientConfiguration().getBool("print-num-processed-rows", false))
 {
-std::cout << "Processed rows: " << processed_rows << "\n";
+output_stream << "Processed rows: " << processed_rows << "\n";
 }

 if (have_error && report_error)
@@ -2110,9 +2129,7 @@ MultiQueryProcessingStage ClientBase::analyzeMultiQueryText(
 {
 parsed_query = parseQuery(this_query_end, all_queries_end,
 global_context->getSettingsRef(),
-/*allow_multi_statements=*/ true,
-is_interactive,
-ignore_error);
+/*allow_multi_statements=*/ true);
 }
 catch (const Exception & e)
 {
@@ -2428,12 +2445,12 @@ void ClientBase::initQueryIdFormats()
 return;

 /// Initialize query_id_formats if any
-if (config().has("query_id_formats"))
+if (getClientConfiguration().has("query_id_formats"))
 {
 Poco::Util::AbstractConfiguration::Keys keys;
-config().keys("query_id_formats", keys);
+getClientConfiguration().keys("query_id_formats", keys);
 for (const auto & name : keys)
-query_id_formats.emplace_back(name + ":", config().getString("query_id_formats." + name));
+query_id_formats.emplace_back(name + ":", getClientConfiguration().getString("query_id_formats." + name));
 }

 if (query_id_formats.empty())
@@ -2478,9 +2495,9 @@ bool ClientBase::addMergeTreeSettings(ASTCreateQuery & ast_create)

 void ClientBase::runInteractive()
 {
-if (config().has("query_id"))
+if (getClientConfiguration().has("query_id"))
 throw Exception(ErrorCodes::BAD_ARGUMENTS, "query_id could be specified only in non-interactive mode");
-if (print_time_to_stderr)
+if (getClientConfiguration().getBool("print-time-to-stderr", false))
 throw Exception(ErrorCodes::BAD_ARGUMENTS, "time option could be specified only in non-interactive mode");

 initQueryIdFormats();
@@ -2493,9 +2510,9 @@ void ClientBase::runInteractive()
 {
 /// Load suggestion data from the server.
 if (global_context->getApplicationType() == Context::ApplicationType::CLIENT)
-suggest->load<Connection>(global_context, connection_parameters, config().getInt("suggestion_limit"), wait_for_suggestions_to_load);
+suggest->load<Connection>(global_context, connection_parameters, getClientConfiguration().getInt("suggestion_limit"), wait_for_suggestions_to_load);
 else if (global_context->getApplicationType() == Context::ApplicationType::LOCAL)
-suggest->load<LocalConnection>(global_context, connection_parameters, config().getInt("suggestion_limit"), wait_for_suggestions_to_load);
+suggest->load<LocalConnection>(global_context, connection_parameters, getClientConfiguration().getInt("suggestion_limit"), wait_for_suggestions_to_load);
 }

 if (home_path.empty())
@@ -2506,8 +2523,8 @@ void ClientBase::runInteractive()
 }

 /// Load command history if present.
-if (config().has("history_file"))
-history_file = config().getString("history_file");
+if (getClientConfiguration().has("history_file"))
+history_file = getClientConfiguration().getString("history_file");
 else
 {
 auto * history_file_from_env = getenv("CLICKHOUSE_HISTORY_FILE"); // NOLINT(concurrency-mt-unsafe)
@@ -2528,7 +2545,7 @@ void ClientBase::runInteractive()
 {
 if (e.getErrno() != EEXIST)
 {
-std::cerr << getCurrentExceptionMessage(false) << '\n';
+error_stream << getCurrentExceptionMessage(false) << '\n';
 }
 }
 }
@@ -2539,13 +2556,13 @@ void ClientBase::runInteractive()

 #if USE_REPLXX
 replxx::Replxx::highlighter_callback_t highlight_callback{};
-if (config().getBool("highlight", true))
+if (getClientConfiguration().getBool("highlight", true))
 highlight_callback = highlight;

 ReplxxLineReader lr(
 *suggest,
 history_file,
-config().has("multiline"),
+getClientConfiguration().has("multiline"),
 query_extenders,
 query_delimiters,
 word_break_characters,
@@ -2553,7 +2570,7 @@ void ClientBase::runInteractive()
 #else
 LineReader lr(
 history_file,
-config().has("multiline"),
+getClientConfiguration().has("multiline"),
 query_extenders,
 query_delimiters,
 word_break_characters);
@@ -2633,7 +2650,7 @@ void ClientBase::runInteractive()
 {
 // If a separate connection loading suggestions failed to open a new session,
 // use the main session to receive them.
-suggest->load(*connection, connection_parameters.timeouts, config().getInt("suggestion_limit"), global_context->getClientInfo());
+suggest->load(*connection, connection_parameters.timeouts, getClientConfiguration().getInt("suggestion_limit"), global_context->getClientInfo());
 }

 try
@@ -2648,7 +2665,7 @@ void ClientBase::runInteractive()
 break;

 /// We don't need to handle the test hints in the interactive mode.
-std::cerr << "Exception on client:" << std::endl << getExceptionMessage(e, print_stack_trace, true) << std::endl << std::endl;
+error_stream << "Exception on client:" << std::endl << getExceptionMessage(e, print_stack_trace, true) << std::endl << std::endl;
 client_exception.reset(e.clone());
 }

@@ -2665,11 +2682,11 @@ void ClientBase::runInteractive()
 while (true);

 if (isNewYearMode())
-std::cout << "Happy new year." << std::endl;
+output_stream << "Happy new year." << std::endl;
 else if (isChineseNewYearMode(local_tz))
-std::cout << "Happy Chinese new year. 春节快乐!" << std::endl;
+output_stream << "Happy Chinese new year. 春节快乐!" << std::endl;
 else
-std::cout << "Bye." << std::endl;
+output_stream << "Bye." << std::endl;
 }


@@ -2680,7 +2697,7 @@ bool ClientBase::processMultiQueryFromFile(const String & file_name)
 ReadBufferFromFile in(file_name);
 readStringUntilEOF(queries_from_file, in);

-if (!has_log_comment)
+if (!getClientConfiguration().has("log_comment"))
 {
 Settings settings = global_context->getSettings();
 /// NOTE: cannot use even weakly_canonical() since it fails for /dev/stdin due to resolving of "pipe:[X]"
@@ -2789,13 +2806,13 @@ void ClientBase::clearTerminal()
 /// It is needed if garbage is left in terminal.
 /// Show cursor. It can be left hidden by invocation of previous programs.
 /// A test for this feature: perl -e 'print "x"x100000'; echo -ne '\033[0;0H\033[?25l'; clickhouse-client
-std::cout << "\033[0J" "\033[?25h";
+output_stream << "\033[0J" "\033[?25h";
 }


 void ClientBase::showClientVersion()
 {
-std::cout << VERSION_NAME << " " + getName() + " version " << VERSION_STRING << VERSION_OFFICIAL << "." << std::endl;
+output_stream << VERSION_NAME << " " + getName() + " version " << VERSION_STRING << VERSION_OFFICIAL << "." << std::endl;
 }

 namespace
@@ -2862,7 +2879,10 @@ private:

 }

+/// Enable optimizations even in debug builds because otherwise options parsing becomes extremely slow affecting .sh tests
+#if defined(__clang__)
+#pragma clang optimize on
+#endif
 void ClientBase::parseAndCheckOptions(OptionsDescription & options_description, po::variables_map & options, Arguments & arguments)
 {
 if (allow_repeated_settings)
@@ -3080,18 +3100,18 @@ void ClientBase::init(int argc, char ** argv)

 if (options.count("version-clean"))
 {
-std::cout << VERSION_STRING;
+output_stream << VERSION_STRING;
 exit(0); // NOLINT(concurrency-mt-unsafe)
 }

 if (options.count("verbose"))
-config().setBool("verbose", true);
+getClientConfiguration().setBool("verbose", true);

 /// Output of help message.
 if (options.count("help")
 || (options.count("host") && options["host"].as<std::string>() == "elp")) /// If user writes -help instead of --help.
 {
-if (config().getBool("verbose", false))
+if (getClientConfiguration().getBool("verbose", false))
 printHelpMessage(options_description, true);
 else
 printHelpMessage(options_description_non_verbose, false);
@@ -3099,72 +3119,75 @@ void ClientBase::init(int argc, char ** argv)
 }

 /// Common options for clickhouse-client and clickhouse-local.

+/// Output execution time to stderr in batch mode.
 if (options.count("time"))
-print_time_to_stderr = true;
+getClientConfiguration().setBool("print-time-to-stderr", true);
 if (options.count("query"))
 queries = options["query"].as<std::vector<std::string>>();
 if (options.count("query_id"))
-config().setString("query_id", options["query_id"].as<std::string>());
+getClientConfiguration().setString("query_id", options["query_id"].as<std::string>());
 if (options.count("database"))
-config().setString("database", options["database"].as<std::string>());
+getClientConfiguration().setString("database", options["database"].as<std::string>());
 if (options.count("config-file"))
-config().setString("config-file", options["config-file"].as<std::string>());
+getClientConfiguration().setString("config-file", options["config-file"].as<std::string>());
 if (options.count("queries-file"))
 queries_files = options["queries-file"].as<std::vector<std::string>>();
 if (options.count("multiline"))
-config().setBool("multiline", true);
+getClientConfiguration().setBool("multiline", true);
 if (options.count("multiquery"))
-config().setBool("multiquery", true);
+getClientConfiguration().setBool("multiquery", true);
 if (options.count("ignore-error"))
-config().setBool("ignore-error", true);
+getClientConfiguration().setBool("ignore-error", true);
 if (options.count("format"))
-config().setString("format", options["format"].as<std::string>());
+getClientConfiguration().setString("format", options["format"].as<std::string>());
 if (options.count("output-format"))
-config().setString("output-format", options["output-format"].as<std::string>());
+getClientConfiguration().setString("output-format", options["output-format"].as<std::string>());
 if (options.count("vertical"))
-config().setBool("vertical", true);
+getClientConfiguration().setBool("vertical", true);
 if (options.count("stacktrace"))
-config().setBool("stacktrace", true);
+getClientConfiguration().setBool("stacktrace", true);
 if (options.count("print-profile-events"))
-config().setBool("print-profile-events", true);
+getClientConfiguration().setBool("print-profile-events", true);
 if (options.count("profile-events-delay-ms"))
-config().setUInt64("profile-events-delay-ms", options["profile-events-delay-ms"].as<UInt64>());
+getClientConfiguration().setUInt64("profile-events-delay-ms", options["profile-events-delay-ms"].as<UInt64>());
+/// Whether to print the number of processed rows at
 if (options.count("processed-rows"))
-print_num_processed_rows = true;
+getClientConfiguration().setBool("print-num-processed-rows", true);
 if (options.count("progress"))
 {
 switch (options["progress"].as<ProgressOption>())
 {
 case DEFAULT:
-config().setString("progress", "default");
+getClientConfiguration().setString("progress", "default");
 break;
 case OFF:
-config().setString("progress", "off");
+getClientConfiguration().setString("progress", "off");
 break;
 case TTY:
-config().setString("progress", "tty");
+getClientConfiguration().setString("progress", "tty");
 break;
 case ERR:
-config().setString("progress", "err");
+getClientConfiguration().setString("progress", "err");
 break;
 }
 }
 if (options.count("echo"))
-config().setBool("echo", true);
+getClientConfiguration().setBool("echo", true);
 if (options.count("disable_suggestion"))
-config().setBool("disable_suggestion", true);
+getClientConfiguration().setBool("disable_suggestion", true);
 if (options.count("wait_for_suggestions_to_load"))
-config().setBool("wait_for_suggestions_to_load", true);
+getClientConfiguration().setBool("wait_for_suggestions_to_load", true);
 if (options.count("suggestion_limit"))
-config().setInt("suggestion_limit", options["suggestion_limit"].as<int>());
+getClientConfiguration().setInt("suggestion_limit", options["suggestion_limit"].as<int>());
 if (options.count("highlight"))
-config().setBool("highlight", options["highlight"].as<bool>());
+getClientConfiguration().setBool("highlight", options["highlight"].as<bool>());
 if (options.count("history_file"))
-config().setString("history_file", options["history_file"].as<std::string>());
+getClientConfiguration().setString("history_file", options["history_file"].as<std::string>());
 if (options.count("interactive"))
-config().setBool("interactive", true);
|
getClientConfiguration().setBool("interactive", true);
|
||||||
if (options.count("pager"))
|
if (options.count("pager"))
|
||||||
config().setString("pager", options["pager"].as<std::string>());
|
getClientConfiguration().setString("pager", options["pager"].as<std::string>());
|
||||||
|
|
||||||
if (options.count("log-level"))
|
if (options.count("log-level"))
|
||||||
Poco::Logger::root().setLevel(options["log-level"].as<std::string>());
|
Poco::Logger::root().setLevel(options["log-level"].as<std::string>());
|
||||||
@ -3182,13 +3205,13 @@ void ClientBase::init(int argc, char ** argv)
|
|||||||
alias_names.reserve(options_description.main_description->options().size());
|
alias_names.reserve(options_description.main_description->options().size());
|
||||||
for (const auto& option : options_description.main_description->options())
|
for (const auto& option : options_description.main_description->options())
|
||||||
alias_names.insert(option->long_name());
|
alias_names.insert(option->long_name());
|
||||||
argsToConfig(common_arguments, config(), 100, &alias_names);
|
argsToConfig(common_arguments, getClientConfiguration(), 100, &alias_names);
|
||||||
}
|
}
|
||||||
|
|
||||||
clearPasswordFromCommandLine(argc, argv);
|
clearPasswordFromCommandLine(argc, argv);
|
||||||
|
|
||||||
/// Limit on total memory usage
|
/// Limit on total memory usage
|
||||||
std::string max_client_memory_usage = config().getString("max_memory_usage_in_client", "0" /*default value*/);
|
std::string max_client_memory_usage = getClientConfiguration().getString("max_memory_usage_in_client", "0" /*default value*/);
|
||||||
if (max_client_memory_usage != "0")
|
if (max_client_memory_usage != "0")
|
||||||
{
|
{
|
||||||
UInt64 max_client_memory_usage_int = parseWithSizeSuffix<UInt64>(max_client_memory_usage.c_str(), max_client_memory_usage.length());
|
UInt64 max_client_memory_usage_int = parseWithSizeSuffix<UInt64>(max_client_memory_usage.c_str(), max_client_memory_usage.length());
|
||||||
@ -3197,8 +3220,6 @@ void ClientBase::init(int argc, char ** argv)
|
|||||||
total_memory_tracker.setDescription("(total)");
|
total_memory_tracker.setDescription("(total)");
|
||||||
total_memory_tracker.setMetric(CurrentMetrics::MemoryTracking);
|
total_memory_tracker.setMetric(CurrentMetrics::MemoryTracking);
|
||||||
}
|
}
|
||||||
|
|
||||||
has_log_comment = config().has("log_comment");
|
|
||||||
}
|
}
|
||||||
|
|
||||||
}
|
}
|
||||||
|
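The hunks above (apparently the option handling in ClientBase) route every parsed command-line option through a virtual `getClientConfiguration()` accessor instead of going to `Poco::Util::Application::config()` directly, so clickhouse-client and clickhouse-local can each supply their own configuration object. A minimal, hypothetical sketch of that pattern, not the actual ClickHouse classes, assuming only the public `Poco::Util::LayeredConfiguration` API:

```cpp
// Illustrative sketch only; class and method names besides the Poco ones are made up.
#include <Poco/AutoPtr.h>
#include <Poco/Util/LayeredConfiguration.h>

class ClientBaseSketch
{
public:
    virtual ~ClientBaseSketch() = default;

protected:
    /// Each concrete client returns the configuration object it owns.
    virtual Poco::Util::LayeredConfiguration & getClientConfiguration() = 0;

    /// Mirrors the pattern above: a boolean CLI flag becomes a config key.
    void applyTimeOption(bool time_option_set)
    {
        if (time_option_set)
            getClientConfiguration().setBool("print-time-to-stderr", true);
    }
};

class LocalClientSketch : public ClientBaseSketch
{
    /// A standalone configuration, independent of any Poco::Util::Application.
    Poco::AutoPtr<Poco::Util::LayeredConfiguration> config = new Poco::Util::LayeredConfiguration;

protected:
    Poco::Util::LayeredConfiguration & getClientConfiguration() override { return *config; }
};
```

The shared option-parsing code then only depends on the abstract accessor, which is what the replacement of member flags such as `print_time_to_stderr` with config keys relies on.
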
@@ -18,7 +18,6 @@
#include <Storages/SelectQueryInfo.h>
#include <Storages/MergeTree/MergeTreeSettings.h>


namespace po = boost::program_options;


@@ -67,13 +66,22 @@ class ClientBase : public Poco::Util::Application, public IHints<2>
public:
using Arguments = std::vector<String>;

-ClientBase();
+explicit ClientBase
+(
+int in_fd_ = STDIN_FILENO,
+int out_fd_ = STDOUT_FILENO,
+int err_fd_ = STDERR_FILENO,
+std::istream & input_stream_ = std::cin,
+std::ostream & output_stream_ = std::cout,
+std::ostream & error_stream_ = std::cerr
+);

~ClientBase() override;

void init(int argc, char ** argv);

std::vector<String> getAllRegisteredNames() const override { return cmd_options; }
-static ASTPtr parseQuery(const char *& pos, const char * end, const Settings & settings, bool allow_multi_statements, bool is_interactive, bool ignore_error);
+ASTPtr parseQuery(const char *& pos, const char * end, const Settings & settings, bool allow_multi_statements);

protected:
void runInteractive();
@@ -82,6 +90,9 @@ protected:
char * argv0 = nullptr;
void runLibFuzzer();

+/// This is the analogue of Poco::Application::config()
+virtual Poco::Util::LayeredConfiguration & getClientConfiguration() = 0;

virtual bool processWithFuzzing(const String &)
{
throw Exception(ErrorCodes::NOT_IMPLEMENTED, "Query processing with fuzzing is not implemented");
@@ -107,7 +118,7 @@ protected:
String & query_to_execute, ASTPtr & parsed_query, const String & all_queries_text,
std::unique_ptr<Exception> & current_exception);

-static void clearTerminal();
+void clearTerminal();
void showClientVersion();

using ProgramOptionsDescription = boost::program_options::options_description;
@@ -206,7 +217,6 @@ protected:

bool echo_queries = false; /// Print queries before execution in batch mode.
bool ignore_error = false; /// In case of errors, don't print error message, continue to next query. Only applicable for non-interactive mode.
-bool print_time_to_stderr = false; /// Output execution time to stderr in batch mode.

std::optional<Suggest> suggest;
bool load_suggestions = false;
@@ -251,9 +261,9 @@ protected:
ConnectionParameters connection_parameters;

/// Buffer that reads from stdin in batch mode.
-ReadBufferFromFileDescriptor std_in{STDIN_FILENO};
+ReadBufferFromFileDescriptor std_in;
/// Console output.
-WriteBufferFromFileDescriptor std_out{STDOUT_FILENO};
+WriteBufferFromFileDescriptor std_out;
std::unique_ptr<ShellCommand> pager_cmd;

/// The user can specify to redirect query output to a file.
@@ -284,7 +294,6 @@ protected:
bool need_render_profile_events = true;
bool written_first_block = false;
size_t processed_rows = 0; /// How many rows have been read or written.
-bool print_num_processed_rows = false; /// Whether to print the number of processed rows at

bool print_stack_trace = false;
/// The last exception that was received from the server. Is used for the
@@ -332,8 +341,14 @@ protected:
bool cancelled = false;
bool cancelled_printed = false;

-/// Does log_comment has specified by user?
-bool has_log_comment = false;
+/// Unpacked descriptors and streams for the ease of use.
+int in_fd = STDIN_FILENO;
+int out_fd = STDOUT_FILENO;
+int err_fd = STDERR_FILENO;
+std::istream & input_stream;
+std::ostream & output_stream;
+std::ostream & error_stream;

};

}

@@ -23,14 +23,6 @@ void trim(String & s)
s.erase(std::find_if(s.rbegin(), s.rend(), [](int ch) { return !std::isspace(ch); }).base(), s.end());
}

-/// Check if multi-line query is inserted from the paste buffer.
-/// Allows delaying the start of query execution until the entirety of query is inserted.
-bool hasInputData()
-{
-pollfd fd{STDIN_FILENO, POLLIN, 0};
-return poll(&fd, 1, 0) == 1;
-}

struct NoCaseCompare
{
bool operator()(const std::string & str1, const std::string & str2)
@@ -63,6 +55,14 @@ void addNewWords(Words & to, const Words & from, Compare comp)
namespace DB
{

+/// Check if multi-line query is inserted from the paste buffer.
+/// Allows delaying the start of query execution until the entirety of query is inserted.
+bool LineReader::hasInputData() const
+{
+pollfd fd{in_fd, POLLIN, 0};
+return poll(&fd, 1, 0) == 1;
+}

replxx::Replxx::completions_t LineReader::Suggest::getCompletions(const String & prefix, size_t prefix_length, const char * word_break_characters)
{
std::string_view last_word;
@@ -131,11 +131,22 @@ void LineReader::Suggest::addWords(Words && new_words) // NOLINT(cppcoreguidelin
}
}

-LineReader::LineReader(const String & history_file_path_, bool multiline_, Patterns extenders_, Patterns delimiters_)
+LineReader::LineReader(
+const String & history_file_path_,
+bool multiline_,
+Patterns extenders_,
+Patterns delimiters_,
+std::istream & input_stream_,
+std::ostream & output_stream_,
+int in_fd_
+)
: history_file_path(history_file_path_)
, multiline(multiline_)
, extenders(std::move(extenders_))
, delimiters(std::move(delimiters_))
+, input_stream(input_stream_)
+, output_stream(output_stream_)
+, in_fd(in_fd_)
{
/// FIXME: check extender != delimiter
}
@@ -212,9 +223,9 @@ LineReader::InputStatus LineReader::readOneLine(const String & prompt)
input.clear();

{
-std::cout << prompt;
-std::getline(std::cin, input);
-if (!std::cin.good())
+output_stream << prompt;
+std::getline(input_stream, input);
+if (!input_stream.good())
return ABORT;
}

@@ -1,5 +1,7 @@
#pragma once

+#include <iostream>
+#include <unistd.h>
#include <mutex>
#include <atomic>
#include <vector>
@@ -37,7 +39,16 @@ public:

using Patterns = std::vector<const char *>;

-LineReader(const String & history_file_path, bool multiline, Patterns extenders, Patterns delimiters);
+LineReader(
+const String & history_file_path,
+bool multiline,
+Patterns extenders,
+Patterns delimiters,
+std::istream & input_stream_ = std::cin,
+std::ostream & output_stream_ = std::cout,
+int in_fd_ = STDIN_FILENO
+);

virtual ~LineReader() = default;

/// Reads the whole line until delimiter (in multiline mode) or until the last line without extender.
@@ -56,6 +67,8 @@ public:
virtual void enableBracketedPaste() {}
virtual void disableBracketedPaste() {}

+bool hasInputData() const;

protected:
enum InputStatus
{
@@ -77,6 +90,10 @@ protected:

virtual InputStatus readOneLine(const String & prompt);
virtual void addToHistory(const String &) {}

+std::istream & input_stream;
+std::ostream & output_stream;
+int in_fd;
};

}

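The relocated `hasInputData()` above polls the reader's own `in_fd` with a zero timeout to see whether pasted input is already waiting, instead of hard-coding `STDIN_FILENO`. A standalone sketch of that check, using only POSIX `poll()`; the helper name is illustrative:

```cpp
#include <poll.h>
#include <unistd.h>

/// Returns true if the descriptor has data ready to be read right now.
/// Zero timeout means poll() returns immediately instead of blocking.
bool fdHasPendingInput(int fd)
{
    pollfd p{fd, POLLIN, 0};
    return poll(&p, 1, 0) == 1;
}

// Usage: fdHasPendingInput(STDIN_FILENO) tells whether a multi-line paste is still arriving,
// which is what lets the reader delay execution until the whole query has been inserted.
```
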
@@ -16,7 +16,10 @@
#include <Storages/IStorage.h>
#include <Common/ConcurrentBoundedQueue.h>
#include <Common/CurrentThread.h>
+#include <Parsers/ParserQuery.h>
+#include <Parsers/PRQL/ParserPRQLQuery.h>
+#include <Parsers/Kusto/ParserKQLStatement.h>
+#include <Parsers/Kusto/parseKQLQuery.h>

namespace DB
{
@@ -151,12 +154,26 @@ void LocalConnection::sendQuery(
state->block = sample;

String current_format = "Values";

+const auto & settings = context->getSettingsRef();
const char * begin = state->query.data();
-auto parsed_query = ClientBase::parseQuery(begin, begin + state->query.size(),
-context->getSettingsRef(),
-/*allow_multi_statements=*/ false,
-/*is_interactive=*/ false,
-/*ignore_error=*/ false);
+const char * end = begin + state->query.size();
+const Dialect & dialect = settings.dialect;

+std::unique_ptr<IParserBase> parser;
+if (dialect == Dialect::kusto)
+parser = std::make_unique<ParserKQLStatement>(end, settings.allow_settings_after_format_in_insert);
+else if (dialect == Dialect::prql)
+parser = std::make_unique<ParserPRQLQuery>(settings.max_query_size, settings.max_parser_depth, settings.max_parser_backtracks);
+else
+parser = std::make_unique<ParserQuery>(end, settings.allow_settings_after_format_in_insert);

+ASTPtr parsed_query;
+if (dialect == Dialect::kusto)
+parsed_query = parseKQLQueryAndMovePosition(*parser, begin, end, "", /*allow_multi_statements*/false, settings.max_query_size, settings.max_parser_depth, settings.max_parser_backtracks);
+else
+parsed_query = parseQueryAndMovePosition(*parser, begin, end, "", /*allow_multi_statements*/false, settings.max_query_size, settings.max_parser_depth, settings.max_parser_backtracks);

if (const auto * insert = parsed_query->as<ASTInsertQuery>())
{
if (!insert->format.empty())
@@ -341,22 +358,18 @@ bool LocalConnection::poll(size_t)

if (!state->is_finished)
{
-if (send_progress && (state->after_send_progress.elapsedMicroseconds() >= query_context->getSettingsRef().interactive_delay))
-{
-state->after_send_progress.restart();
-next_packet_type = Protocol::Server::Progress;
+if (needSendProgressOrMetrics())
return true;
-}

-if (send_profile_events && (state->after_send_profile_events.elapsedMicroseconds() >= query_context->getSettingsRef().interactive_delay))
-{
-sendProfileEvents();
-return true;
-}

try
{
-pollImpl();
+while (pollImpl())
+{
+LOG_DEBUG(&Poco::Logger::get("LocalConnection"), "Executor timeout encountered, will retry");
+
+if (needSendProgressOrMetrics())
+return true;
+}
}
catch (const Exception & e)
{
@@ -451,12 +464,34 @@ bool LocalConnection::poll(size_t)
return false;
}

+bool LocalConnection::needSendProgressOrMetrics()
+{
+if (send_progress && (state->after_send_progress.elapsedMicroseconds() >= query_context->getSettingsRef().interactive_delay))
+{
+state->after_send_progress.restart();
+next_packet_type = Protocol::Server::Progress;
+return true;
+}

+if (send_profile_events && (state->after_send_profile_events.elapsedMicroseconds() >= query_context->getSettingsRef().interactive_delay))
+{
+sendProfileEvents();
+return true;
+}

+return false;
+}

bool LocalConnection::pollImpl()
{
Block block;
auto next_read = pullBlock(block);

-if (block && !state->io.null_format)
+if (!block && next_read)
+{
+return true;
+}
+else if (block && !state->io.null_format)
{
state->block.emplace(block);
}
@@ -465,7 +500,7 @@ bool LocalConnection::pollImpl()
state->is_finished = true;
}

-return true;
+return false;
}

Packet LocalConnection::receivePacket()

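The `poll()` hunk above changes the contract of the inner step: `pollImpl()` now returns true to signal a retryable executor timeout and false when it has produced a final result. A rough, hypothetical sketch of that retry contract (not the real LocalConnection code; function and parameter names are made up):

```cpp
#include <functional>

/// poll_step: returns true on a timeout that should be retried, false when done.
/// need_send_progress: returns true when a progress/profile-events packet is due.
/// The wrapper returns true if the caller should emit such a packet before retrying.
bool pollWithRetries(const std::function<bool()> & poll_step,
                     const std::function<bool()> & need_send_progress)
{
    while (poll_step())
    {
        /// Timed out: hand control back so a Progress packet can be sent,
        /// then the caller will come back and retry the step.
        if (need_send_progress())
            return true;
    }
    return false; /// The step finished with a block or end of data.
}
```

This is why the progress/profile-events checks were factored into `needSendProgressOrMetrics()`: the same check is now needed both before and inside the retry loop.
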
@@ -151,8 +151,11 @@ private:

void sendProfileEvents();

+/// Returns true on executor timeout, meaning a retryable error.
bool pollImpl();

+bool needSendProgressOrMetrics();

ContextMutablePtr query_context;
Session session;

@@ -297,8 +297,15 @@ ReplxxLineReader::ReplxxLineReader(
Patterns extenders_,
Patterns delimiters_,
const char word_break_characters_[],
-replxx::Replxx::highlighter_callback_t highlighter_)
-: LineReader(history_file_path_, multiline_, std::move(extenders_), std::move(delimiters_)), highlighter(std::move(highlighter_))
+replxx::Replxx::highlighter_callback_t highlighter_,
+[[ maybe_unused ]] std::istream & input_stream_,
+[[ maybe_unused ]] std::ostream & output_stream_,
+[[ maybe_unused ]] int in_fd_,
+[[ maybe_unused ]] int out_fd_,
+[[ maybe_unused ]] int err_fd_
+)
+: LineReader(history_file_path_, multiline_, std::move(extenders_), std::move(delimiters_), input_stream_, output_stream_, in_fd_)
+, highlighter(std::move(highlighter_))
, word_break_characters(word_break_characters_)
, editor(getEditor())
{
@@ -471,7 +478,7 @@ ReplxxLineReader::ReplxxLineReader(

ReplxxLineReader::~ReplxxLineReader()
{
-if (close(history_file_fd))
+if (history_file_fd >= 0 && close(history_file_fd))
rx.print("Close of history file failed: %s\n", errnoToString().c_str());
}

@@ -496,7 +503,7 @@ void ReplxxLineReader::addToHistory(const String & line)
// but replxx::Replxx::history_load() does not
// and that is why flock() is added here.
bool locked = false;
-if (flock(history_file_fd, LOCK_EX))
+if (history_file_fd >= 0 && flock(history_file_fd, LOCK_EX))
rx.print("Lock of history file failed: %s\n", errnoToString().c_str());
else
locked = true;
@@ -507,7 +514,7 @@ void ReplxxLineReader::addToHistory(const String & line)
if (!rx.history_save(history_file_path))
rx.print("Saving history failed: %s\n", errnoToString().c_str());

-if (locked && 0 != flock(history_file_fd, LOCK_UN))
+if (history_file_fd >= 0 && locked && 0 != flock(history_file_fd, LOCK_UN))
rx.print("Unlock of history file failed: %s\n", errnoToString().c_str());
}

@@ -1,6 +1,7 @@
#pragma once

-#include "LineReader.h"
+#include <Client/LineReader.h>
+#include <base/strong_typedef.h>
#include <replxx.hxx>

namespace DB
@@ -9,14 +10,22 @@ namespace DB
class ReplxxLineReader : public LineReader
{
public:
-ReplxxLineReader(
+ReplxxLineReader
+(
Suggest & suggest,
const String & history_file_path,
bool multiline,
Patterns extenders_,
Patterns delimiters_,
const char word_break_characters_[],
-replxx::Replxx::highlighter_callback_t highlighter_);
+replxx::Replxx::highlighter_callback_t highlighter_,
+std::istream & input_stream_ = std::cin,
+std::ostream & output_stream_ = std::cout,
+int in_fd_ = STDIN_FILENO,
+int out_fd_ = STDOUT_FILENO,
+int err_fd_ = STDERR_FILENO
+);

~ReplxxLineReader() override;

void enableBracketedPaste() override;

@@ -60,4 +60,26 @@ GetPriorityForLoadBalancing::getPriorityFunc(LoadBalancing load_balance, size_t
return get_priority;
}

+/// Some load balancing strategies (such as "nearest hostname") have preferred nodes to connect to.
+/// Usually it's a node in the same data center/availability zone.
+/// For other strategies there's no difference between nodes.
+bool GetPriorityForLoadBalancing::hasOptimalNode() const
+{
+switch (load_balancing)
+{
+case LoadBalancing::NEAREST_HOSTNAME:
+return true;
+case LoadBalancing::HOSTNAME_LEVENSHTEIN_DISTANCE:
+return true;
+case LoadBalancing::IN_ORDER:
+return false;
+case LoadBalancing::RANDOM:
+return false;
+case LoadBalancing::FIRST_OR_RANDOM:
+return true;
+case LoadBalancing::ROUND_ROBIN:
+return false;
+}
+}

}

@@ -30,6 +30,8 @@ public:

Func getPriorityFunc(LoadBalancing load_balance, size_t offset, size_t pool_size) const;

+bool hasOptimalNode() const;

std::vector<size_t> hostname_prefix_distance; /// Prefix distances from name of this host to the names of hosts of pools.
std::vector<size_t> hostname_levenshtein_distance; /// Levenshtein Distances from name of this host to the names of hosts of pools.

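`hasOptimalNode()` only says whether the configured strategy distinguishes a preferred node; the decision to schedule a reconnect (see the ZooKeeper session hunks further below) also takes the availability-zone settings and the actually connected host into account. A small illustrative helper, with hypothetical parameter names, combining those conditions:

```cpp
/// Sketch of the reconnect decision, assuming the caller already knows:
/// - whether the user prefers the local availability zone and the client AZ is known,
/// - whether the first host in the shuffled list was the one actually reached,
/// - whether the load-balancing strategy has a preferred ("optimal") node at all.
bool shouldScheduleReconnect(bool prefer_local_az, bool client_az_known,
                             bool connected_to_first_choice, bool strategy_has_optimal_node)
{
    const bool respect_az = prefer_local_az && client_az_known;
    const bool may_benefit = respect_az || strategy_has_optimal_node;
    return !connected_to_first_choice && may_benefit;
}
```

For purely random or round-robin balancing every node is equivalent, so reconnecting would only churn sessions; that is exactly what the `false` branches above encode.
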
@@ -637,11 +637,11 @@ The server successfully detected this situation and will download merged part fr
M(S3QueueSetFileProcessingMicroseconds, "Time spent to set file as processing")\
M(S3QueueSetFileProcessedMicroseconds, "Time spent to set file as processed")\
M(S3QueueSetFileFailedMicroseconds, "Time spent to set file as failed")\
-M(S3QueueFailedFiles, "Number of files which failed to be processed")\
-M(S3QueueProcessedFiles, "Number of files which were processed")\
-M(S3QueueCleanupMaxSetSizeOrTTLMicroseconds, "Time spent to set file as failed")\
-M(S3QueuePullMicroseconds, "Time spent to read file data")\
-M(S3QueueLockLocalFileStatusesMicroseconds, "Time spent to lock local file statuses")\
+M(ObjectStorageQueueFailedFiles, "Number of files which failed to be processed")\
+M(ObjectStorageQueueProcessedFiles, "Number of files which were processed")\
+M(ObjectStorageQueueCleanupMaxSetSizeOrTTLMicroseconds, "Time spent to set file as failed")\
+M(ObjectStorageQueuePullMicroseconds, "Time spent to read file data")\
+M(ObjectStorageQueueLockLocalFileStatusesMicroseconds, "Time spent to lock local file statuses")\
\
M(ServerStartupMilliseconds, "Time elapsed from starting server to listening to sockets in milliseconds")\
M(IOUringSQEsSubmitted, "Total number of io_uring SQEs submitted") \

@@ -92,19 +92,19 @@ void ProgressIndication::writeFinalProgress()
if (progress.read_rows < 1000)
return;

-std::cout << "Processed " << formatReadableQuantity(progress.read_rows) << " rows, "
+output_stream << "Processed " << formatReadableQuantity(progress.read_rows) << " rows, "
<< formatReadableSizeWithDecimalSuffix(progress.read_bytes);

UInt64 elapsed_ns = getElapsedNanoseconds();
if (elapsed_ns)
-std::cout << " (" << formatReadableQuantity(progress.read_rows * 1000000000.0 / elapsed_ns) << " rows/s., "
+output_stream << " (" << formatReadableQuantity(progress.read_rows * 1000000000.0 / elapsed_ns) << " rows/s., "
<< formatReadableSizeWithDecimalSuffix(progress.read_bytes * 1000000000.0 / elapsed_ns) << "/s.)";
else
-std::cout << ". ";
+output_stream << ". ";

auto peak_memory_usage = getMemoryUsage().peak;
if (peak_memory_usage >= 0)
-std::cout << "\nPeak memory usage: " << formatReadableSizeWithBinarySuffix(peak_memory_usage) << ".";
+output_stream << "\nPeak memory usage: " << formatReadableSizeWithBinarySuffix(peak_memory_usage) << ".";
}

void ProgressIndication::writeProgress(WriteBufferFromFileDescriptor & message)
@@ -125,7 +125,7 @@ void ProgressIndication::writeProgress(WriteBufferFromFileDescriptor & message)

const char * indicator = indicators[increment % 8];

-size_t terminal_width = getTerminalWidth();
+size_t terminal_width = getTerminalWidth(in_fd, err_fd);

if (!written_progress_chars)
{
@@ -32,6 +32,19 @@ using HostToTimesMap = std::unordered_map<String, ThreadEventData>;
class ProgressIndication
{
public:

+explicit ProgressIndication
+(
+std::ostream & output_stream_ = std::cout,
+int in_fd_ = STDIN_FILENO,
+int err_fd_ = STDERR_FILENO
+)
+: output_stream(output_stream_),
+in_fd(in_fd_),
+err_fd(err_fd_)
+{
+}

/// Write progress bar.
void writeProgress(WriteBufferFromFileDescriptor & message);
void clearProgressOutput(WriteBufferFromFileDescriptor & message);
@@ -103,6 +116,10 @@ private:
/// - hosts_data/cpu_usage_meter (guarded with profile_events_mutex)
mutable std::mutex profile_events_mutex;
mutable std::mutex progress_mutex;

+std::ostream & output_stream;
+int in_fd;
+int err_fd;
};

}

@@ -11,7 +11,7 @@
#include <Interpreters/TextLog.h>
#include <Interpreters/TraceLog.h>
#include <Interpreters/FilesystemCacheLog.h>
-#include <Interpreters/S3QueueLog.h>
+#include <Interpreters/ObjectStorageQueueLog.h>
#include <Interpreters/FilesystemReadPrefetchesLog.h>
#include <Interpreters/ProcessorsProfileLog.h>
#include <Interpreters/ZooKeeperLog.h>
@@ -25,7 +25,7 @@
M(ZooKeeperLogElement) \
M(ProcessorProfileLogElement) \
M(TextLogElement) \
-M(S3QueueLogElement) \
+M(ObjectStorageQueueLogElement) \
M(FilesystemCacheLogElement) \
M(FilesystemReadPrefetchesLogElement) \
M(AsynchronousInsertLogElement) \

@@ -13,17 +13,17 @@ namespace DB::ErrorCodes
extern const int SYSTEM_ERROR;
}

-uint16_t getTerminalWidth()
+uint16_t getTerminalWidth(int in_fd, int err_fd)
{
struct winsize terminal_size {};
-if (isatty(STDIN_FILENO))
+if (isatty(in_fd))
{
-if (ioctl(STDIN_FILENO, TIOCGWINSZ, &terminal_size))
+if (ioctl(in_fd, TIOCGWINSZ, &terminal_size))
throw DB::ErrnoException(DB::ErrorCodes::SYSTEM_ERROR, "Cannot obtain terminal window size (ioctl TIOCGWINSZ)");
}
-else if (isatty(STDERR_FILENO))
+else if (isatty(err_fd))
{
-if (ioctl(STDERR_FILENO, TIOCGWINSZ, &terminal_size))
+if (ioctl(err_fd, TIOCGWINSZ, &terminal_size))
throw DB::ErrnoException(DB::ErrorCodes::SYSTEM_ERROR, "Cannot obtain terminal window size (ioctl TIOCGWINSZ)");
}
/// Default - 0.
@@ -1,16 +1,16 @@
#pragma once

#include <string>
+#include <unistd.h>
#include <boost/program_options.hpp>


namespace po = boost::program_options;


-uint16_t getTerminalWidth();
+uint16_t getTerminalWidth(int in_fd = STDIN_FILENO, int err_fd = STDERR_FILENO);

/** Creates po::options_description with name and an appropriate size for option displaying
* when program is called with option --help
* */
po::options_description createOptionsDescription(const std::string &caption, unsigned short terminal_width); /// NOLINT

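`getTerminalWidth()` now probes caller-supplied descriptors instead of the hard-coded `STDIN_FILENO`/`STDERR_FILENO`. A self-contained sketch of the same `TIOCGWINSZ` probe; unlike the function above, this version folds ioctl errors into the zero fallback instead of throwing, and the helper name is illustrative:

```cpp
#include <sys/ioctl.h>
#include <unistd.h>
#include <cstdint>

/// Returns the terminal column count for the first descriptor that is a TTY, or 0.
uint16_t terminalColumnsOrZero(int in_fd, int err_fd)
{
    winsize ws{};
    if (isatty(in_fd) && ioctl(in_fd, TIOCGWINSZ, &ws) == 0)
        return ws.ws_col;
    if (isatty(err_fd) && ioctl(err_fd, TIOCGWINSZ, &ws) == 0)
        return ws.ws_col;
    return 0; /// Neither descriptor is a terminal (or the ioctl failed).
}
```

Parameterizing the descriptors is what lets the progress indication render correctly when the client's streams are redirected, for example in tests that drive the client through pipes.
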
@@ -559,6 +559,8 @@ public:
/// Useful to check owner of ephemeral node.
virtual int64_t getSessionID() const = 0;

+virtual String tryGetAvailabilityZone() { return ""; }

/// If the method will throw an exception, callbacks won't be called.
///
/// After the method is executed successfully, you must wait for callbacks
@@ -635,10 +637,6 @@ public:

virtual const DB::KeeperFeatureFlags * getKeeperFeatureFlags() const { return nullptr; }

-/// A ZooKeeper session can have an optional deadline set on it.
-/// After it has been reached, the session needs to be finalized.
-virtual bool hasReachedDeadline() const = 0;

/// Expire session and finish all pending requests
virtual void finalize(const String & reason) = 0;
};
@@ -39,7 +39,6 @@ public:
~TestKeeper() override;

bool isExpired() const override { return expired; }
-bool hasReachedDeadline() const override { return false; }
Int8 getConnectedNodeIdx() const override { return 0; }
String getConnectedHostPort() const override { return "TestKeeper:0000"; }
int32_t getConnectionXid() const override { return 0; }

@@ -8,6 +8,7 @@
#include <functional>
#include <ranges>
#include <vector>
+#include <chrono>

#include <Common/ZooKeeper/Types.h>
#include <Common/ZooKeeper/ZooKeeperCommon.h>
@@ -16,10 +17,12 @@
#include <base/sort.h>
#include <base/getFQDNOrHostName.h>
#include <Core/ServerUUID.h>
+#include <Core/BackgroundSchedulePool.h>
#include "Common/ZooKeeper/IKeeper.h"
#include <Common/DNSResolver.h>
#include <Common/StringUtils.h>
#include <Common/Exception.h>
+#include <Interpreters/Context.h>

#include <Poco/Net/NetException.h>
#include <Poco/Net/DNS.h>
@@ -55,70 +58,120 @@ static void check(Coordination::Error code, const std::string & path)
throw KeeperException::fromPath(code, path);
}

+UInt64 getSecondsUntilReconnect(const ZooKeeperArgs & args)
+{
+std::uniform_int_distribution<UInt32> fallback_session_lifetime_distribution
+{
+args.fallback_session_lifetime.min_sec,
+args.fallback_session_lifetime.max_sec,
+};
+UInt32 session_lifetime_seconds = fallback_session_lifetime_distribution(thread_local_rng);
+return session_lifetime_seconds;
+}

-void ZooKeeper::init(ZooKeeperArgs args_)

+void ZooKeeper::updateAvailabilityZones()
+{
+ShuffleHosts shuffled_hosts = shuffleHosts();
+
+for (const auto & node : shuffled_hosts)
+{
+try
+{
+ShuffleHosts single_node{node};
+auto tmp_impl = std::make_unique<Coordination::ZooKeeper>(single_node, args, zk_log);
+auto idx = node.original_index;
+availability_zones[idx] = tmp_impl->tryGetAvailabilityZone();
+LOG_TEST(log, "Got availability zone for {}: {}", args.hosts[idx], availability_zones[idx]);
+}
+catch (...)
+{
+DB::tryLogCurrentException(log, "Failed to get availability zone for " + node.host);
+}
+}
+LOG_DEBUG(log, "Updated availability zones: [{}]", fmt::join(availability_zones, ", "));
+}
+
+void ZooKeeper::init(ZooKeeperArgs args_, std::unique_ptr<Coordination::IKeeper> existing_impl)
{
args = std::move(args_);
log = getLogger("ZooKeeper");

-if (args.implementation == "zookeeper")
+if (existing_impl)
+{
+chassert(args.implementation == "zookeeper");
+impl = std::move(existing_impl);
+LOG_INFO(log, "Switching to connection to a more optimal node {}", impl->getConnectedHostPort());
+}
+else if (args.implementation == "zookeeper")
{
if (args.hosts.empty())
throw KeeperException::fromMessage(Coordination::Error::ZBADARGUMENTS, "No hosts passed to ZooKeeper constructor.");

-Coordination::ZooKeeper::Nodes nodes;
-nodes.reserve(args.hosts.size());
+chassert(args.availability_zones.size() == args.hosts.size());
+if (availability_zones.empty())
+{
+/// availability_zones is empty on server startup or after config reloading
+/// We will keep the az info when starting new sessions
+availability_zones = args.availability_zones;
+LOG_TEST(log, "Availability zones from config: [{}], client: {}", fmt::join(availability_zones, ", "), args.client_availability_zone);
+if (args.availability_zone_autodetect)
+updateAvailabilityZones();
+}
+chassert(availability_zones.size() == args.hosts.size());

/// Shuffle the hosts to distribute the load among ZooKeeper nodes.
-std::vector<ShuffleHost> shuffled_hosts = shuffleHosts();
+ShuffleHosts shuffled_hosts = shuffleHosts();

-bool dns_error = false;
-for (auto & host : shuffled_hosts)
-{
-auto & host_string = host.host;
-try
-{
-const bool secure = startsWith(host_string, "secure://");

-if (secure)
-host_string.erase(0, strlen("secure://"));

-/// We want to resolve all hosts without DNS cache for keeper connection.
-Coordination::DNSResolver::instance().removeHostFromCache(host_string);

-const Poco::Net::SocketAddress host_socket_addr{host_string};
-LOG_TEST(log, "Adding ZooKeeper host {} ({})", host_string, host_socket_addr.toString());
-nodes.emplace_back(Coordination::ZooKeeper::Node{host_socket_addr, host.original_index, secure});
-}
-catch (const Poco::Net::HostNotFoundException & e)
-{
-/// Most likely it's misconfiguration and wrong hostname was specified
-LOG_ERROR(log, "Cannot use ZooKeeper host {}, reason: {}", host_string, e.displayText());
-}
-catch (const Poco::Net::DNSException & e)
-{
-/// Most likely DNS is not available now
-dns_error = true;
-LOG_ERROR(log, "Cannot use ZooKeeper host {} due to DNS error: {}", host_string, e.displayText());
-}
-}

-if (nodes.empty())
-{
-/// For DNS errors we throw exception with ZCONNECTIONLOSS code, so it will be considered as hardware error, not user error
-if (dns_error)
-throw KeeperException::fromMessage(Coordination::Error::ZCONNECTIONLOSS, "Cannot resolve any of provided ZooKeeper hosts due to DNS error");
-else
-throw KeeperException::fromMessage(Coordination::Error::ZCONNECTIONLOSS, "Cannot use any of provided ZooKeeper nodes");
-}

-impl = std::make_unique<Coordination::ZooKeeper>(nodes, args, zk_log);
+impl = std::make_unique<Coordination::ZooKeeper>(shuffled_hosts, args, zk_log);
+Int8 node_idx = impl->getConnectedNodeIdx();

if (args.chroot.empty())
LOG_TRACE(log, "Initialized, hosts: {}", fmt::join(args.hosts, ","));
else
LOG_TRACE(log, "Initialized, hosts: {}, chroot: {}", fmt::join(args.hosts, ","), args.chroot);

+/// If the balancing strategy has an optimal node then it will be the first in the list
+bool connected_to_suboptimal_node = node_idx != shuffled_hosts[0].original_index;
+bool respect_az = args.prefer_local_availability_zone && !args.client_availability_zone.empty();
+bool may_benefit_from_reconnecting = respect_az || args.get_priority_load_balancing.hasOptimalNode();
+if (connected_to_suboptimal_node && may_benefit_from_reconnecting)
+{
+auto reconnect_timeout_sec = getSecondsUntilReconnect(args);
+LOG_DEBUG(log, "Connected to a suboptimal ZooKeeper host ({}, index {})."
+" To preserve balance in ZooKeeper usage, this ZooKeeper session will expire in {} seconds",
+impl->getConnectedHostPort(), node_idx, reconnect_timeout_sec);
+
+auto reconnect_task_holder = DB::Context::getGlobalContextInstance()->getSchedulePool().createTask("ZKReconnect", [this, optimal_host = shuffled_hosts[0]]()
+{
+try
+{
+LOG_DEBUG(log, "Trying to connect to a more optimal node {}", optimal_host.host);
+ShuffleHosts node{optimal_host};
+std::unique_ptr<Coordination::IKeeper> new_impl = std::make_unique<Coordination::ZooKeeper>(node, args, zk_log);
+Int8 new_node_idx = new_impl->getConnectedNodeIdx();
+
+/// Maybe the node was unavailable when getting AZs first time, update just in case
+if (args.availability_zone_autodetect && availability_zones[new_node_idx].empty())
+{
+availability_zones[new_node_idx] = new_impl->tryGetAvailabilityZone();
+LOG_DEBUG(log, "Got availability zone for {}: {}", optimal_host.host, availability_zones[new_node_idx]);
+}
+
+optimal_impl = std::move(new_impl);
+impl->finalize("Connected to a more optimal node");
+}
+catch (...)
+{
+LOG_WARNING(log, "Failed to connect to a more optimal ZooKeeper, will try again later: {}", DB::getCurrentExceptionMessage(/*with_stacktrace*/ false));
+(*reconnect_task)->scheduleAfter(getSecondsUntilReconnect(args) * 1000);
+}
+});
+reconnect_task = std::make_unique<DB::BackgroundSchedulePoolTaskHolder>(std::move(reconnect_task_holder));
+(*reconnect_task)->activate();
+(*reconnect_task)->scheduleAfter(reconnect_timeout_sec * 1000);
+}
}
else if (args.implementation == "testkeeper")
{
@@ -152,29 +205,53 @@ void ZooKeeper::init(ZooKeeperArgs args_)
}
}

+ZooKeeper::~ZooKeeper()
+{
+if (reconnect_task)
+(*reconnect_task)->deactivate();
+}
+
ZooKeeper::ZooKeeper(const ZooKeeperArgs & args_, std::shared_ptr<DB::ZooKeeperLog> zk_log_)
: zk_log(std::move(zk_log_))
{
-init(args_);
+init(args_, /*existing_impl*/ {});
+}
+
+ZooKeeper::ZooKeeper(const ZooKeeperArgs & args_, std::shared_ptr<DB::ZooKeeperLog> zk_log_, Strings availability_zones_, std::unique_ptr<Coordination::IKeeper> existing_impl)
+: availability_zones(std::move(availability_zones_)), zk_log(std::move(zk_log_))
+{
+if (availability_zones.size() != args_.hosts.size())
+throw DB::Exception(DB::ErrorCodes::LOGICAL_ERROR, "Argument sizes mismatch: availability_zones count {} and hosts count {}",
+availability_zones.size(), args_.hosts.size());
+init(args_, std::move(existing_impl));
}


ZooKeeper::ZooKeeper(const Poco::Util::AbstractConfiguration & config, const std::string & config_name, std::shared_ptr<DB::ZooKeeperLog> zk_log_)
: zk_log(std::move(zk_log_))
{
-init(ZooKeeperArgs(config, config_name));
+init(ZooKeeperArgs(config, config_name), /*existing_impl*/ {});
}

-std::vector<ShuffleHost> ZooKeeper::shuffleHosts() const
+ShuffleHosts ZooKeeper::shuffleHosts() const
{
-std::function<Priority(size_t index)> get_priority = args.get_priority_load_balancing.getPriorityFunc(args.get_priority_load_balancing.load_balancing, 0, args.hosts.size());
-std::vector<ShuffleHost> shuffle_hosts;
+std::function<Priority(size_t index)> get_priority = args.get_priority_load_balancing.getPriorityFunc(
+args.get_priority_load_balancing.load_balancing, /* offset for first_or_random */ 0, args.hosts.size());
+ShuffleHosts shuffle_hosts;
for (size_t i = 0; i < args.hosts.size(); ++i)
{
ShuffleHost shuffle_host;
shuffle_host.host = args.hosts[i];
shuffle_host.original_index = static_cast<UInt8>(i);

+shuffle_host.secure = startsWith(shuffle_host.host, "secure://");
+if (shuffle_host.secure)
+shuffle_host.host.erase(0, strlen("secure://"));
+
+if (!args.client_availability_zone.empty() && !availability_zones[i].empty())
+shuffle_host.az_info = availability_zones[i] == args.client_availability_zone ? ShuffleHost::SAME : ShuffleHost::OTHER;
+
if (get_priority)
shuffle_host.priority = get_priority(i);
shuffle_host.randomize();
@@ -1023,7 +1100,10 @@ ZooKeeperPtr ZooKeeper::create(const Poco::Util::AbstractConfiguration & config,

ZooKeeperPtr ZooKeeper::startNewSession() const
{
-auto res = std::shared_ptr<ZooKeeper>(new ZooKeeper(args, zk_log));
+if (reconnect_task)
+(*reconnect_task)->deactivate();
+
+auto res = std::shared_ptr<ZooKeeper>(new ZooKeeper(args, zk_log, availability_zones, std::move(optimal_impl)));
res->initSession();
return res;
}
@@ -1456,6 +1536,16 @@ int32_t ZooKeeper::getConnectionXid() const
return impl->getConnectionXid();
}

+String ZooKeeper::getConnectedHostAvailabilityZone() const
+{
+if (args.implementation != "zookeeper" || !impl)
+return "";
+Int8 idx = impl->getConnectedNodeIdx();
+if (idx < 0)
+return ""; /// session expired
+return availability_zones.at(idx);
+}

size_t getFailedOpIndex(Coordination::Error exception_code, const Coordination::Responses & responses)
{
if (responses.empty())

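`shuffleHosts()` above tags every candidate with SAME, UNKNOWN, or OTHER relative to the client's availability zone, and the header hunk below makes that tag the leading key of `ShuffleHost::compare`, ahead of the load-balancing priority and the random tie breaker. A standalone sketch of the resulting ordering, with illustrative type and field names that only mirror the real structure:

```cpp
#include <algorithm>
#include <cstdint>
#include <tuple>
#include <vector>

struct HostCandidate
{
    int az_info = 1;       // 0 = same AZ as the client, 1 = unknown, 2 = other AZ
    int64_t priority = 0;  // from the configured load-balancing strategy
    uint64_t random = 0;   // per-session random tie breaker
};

/// Sorts candidates so that same-AZ hosts come first, then by strategy priority, then randomly.
void orderCandidates(std::vector<HostCandidate> & hosts)
{
    std::sort(hosts.begin(), hosts.end(), [](const HostCandidate & l, const HostCandidate & r)
    {
        return std::tie(l.az_info, l.priority, l.random) < std::tie(r.az_info, r.priority, r.random);
    });
}
```

Because the optimal host ends up first in the shuffled list, the reconnect task in the hunk above only needs to compare the connected node's index with `shuffled_hosts[0]` to know whether a better node exists.
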
@ -32,6 +32,7 @@ namespace DB
|
|||||||
{
|
{
|
||||||
class ZooKeeperLog;
|
class ZooKeeperLog;
|
||||||
class ZooKeeperWithFaultInjection;
|
class ZooKeeperWithFaultInjection;
|
||||||
|
class BackgroundSchedulePoolTaskHolder;
|
||||||
|
|
||||||
namespace ErrorCodes
|
namespace ErrorCodes
|
||||||
{
|
{
|
||||||
@ -48,11 +49,23 @@ constexpr size_t MULTI_BATCH_SIZE = 100;
|
|||||||
|
|
||||||
struct ShuffleHost
|
struct ShuffleHost
|
||||||
{
|
{
|
||||||
|
enum AvailabilityZoneInfo
|
||||||
|
{
|
||||||
|
SAME = 0,
|
||||||
|
UNKNOWN = 1,
|
||||||
|
OTHER = 2,
|
||||||
|
};
|
||||||
|
|
||||||
String host;
|
String host;
|
||||||
|
bool secure = false;
|
||||||
UInt8 original_index = 0;
|
UInt8 original_index = 0;
|
||||||
|
AvailabilityZoneInfo az_info = UNKNOWN;
|
||||||
Priority priority;
|
Priority priority;
|
||||||
UInt64 random = 0;
|
UInt64 random = 0;
|
||||||
|
|
||||||
|
/// We should resolve it each time without caching
|
||||||
|
mutable std::optional<Poco::Net::SocketAddress> address;
|
||||||
|
|
||||||
void randomize()
|
void randomize()
|
||||||
{
|
{
|
||||||
random = thread_local_rng();
|
random = thread_local_rng();
|
||||||
@ -60,11 +73,13 @@ struct ShuffleHost
|
|||||||
|
|
||||||
static bool compare(const ShuffleHost & lhs, const ShuffleHost & rhs)
|
static bool compare(const ShuffleHost & lhs, const ShuffleHost & rhs)
|
||||||
{
|
{
|
||||||
return std::forward_as_tuple(lhs.priority, lhs.random)
|
return std::forward_as_tuple(lhs.az_info, lhs.priority, lhs.random)
|
||||||
< std::forward_as_tuple(rhs.priority, rhs.random);
|
< std::forward_as_tuple(rhs.az_info, rhs.priority, rhs.random);
|
||||||
}
|
}
|
||||||
};
|
};
|
||||||
|
|
||||||
|
using ShuffleHosts = std::vector<ShuffleHost>;
|
||||||
|
|
||||||
struct RemoveException
|
struct RemoveException
|
||||||
{
|
{
|
||||||
explicit RemoveException(std::string_view path_ = "", bool remove_subtree_ = true)
|
explicit RemoveException(std::string_view path_ = "", bool remove_subtree_ = true)
|
||||||
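For readers skimming the hunk above: the new three-valued az_info field participates in host ordering ahead of priority and the random tie-breaker, so same-availability-zone hosts are tried first. A minimal, self-contained sketch of that ordering; the struct below is a simplified stand-in rather than the real ShuffleHost, and only the enum values and the tuple comparison mirror the change itself.

#include <algorithm>
#include <cstdint>
#include <string>
#include <tuple>
#include <vector>

// Illustrative stand-in for the patched ShuffleHost: az_info sorts before
// priority and the random tie-breaker, so SAME-zone hosts come first.
struct Host
{
    enum AvailabilityZoneInfo { SAME = 0, UNKNOWN = 1, OTHER = 2 };

    std::string name;
    AvailabilityZoneInfo az_info = UNKNOWN;
    int64_t priority = 0;
    uint64_t random = 0;
};

std::vector<Host> order(std::vector<Host> hosts)
{
    std::sort(hosts.begin(), hosts.end(), [](const Host & lhs, const Host & rhs)
    {
        return std::forward_as_tuple(lhs.az_info, lhs.priority, lhs.random)
             < std::forward_as_tuple(rhs.az_info, rhs.priority, rhs.random);
    });
    // e.g. {"same-az", Host::SAME} sorts ahead of {"other-az", Host::OTHER} regardless of random
    return hosts;
}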
@@ -197,6 +212,9 @@ class ZooKeeper

    explicit ZooKeeper(const ZooKeeperArgs & args_, std::shared_ptr<DB::ZooKeeperLog> zk_log_ = nullptr);

+    /// Allows to keep info about availability zones when starting a new session
+    ZooKeeper(const ZooKeeperArgs & args_, std::shared_ptr<DB::ZooKeeperLog> zk_log_, Strings availability_zones_, std::unique_ptr<Coordination::IKeeper> existing_impl);
+
     /** Config of the form:
         <zookeeper>
             <node>
@@ -228,7 +246,9 @@ public:
     using Ptr = std::shared_ptr<ZooKeeper>;
     using ErrorsList = std::initializer_list<Coordination::Error>;

-    std::vector<ShuffleHost> shuffleHosts() const;
+    ~ZooKeeper();
+
+    ShuffleHosts shuffleHosts() const;

     static Ptr create(const Poco::Util::AbstractConfiguration & config,
                       const std::string & config_name,
@@ -596,8 +616,6 @@ public:

     UInt32 getSessionUptime() const { return static_cast<UInt32>(session_uptime.elapsedSeconds()); }

-    bool hasReachedDeadline() const { return impl->hasReachedDeadline(); }
-
     uint64_t getSessionTimeoutMS() const { return args.session_timeout_ms; }

     void setServerCompletelyStarted();
@@ -606,6 +624,8 @@ public:
     String getConnectedHostPort() const;
     int32_t getConnectionXid() const;

+    String getConnectedHostAvailabilityZone() const;
+
     const DB::KeeperFeatureFlags * getKeeperFeatureFlags() const { return impl->getKeeperFeatureFlags(); }

     /// Checks that our session was not killed, and allows to avoid applying a request from an old lost session.
@@ -625,7 +645,8 @@ public:
     void addCheckSessionOp(Coordination::Requests & requests) const;

 private:
-    void init(ZooKeeperArgs args_);
+    void init(ZooKeeperArgs args_, std::unique_ptr<Coordination::IKeeper> existing_impl);
+    void updateAvailabilityZones();

     /// The following methods don't any throw exceptions but return error codes.
     Coordination::Error createImpl(const std::string & path, const std::string & data, int32_t mode, std::string & path_created);
@@ -690,15 +711,20 @@ private:
     }

     std::unique_ptr<Coordination::IKeeper> impl;
+    mutable std::unique_ptr<Coordination::IKeeper> optimal_impl;

     ZooKeeperArgs args;

+    Strings availability_zones;
+
     LoggerPtr log = nullptr;
     std::shared_ptr<DB::ZooKeeperLog> zk_log;

     AtomicStopwatch session_uptime;

     int32_t session_node_version;

+    std::unique_ptr<DB::BackgroundSchedulePoolTaskHolder> reconnect_task;
 };

@@ -5,6 +5,9 @@
 #include <Poco/Util/AbstractConfiguration.h>
 #include <Common/isLocalAddress.h>
 #include <Common/StringUtils.h>
+#include <Common/thread_local_rng.h>
+#include <Server/CloudPlacementInfo.h>
+#include <IO/S3/Credentials.h>
 #include <Poco/String.h>

 namespace DB
@@ -53,6 +56,7 @@ ZooKeeperArgs::ZooKeeperArgs(const Poco::Util::AbstractConfiguration & config, c
 ZooKeeperArgs::ZooKeeperArgs(const String & hosts_string)
 {
     splitInto<','>(hosts, hosts_string);
+    availability_zones.resize(hosts.size());
 }

 void ZooKeeperArgs::initFromKeeperServerSection(const Poco::Util::AbstractConfiguration & config)
@@ -103,8 +107,11 @@ void ZooKeeperArgs::initFromKeeperServerSection(const Poco::Util::AbstractConfig
     for (const auto & key : keys)
     {
         if (startsWith(key, "server"))
+        {
             hosts.push_back(
                 (secure ? "secure://" : "") + config.getString(raft_configuration_key + "." + key + ".hostname") + ":" + tcp_port);
+            availability_zones.push_back(config.getString(raft_configuration_key + "." + key + ".availability_zone", ""));
+        }
     }

     static constexpr std::array load_balancing_keys
@@ -123,11 +130,15 @@ void ZooKeeperArgs::initFromKeeperServerSection(const Poco::Util::AbstractConfig
             auto load_balancing = magic_enum::enum_cast<DB::LoadBalancing>(Poco::toUpper(load_balancing_str));
             if (!load_balancing)
                 throw DB::Exception(DB::ErrorCodes::BAD_ARGUMENTS, "Unknown load balancing: {}", load_balancing_str);
-            get_priority_load_balancing.load_balancing = *load_balancing;
+            get_priority_load_balancing = DB::GetPriorityForLoadBalancing(*load_balancing, thread_local_rng() % hosts.size());
             break;
         }
     }

+    availability_zone_autodetect = config.getBool(std::string{config_name} + ".availability_zone_autodetect", false);
+    prefer_local_availability_zone = config.getBool(std::string{config_name} + ".prefer_local_availability_zone", false);
+    if (prefer_local_availability_zone)
+        client_availability_zone = DB::PlacementInfo::PlacementInfo::instance().getAvailabilityZone();
 }

 void ZooKeeperArgs::initFromKeeperSection(const Poco::Util::AbstractConfiguration & config, const std::string & config_name)
@@ -137,6 +148,8 @@ void ZooKeeperArgs::initFromKeeperSection(const Poco::Util::AbstractConfiguratio
     Poco::Util::AbstractConfiguration::Keys keys;
     config.keys(config_name, keys);

+    std::optional<DB::LoadBalancing> load_balancing;
+
     for (const auto & key : keys)
     {
         if (key.starts_with("node"))
@@ -144,6 +157,7 @@ void ZooKeeperArgs::initFromKeeperSection(const Poco::Util::AbstractConfiguratio
             hosts.push_back(
                 (config.getBool(config_name + "." + key + ".secure", false) ? "secure://" : "")
                 + config.getString(config_name + "." + key + ".host") + ":" + config.getString(config_name + "." + key + ".port", "2181"));
+            availability_zones.push_back(config.getString(config_name + "." + key + ".availability_zone", ""));
         }
         else if (key == "session_timeout_ms")
         {
@@ -199,6 +213,10 @@ void ZooKeeperArgs::initFromKeeperSection(const Poco::Util::AbstractConfiguratio
         {
             sessions_path = config.getString(config_name + "." + key);
         }
+        else if (key == "prefer_local_availability_zone")
+        {
+            prefer_local_availability_zone = config.getBool(config_name + "." + key);
+        }
         else if (key == "implementation")
         {
             implementation = config.getString(config_name + "." + key);
@@ -207,10 +225,9 @@ void ZooKeeperArgs::initFromKeeperSection(const Poco::Util::AbstractConfiguratio
         {
             String load_balancing_str = config.getString(config_name + "." + key);
             /// Use magic_enum to avoid dependency from dbms (`SettingFieldLoadBalancingTraits::fromString(...)`)
-            auto load_balancing = magic_enum::enum_cast<DB::LoadBalancing>(Poco::toUpper(load_balancing_str));
+            load_balancing = magic_enum::enum_cast<DB::LoadBalancing>(Poco::toUpper(load_balancing_str));
             if (!load_balancing)
                 throw DB::Exception(DB::ErrorCodes::BAD_ARGUMENTS, "Unknown load balancing: {}", load_balancing_str);
-            get_priority_load_balancing.load_balancing = *load_balancing;
         }
         else if (key == "fallback_session_lifetime")
         {
@@ -224,9 +241,19 @@ void ZooKeeperArgs::initFromKeeperSection(const Poco::Util::AbstractConfiguratio
         {
             use_compression = config.getBool(config_name + "." + key);
         }
+        else if (key == "availability_zone_autodetect")
+        {
+            availability_zone_autodetect = config.getBool(config_name + "." + key);
+        }
         else
             throw KeeperException(Coordination::Error::ZBADARGUMENTS, "Unknown key {} in config file", key);
     }

+    if (load_balancing)
+        get_priority_load_balancing = DB::GetPriorityForLoadBalancing(*load_balancing, thread_local_rng() % hosts.size());
+
+    if (prefer_local_availability_zone)
+        client_availability_zone = DB::PlacementInfo::PlacementInfo::instance().getAvailabilityZone();
 }

 }
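The new settings above (a per-node availability_zone, prefer_local_availability_zone, availability_zone_autodetect) only matter together with a rule that maps a node's zone against the client's zone. A hedged sketch of that mapping follows; the helper name and signature are invented for the example, and only the SAME/UNKNOWN/OTHER semantics come from the change itself.

#include <string>

enum class AvailabilityZoneInfo { SAME = 0, UNKNOWN = 1, OTHER = 2 };

// Hypothetical helper: if either zone is unknown (empty), the host can be
// neither preferred nor penalised; otherwise compare the zone names directly.
AvailabilityZoneInfo classify(const std::string & client_az, const std::string & node_az)
{
    if (client_az.empty() || node_az.empty())
        return AvailabilityZoneInfo::UNKNOWN;
    return client_az == node_az ? AvailabilityZoneInfo::SAME : AvailabilityZoneInfo::OTHER;
}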
@@ -32,10 +32,12 @@ struct ZooKeeperArgs
     String zookeeper_name = "zookeeper";
     String implementation = "zookeeper";
     Strings hosts;
+    Strings availability_zones;
     String auth_scheme;
     String identity;
     String chroot;
     String sessions_path = "/clickhouse/sessions";
+    String client_availability_zone;
     int32_t connection_timeout_ms = Coordination::DEFAULT_CONNECTION_TIMEOUT_MS;
     int32_t session_timeout_ms = Coordination::DEFAULT_SESSION_TIMEOUT_MS;
     int32_t operation_timeout_ms = Coordination::DEFAULT_OPERATION_TIMEOUT_MS;
@@ -47,6 +49,8 @@ struct ZooKeeperArgs
     UInt64 send_sleep_ms = 0;
     UInt64 recv_sleep_ms = 0;
     bool use_compression = false;
+    bool prefer_local_availability_zone = false;
+    bool availability_zone_autodetect = false;

     SessionLifetimeConfiguration fallback_session_lifetime = {};
     DB::GetPriorityForLoadBalancing get_priority_load_balancing;

@@ -23,6 +23,9 @@
 #include <Common/setThreadName.h>
 #include <Common/thread_local_rng.h>

+#include <Poco/Net/NetException.h>
+#include <Poco/Net/DNS.h>
+
 #include "Coordination/KeeperConstants.h"
 #include "config.h"

@@ -338,7 +341,7 @@ ZooKeeper::~ZooKeeper()

 ZooKeeper::ZooKeeper(
-    const Nodes & nodes,
+    const zkutil::ShuffleHosts & nodes,
     const zkutil::ZooKeeperArgs & args_,
     std::shared_ptr<ZooKeeperLog> zk_log_)
     : args(args_)
@@ -426,7 +429,7 @@ ZooKeeper::ZooKeeper(

 void ZooKeeper::connect(
-    const Nodes & nodes,
+    const zkutil::ShuffleHosts & nodes,
     Poco::Timespan connection_timeout)
 {
     if (nodes.empty())
@@ -434,15 +437,51 @@ void ZooKeeper::connect(

     static constexpr size_t num_tries = 3;
     bool connected = false;
+    bool dns_error = false;
+
+    size_t resolved_count = 0;
+    for (const auto & node : nodes)
+    {
+        try
+        {
+            const Poco::Net::SocketAddress host_socket_addr{node.host};
+            LOG_TRACE(log, "Adding ZooKeeper host {} ({}), az: {}, priority: {}", node.host, host_socket_addr.toString(), node.az_info, node.priority);
+            node.address = host_socket_addr;
+            ++resolved_count;
+        }
+        catch (const Poco::Net::HostNotFoundException & e)
+        {
+            /// Most likely it's misconfiguration and wrong hostname was specified
+            LOG_ERROR(log, "Cannot use ZooKeeper host {}, reason: {}", node.host, e.displayText());
+        }
+        catch (const Poco::Net::DNSException & e)
+        {
+            /// Most likely DNS is not available now
+            dns_error = true;
+            LOG_ERROR(log, "Cannot use ZooKeeper host {} due to DNS error: {}", node.host, e.displayText());
+        }
+    }
+
+    if (resolved_count == 0)
+    {
+        /// For DNS errors we throw exception with ZCONNECTIONLOSS code, so it will be considered as hardware error, not user error
+        if (dns_error)
+            throw zkutil::KeeperException::fromMessage(
+                Coordination::Error::ZCONNECTIONLOSS, "Cannot resolve any of provided ZooKeeper hosts due to DNS error");
+        else
+            throw zkutil::KeeperException::fromMessage(Coordination::Error::ZCONNECTIONLOSS, "Cannot use any of provided ZooKeeper nodes");
+    }

     WriteBufferFromOwnString fail_reasons;
     for (size_t try_no = 0; try_no < num_tries; ++try_no)
     {
-        for (size_t i = 0; i < nodes.size(); ++i)
+        for (const auto & node : nodes)
         {
-            const auto & node = nodes[i];
             try
             {
+                if (!node.address)
+                    continue;
+
                 /// Reset the state of previous attempt.
                 if (node.secure)
                 {
@@ -458,7 +497,7 @@ void ZooKeeper::connect(
                     socket = Poco::Net::StreamSocket();
                 }

-                socket.connect(node.address, connection_timeout);
+                socket.connect(*node.address, connection_timeout);
                 socket_address = socket.peerAddress();

                 socket.setReceiveTimeout(args.operation_timeout_ms * 1000);
@@ -498,27 +537,11 @@ void ZooKeeper::connect(
                 }

                 original_index = static_cast<Int8>(node.original_index);

-                if (i != 0)
-                {
-                    std::uniform_int_distribution<UInt32> fallback_session_lifetime_distribution
-                    {
-                        args.fallback_session_lifetime.min_sec,
-                        args.fallback_session_lifetime.max_sec,
-                    };
-                    UInt32 session_lifetime_seconds = fallback_session_lifetime_distribution(thread_local_rng);
-                    client_session_deadline = clock::now() + std::chrono::seconds(session_lifetime_seconds);
-
-                    LOG_DEBUG(log, "Connected to a suboptimal ZooKeeper host ({}, index {})."
-                        " To preserve balance in ZooKeeper usage, this ZooKeeper session will expire in {} seconds",
-                        node.address.toString(), i, session_lifetime_seconds);
-                }
-
                 break;
             }
             catch (...)
             {
-                fail_reasons << "\n" << getCurrentExceptionMessage(false) << ", " << node.address.toString();
+                fail_reasons << "\n" << getCurrentExceptionMessage(false) << ", " << node.address->toString();
             }
         }

@@ -532,6 +555,9 @@ void ZooKeeper::connect(
         bool first = true;
         for (const auto & node : nodes)
         {
+            if (!node.address)
+                continue;
+
             if (first)
                 first = false;
             else
@@ -540,7 +566,7 @@ void ZooKeeper::connect(
             if (node.secure)
                 message << "secure://";

-            message << node.address.toString();
+            message << node.address->toString();
         }

         message << fail_reasons.str() << "\n";
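The connect() hunks above resolve every host up front, remember which entries resolved, and let the connection loop skip the rest, so a total failure can distinguish a DNS outage from plain bad host names. A self-contained sketch of that resolve-before-connect pattern using only Poco networking types; the host names below are placeholders.

#include <iostream>
#include <optional>
#include <string>
#include <vector>

#include <Poco/Net/NetException.h>
#include <Poco/Net/SocketAddress.h>

int main()
{
    std::vector<std::string> hosts{"keeper-1.example:9181", "keeper-2.example:9181"}; // placeholders
    std::vector<std::optional<Poco::Net::SocketAddress>> addresses(hosts.size());

    size_t resolved = 0;
    bool dns_error = false;
    for (size_t i = 0; i < hosts.size(); ++i)
    {
        try
        {
            addresses[i] = Poco::Net::SocketAddress{hosts[i]}; // resolves the host name eagerly
            ++resolved;
        }
        catch (const Poco::Net::HostNotFoundException & e) // wrong host name: skip this entry
        {
            std::cerr << "Cannot use host " << hosts[i] << ": " << e.displayText() << '\n';
        }
        catch (const Poco::Net::DNSException & e) // DNS itself is unavailable right now
        {
            dns_error = true;
            std::cerr << "DNS error for " << hosts[i] << ": " << e.displayText() << '\n';
        }
    }

    if (resolved == 0)
        return dns_error ? 2 : 1; // the patched code throws ZCONNECTIONLOSS at this point
    return 0;
}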
@@ -1153,7 +1179,6 @@ void ZooKeeper::pushRequest(RequestInfo && info)
 {
     try
     {
-        checkSessionDeadline();
         info.time = clock::now();
         auto maybe_zk_log = std::atomic_load(&zk_log);
         if (maybe_zk_log)
@@ -1201,44 +1226,44 @@ bool ZooKeeper::isFeatureEnabled(KeeperFeatureFlag feature_flag) const
     return keeper_feature_flags.isEnabled(feature_flag);
 }

-void ZooKeeper::initFeatureFlags()
-{
-    const auto try_get = [&](const std::string & path, const std::string & description) -> std::optional<std::string>
-    {
-        auto promise = std::make_shared<std::promise<Coordination::GetResponse>>();
-        auto future = promise->get_future();
-
-        auto callback = [promise](const Coordination::GetResponse & response) mutable
-        {
-            promise->set_value(response);
-        };
-
-        get(path, std::move(callback), {});
-        if (future.wait_for(std::chrono::milliseconds(args.operation_timeout_ms)) != std::future_status::ready)
-            throw Exception(Error::ZOPERATIONTIMEOUT, "Failed to get {}: timeout", description);
-
-        auto response = future.get();
-
-        if (response.error == Coordination::Error::ZNONODE)
-        {
-            LOG_TRACE(log, "Failed to get {}", description);
-            return std::nullopt;
-        }
-        else if (response.error != Coordination::Error::ZOK)
-        {
-            throw Exception(response.error, "Failed to get {}", description);
-        }
-
-        return std::move(response.data);
-    };
-
-    if (auto feature_flags = try_get(keeper_api_feature_flags_path, "feature flags"); feature_flags.has_value())
+std::optional<String> ZooKeeper::tryGetSystemZnode(const std::string & path, const std::string & description)
+{
+    auto promise = std::make_shared<std::promise<Coordination::GetResponse>>();
+    auto future = promise->get_future();
+
+    auto callback = [promise](const Coordination::GetResponse & response) mutable
+    {
+        promise->set_value(response);
+    };
+
+    get(path, std::move(callback), {});
+    if (future.wait_for(std::chrono::milliseconds(args.operation_timeout_ms)) != std::future_status::ready)
+        throw Exception(Error::ZOPERATIONTIMEOUT, "Failed to get {}: timeout", description);
+
+    auto response = future.get();
+
+    if (response.error == Coordination::Error::ZNONODE)
+    {
+        LOG_TRACE(log, "Failed to get {}", description);
+        return std::nullopt;
+    }
+    else if (response.error != Coordination::Error::ZOK)
+    {
+        throw Exception(response.error, "Failed to get {}", description);
+    }
+
+    return std::move(response.data);
+}
+
+void ZooKeeper::initFeatureFlags()
+{
+    if (auto feature_flags = tryGetSystemZnode(keeper_api_feature_flags_path, "feature flags"); feature_flags.has_value())
     {
         keeper_feature_flags.setFeatureFlags(std::move(*feature_flags));
         return;
     }

-    auto keeper_api_version_string = try_get(keeper_api_version_path, "API version");
+    auto keeper_api_version_string = tryGetSystemZnode(keeper_api_version_path, "API version");

     DB::KeeperApiVersion keeper_api_version{DB::KeeperApiVersion::ZOOKEEPER_COMPATIBLE};

@@ -1256,6 +1281,17 @@ void ZooKeeper::initFeatureFlags()
     keeper_feature_flags.fromApiVersion(keeper_api_version);
 }

+String ZooKeeper::tryGetAvailabilityZone()
+{
+    auto res = tryGetSystemZnode(keeper_availability_zone_path, "availability zone");
+    if (res)
+    {
+        LOG_TRACE(log, "Availability zone for ZooKeeper at {}: {}", getConnectedHostPort(), *res);
+        return *res;
+    }
+    return "";
+}
+
 void ZooKeeper::executeGenericRequest(
     const ZooKeeperRequestPtr & request,
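tryGetSystemZnode, extracted above, turns the asynchronous get callback into a synchronous call by pairing a std::promise with a std::future and waiting with the operation timeout. A reduced, self-contained sketch of the same pattern; the asynchronous API here is simulated with a detached thread, which is an assumption made only for the example.

#include <chrono>
#include <future>
#include <iostream>
#include <memory>
#include <optional>
#include <string>
#include <thread>

// Simulated async API: invokes the callback from another thread, the way a
// client library would invoke it from its receive loop.
template <typename Callback>
void async_get(const std::string & path, Callback callback)
{
    std::thread([path, callback]() { callback("value-of-" + path); }).detach();
}

std::optional<std::string> get_with_timeout(const std::string & path, std::chrono::milliseconds timeout)
{
    auto promise = std::make_shared<std::promise<std::string>>();
    auto future = promise->get_future();

    // The callback only fulfils the promise; the caller blocks on the future.
    async_get(path, [promise](const std::string & data) { promise->set_value(data); });

    if (future.wait_for(timeout) != std::future_status::ready)
        return std::nullopt; // the real code throws an operation-timeout error instead
    return future.get();
}

int main()
{
    if (auto v = get_with_timeout("/keeper/availability_zone", std::chrono::milliseconds(1000)))
        std::cout << *v << '\n';
}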
@@ -1587,17 +1623,6 @@ void ZooKeeper::setupFaultDistributions()
     inject_setup.test_and_set();
 }

-void ZooKeeper::checkSessionDeadline() const
-{
-    if (unlikely(hasReachedDeadline()))
-        throw Exception::fromMessage(Error::ZSESSIONEXPIRED, "Session expired (force expiry client-side)");
-}
-
-bool ZooKeeper::hasReachedDeadline() const
-{
-    return client_session_deadline.has_value() && clock::now() >= client_session_deadline.value();
-}
-
 void ZooKeeper::maybeInjectSendFault()
 {
     if (unlikely(inject_setup.test() && send_inject_fault && send_inject_fault.value()(thread_local_rng)))

@@ -8,6 +8,7 @@
 #include <Common/ZooKeeper/IKeeper.h>
 #include <Common/ZooKeeper/ZooKeeperCommon.h>
 #include <Common/ZooKeeper/ZooKeeperArgs.h>
+#include <Common/ZooKeeper/ZooKeeper.h>
 #include <Coordination/KeeperConstants.h>
 #include <Coordination/KeeperFeatureFlags.h>

@@ -102,21 +103,12 @@ using namespace DB;
 class ZooKeeper final : public IKeeper
 {
 public:
-    struct Node
-    {
-        Poco::Net::SocketAddress address;
-        UInt8 original_index;
-        bool secure;
-    };
-
-    using Nodes = std::vector<Node>;
-
     /** Connection to nodes is performed in order. If you want, shuffle them manually.
      * Operation timeout couldn't be greater than session timeout.
      * Operation timeout applies independently for network read, network write, waiting for events and synchronization.
      */
     ZooKeeper(
-        const Nodes & nodes,
+        const zkutil::ShuffleHosts & nodes,
         const zkutil::ZooKeeperArgs & args_,
         std::shared_ptr<ZooKeeperLog> zk_log_);

@@ -130,9 +122,7 @@ public:
     String getConnectedHostPort() const override { return (original_index == -1) ? "" : args.hosts[original_index]; }
     int32_t getConnectionXid() const override { return next_xid.load(); }

-    /// A ZooKeeper session can have an optional deadline set on it.
-    /// After it has been reached, the session needs to be finalized.
-    bool hasReachedDeadline() const override;
+    String tryGetAvailabilityZone() override;

     /// Useful to check owner of ephemeral node.
     int64_t getSessionID() const override { return session_id; }
@@ -271,7 +261,6 @@ private:
         clock::time_point time;
     };

-    std::optional<clock::time_point> client_session_deadline {};
     using RequestsQueue = ConcurrentBoundedQueue<RequestInfo>;

     RequestsQueue requests_queue{1024};
@@ -316,7 +305,7 @@ private:
     LoggerPtr log;

     void connect(
-        const Nodes & node,
+        const zkutil::ShuffleHosts & node,
         Poco::Timespan connection_timeout);

     void sendHandshake();
@@ -346,9 +335,10 @@ private:

     void logOperationIfNeeded(const ZooKeeperRequestPtr & request, const ZooKeeperResponsePtr & response = nullptr, bool finalize = false, UInt64 elapsed_microseconds = 0);

+    std::optional<String> tryGetSystemZnode(const std::string & path, const std::string & description);
+
     void initFeatureFlags();

-    void checkSessionDeadline() const;

     CurrentMetrics::Increment active_session_metric_increment{CurrentMetrics::ZooKeeperSession};
     std::shared_ptr<ZooKeeperLog> zk_log;

@@ -25,24 +25,24 @@ try
     Poco::Logger::root().setChannel(channel);
     Poco::Logger::root().setLevel("trace");

-    std::string hosts_arg = argv[1];
-    std::vector<std::string> hosts_strings;
-    splitInto<','>(hosts_strings, hosts_arg);
-    ZooKeeper::Nodes nodes;
-    nodes.reserve(hosts_strings.size());
-    for (size_t i = 0; i < hosts_strings.size(); ++i)
+    zkutil::ZooKeeperArgs args{argv[1]};
+    zkutil::ShuffleHosts nodes;
+    nodes.reserve(args.hosts.size());
+    for (size_t i = 0; i < args.hosts.size(); ++i)
     {
-        std::string host_string = hosts_strings[i];
-        bool secure = startsWith(host_string, "secure://");
-
-        if (secure)
+        zkutil::ShuffleHost node;
+        std::string host_string = args.hosts[i];
+        node.secure = startsWith(host_string, "secure://");
+
+        if (node.secure)
             host_string.erase(0, strlen("secure://"));

-        nodes.emplace_back(ZooKeeper::Node{Poco::Net::SocketAddress{host_string}, static_cast<UInt8>(i) , secure});
+        node.host = host_string;
+        node.original_index = i;
+
+        nodes.emplace_back(node);
     }

-    zkutil::ZooKeeperArgs args;
     ZooKeeper zk(nodes, args, nullptr);

     Poco::Event event(true);

@@ -808,7 +808,11 @@ void LogEntryStorage::startCommitLogsPrefetch(uint64_t last_committed_index) const

     for (; current_index <= max_index_for_prefetch; ++current_index)
     {
-        const auto & [changelog_description, position, size] = logs_location.at(current_index);
+        auto location_it = logs_location.find(current_index);
+        if (location_it == logs_location.end())
+            throw Exception(ErrorCodes::LOGICAL_ERROR, "Location of log entry with index {} is missing", current_index);
+
+        const auto & [changelog_description, position, size] = location_it->second;
         if (total_size == 0)
             current_file_info = &file_infos.emplace_back(changelog_description, position, /* count */ 1);
         else if (total_size + size > commit_logs_cache.size_threshold)
@@ -1416,7 +1420,11 @@ LogEntriesPtr LogEntryStorage::getLogEntriesBetween(uint64_t start, uint64_t end
         }
         else
         {
-            const auto & log_location = logs_location.at(i);
+            auto location_it = logs_location.find(i);
+            if (location_it == logs_location.end())
+                throw Exception(ErrorCodes::LOGICAL_ERROR, "Location of log entry with index {} is missing", i);
+
+            const auto & log_location = location_it->second;

             if (!read_info)
                 set_new_file(log_location);
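The two Changelog hunks above replace logs_location.at(index) with an explicit find, so a missing index surfaces as a descriptive logical error rather than a bare std::out_of_range. The same pattern in a generic, standalone form:

#include <cstdint>
#include <stdexcept>
#include <string>
#include <unordered_map>

struct LogLocation { std::string file; size_t position = 0; size_t size = 0; };

// Same idea as the patch: fail with a message that names the missing index
// instead of letting unordered_map::at throw an opaque std::out_of_range.
const LogLocation & locationOrThrow(const std::unordered_map<uint64_t, LogLocation> & logs_location, uint64_t index)
{
    auto it = logs_location.find(index);
    if (it == logs_location.end())
        throw std::logic_error("Location of log entry with index " + std::to_string(index) + " is missing");
    return it->second;
}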
@@ -7,11 +7,12 @@
 #include <mutex>
 #include <string>
 #include <Coordination/KeeperLogStore.h>
+#include <Coordination/KeeperSnapshotManagerS3.h>
 #include <Coordination/KeeperStateMachine.h>
 #include <Coordination/KeeperStateManager.h>
-#include <Coordination/KeeperSnapshotManagerS3.h>
 #include <Coordination/LoggerWrapper.h>
 #include <Coordination/WriteBufferFromNuraftBuffer.h>
+#include <Disks/DiskLocal.h>
 #include <IO/ReadHelpers.h>
 #include <IO/WriteHelpers.h>
 #include <boost/algorithm/string.hpp>
@@ -27,7 +28,7 @@
 #include <Common/LockMemoryExceptionInThread.h>
 #include <Common/Stopwatch.h>
 #include <Common/getMultipleKeysFromConfig.h>
-#include <Disks/DiskLocal.h>
+#include <Common/getNumberOfPhysicalCPUCores.h>

 #pragma clang diagnostic ignored "-Wdeprecated-declarations"
 #include <fmt/chrono.h>
@@ -365,6 +366,8 @@ void KeeperServer::launchRaftServer(const Poco::Util::AbstractConfiguration & co
         LockMemoryExceptionInThread::removeUniqueLock();
     };

+    asio_opts.thread_pool_size_ = getNumberOfPhysicalCPUCores();
+
     if (state_manager->isSecure())
     {
 #if USE_SSL

@@ -534,6 +534,10 @@ bool KeeperStorage::UncommittedState::hasACL(int64_t session_id, bool is_local,
     if (is_local)
         return check_auth(storage.session_and_auth[session_id]);

+    /// we want to close the session and with that we will remove all the auth related to the session
+    if (closed_sessions.contains(session_id))
+        return false;
+
     if (check_auth(storage.session_and_auth[session_id]))
         return true;

@@ -559,6 +563,10 @@ void KeeperStorage::UncommittedState::addDelta(Delta new_delta)
         auto & uncommitted_auth = session_and_auth[auth_delta->session_id];
         uncommitted_auth.emplace_back(&auth_delta->auth_id);
     }
+    else if (const auto * close_session_delta = std::get_if<CloseSessionDelta>(&added_delta.operation))
+    {
+        closed_sessions.insert(close_session_delta->session_id);
+    }
 }

 void KeeperStorage::UncommittedState::addDeltas(std::vector<Delta> new_deltas)
@@ -1013,9 +1021,11 @@ struct KeeperStorageHeartbeatRequestProcessor final : public KeeperStorageReques
 {
     using KeeperStorageRequestProcessor::KeeperStorageRequestProcessor;
     Coordination::ZooKeeperResponsePtr
-    process(KeeperStorage & /* storage */, int64_t /* zxid */) const override
+    process(KeeperStorage & storage, int64_t zxid) const override
     {
-        return zk_request->makeResponse();
+        Coordination::ZooKeeperResponsePtr response_ptr = zk_request->makeResponse();
+        response_ptr->error = storage.commit(zxid);
+        return response_ptr;
     }
 };

@@ -2377,15 +2387,13 @@ void KeeperStorage::preprocessRequest(

             ephemerals.erase(session_ephemerals);
         }
-        new_deltas.emplace_back(transaction.zxid, CloseSessionDelta{session_id});
-        uncommitted_state.closed_sessions.insert(session_id);

+        new_deltas.emplace_back(transaction.zxid, CloseSessionDelta{session_id});
         new_digest = calculateNodesDigest(new_digest, new_deltas);
         return;
     }

-    if ((check_acl && !request_processor->checkAuth(*this, session_id, false)) ||
-        uncommitted_state.closed_sessions.contains(session_id)) // Is session closed but not committed yet
+    if (check_acl && !request_processor->checkAuth(*this, session_id, false))
     {
         uncommitted_state.deltas.emplace_back(new_last_zxid, Coordination::Error::ZNOAUTH);
         return;
@@ -2442,8 +2450,6 @@ KeeperStorage::ResponsesForSessions KeeperStorage::processRequest(
         }
     }

-    uncommitted_state.commit(zxid);
-
     clearDeadWatches(session_id);
     auto auth_it = session_and_auth.find(session_id);
     if (auth_it != session_and_auth.end())
@@ -2488,7 +2494,6 @@ KeeperStorage::ResponsesForSessions KeeperStorage::processRequest(
     else
     {
         response = request_processor->process(*this, zxid);
-        uncommitted_state.commit(zxid);
     }

     /// Watches for this requests are added to the watches lists
@@ -2528,6 +2533,7 @@ KeeperStorage::ResponsesForSessions KeeperStorage::processRequest(
         results.push_back(ResponseForSession{session_id, response});
     }

+    uncommitted_state.commit(zxid);
     return results;
 }
@@ -2028,56 +2028,175 @@ TEST_P(CoordinationTest, TestPreprocessWhenCloseSessionIsPrecommitted)
     setSnapshotDirectory("./snapshots");
     ResponsesQueue queue(std::numeric_limits<size_t>::max());
     SnapshotsQueue snapshots_queue{1};
-    int64_t session_id = 1;
+    int64_t session_without_auth = 1;
+    int64_t session_with_auth = 2;
     size_t term = 0;

     auto state_machine = std::make_shared<KeeperStateMachine>(queue, snapshots_queue, keeper_context, nullptr);
     state_machine->init();

     auto & storage = state_machine->getStorageUnsafe();
     const auto & uncommitted_state = storage.uncommitted_state;

-    // Create first node for the session
-    String node_path_1 = "/node_1";
-    std::shared_ptr<ZooKeeperCreateRequest> create_req_1 = std::make_shared<ZooKeeperCreateRequest>();
-    create_req_1->path = node_path_1;
-    auto create_entry_1 = getLogEntryFromZKRequest(term, session_id, state_machine->getNextZxid(), create_req_1);
-
-    state_machine->pre_commit(1, create_entry_1->get_buf());
-    EXPECT_TRUE(uncommitted_state.nodes.contains(node_path_1));
-
-    state_machine->commit(1, create_entry_1->get_buf());
-    EXPECT_TRUE(storage.container.contains(node_path_1));
-
-    // Close session
-    std::shared_ptr<ZooKeeperCloseRequest> close_req = std::make_shared<ZooKeeperCloseRequest>();
-    auto close_entry = getLogEntryFromZKRequest(term, session_id, state_machine->getNextZxid(), close_req);
-    // Pre-commit close session
-    state_machine->pre_commit(2, close_entry->get_buf());
-
-    // Try to create second node after close session is pre-committed
-    String node_path_2 = "/node_2";
-    std::shared_ptr<ZooKeeperCreateRequest> create_req_2 = std::make_shared<ZooKeeperCreateRequest>();
-    create_req_2->path = node_path_2;
-    auto create_entry_2 = getLogEntryFromZKRequest(term, session_id, state_machine->getNextZxid(), create_req_2);
-
-    // Pre-commit creating second node
-    state_machine->pre_commit(3, create_entry_2->get_buf());
-    // Second node wasn't created
-    EXPECT_FALSE(uncommitted_state.nodes.contains(node_path_2));
-
-    // Rollback pre-committed closing session
-    state_machine->rollback(3, create_entry_2->get_buf());
-    state_machine->rollback(2, close_entry->get_buf());
-
-    // Pre-commit creating second node
-    state_machine->pre_commit(2, create_entry_2->get_buf());
-    // Now second node was created
-    EXPECT_TRUE(uncommitted_state.nodes.contains(node_path_2));
-
-    state_machine->commit(2, create_entry_2->get_buf());
-    EXPECT_TRUE(storage.container.contains(node_path_1));
-    EXPECT_TRUE(storage.container.contains(node_path_2));
+    auto auth_req = std::make_shared<ZooKeeperAuthRequest>();
+    auth_req->scheme = "digest";
+    auth_req->data = "test_user:test_password";
+
+    // Add auth data to the session
+    auto auth_entry = getLogEntryFromZKRequest(term, session_with_auth, state_machine->getNextZxid(), auth_req);
+    state_machine->pre_commit(1, auth_entry->get_buf());
+    state_machine->commit(1, auth_entry->get_buf());
+
+    std::string node_without_acl = "/node_without_acl";
+    {
+        auto create_req = std::make_shared<ZooKeeperCreateRequest>();
+        create_req->path = node_without_acl;
+        create_req->data = "notmodified";
+        auto create_entry = getLogEntryFromZKRequest(term, session_with_auth, state_machine->getNextZxid(), create_req);
+        state_machine->pre_commit(2, create_entry->get_buf());
+        state_machine->commit(2, create_entry->get_buf());
+        ASSERT_TRUE(storage.container.contains(node_without_acl));
+    }
+
+    std::string node_with_acl = "/node_with_acl";
+    {
+        auto create_req = std::make_shared<ZooKeeperCreateRequest>();
+        create_req->path = node_with_acl;
+        create_req->data = "notmodified";
+        create_req->acls = {{.permissions = ACL::All, .scheme = "auth", .id = ""}};
+        auto create_entry = getLogEntryFromZKRequest(term, session_with_auth, state_machine->getNextZxid(), create_req);
+        state_machine->pre_commit(3, create_entry->get_buf());
+        state_machine->commit(3, create_entry->get_buf());
+        ASSERT_TRUE(storage.container.contains(node_with_acl));
+    }
+
+    auto set_req_with_acl = std::make_shared<ZooKeeperSetRequest>();
+    set_req_with_acl->path = node_with_acl;
+    set_req_with_acl->data = "modified";
+
+    auto set_req_without_acl = std::make_shared<ZooKeeperSetRequest>();
+    set_req_without_acl->path = node_without_acl;
+    set_req_without_acl->data = "modified";
+
+    const auto reset_node_value
+        = [&](const auto & path) { storage.container.updateValue(path, [](auto & node) { node.setData("notmodified"); }); };
+
+    auto close_req = std::make_shared<ZooKeeperCloseRequest>();
+
+    {
+        SCOPED_TRACE("Session with Auth");
+
+        // test we can modify both nodes
+        auto set_entry = getLogEntryFromZKRequest(term, session_with_auth, state_machine->getNextZxid(), set_req_with_acl);
+        state_machine->pre_commit(5, set_entry->get_buf());
+        state_machine->commit(5, set_entry->get_buf());
+        ASSERT_TRUE(storage.container.find(node_with_acl)->value.getData() == "modified");
+        reset_node_value(node_with_acl);
+
+        set_entry = getLogEntryFromZKRequest(term, session_with_auth, state_machine->getNextZxid(), set_req_without_acl);
+        state_machine->pre_commit(6, set_entry->get_buf());
+        state_machine->commit(6, set_entry->get_buf());
+        ASSERT_TRUE(storage.container.find(node_without_acl)->value.getData() == "modified");
+        reset_node_value(node_without_acl);
+
+        auto close_entry = getLogEntryFromZKRequest(term, session_with_auth, state_machine->getNextZxid(), close_req);
+
+        // Pre-commit close session
+        state_machine->pre_commit(7, close_entry->get_buf());
+
+        /// will be rejected because we don't have required auth
+        auto set_entry_with_acl = getLogEntryFromZKRequest(term, session_with_auth, state_machine->getNextZxid(), set_req_with_acl);
+        state_machine->pre_commit(8, set_entry_with_acl->get_buf());
+
+        /// will be accepted because no ACL
+        auto set_entry_without_acl = getLogEntryFromZKRequest(term, session_with_auth, state_machine->getNextZxid(), set_req_without_acl);
+        state_machine->pre_commit(9, set_entry_without_acl->get_buf());
+
+        ASSERT_TRUE(uncommitted_state.getNode(node_with_acl)->getData() == "notmodified");
+        ASSERT_TRUE(uncommitted_state.getNode(node_without_acl)->getData() == "modified");
+
+        state_machine->rollback(9, set_entry_without_acl->get_buf());
+        state_machine->rollback(8, set_entry_with_acl->get_buf());
+
+        // let's commit close and verify we get same outcome
+        state_machine->commit(7, close_entry->get_buf());
+
+        /// will be rejected because we don't have required auth
+        set_entry_with_acl = getLogEntryFromZKRequest(term, session_with_auth, state_machine->getNextZxid(), set_req_with_acl);
+        state_machine->pre_commit(8, set_entry_with_acl->get_buf());
+
+        /// will be accepted because no ACL
+        set_entry_without_acl = getLogEntryFromZKRequest(term, session_with_auth, state_machine->getNextZxid(), set_req_without_acl);
+        state_machine->pre_commit(9, set_entry_without_acl->get_buf());
+
+        ASSERT_TRUE(uncommitted_state.getNode(node_with_acl)->getData() == "notmodified");
+        ASSERT_TRUE(uncommitted_state.getNode(node_without_acl)->getData() == "modified");
+
+        state_machine->commit(8, set_entry_with_acl->get_buf());
+        state_machine->commit(9, set_entry_without_acl->get_buf());
+
+        ASSERT_TRUE(storage.container.find(node_with_acl)->value.getData() == "notmodified");
+        ASSERT_TRUE(storage.container.find(node_without_acl)->value.getData() == "modified");
+
+        reset_node_value(node_without_acl);
+    }
+
+    {
+        SCOPED_TRACE("Session without Auth");
+
+        // test we can modify only node without acl
+        auto set_entry = getLogEntryFromZKRequest(term, session_without_auth, state_machine->getNextZxid(), set_req_with_acl);
+        state_machine->pre_commit(10, set_entry->get_buf());
+        state_machine->commit(10, set_entry->get_buf());
+        ASSERT_TRUE(storage.container.find(node_with_acl)->value.getData() == "notmodified");
+
+        set_entry = getLogEntryFromZKRequest(term, session_without_auth, state_machine->getNextZxid(), set_req_without_acl);
+        state_machine->pre_commit(11, set_entry->get_buf());
+        state_machine->commit(11, set_entry->get_buf());
+        ASSERT_TRUE(storage.container.find(node_without_acl)->value.getData() == "modified");
+        reset_node_value(node_without_acl);
+
+        auto close_entry = getLogEntryFromZKRequest(term, session_without_auth, state_machine->getNextZxid(), close_req);
+
+        // Pre-commit close session
+        state_machine->pre_commit(12, close_entry->get_buf());
+
+        /// will be rejected because we don't have required auth
+        auto set_entry_with_acl = getLogEntryFromZKRequest(term, session_without_auth, state_machine->getNextZxid(), set_req_with_acl);
+        state_machine->pre_commit(13, set_entry_with_acl->get_buf());
+
+        /// will be accepted because no ACL
+        auto set_entry_without_acl = getLogEntryFromZKRequest(term, session_without_auth, state_machine->getNextZxid(), set_req_without_acl);
+        state_machine->pre_commit(14, set_entry_without_acl->get_buf());
+
+        ASSERT_TRUE(uncommitted_state.getNode(node_with_acl)->getData() == "notmodified");
+        ASSERT_TRUE(uncommitted_state.getNode(node_without_acl)->getData() == "modified");
+
+        state_machine->rollback(14, set_entry_without_acl->get_buf());
+        state_machine->rollback(13, set_entry_with_acl->get_buf());
+
+        // let's commit close and verify we get same outcome
+        state_machine->commit(12, close_entry->get_buf());
+
+        /// will be rejected because we don't have required auth
+        set_entry_with_acl = getLogEntryFromZKRequest(term, session_without_auth, state_machine->getNextZxid(), set_req_with_acl);
+        state_machine->pre_commit(13, set_entry_with_acl->get_buf());
+
+        /// will be accepted because no ACL
+        set_entry_without_acl = getLogEntryFromZKRequest(term, session_without_auth, state_machine->getNextZxid(), set_req_without_acl);
+        state_machine->pre_commit(14, set_entry_without_acl->get_buf());
+
+        ASSERT_TRUE(uncommitted_state.getNode(node_with_acl)->getData() == "notmodified");
+        ASSERT_TRUE(uncommitted_state.getNode(node_without_acl)->getData() == "modified");
+
+        state_machine->commit(13, set_entry_with_acl->get_buf());
+        state_machine->commit(14, set_entry_without_acl->get_buf());
+
+        ASSERT_TRUE(storage.container.find(node_with_acl)->value.getData() == "notmodified");
+        ASSERT_TRUE(storage.container.find(node_without_acl)->value.getData() == "modified");
+
+        reset_node_value(node_without_acl);
+    }
 }
TEST_P(CoordinationTest, TestSetACLWithAuthSchemeForAclWhenAuthIsPrecommitted)
|
TEST_P(CoordinationTest, TestSetACLWithAuthSchemeForAclWhenAuthIsPrecommitted)
|
||||||
|
@ -142,6 +142,7 @@ void Settings::applyCompatibilitySetting(const String & compatibility_value)
|
|||||||
return;
|
return;
|
||||||
|
|
||||||
ClickHouseVersion version(compatibility_value);
|
ClickHouseVersion version(compatibility_value);
|
||||||
|
const auto & settings_changes_history = getSettingsChangesHistory();
|
||||||
/// Iterate through ClickHouse version in descending order and apply reversed
|
/// Iterate through ClickHouse version in descending order and apply reversed
|
||||||
/// changes for each version that is higher that version from compatibility setting
|
/// changes for each version that is higher that version from compatibility setting
|
||||||
for (auto it = settings_changes_history.rbegin(); it != settings_changes_history.rend(); ++it)
|
for (auto it = settings_changes_history.rbegin(); it != settings_changes_history.rend(); ++it)
|
||||||
|
@ -470,7 +470,7 @@ class IColumn;
|
|||||||
M(UInt64, max_rows_in_join, 0, "Maximum size of the hash table for JOIN (in number of rows).", 0) \
|
M(UInt64, max_rows_in_join, 0, "Maximum size of the hash table for JOIN (in number of rows).", 0) \
|
||||||
M(UInt64, max_bytes_in_join, 0, "Maximum size of the hash table for JOIN (in number of bytes in memory).", 0) \
|
M(UInt64, max_bytes_in_join, 0, "Maximum size of the hash table for JOIN (in number of bytes in memory).", 0) \
|
||||||
M(OverflowMode, join_overflow_mode, OverflowMode::THROW, "What to do when the limit is exceeded.", 0) \
|
M(OverflowMode, join_overflow_mode, OverflowMode::THROW, "What to do when the limit is exceeded.", 0) \
|
||||||
M(Bool, join_any_take_last_row, false, "When disabled (default) ANY JOIN will take the first found row for a key. When enabled, it will take the last row seen if there are multiple rows for the same key.", IMPORTANT) \
|
M(Bool, join_any_take_last_row, false, "When disabled (default) ANY JOIN will take the first found row for a key. When enabled, it will take the last row seen if there are multiple rows for the same key. Can be applied only to hash join and storage join.", IMPORTANT) \
|
||||||
M(JoinAlgorithm, join_algorithm, JoinAlgorithm::DEFAULT, "Specify join algorithm.", 0) \
|
M(JoinAlgorithm, join_algorithm, JoinAlgorithm::DEFAULT, "Specify join algorithm.", 0) \
|
||||||
M(UInt64, cross_join_min_rows_to_compress, 10000000, "Minimal count of rows to compress block in CROSS JOIN. Zero value means - disable this threshold. This block is compressed when any of the two thresholds (by rows or by bytes) are reached.", 0) \
|
M(UInt64, cross_join_min_rows_to_compress, 10000000, "Minimal count of rows to compress block in CROSS JOIN. Zero value means - disable this threshold. This block is compressed when any of the two thresholds (by rows or by bytes) are reached.", 0) \
|
||||||
M(UInt64, cross_join_min_bytes_to_compress, 1_GiB, "Minimal size of block to compress in CROSS JOIN. Zero value means - disable this threshold. This block is compressed when any of the two thresholds (by rows or by bytes) are reached.", 0) \
|
M(UInt64, cross_join_min_bytes_to_compress, 1_GiB, "Minimal size of block to compress in CROSS JOIN. Zero value means - disable this threshold. This block is compressed when any of the two thresholds (by rows or by bytes) are reached.", 0) \
|
||||||
@ -1092,6 +1092,7 @@ class IColumn;
|
|||||||
M(Bool, input_format_json_defaults_for_missing_elements_in_named_tuple, true, "Insert default value in named tuple element if it's missing in json object", 0) \
|
M(Bool, input_format_json_defaults_for_missing_elements_in_named_tuple, true, "Insert default value in named tuple element if it's missing in json object", 0) \
|
||||||
M(Bool, input_format_json_throw_on_bad_escape_sequence, true, "Throw an exception if JSON string contains bad escape sequence in JSON input formats. If disabled, bad escape sequences will remain as is in the data", 0) \
|
M(Bool, input_format_json_throw_on_bad_escape_sequence, true, "Throw an exception if JSON string contains bad escape sequence in JSON input formats. If disabled, bad escape sequences will remain as is in the data", 0) \
|
||||||
M(Bool, input_format_json_ignore_unnecessary_fields, true, "Ignore unnecessary fields and not parse them. Enabling this may not throw exceptions on json strings of invalid format or with duplicated fields", 0) \
|
M(Bool, input_format_json_ignore_unnecessary_fields, true, "Ignore unnecessary fields and not parse them. Enabling this may not throw exceptions on json strings of invalid format or with duplicated fields", 0) \
|
||||||
|
M(Bool, input_format_json_ignore_key_case, false, "Ignore json key case while reading json fields from string", 0) \
|
||||||
M(Bool, input_format_try_infer_integers, true, "Try to infer integers instead of floats while schema inference in text formats", 0) \
|
M(Bool, input_format_try_infer_integers, true, "Try to infer integers instead of floats while schema inference in text formats", 0) \
|
||||||
M(Bool, input_format_try_infer_dates, true, "Try to infer dates from string fields while schema inference in text formats", 0) \
|
M(Bool, input_format_try_infer_dates, true, "Try to infer dates from string fields while schema inference in text formats", 0) \
|
||||||
M(Bool, input_format_try_infer_datetimes, true, "Try to infer datetimes from string fields while schema inference in text formats", 0) \
|
M(Bool, input_format_try_infer_datetimes, true, "Try to infer datetimes from string fields while schema inference in text formats", 0) \
|
||||||
|
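The settings above are declared through an X-macro list: each `M(type, name, default, description, flags)` row is expanded by whichever macro the consumer passes in. A rough, hypothetical sketch of that pattern (plain C++ types and invented names, not ClickHouse's actual Settings machinery):

```cpp
// Hypothetical illustration of the M(...) X-macro pattern; names and types are invented.
#include <cstdint>

#define APPLY_FOR_EXAMPLE_SETTINGS(M) \
    M(uint64_t, max_rows_in_join, 0, "Maximum size of the hash table for JOIN (in number of rows).", 0) \
    M(bool, join_any_take_last_row, false, "Take the last row seen for ANY JOIN instead of the first.", 0)

struct ExampleSettings
{
    /// Expand every row into a typed member with its default value.
#define DECLARE_SETTING(TYPE, NAME, DEFAULT, DESCRIPTION, FLAGS) TYPE NAME = DEFAULT;
    APPLY_FOR_EXAMPLE_SETTINGS(DECLARE_SETTING)
#undef DECLARE_SETTING
};

int main()
{
    ExampleSettings settings;
    settings.join_any_take_last_row = true; /// override a default, as a SET query would
    return settings.max_rows_in_join == 0 ? 0 : 1;
}
```

Re-expanding the same list with a different macro can generate documentation or serialization code, which is why the description and flags travel next to the default value.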
323
src/Core/SettingsChangesHistory.cpp
Normal file
@ -0,0 +1,323 @@
|
|||||||
|
#include <Core/SettingsChangesHistory.h>
|
||||||
|
#include <Core/Defines.h>
|
||||||
|
#include <IO/ReadBufferFromString.h>
|
||||||
|
#include <IO/ReadHelpers.h>
|
||||||
|
#include <boost/algorithm/string.hpp>
|
||||||
|
|
||||||
|
namespace DB
|
||||||
|
{
|
||||||
|
|
||||||
|
namespace ErrorCodes
|
||||||
|
{
|
||||||
|
extern const int BAD_ARGUMENTS;
|
||||||
|
extern const int LOGICAL_ERROR;
|
||||||
|
}
|
||||||
|
|
||||||
|
ClickHouseVersion::ClickHouseVersion(const String & version)
|
||||||
|
{
|
||||||
|
Strings split;
|
||||||
|
boost::split(split, version, [](char c){ return c == '.'; });
|
||||||
|
components.reserve(split.size());
|
||||||
|
if (split.empty())
|
||||||
|
throw Exception{ErrorCodes::BAD_ARGUMENTS, "Cannot parse ClickHouse version here: {}", version};
|
||||||
|
|
||||||
|
for (const auto & split_element : split)
|
||||||
|
{
|
||||||
|
size_t component;
|
||||||
|
ReadBufferFromString buf(split_element);
|
||||||
|
if (!tryReadIntText(component, buf) || !buf.eof())
|
||||||
|
throw Exception{ErrorCodes::BAD_ARGUMENTS, "Cannot parse ClickHouse version here: {}", version};
|
||||||
|
components.push_back(component);
|
||||||
|
}
|
||||||
|
}
|
||||||
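The constructor above accepts a component only when tryReadIntText succeeds and the buffer is fully consumed (buf.eof()), so an input like "24.x6" is rejected instead of being silently truncated. A standard-library-only sketch of the same validate-by-full-consumption idea (illustrative, not the ClickHouse code path):

```cpp
// Sketch of "parse and require full consumption": a component is valid only if
// every character was consumed as digits. Names here are illustrative.
#include <cassert>
#include <charconv>
#include <string_view>

static bool parseComponent(std::string_view s, size_t & out)
{
    auto [ptr, ec] = std::from_chars(s.data(), s.data() + s.size(), out);
    return ec == std::errc() && ptr == s.data() + s.size(); /// analogous to tryReadIntText + buf.eof()
}

int main()
{
    size_t value = 0;
    assert(parseComponent("24", value) && value == 24);
    assert(!parseComponent("2x4", value)); /// trailing garbage rejected
    assert(!parseComponent("", value));    /// empty component rejected
    return 0;
}
```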
|
|
||||||
|
ClickHouseVersion::ClickHouseVersion(const char * version)
|
||||||
|
: ClickHouseVersion(String(version))
|
||||||
|
{
|
||||||
|
}
|
||||||
|
|
||||||
|
String ClickHouseVersion::toString() const
|
||||||
|
{
|
||||||
|
String version = std::to_string(components[0]);
|
||||||
|
for (size_t i = 1; i < components.size(); ++i)
|
||||||
|
version += "." + std::to_string(components[i]);
|
||||||
|
|
||||||
|
return version;
|
||||||
|
}
|
||||||
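Because the version is kept as a vector of numeric components, ordering is component-wise and numeric rather than textual. A self-contained sketch that mirrors the parsing above (it does not call the real class) to show why that matters:

```cpp
// Self-contained sketch: "24.10" must sort after "24.9", which a plain string comparison gets wrong.
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

static std::vector<size_t> parseVersion(const std::string & version)
{
    std::vector<size_t> components;
    std::stringstream ss(version);
    std::string part;
    while (std::getline(ss, part, '.'))
        components.push_back(std::stoul(part)); /// throws on non-numeric parts
    return components;
}

int main()
{
    assert(parseVersion("24.10") > parseVersion("24.9"));  /// numeric, component-wise
    assert(std::string("24.10") < std::string("24.9"));    /// lexicographic string order is wrong here
    assert(parseVersion("24.6") < parseVersion("24.6.1")); /// a shorter prefix sorts first
    return 0;
}
```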
|
|
||||||
|
// clang-format off
|
||||||
|
/// History of settings changes that controls some backward incompatible changes
|
||||||
|
/// across all ClickHouse versions. It maps ClickHouse version to settings changes that were done
|
||||||
|
/// in this version. This history contains both changes to existing settings and newly added settings.
|
||||||
|
/// Settings changes is a vector of structs
|
||||||
|
/// {setting_name, previous_value, new_value, reason}.
|
||||||
|
/// For newly added setting choose the most appropriate previous_value (for example, if new setting
|
||||||
|
/// controls new feature and it's 'true' by default, use 'false' as previous_value).
|
||||||
|
/// It's used to implement `compatibility` setting (see https://github.com/ClickHouse/ClickHouse/issues/35972)
|
||||||
|
/// Note: please check if the key already exists to prevent duplicate entries.
|
||||||
|
static std::initializer_list<std::pair<ClickHouseVersion, SettingsChangesHistory::SettingsChanges>> settings_changes_history_initializer =
|
||||||
|
{
|
||||||
|
{"24.7", {{"output_format_parquet_write_page_index", false, true, "Add a possibility to write page index into parquet files."},
|
||||||
|
}},
|
||||||
|
{"24.6", {{"materialize_skip_indexes_on_insert", true, true, "Added new setting to allow to disable materialization of skip indexes on insert"},
|
||||||
|
{"materialize_statistics_on_insert", true, true, "Added new setting to allow to disable materialization of statistics on insert"},
|
||||||
|
{"input_format_parquet_use_native_reader", false, false, "When reading Parquet files, to use native reader instead of arrow reader."},
|
||||||
|
{"hdfs_throw_on_zero_files_match", false, false, "Allow to throw an error when ListObjects request cannot match any files in HDFS engine instead of empty query result"},
|
||||||
|
{"azure_throw_on_zero_files_match", false, false, "Allow to throw an error when ListObjects request cannot match any files in AzureBlobStorage engine instead of empty query result"},
|
||||||
|
{"s3_validate_request_settings", true, true, "Allow to disable S3 request settings validation"},
|
||||||
|
{"allow_experimental_full_text_index", false, false, "Enable experimental full-text index"},
|
||||||
|
{"azure_skip_empty_files", false, false, "Allow to skip empty files in azure table engine"},
|
||||||
|
{"hdfs_ignore_file_doesnt_exist", false, false, "Allow to return 0 rows when the requested files don't exist instead of throwing an exception in HDFS table engine"},
|
||||||
|
{"azure_ignore_file_doesnt_exist", false, false, "Allow to return 0 rows when the requested files don't exist instead of throwing an exception in AzureBlobStorage table engine"},
|
||||||
|
{"s3_ignore_file_doesnt_exist", false, false, "Allow to return 0 rows when the requested files don't exist instead of throwing an exception in S3 table engine"},
|
||||||
|
{"s3_max_part_number", 10000, 10000, "Maximum part number number for s3 upload part"},
|
||||||
|
{"s3_max_single_operation_copy_size", 32 * 1024 * 1024, 32 * 1024 * 1024, "Maximum size for a single copy operation in s3"},
|
||||||
|
{"input_format_parquet_max_block_size", 8192, DEFAULT_BLOCK_SIZE, "Increase block size for parquet reader."},
|
||||||
|
{"input_format_parquet_prefer_block_bytes", 0, DEFAULT_BLOCK_SIZE * 256, "Average block bytes output by parquet reader."},
|
||||||
|
{"enable_blob_storage_log", true, true, "Write information about blob storage operations to system.blob_storage_log table"},
|
||||||
|
{"allow_deprecated_snowflake_conversion_functions", true, false, "Disabled deprecated functions snowflakeToDateTime[64] and dateTime[64]ToSnowflake."},
|
||||||
|
{"allow_statistic_optimize", false, false, "Old setting which popped up here being renamed."},
|
||||||
|
{"allow_experimental_statistic", false, false, "Old setting which popped up here being renamed."},
|
||||||
|
{"allow_statistics_optimize", false, false, "The setting was renamed. The previous name is `allow_statistic_optimize`."},
|
||||||
|
{"allow_experimental_statistics", false, false, "The setting was renamed. The previous name is `allow_experimental_statistic`."},
|
||||||
|
{"enable_vertical_final", false, true, "Enable vertical final by default again after fixing bug"},
|
||||||
|
{"parallel_replicas_custom_key_range_lower", 0, 0, "Add settings to control the range filter when using parallel replicas with dynamic shards"},
|
||||||
|
{"parallel_replicas_custom_key_range_upper", 0, 0, "Add settings to control the range filter when using parallel replicas with dynamic shards. A value of 0 disables the upper limit"},
|
||||||
|
{"output_format_pretty_display_footer_column_names", 0, 1, "Add a setting to display column names in the footer if there are many rows. Threshold value is controlled by output_format_pretty_display_footer_column_names_min_rows."},
|
||||||
|
{"output_format_pretty_display_footer_column_names_min_rows", 0, 50, "Add a setting to control the threshold value for setting output_format_pretty_display_footer_column_names_min_rows. Default 50."},
|
||||||
|
{"output_format_csv_serialize_tuple_into_separate_columns", true, true, "A new way of how interpret tuples in CSV format was added."},
|
||||||
|
{"input_format_csv_deserialize_separate_columns_into_tuple", true, true, "A new way of how interpret tuples in CSV format was added."},
|
||||||
|
{"input_format_csv_try_infer_strings_from_quoted_tuples", true, true, "A new way of how interpret tuples in CSV format was added."},
|
||||||
|
{"input_format_json_ignore_key_case", false, false, "Ignore json key case while read json field from string."},
|
||||||
|
}},
|
||||||
|
{"24.5", {{"allow_deprecated_error_prone_window_functions", true, false, "Allow usage of deprecated error prone window functions (neighbor, runningAccumulate, runningDifferenceStartingWithFirstValue, runningDifference)"},
|
||||||
|
{"allow_experimental_join_condition", false, false, "Support join with inequal conditions which involve columns from both left and right table. e.g. t1.y < t2.y."},
|
||||||
|
{"input_format_tsv_crlf_end_of_line", false, false, "Enables reading of CRLF line endings with TSV formats"},
|
||||||
|
{"output_format_parquet_use_custom_encoder", false, true, "Enable custom Parquet encoder."},
|
||||||
|
{"cross_join_min_rows_to_compress", 0, 10000000, "Minimal count of rows to compress block in CROSS JOIN. Zero value means - disable this threshold. This block is compressed when any of the two thresholds (by rows or by bytes) are reached."},
|
||||||
|
{"cross_join_min_bytes_to_compress", 0, 1_GiB, "Minimal size of block to compress in CROSS JOIN. Zero value means - disable this threshold. This block is compressed when any of the two thresholds (by rows or by bytes) are reached."},
|
||||||
|
{"http_max_chunk_size", 0, 0, "Internal limitation"},
|
||||||
|
{"prefer_external_sort_block_bytes", 0, DEFAULT_BLOCK_SIZE * 256, "Prefer maximum block bytes for external sort, reduce the memory usage during merging."},
|
||||||
|
{"input_format_force_null_for_omitted_fields", false, false, "Disable type-defaults for omitted fields when needed"},
|
||||||
|
{"cast_string_to_dynamic_use_inference", false, false, "Add setting to allow converting String to Dynamic through parsing"},
|
||||||
|
{"allow_experimental_dynamic_type", false, false, "Add new experimental Dynamic type"},
|
||||||
|
{"azure_max_blocks_in_multipart_upload", 50000, 50000, "Maximum number of blocks in multipart upload for Azure."},
|
||||||
|
}},
|
||||||
|
{"24.4", {{"input_format_json_throw_on_bad_escape_sequence", true, true, "Allow to save JSON strings with bad escape sequences"},
|
||||||
|
{"max_parsing_threads", 0, 0, "Add a separate setting to control number of threads in parallel parsing from files"},
|
||||||
|
{"ignore_drop_queries_probability", 0, 0, "Allow to ignore drop queries in server with specified probability for testing purposes"},
|
||||||
|
{"lightweight_deletes_sync", 2, 2, "The same as 'mutation_sync', but controls only execution of lightweight deletes"},
|
||||||
|
{"query_cache_system_table_handling", "save", "throw", "The query cache no longer caches results of queries against system tables"},
|
||||||
|
{"input_format_json_ignore_unnecessary_fields", false, true, "Ignore unnecessary fields and not parse them. Enabling this may not throw exceptions on json strings of invalid format or with duplicated fields"},
|
||||||
|
{"input_format_hive_text_allow_variable_number_of_columns", false, true, "Ignore extra columns in Hive Text input (if file has more columns than expected) and treat missing fields in Hive Text input as default values."},
|
||||||
|
{"allow_experimental_database_replicated", false, true, "Database engine Replicated is now in Beta stage"},
|
||||||
|
{"temporary_data_in_cache_reserve_space_wait_lock_timeout_milliseconds", (10 * 60 * 1000), (10 * 60 * 1000), "Wait time to lock cache for sapce reservation in temporary data in filesystem cache"},
|
||||||
|
{"optimize_rewrite_sum_if_to_count_if", false, true, "Only available for the analyzer, where it works correctly"},
|
||||||
|
{"azure_allow_parallel_part_upload", "true", "true", "Use multiple threads for azure multipart upload."},
|
||||||
|
{"max_recursive_cte_evaluation_depth", DBMS_RECURSIVE_CTE_MAX_EVALUATION_DEPTH, DBMS_RECURSIVE_CTE_MAX_EVALUATION_DEPTH, "Maximum limit on recursive CTE evaluation depth"},
|
||||||
|
{"query_plan_convert_outer_join_to_inner_join", false, true, "Allow to convert OUTER JOIN to INNER JOIN if filter after JOIN always filters default values"},
|
||||||
|
}},
|
||||||
|
{"24.3", {{"s3_connect_timeout_ms", 1000, 1000, "Introduce new dedicated setting for s3 connection timeout"},
|
||||||
|
{"allow_experimental_shared_merge_tree", false, true, "The setting is obsolete"},
|
||||||
|
{"use_page_cache_for_disks_without_file_cache", false, false, "Added userspace page cache"},
|
||||||
|
{"read_from_page_cache_if_exists_otherwise_bypass_cache", false, false, "Added userspace page cache"},
|
||||||
|
{"page_cache_inject_eviction", false, false, "Added userspace page cache"},
|
||||||
|
{"default_table_engine", "None", "MergeTree", "Set default table engine to MergeTree for better usability"},
|
||||||
|
{"input_format_json_use_string_type_for_ambiguous_paths_in_named_tuples_inference_from_objects", false, false, "Allow to use String type for ambiguous paths during named tuple inference from JSON objects"},
|
||||||
|
{"traverse_shadow_remote_data_paths", false, false, "Traverse shadow directory when query system.remote_data_paths."},
|
||||||
|
{"throw_if_deduplication_in_dependent_materialized_views_enabled_with_async_insert", false, true, "Deduplication is dependent materialized view cannot work together with async inserts."},
|
||||||
|
{"parallel_replicas_allow_in_with_subquery", false, true, "If true, subquery for IN will be executed on every follower replica"},
|
||||||
|
{"log_processors_profiles", false, true, "Enable by default"},
|
||||||
|
{"function_locate_has_mysql_compatible_argument_order", false, true, "Increase compatibility with MySQL's locate function."},
|
||||||
|
{"allow_suspicious_primary_key", true, false, "Forbid suspicious PRIMARY KEY/ORDER BY for MergeTree (i.e. SimpleAggregateFunction)"},
|
||||||
|
{"filesystem_cache_reserve_space_wait_lock_timeout_milliseconds", 1000, 1000, "Wait time to lock cache for sapce reservation in filesystem cache"},
|
||||||
|
{"max_parser_backtracks", 0, 1000000, "Limiting the complexity of parsing"},
|
||||||
|
{"analyzer_compatibility_join_using_top_level_identifier", false, false, "Force to resolve identifier in JOIN USING from projection"},
|
||||||
|
{"distributed_insert_skip_read_only_replicas", false, false, "If true, INSERT into Distributed will skip read-only replicas"},
|
||||||
|
{"keeper_max_retries", 10, 10, "Max retries for general keeper operations"},
|
||||||
|
{"keeper_retry_initial_backoff_ms", 100, 100, "Initial backoff timeout for general keeper operations"},
|
||||||
|
{"keeper_retry_max_backoff_ms", 5000, 5000, "Max backoff timeout for general keeper operations"},
|
||||||
|
{"s3queue_allow_experimental_sharded_mode", false, false, "Enable experimental sharded mode of S3Queue table engine. It is experimental because it will be rewritten"},
|
||||||
|
{"allow_experimental_analyzer", false, true, "Enable analyzer and planner by default."},
|
||||||
|
{"merge_tree_read_split_ranges_into_intersecting_and_non_intersecting_injection_probability", 0.0, 0.0, "For testing of `PartsSplitter` - split read ranges into intersecting and non intersecting every time you read from MergeTree with the specified probability."},
|
||||||
|
{"allow_get_client_http_header", false, false, "Introduced a new function."},
|
||||||
|
{"output_format_pretty_row_numbers", false, true, "It is better for usability."},
|
||||||
|
{"output_format_pretty_max_value_width_apply_for_single_value", true, false, "Single values in Pretty formats won't be cut."},
|
||||||
|
{"output_format_parquet_string_as_string", false, true, "ClickHouse allows arbitrary binary data in the String data type, which is typically UTF-8. Parquet/ORC/Arrow Strings only support UTF-8. That's why you can choose which Arrow's data type to use for the ClickHouse String data type - String or Binary. While Binary would be more correct and compatible, using String by default will correspond to user expectations in most cases."},
|
||||||
|
{"output_format_orc_string_as_string", false, true, "ClickHouse allows arbitrary binary data in the String data type, which is typically UTF-8. Parquet/ORC/Arrow Strings only support UTF-8. That's why you can choose which Arrow's data type to use for the ClickHouse String data type - String or Binary. While Binary would be more correct and compatible, using String by default will correspond to user expectations in most cases."},
|
||||||
|
{"output_format_arrow_string_as_string", false, true, "ClickHouse allows arbitrary binary data in the String data type, which is typically UTF-8. Parquet/ORC/Arrow Strings only support UTF-8. That's why you can choose which Arrow's data type to use for the ClickHouse String data type - String or Binary. While Binary would be more correct and compatible, using String by default will correspond to user expectations in most cases."},
|
||||||
|
{"output_format_parquet_compression_method", "lz4", "zstd", "Parquet/ORC/Arrow support many compression methods, including lz4 and zstd. ClickHouse supports each and every compression method. Some inferior tools, such as 'duckdb', lack support for the faster `lz4` compression method, that's why we set zstd by default."},
|
||||||
|
{"output_format_orc_compression_method", "lz4", "zstd", "Parquet/ORC/Arrow support many compression methods, including lz4 and zstd. ClickHouse supports each and every compression method. Some inferior tools, such as 'duckdb', lack support for the faster `lz4` compression method, that's why we set zstd by default."},
|
||||||
|
{"output_format_pretty_highlight_digit_groups", false, true, "If enabled and if output is a terminal, highlight every digit corresponding to the number of thousands, millions, etc. with underline."},
|
||||||
|
{"geo_distance_returns_float64_on_float64_arguments", false, true, "Increase the default precision."},
|
||||||
|
{"azure_max_inflight_parts_for_one_file", 20, 20, "The maximum number of a concurrent loaded parts in multipart upload request. 0 means unlimited."},
|
||||||
|
{"azure_strict_upload_part_size", 0, 0, "The exact size of part to upload during multipart upload to Azure blob storage."},
|
||||||
|
{"azure_min_upload_part_size", 16*1024*1024, 16*1024*1024, "The minimum size of part to upload during multipart upload to Azure blob storage."},
|
||||||
|
{"azure_max_upload_part_size", 5ull*1024*1024*1024, 5ull*1024*1024*1024, "The maximum size of part to upload during multipart upload to Azure blob storage."},
|
||||||
|
{"azure_upload_part_size_multiply_factor", 2, 2, "Multiply azure_min_upload_part_size by this factor each time azure_multiply_parts_count_threshold parts were uploaded from a single write to Azure blob storage."},
|
||||||
|
{"azure_upload_part_size_multiply_parts_count_threshold", 500, 500, "Each time this number of parts was uploaded to Azure blob storage, azure_min_upload_part_size is multiplied by azure_upload_part_size_multiply_factor."},
|
||||||
|
{"output_format_csv_serialize_tuple_into_separate_columns", true, true, "A new way of how interpret tuples in CSV format was added."},
|
||||||
|
{"input_format_csv_deserialize_separate_columns_into_tuple", true, true, "A new way of how interpret tuples in CSV format was added."},
|
||||||
|
{"input_format_csv_try_infer_strings_from_quoted_tuples", true, true, "A new way of how interpret tuples in CSV format was added."},
|
||||||
|
}},
|
||||||
|
{"24.2", {{"allow_suspicious_variant_types", true, false, "Don't allow creating Variant type with suspicious variants by default"},
|
||||||
|
{"validate_experimental_and_suspicious_types_inside_nested_types", false, true, "Validate usage of experimental and suspicious types inside nested types"},
|
||||||
|
{"output_format_values_escape_quote_with_quote", false, false, "If true escape ' with '', otherwise quoted with \\'"},
|
||||||
|
{"output_format_pretty_single_large_number_tip_threshold", 0, 1'000'000, "Print a readable number tip on the right side of the table if the block consists of a single number which exceeds this value (except 0)"},
|
||||||
|
{"input_format_try_infer_exponent_floats", true, false, "Don't infer floats in exponential notation by default"},
|
||||||
|
{"query_plan_optimize_prewhere", true, true, "Allow to push down filter to PREWHERE expression for supported storages"},
|
||||||
|
{"async_insert_max_data_size", 1000000, 10485760, "The previous value appeared to be too small."},
|
||||||
|
{"async_insert_poll_timeout_ms", 10, 10, "Timeout in milliseconds for polling data from asynchronous insert queue"},
|
||||||
|
{"async_insert_use_adaptive_busy_timeout", false, true, "Use adaptive asynchronous insert timeout"},
|
||||||
|
{"async_insert_busy_timeout_min_ms", 50, 50, "The minimum value of the asynchronous insert timeout in milliseconds; it also serves as the initial value, which may be increased later by the adaptive algorithm"},
|
||||||
|
{"async_insert_busy_timeout_max_ms", 200, 200, "The minimum value of the asynchronous insert timeout in milliseconds; async_insert_busy_timeout_ms is aliased to async_insert_busy_timeout_max_ms"},
|
||||||
|
{"async_insert_busy_timeout_increase_rate", 0.2, 0.2, "The exponential growth rate at which the adaptive asynchronous insert timeout increases"},
|
||||||
|
{"async_insert_busy_timeout_decrease_rate", 0.2, 0.2, "The exponential growth rate at which the adaptive asynchronous insert timeout decreases"},
|
||||||
|
{"format_template_row_format", "", "", "Template row format string can be set directly in query"},
|
||||||
|
{"format_template_resultset_format", "", "", "Template result set format string can be set in query"},
|
||||||
|
{"split_parts_ranges_into_intersecting_and_non_intersecting_final", true, true, "Allow to split parts ranges into intersecting and non intersecting during FINAL optimization"},
|
||||||
|
{"split_intersecting_parts_ranges_into_layers_final", true, true, "Allow to split intersecting parts ranges into layers during FINAL optimization"},
|
||||||
|
{"azure_max_single_part_copy_size", 256*1024*1024, 256*1024*1024, "The maximum size of object to copy using single part copy to Azure blob storage."},
|
||||||
|
{"min_external_table_block_size_rows", DEFAULT_INSERT_BLOCK_SIZE, DEFAULT_INSERT_BLOCK_SIZE, "Squash blocks passed to external table to specified size in rows, if blocks are not big enough"},
|
||||||
|
{"min_external_table_block_size_bytes", DEFAULT_INSERT_BLOCK_SIZE * 256, DEFAULT_INSERT_BLOCK_SIZE * 256, "Squash blocks passed to external table to specified size in bytes, if blocks are not big enough."},
|
||||||
|
{"parallel_replicas_prefer_local_join", true, true, "If true, and JOIN can be executed with parallel replicas algorithm, and all storages of right JOIN part are *MergeTree, local JOIN will be used instead of GLOBAL JOIN."},
|
||||||
|
{"optimize_time_filter_with_preimage", true, true, "Optimize Date and DateTime predicates by converting functions into equivalent comparisons without conversions (e.g. toYear(col) = 2023 -> col >= '2023-01-01' AND col <= '2023-12-31')"},
|
||||||
|
{"extract_key_value_pairs_max_pairs_per_row", 0, 0, "Max number of pairs that can be produced by the `extractKeyValuePairs` function. Used as a safeguard against consuming too much memory."},
|
||||||
|
{"default_view_definer", "CURRENT_USER", "CURRENT_USER", "Allows to set default `DEFINER` option while creating a view"},
|
||||||
|
{"default_materialized_view_sql_security", "DEFINER", "DEFINER", "Allows to set a default value for SQL SECURITY option when creating a materialized view"},
|
||||||
|
{"default_normal_view_sql_security", "INVOKER", "INVOKER", "Allows to set default `SQL SECURITY` option while creating a normal view"},
|
||||||
|
{"mysql_map_string_to_text_in_show_columns", false, true, "Reduce the configuration effort to connect ClickHouse with BI tools."},
|
||||||
|
{"mysql_map_fixed_string_to_text_in_show_columns", false, true, "Reduce the configuration effort to connect ClickHouse with BI tools."},
|
||||||
|
}},
|
||||||
|
{"24.1", {{"print_pretty_type_names", false, true, "Better user experience."},
|
||||||
|
{"input_format_json_read_bools_as_strings", false, true, "Allow to read bools as strings in JSON formats by default"},
|
||||||
|
{"output_format_arrow_use_signed_indexes_for_dictionary", false, true, "Use signed indexes type for Arrow dictionaries by default as it's recommended"},
|
||||||
|
{"allow_experimental_variant_type", false, false, "Add new experimental Variant type"},
|
||||||
|
{"use_variant_as_common_type", false, false, "Allow to use Variant in if/multiIf if there is no common type"},
|
||||||
|
{"output_format_arrow_use_64_bit_indexes_for_dictionary", false, false, "Allow to use 64 bit indexes type in Arrow dictionaries"},
|
||||||
|
{"parallel_replicas_mark_segment_size", 128, 128, "Add new setting to control segment size in new parallel replicas coordinator implementation"},
|
||||||
|
{"ignore_materialized_views_with_dropped_target_table", false, false, "Add new setting to allow to ignore materialized views with dropped target table"},
|
||||||
|
{"output_format_compression_level", 3, 3, "Allow to change compression level in the query output"},
|
||||||
|
{"output_format_compression_zstd_window_log", 0, 0, "Allow to change zstd window log in the query output when zstd compression is used"},
|
||||||
|
{"enable_zstd_qat_codec", false, false, "Add new ZSTD_QAT codec"},
|
||||||
|
{"enable_vertical_final", false, true, "Use vertical final by default"},
|
||||||
|
{"output_format_arrow_use_64_bit_indexes_for_dictionary", false, false, "Allow to use 64 bit indexes type in Arrow dictionaries"},
|
||||||
|
{"max_rows_in_set_to_optimize_join", 100000, 0, "Disable join optimization as it prevents from read in order optimization"},
|
||||||
|
{"output_format_pretty_color", true, "auto", "Setting is changed to allow also for auto value, disabling ANSI escapes if output is not a tty"},
|
||||||
|
{"function_visible_width_behavior", 0, 1, "We changed the default behavior of `visibleWidth` to be more precise"},
|
||||||
|
{"max_estimated_execution_time", 0, 0, "Separate max_execution_time and max_estimated_execution_time"},
|
||||||
|
{"iceberg_engine_ignore_schema_evolution", false, false, "Allow to ignore schema evolution in Iceberg table engine"},
|
||||||
|
{"optimize_injective_functions_in_group_by", false, true, "Replace injective functions by it's arguments in GROUP BY section in analyzer"},
|
||||||
|
{"update_insert_deduplication_token_in_dependent_materialized_views", false, false, "Allow to update insert deduplication token with table identifier during insert in dependent materialized views"},
|
||||||
|
{"azure_max_unexpected_write_error_retries", 4, 4, "The maximum number of retries in case of unexpected errors during Azure blob storage write"},
|
||||||
|
{"split_parts_ranges_into_intersecting_and_non_intersecting_final", false, true, "Allow to split parts ranges into intersecting and non intersecting during FINAL optimization"},
|
||||||
|
{"split_intersecting_parts_ranges_into_layers_final", true, true, "Allow to split intersecting parts ranges into layers during FINAL optimization"}}},
|
||||||
|
{"23.12", {{"allow_suspicious_ttl_expressions", true, false, "It is a new setting, and in previous versions the behavior was equivalent to allowing."},
|
||||||
|
{"input_format_parquet_allow_missing_columns", false, true, "Allow missing columns in Parquet files by default"},
|
||||||
|
{"input_format_orc_allow_missing_columns", false, true, "Allow missing columns in ORC files by default"},
|
||||||
|
{"input_format_arrow_allow_missing_columns", false, true, "Allow missing columns in Arrow files by default"}}},
|
||||||
|
{"23.11", {{"parsedatetime_parse_without_leading_zeros", false, true, "Improved compatibility with MySQL DATE_FORMAT/STR_TO_DATE"}}},
|
||||||
|
{"23.9", {{"optimize_group_by_constant_keys", false, true, "Optimize group by constant keys by default"},
|
||||||
|
{"input_format_json_try_infer_named_tuples_from_objects", false, true, "Try to infer named Tuples from JSON objects by default"},
|
||||||
|
{"input_format_json_read_numbers_as_strings", false, true, "Allow to read numbers as strings in JSON formats by default"},
|
||||||
|
{"input_format_json_read_arrays_as_strings", false, true, "Allow to read arrays as strings in JSON formats by default"},
|
||||||
|
{"input_format_json_infer_incomplete_types_as_strings", false, true, "Allow to infer incomplete types as Strings in JSON formats by default"},
|
||||||
|
{"input_format_json_try_infer_numbers_from_strings", true, false, "Don't infer numbers from strings in JSON formats by default to prevent possible parsing errors"},
|
||||||
|
{"http_write_exception_in_output_format", false, true, "Output valid JSON/XML on exception in HTTP streaming."}}},
|
||||||
|
{"23.8", {{"rewrite_count_distinct_if_with_count_distinct_implementation", false, true, "Rewrite countDistinctIf with count_distinct_implementation configuration"}}},
|
||||||
|
{"23.7", {{"function_sleep_max_microseconds_per_block", 0, 3000000, "In previous versions, the maximum sleep time of 3 seconds was applied only for `sleep`, but not for `sleepEachRow` function. In the new version, we introduce this setting. If you set compatibility with the previous versions, we will disable the limit altogether."}}},
|
||||||
|
{"23.6", {{"http_send_timeout", 180, 30, "3 minutes seems crazy long. Note that this is timeout for a single network write call, not for the whole upload operation."},
|
||||||
|
{"http_receive_timeout", 180, 30, "See http_send_timeout."}}},
|
||||||
|
{"23.5", {{"input_format_parquet_preserve_order", true, false, "Allow Parquet reader to reorder rows for better parallelism."},
|
||||||
|
{"parallelize_output_from_storages", false, true, "Allow parallelism when executing queries that read from file/url/s3/etc. This may reorder rows."},
|
||||||
|
{"use_with_fill_by_sorting_prefix", false, true, "Columns preceding WITH FILL columns in ORDER BY clause form sorting prefix. Rows with different values in sorting prefix are filled independently"},
|
||||||
|
{"output_format_parquet_compliant_nested_types", false, true, "Change an internal field name in output Parquet file schema."}}},
|
||||||
|
{"23.4", {{"allow_suspicious_indices", true, false, "If true, index can defined with identical expressions"},
|
||||||
|
{"allow_nonconst_timezone_arguments", true, false, "Allow non-const timezone arguments in certain time-related functions like toTimeZone(), fromUnixTimestamp*(), snowflakeToDateTime*()."},
|
||||||
|
{"connect_timeout_with_failover_ms", 50, 1000, "Increase default connect timeout because of async connect"},
|
||||||
|
{"connect_timeout_with_failover_secure_ms", 100, 1000, "Increase default secure connect timeout because of async connect"},
|
||||||
|
{"hedged_connection_timeout_ms", 100, 50, "Start new connection in hedged requests after 50 ms instead of 100 to correspond with previous connect timeout"},
|
||||||
|
{"formatdatetime_f_prints_single_zero", true, false, "Improved compatibility with MySQL DATE_FORMAT()/STR_TO_DATE()"},
|
||||||
|
{"formatdatetime_parsedatetime_m_is_month_name", false, true, "Improved compatibility with MySQL DATE_FORMAT/STR_TO_DATE"}}},
|
||||||
|
{"23.3", {{"output_format_parquet_version", "1.0", "2.latest", "Use latest Parquet format version for output format"},
|
||||||
|
{"input_format_json_ignore_unknown_keys_in_named_tuple", false, true, "Improve parsing JSON objects as named tuples"},
|
||||||
|
{"input_format_native_allow_types_conversion", false, true, "Allow types conversion in Native input forma"},
|
||||||
|
{"output_format_arrow_compression_method", "none", "lz4_frame", "Use lz4 compression in Arrow output format by default"},
|
||||||
|
{"output_format_parquet_compression_method", "snappy", "lz4", "Use lz4 compression in Parquet output format by default"},
|
||||||
|
{"output_format_orc_compression_method", "none", "lz4_frame", "Use lz4 compression in ORC output format by default"},
|
||||||
|
{"async_query_sending_for_remote", false, true, "Create connections and send query async across shards"}}},
|
||||||
|
{"23.2", {{"output_format_parquet_fixed_string_as_fixed_byte_array", false, true, "Use Parquet FIXED_LENGTH_BYTE_ARRAY type for FixedString by default"},
|
||||||
|
{"output_format_arrow_fixed_string_as_fixed_byte_array", false, true, "Use Arrow FIXED_SIZE_BINARY type for FixedString by default"},
|
||||||
|
{"query_plan_remove_redundant_distinct", false, true, "Remove redundant Distinct step in query plan"},
|
||||||
|
{"optimize_duplicate_order_by_and_distinct", true, false, "Remove duplicate ORDER BY and DISTINCT if it's possible"},
|
||||||
|
{"insert_keeper_max_retries", 0, 20, "Enable reconnections to Keeper on INSERT, improve reliability"}}},
|
||||||
|
{"23.1", {{"input_format_json_read_objects_as_strings", 0, 1, "Enable reading nested json objects as strings while object type is experimental"},
|
||||||
|
{"input_format_json_defaults_for_missing_elements_in_named_tuple", false, true, "Allow missing elements in JSON objects while reading named tuples by default"},
|
||||||
|
{"input_format_csv_detect_header", false, true, "Detect header in CSV format by default"},
|
||||||
|
{"input_format_tsv_detect_header", false, true, "Detect header in TSV format by default"},
|
||||||
|
{"input_format_custom_detect_header", false, true, "Detect header in CustomSeparated format by default"},
|
||||||
|
{"query_plan_remove_redundant_sorting", false, true, "Remove redundant sorting in query plan. For example, sorting steps related to ORDER BY clauses in subqueries"}}},
|
||||||
|
{"22.12", {{"max_size_to_preallocate_for_aggregation", 10'000'000, 100'000'000, "This optimizes performance"},
|
||||||
|
{"query_plan_aggregation_in_order", 0, 1, "Enable some refactoring around query plan"},
|
||||||
|
{"format_binary_max_string_size", 0, 1_GiB, "Prevent allocating large amount of memory"}}},
|
||||||
|
{"22.11", {{"use_structure_from_insertion_table_in_table_functions", 0, 2, "Improve using structure from insertion table in table functions"}}},
|
||||||
|
{"22.9", {{"force_grouping_standard_compatibility", false, true, "Make GROUPING function output the same as in SQL standard and other DBMS"}}},
|
||||||
|
{"22.7", {{"cross_to_inner_join_rewrite", 1, 2, "Force rewrite comma join to inner"},
|
||||||
|
{"enable_positional_arguments", false, true, "Enable positional arguments feature by default"},
|
||||||
|
{"format_csv_allow_single_quotes", true, false, "Most tools don't treat single quote in CSV specially, don't do it by default too"}}},
|
||||||
|
{"22.6", {{"output_format_json_named_tuples_as_objects", false, true, "Allow to serialize named tuples as JSON objects in JSON formats by default"},
|
||||||
|
{"input_format_skip_unknown_fields", false, true, "Optimize reading subset of columns for some input formats"}}},
|
||||||
|
{"22.5", {{"memory_overcommit_ratio_denominator", 0, 1073741824, "Enable memory overcommit feature by default"},
|
||||||
|
{"memory_overcommit_ratio_denominator_for_user", 0, 1073741824, "Enable memory overcommit feature by default"}}},
|
||||||
|
{"22.4", {{"allow_settings_after_format_in_insert", true, false, "Do not allow SETTINGS after FORMAT for INSERT queries because ClickHouse interpret SETTINGS as some values, which is misleading"}}},
|
||||||
|
{"22.3", {{"cast_ipv4_ipv6_default_on_conversion_error", true, false, "Make functions cast(value, 'IPv4') and cast(value, 'IPv6') behave same as toIPv4 and toIPv6 functions"}}},
|
||||||
|
{"21.12", {{"stream_like_engine_allow_direct_select", true, false, "Do not allow direct select for Kafka/RabbitMQ/FileLog by default"}}},
|
||||||
|
{"21.9", {{"output_format_decimal_trailing_zeros", true, false, "Do not output trailing zeros in text representation of Decimal types by default for better looking output"},
|
||||||
|
{"use_hedged_requests", false, true, "Enable Hedged Requests feature by default"}}},
|
||||||
|
{"21.7", {{"legacy_column_name_of_tuple_literal", true, false, "Add this setting only for compatibility reasons. It makes sense to set to 'true', while doing rolling update of cluster from version lower than 21.7 to higher"}}},
|
||||||
|
{"21.5", {{"async_socket_for_remote", false, true, "Fix all problems and turn on asynchronous reads from socket for remote queries by default again"}}},
|
||||||
|
{"21.3", {{"async_socket_for_remote", true, false, "Turn off asynchronous reads from socket for remote queries because of some problems"},
|
||||||
|
{"optimize_normalize_count_variants", false, true, "Rewrite aggregate functions that semantically equals to count() as count() by default"},
|
||||||
|
{"normalize_function_names", false, true, "Normalize function names to their canonical names, this was needed for projection query routing"}}},
|
||||||
|
{"21.2", {{"enable_global_with_statement", false, true, "Propagate WITH statements to UNION queries and all subqueries by default"}}},
|
||||||
|
{"21.1", {{"insert_quorum_parallel", false, true, "Use parallel quorum inserts by default. It is significantly more convenient to use than sequential quorum inserts"},
|
||||||
|
{"input_format_null_as_default", false, true, "Allow to insert NULL as default for input formats by default"},
|
||||||
|
{"optimize_on_insert", false, true, "Enable data optimization on INSERT by default for better user experience"},
|
||||||
|
{"use_compact_format_in_distributed_parts_names", false, true, "Use compact format for async INSERT into Distributed tables by default"}}},
|
||||||
|
{"20.10", {{"format_regexp_escaping_rule", "Escaped", "Raw", "Use Raw as default escaping rule for Regexp format to male the behaviour more like to what users expect"}}},
|
||||||
|
{"20.7", {{"show_table_uuid_in_table_create_query_if_not_nil", true, false, "Stop showing UID of the table in its CREATE query for Engine=Atomic"}}},
|
||||||
|
{"20.5", {{"input_format_with_names_use_header", false, true, "Enable using header with names for formats with WithNames/WithNamesAndTypes suffixes"},
|
||||||
|
{"allow_suspicious_codecs", true, false, "Don't allow to specify meaningless compression codecs"}}},
|
||||||
|
{"20.4", {{"validate_polygons", false, true, "Throw exception if polygon is invalid in function pointInPolygon by default instead of returning possibly wrong results"}}},
|
||||||
|
{"19.18", {{"enable_scalar_subquery_optimization", false, true, "Prevent scalar subqueries from (de)serializing large scalar values and possibly avoid running the same subquery more than once"}}},
|
||||||
|
{"19.14", {{"any_join_distinct_right_table_keys", true, false, "Disable ANY RIGHT and ANY FULL JOINs by default to avoid inconsistency"}}},
|
||||||
|
{"19.12", {{"input_format_defaults_for_omitted_fields", false, true, "Enable calculation of complex default expressions for omitted fields for some input formats, because it should be the expected behaviour"}}},
|
||||||
|
{"19.5", {{"max_partitions_per_insert_block", 0, 100, "Add a limit for the number of partitions in one block"}}},
|
||||||
|
{"18.12.17", {{"enable_optimize_predicate_expression", 0, 1, "Optimize predicates to subqueries by default"}}},
|
||||||
|
};
|
||||||
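Each entry above records {setting_name, previous_value, new_value, reason} for one release. A simplified, hypothetical sketch of how such a history can back the `compatibility` setting (the real application logic lives in the Settings code and may differ): every change introduced after the requested version is rolled back to its previous_value, walking from the newest release downwards so the oldest applicable change wins.

```cpp
// Hypothetical sketch; the shape of History and all names here are invented for illustration.
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct Change { std::string name, previous_value, new_value; };
using History = std::map<std::string, std::vector<Change>>; /// version string -> changes in that release

/// Settings that must be overridden to emulate the behaviour of `target`:
/// walk releases newer than the target from newest to oldest, so the oldest
/// change after the target overwrites last and its previous_value wins.
std::map<std::string, std::string> compatibilityOverrides(const History & history, const std::string & target)
{
    std::map<std::string, std::string> overrides;
    for (auto it = history.rbegin(); it != history.rend(); ++it)
    {
        if (it->first <= target) /// the real code compares parsed versions, not raw strings
            break;
        for (const auto & change : it->second)
            overrides[change.name] = change.previous_value;
    }
    return overrides;
}

int main()
{
    History history = {
        {"24.2", {{"async_insert_use_adaptive_busy_timeout", "false", "true"}}},
        {"24.3", {{"allow_experimental_analyzer", "false", "true"}}},
    };
    for (const auto & [name, value] : compatibilityOverrides(history, "24.2"))
        std::cout << name << " -> " << value << '\n'; /// prints: allow_experimental_analyzer -> false
}
```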
|
|
||||||
|
|
||||||
|
const std::map<ClickHouseVersion, SettingsChangesHistory::SettingsChanges> & getSettingsChangesHistory()
|
||||||
|
{
|
||||||
|
static std::map<ClickHouseVersion, SettingsChangesHistory::SettingsChanges> settings_changes_history;
|
||||||
|
|
||||||
|
static std::once_flag initialized_flag;
|
||||||
|
std::call_once(initialized_flag, []()
|
||||||
|
{
|
||||||
|
for (const auto & setting_change : settings_changes_history_initializer)
|
||||||
|
{
|
||||||
|
/// Disallow duplicate keys in the settings changes history. Example:
|
||||||
|
/// {"21.2", {{"some_setting_1", false, true, "[...]"}}},
|
||||||
|
/// [...]
|
||||||
|
/// {"21.2", {{"some_setting_2", false, true, "[...]"}}},
|
||||||
|
/// As std::set has unique keys, one of the entries would be overwritten.
|
||||||
|
if (settings_changes_history.contains(setting_change.first))
|
||||||
|
throw Exception{ErrorCodes::LOGICAL_ERROR, "Detected duplicate version '{}'", setting_change.first.toString()};
|
||||||
|
|
||||||
|
settings_changes_history[setting_change.first] = setting_change.second;
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
return settings_changes_history;
|
||||||
|
}
|
||||||
|
}
|
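getSettingsChangesHistory() above builds its map exactly once behind std::call_once, so concurrent first callers cannot race on the static map, and duplicate version keys are rejected up front instead of being silently overwritten. A generic sketch of that lazy-initialization pattern with invented names (not ClickHouse code); since C++11 a function-local static would also be initialized thread-safely, call_once simply makes the one-time body explicit:

```cpp
// Generic sketch of thread-safe one-time initialization; names are illustrative.
#include <map>
#include <mutex>
#include <string>

const std::map<std::string, int> & getViaCallOnce()
{
    static std::map<std::string, int> data;
    static std::once_flag flag;
    std::call_once(flag, [] { data.emplace("initialized", 1); }); /// runs exactly once, even under concurrency
    return data;
}

const std::map<std::string, int> & getViaLocalStatic()
{
    /// Since C++11, initialization of a function-local static is itself thread-safe.
    static const std::map<std::string, int> data = [] {
        std::map<std::string, int> m;
        m.emplace("initialized", 1);
        return m;
    }();
    return data;
}

int main()
{
    return getViaCallOnce().size() == getViaLocalStatic().size() ? 0 : 1;
}
```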
@ -1,62 +1,25 @@
|
|||||||
#pragma once
|
#pragma once
|
||||||
|
|
||||||
#include <Core/Field.h>
|
#include <Core/Field.h>
|
||||||
#include <Core/Settings.h>
|
|
||||||
#include <IO/ReadHelpers.h>
|
|
||||||
#include <IO/ReadBufferFromString.h>
|
|
||||||
#include <boost/algorithm/string.hpp>
|
|
||||||
#include <map>
|
#include <map>
|
||||||
|
#include <vector>
|
||||||
|
|
||||||
|
|
||||||
namespace DB
|
namespace DB
|
||||||
{
|
{
|
||||||
|
|
||||||
namespace ErrorCodes
|
|
||||||
{
|
|
||||||
extern const int BAD_ARGUMENTS;
|
|
||||||
}
|
|
||||||
|
|
||||||
class ClickHouseVersion
|
class ClickHouseVersion
|
||||||
{
|
{
|
||||||
public:
|
public:
|
||||||
ClickHouseVersion(const String & version) /// NOLINT(google-explicit-constructor)
|
/// NOLINTBEGIN(google-explicit-constructor)
|
||||||
{
|
ClickHouseVersion(const String & version);
|
||||||
Strings split;
|
ClickHouseVersion(const char * version);
|
||||||
boost::split(split, version, [](char c){ return c == '.'; });
|
/// NOLINTEND(google-explicit-constructor)
|
||||||
components.reserve(split.size());
|
|
||||||
if (split.empty())
|
|
||||||
throw Exception{ErrorCodes::BAD_ARGUMENTS, "Cannot parse ClickHouse version here: {}", version};
|
|
||||||
|
|
||||||
for (const auto & split_element : split)
|
String toString() const;
|
||||||
{
|
|
||||||
size_t component;
|
|
||||||
ReadBufferFromString buf(split_element);
|
|
||||||
if (!tryReadIntText(component, buf) || !buf.eof())
|
|
||||||
throw Exception{ErrorCodes::BAD_ARGUMENTS, "Cannot parse ClickHouse version here: {}", version};
|
|
||||||
components.push_back(component);
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
ClickHouseVersion(const char * version) : ClickHouseVersion(String(version)) {} /// NOLINT(google-explicit-constructor)
|
bool operator<(const ClickHouseVersion & other) const { return components < other.components; }
|
||||||
|
bool operator>=(const ClickHouseVersion & other) const { return components >= other.components; }
|
||||||
String toString() const
|
|
||||||
{
|
|
||||||
String version = std::to_string(components[0]);
|
|
||||||
for (size_t i = 1; i < components.size(); ++i)
|
|
||||||
version += "." + std::to_string(components[i]);
|
|
||||||
|
|
||||||
return version;
|
|
||||||
}
|
|
||||||
|
|
||||||
bool operator<(const ClickHouseVersion & other) const
|
|
||||||
{
|
|
||||||
return components < other.components;
|
|
||||||
}
|
|
||||||
|
|
||||||
bool operator>=(const ClickHouseVersion & other) const
|
|
||||||
{
|
|
||||||
return components >= other.components;
|
|
||||||
}
|
|
||||||
|
|
||||||
private:
|
private:
|
||||||
std::vector<size_t> components;
|
std::vector<size_t> components;
|
||||||
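The refactored header keeps only the comparison operators inline; operator< is what allows ClickHouseVersion to key an ordered std::map. A small sketch using a plain std::vector<size_t> as a stand-in for the version type, showing how an ordered history can be queried for everything newer than a compatibility target:

```cpp
// Sketch with a stand-in type; ClickHouseVersion itself is not used here.
#include <iostream>
#include <map>
#include <string>
#include <vector>

using Version = std::vector<size_t>; /// stand-in: operator< is already component-wise for vectors

int main()
{
    std::map<Version, std::string> history = {
        {{24, 3}, "analyzer enabled by default"},
        {{24, 6}, "vertical FINAL re-enabled"},
        {{24, 7}, "parquet page index written by default"},
    };

    Version compatibility{24, 3};
    /// upper_bound() relies on the key's operator< and returns the first release strictly newer than the target.
    for (auto it = history.upper_bound(compatibility); it != history.end(); ++it)
        std::cout << it->first[0] << '.' << it->first[1] << ": " << it->second << '\n';
    /// prints the 24.6 and 24.7 entries: exactly the changes a compatibility setting would roll back
}
```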
@ -75,253 +38,6 @@ namespace SettingsChangesHistory
|
|||||||
using SettingsChanges = std::vector<SettingChange>;
|
using SettingsChanges = std::vector<SettingChange>;
|
||||||
}
|
}
|
||||||
|
|
||||||
// clang-format off
|
const std::map<ClickHouseVersion, SettingsChangesHistory::SettingsChanges> & getSettingsChangesHistory();
|
||||||
/// History of settings changes that controls some backward incompatible changes
|
|
||||||
/// across all ClickHouse versions. It maps ClickHouse version to settings changes that were done
|
|
||||||
/// in this version. This history contains both changes to existing settings and newly added settings.
|
|
||||||
/// Settings changes is a vector of structs
|
|
||||||
/// {setting_name, previous_value, new_value, reason}.
|
|
||||||
/// For newly added setting choose the most appropriate previous_value (for example, if new setting
|
|
||||||
/// controls new feature and it's 'true' by default, use 'false' as previous_value).
|
|
||||||
/// It's used to implement `compatibility` setting (see https://github.com/ClickHouse/ClickHouse/issues/35972)
|
|
||||||
static const std::map<ClickHouseVersion, SettingsChangesHistory::SettingsChanges> settings_changes_history =
|
|
||||||
{
|
|
||||||
{"24.7", {{"output_format_parquet_write_page_index", false, true, "Add a possibility to write page index into parquet files."},
|
|
||||||
}},
|
|
||||||
{"24.6", {{"materialize_skip_indexes_on_insert", true, true, "Added new setting to allow to disable materialization of skip indexes on insert"},
|
|
||||||
{"materialize_statistics_on_insert", true, true, "Added new setting to allow to disable materialization of statistics on insert"},
|
|
||||||
{"input_format_parquet_use_native_reader", false, false, "When reading Parquet files, to use native reader instead of arrow reader."},
|
|
||||||
{"hdfs_throw_on_zero_files_match", false, false, "Allow to throw an error when ListObjects request cannot match any files in HDFS engine instead of empty query result"},
|
|
||||||
{"azure_throw_on_zero_files_match", false, false, "Allow to throw an error when ListObjects request cannot match any files in AzureBlobStorage engine instead of empty query result"},
|
|
||||||
{"s3_validate_request_settings", true, true, "Allow to disable S3 request settings validation"},
|
|
||||||
{"allow_experimental_full_text_index", false, false, "Enable experimental full-text index"},
|
|
||||||
{"azure_skip_empty_files", false, false, "Allow to skip empty files in azure table engine"},
|
|
||||||
{"hdfs_ignore_file_doesnt_exist", false, false, "Allow to return 0 rows when the requested files don't exist instead of throwing an exception in HDFS table engine"},
|
|
||||||
{"azure_ignore_file_doesnt_exist", false, false, "Allow to return 0 rows when the requested files don't exist instead of throwing an exception in AzureBlobStorage table engine"},
|
|
||||||
{"s3_ignore_file_doesnt_exist", false, false, "Allow to return 0 rows when the requested files don't exist instead of throwing an exception in S3 table engine"},
|
|
||||||
{"s3_max_part_number", 10000, 10000, "Maximum part number number for s3 upload part"},
|
|
||||||
{"s3_max_single_operation_copy_size", 32 * 1024 * 1024, 32 * 1024 * 1024, "Maximum size for a single copy operation in s3"},
|
|
||||||
{"input_format_parquet_max_block_size", 8192, DEFAULT_BLOCK_SIZE, "Increase block size for parquet reader."},
|
|
||||||
{"input_format_parquet_prefer_block_bytes", 0, DEFAULT_BLOCK_SIZE * 256, "Average block bytes output by parquet reader."},
|
|
||||||
{"enable_blob_storage_log", true, true, "Write information about blob storage operations to system.blob_storage_log table"},
|
|
||||||
{"allow_deprecated_snowflake_conversion_functions", true, false, "Disabled deprecated functions snowflakeToDateTime[64] and dateTime[64]ToSnowflake."},
|
|
||||||
{"allow_statistic_optimize", false, false, "Old setting which popped up here being renamed."},
|
|
||||||
{"allow_experimental_statistic", false, false, "Old setting which popped up here being renamed."},
|
|
||||||
{"allow_statistics_optimize", false, false, "The setting was renamed. The previous name is `allow_statistic_optimize`."},
|
|
||||||
{"allow_experimental_statistics", false, false, "The setting was renamed. The previous name is `allow_experimental_statistic`."},
|
|
||||||
{"enable_vertical_final", false, true, "Enable vertical final by default again after fixing bug"},
|
|
||||||
{"parallel_replicas_custom_key_range_lower", 0, 0, "Add settings to control the range filter when using parallel replicas with dynamic shards"},
|
|
||||||
{"parallel_replicas_custom_key_range_upper", 0, 0, "Add settings to control the range filter when using parallel replicas with dynamic shards. A value of 0 disables the upper limit"},
|
|
||||||
{"output_format_pretty_display_footer_column_names", 0, 1, "Add a setting to display column names in the footer if there are many rows. Threshold value is controlled by output_format_pretty_display_footer_column_names_min_rows."},
|
|
||||||
{"output_format_pretty_display_footer_column_names_min_rows", 0, 50, "Add a setting to control the threshold value for setting output_format_pretty_display_footer_column_names_min_rows. Default 50."},
|
|
||||||
{"output_format_csv_serialize_tuple_into_separate_columns", true, true, "A new way of how interpret tuples in CSV format was added."},
|
|
||||||
{"input_format_csv_deserialize_separate_columns_into_tuple", true, true, "A new way of how interpret tuples in CSV format was added."},
|
|
||||||
{"input_format_csv_try_infer_strings_from_quoted_tuples", true, true, "A new way of how interpret tuples in CSV format was added."},
|
|
||||||
}},
|
|
||||||
{"24.5", {{"allow_deprecated_error_prone_window_functions", true, false, "Allow usage of deprecated error prone window functions (neighbor, runningAccumulate, runningDifferenceStartingWithFirstValue, runningDifference)"},
|
|
||||||
{"allow_experimental_join_condition", false, false, "Support join with inequal conditions which involve columns from both left and right table. e.g. t1.y < t2.y."},
|
|
||||||
{"input_format_tsv_crlf_end_of_line", false, false, "Enables reading of CRLF line endings with TSV formats"},
|
|
||||||
{"output_format_parquet_use_custom_encoder", false, true, "Enable custom Parquet encoder."},
|
|
||||||
{"cross_join_min_rows_to_compress", 0, 10000000, "Minimal count of rows to compress block in CROSS JOIN. Zero value means - disable this threshold. This block is compressed when any of the two thresholds (by rows or by bytes) are reached."},
|
|
||||||
{"cross_join_min_bytes_to_compress", 0, 1_GiB, "Minimal size of block to compress in CROSS JOIN. Zero value means - disable this threshold. This block is compressed when any of the two thresholds (by rows or by bytes) are reached."},
|
|
||||||
{"http_max_chunk_size", 0, 0, "Internal limitation"},
|
|
||||||
{"prefer_external_sort_block_bytes", 0, DEFAULT_BLOCK_SIZE * 256, "Prefer maximum block bytes for external sort, reduce the memory usage during merging."},
|
|
||||||
{"input_format_force_null_for_omitted_fields", false, false, "Disable type-defaults for omitted fields when needed"},
|
|
||||||
{"cast_string_to_dynamic_use_inference", false, false, "Add setting to allow converting String to Dynamic through parsing"},
|
|
||||||
{"allow_experimental_dynamic_type", false, false, "Add new experimental Dynamic type"},
|
|
||||||
{"azure_max_blocks_in_multipart_upload", 50000, 50000, "Maximum number of blocks in multipart upload for Azure."},
|
|
||||||
}},
|
|
||||||
{"24.4", {{"input_format_json_throw_on_bad_escape_sequence", true, true, "Allow to save JSON strings with bad escape sequences"},
|
|
||||||
{"max_parsing_threads", 0, 0, "Add a separate setting to control number of threads in parallel parsing from files"},
|
|
||||||
{"ignore_drop_queries_probability", 0, 0, "Allow to ignore drop queries in server with specified probability for testing purposes"},
|
|
||||||
{"lightweight_deletes_sync", 2, 2, "The same as 'mutation_sync', but controls only execution of lightweight deletes"},
|
|
||||||
{"query_cache_system_table_handling", "save", "throw", "The query cache no longer caches results of queries against system tables"},
|
|
||||||
{"input_format_json_ignore_unnecessary_fields", false, true, "Ignore unnecessary fields and not parse them. Enabling this may not throw exceptions on json strings of invalid format or with duplicated fields"},
|
|
||||||
{"input_format_hive_text_allow_variable_number_of_columns", false, true, "Ignore extra columns in Hive Text input (if file has more columns than expected) and treat missing fields in Hive Text input as default values."},
|
|
||||||
{"allow_experimental_database_replicated", false, true, "Database engine Replicated is now in Beta stage"},
|
|
||||||
{"temporary_data_in_cache_reserve_space_wait_lock_timeout_milliseconds", (10 * 60 * 1000), (10 * 60 * 1000), "Wait time to lock cache for sapce reservation in temporary data in filesystem cache"},
|
|
||||||
{"optimize_rewrite_sum_if_to_count_if", false, true, "Only available for the analyzer, where it works correctly"},
|
|
||||||
{"azure_allow_parallel_part_upload", "true", "true", "Use multiple threads for azure multipart upload."},
|
|
||||||
{"max_recursive_cte_evaluation_depth", DBMS_RECURSIVE_CTE_MAX_EVALUATION_DEPTH, DBMS_RECURSIVE_CTE_MAX_EVALUATION_DEPTH, "Maximum limit on recursive CTE evaluation depth"},
|
|
||||||
{"query_plan_convert_outer_join_to_inner_join", false, true, "Allow to convert OUTER JOIN to INNER JOIN if filter after JOIN always filters default values"},
|
|
||||||
}},
|
|
||||||
{"24.3", {{"s3_connect_timeout_ms", 1000, 1000, "Introduce new dedicated setting for s3 connection timeout"},
|
|
||||||
{"allow_experimental_shared_merge_tree", false, true, "The setting is obsolete"},
|
|
||||||
{"use_page_cache_for_disks_without_file_cache", false, false, "Added userspace page cache"},
|
|
||||||
{"read_from_page_cache_if_exists_otherwise_bypass_cache", false, false, "Added userspace page cache"},
|
|
||||||
{"page_cache_inject_eviction", false, false, "Added userspace page cache"},
|
|
||||||
{"default_table_engine", "None", "MergeTree", "Set default table engine to MergeTree for better usability"},
|
|
||||||
{"input_format_json_use_string_type_for_ambiguous_paths_in_named_tuples_inference_from_objects", false, false, "Allow to use String type for ambiguous paths during named tuple inference from JSON objects"},
|
|
||||||
{"traverse_shadow_remote_data_paths", false, false, "Traverse shadow directory when query system.remote_data_paths."},
|
|
||||||
{"throw_if_deduplication_in_dependent_materialized_views_enabled_with_async_insert", false, true, "Deduplication is dependent materialized view cannot work together with async inserts."},
|
|
||||||
{"parallel_replicas_allow_in_with_subquery", false, true, "If true, subquery for IN will be executed on every follower replica"},
|
|
||||||
{"log_processors_profiles", false, true, "Enable by default"},
|
|
||||||
{"function_locate_has_mysql_compatible_argument_order", false, true, "Increase compatibility with MySQL's locate function."},
|
|
||||||
{"allow_suspicious_primary_key", true, false, "Forbid suspicious PRIMARY KEY/ORDER BY for MergeTree (i.e. SimpleAggregateFunction)"},
|
|
||||||
{"filesystem_cache_reserve_space_wait_lock_timeout_milliseconds", 1000, 1000, "Wait time to lock cache for sapce reservation in filesystem cache"},
|
|
||||||
{"max_parser_backtracks", 0, 1000000, "Limiting the complexity of parsing"},
|
|
||||||
{"analyzer_compatibility_join_using_top_level_identifier", false, false, "Force to resolve identifier in JOIN USING from projection"},
|
|
||||||
{"distributed_insert_skip_read_only_replicas", false, false, "If true, INSERT into Distributed will skip read-only replicas"},
|
|
||||||
{"keeper_max_retries", 10, 10, "Max retries for general keeper operations"},
|
|
||||||
{"keeper_retry_initial_backoff_ms", 100, 100, "Initial backoff timeout for general keeper operations"},
|
|
||||||
{"keeper_retry_max_backoff_ms", 5000, 5000, "Max backoff timeout for general keeper operations"},
|
|
||||||
{"s3queue_allow_experimental_sharded_mode", false, false, "Enable experimental sharded mode of S3Queue table engine. It is experimental because it will be rewritten"},
|
|
||||||
{"allow_experimental_analyzer", false, true, "Enable analyzer and planner by default."},
|
|
||||||
{"merge_tree_read_split_ranges_into_intersecting_and_non_intersecting_injection_probability", 0.0, 0.0, "For testing of `PartsSplitter` - split read ranges into intersecting and non intersecting every time you read from MergeTree with the specified probability."},
|
|
||||||
{"allow_get_client_http_header", false, false, "Introduced a new function."},
|
|
||||||
{"output_format_pretty_row_numbers", false, true, "It is better for usability."},
|
|
||||||
{"output_format_pretty_max_value_width_apply_for_single_value", true, false, "Single values in Pretty formats won't be cut."},
|
|
||||||
{"output_format_parquet_string_as_string", false, true, "ClickHouse allows arbitrary binary data in the String data type, which is typically UTF-8. Parquet/ORC/Arrow Strings only support UTF-8. That's why you can choose which Arrow's data type to use for the ClickHouse String data type - String or Binary. While Binary would be more correct and compatible, using String by default will correspond to user expectations in most cases."},
|
|
||||||
{"output_format_orc_string_as_string", false, true, "ClickHouse allows arbitrary binary data in the String data type, which is typically UTF-8. Parquet/ORC/Arrow Strings only support UTF-8. That's why you can choose which Arrow's data type to use for the ClickHouse String data type - String or Binary. While Binary would be more correct and compatible, using String by default will correspond to user expectations in most cases."},
|
|
||||||
{"output_format_arrow_string_as_string", false, true, "ClickHouse allows arbitrary binary data in the String data type, which is typically UTF-8. Parquet/ORC/Arrow Strings only support UTF-8. That's why you can choose which Arrow's data type to use for the ClickHouse String data type - String or Binary. While Binary would be more correct and compatible, using String by default will correspond to user expectations in most cases."},
|
|
||||||
{"output_format_parquet_compression_method", "lz4", "zstd", "Parquet/ORC/Arrow support many compression methods, including lz4 and zstd. ClickHouse supports each and every compression method. Some inferior tools, such as 'duckdb', lack support for the faster `lz4` compression method, that's why we set zstd by default."},
|
|
||||||
{"output_format_orc_compression_method", "lz4", "zstd", "Parquet/ORC/Arrow support many compression methods, including lz4 and zstd. ClickHouse supports each and every compression method. Some inferior tools, such as 'duckdb', lack support for the faster `lz4` compression method, that's why we set zstd by default."},
|
|
||||||
{"output_format_pretty_highlight_digit_groups", false, true, "If enabled and if output is a terminal, highlight every digit corresponding to the number of thousands, millions, etc. with underline."},
|
|
||||||
{"geo_distance_returns_float64_on_float64_arguments", false, true, "Increase the default precision."},
|
|
||||||
{"azure_max_inflight_parts_for_one_file", 20, 20, "The maximum number of a concurrent loaded parts in multipart upload request. 0 means unlimited."},
|
|
||||||
{"azure_strict_upload_part_size", 0, 0, "The exact size of part to upload during multipart upload to Azure blob storage."},
|
|
||||||
{"azure_min_upload_part_size", 16*1024*1024, 16*1024*1024, "The minimum size of part to upload during multipart upload to Azure blob storage."},
|
|
||||||
{"azure_max_upload_part_size", 5ull*1024*1024*1024, 5ull*1024*1024*1024, "The maximum size of part to upload during multipart upload to Azure blob storage."},
|
|
||||||
{"azure_upload_part_size_multiply_factor", 2, 2, "Multiply azure_min_upload_part_size by this factor each time azure_multiply_parts_count_threshold parts were uploaded from a single write to Azure blob storage."},
|
|
||||||
{"azure_upload_part_size_multiply_parts_count_threshold", 500, 500, "Each time this number of parts was uploaded to Azure blob storage, azure_min_upload_part_size is multiplied by azure_upload_part_size_multiply_factor."},
|
|
||||||
{"output_format_csv_serialize_tuple_into_separate_columns", true, true, "A new way of how interpret tuples in CSV format was added."},
|
|
||||||
{"input_format_csv_deserialize_separate_columns_into_tuple", true, true, "A new way of how interpret tuples in CSV format was added."},
|
|
||||||
{"input_format_csv_try_infer_strings_from_quoted_tuples", true, true, "A new way of how interpret tuples in CSV format was added."},
|
|
||||||
}},
|
|
||||||
{"24.2", {{"allow_suspicious_variant_types", true, false, "Don't allow creating Variant type with suspicious variants by default"},
|
|
||||||
{"validate_experimental_and_suspicious_types_inside_nested_types", false, true, "Validate usage of experimental and suspicious types inside nested types"},
|
|
||||||
{"output_format_values_escape_quote_with_quote", false, false, "If true escape ' with '', otherwise quoted with \\'"},
|
|
||||||
{"output_format_pretty_single_large_number_tip_threshold", 0, 1'000'000, "Print a readable number tip on the right side of the table if the block consists of a single number which exceeds this value (except 0)"},
|
|
||||||
{"input_format_try_infer_exponent_floats", true, false, "Don't infer floats in exponential notation by default"},
|
|
||||||
{"query_plan_optimize_prewhere", true, true, "Allow to push down filter to PREWHERE expression for supported storages"},
|
|
||||||
{"async_insert_max_data_size", 1000000, 10485760, "The previous value appeared to be too small."},
|
|
||||||
{"async_insert_poll_timeout_ms", 10, 10, "Timeout in milliseconds for polling data from asynchronous insert queue"},
|
|
||||||
{"async_insert_use_adaptive_busy_timeout", false, true, "Use adaptive asynchronous insert timeout"},
|
|
||||||
{"async_insert_busy_timeout_min_ms", 50, 50, "The minimum value of the asynchronous insert timeout in milliseconds; it also serves as the initial value, which may be increased later by the adaptive algorithm"},
|
|
||||||
{"async_insert_busy_timeout_max_ms", 200, 200, "The minimum value of the asynchronous insert timeout in milliseconds; async_insert_busy_timeout_ms is aliased to async_insert_busy_timeout_max_ms"},
|
|
||||||
{"async_insert_busy_timeout_increase_rate", 0.2, 0.2, "The exponential growth rate at which the adaptive asynchronous insert timeout increases"},
|
|
||||||
{"async_insert_busy_timeout_decrease_rate", 0.2, 0.2, "The exponential growth rate at which the adaptive asynchronous insert timeout decreases"},
|
|
||||||
{"format_template_row_format", "", "", "Template row format string can be set directly in query"},
|
|
||||||
{"format_template_resultset_format", "", "", "Template result set format string can be set in query"},
|
|
||||||
{"split_parts_ranges_into_intersecting_and_non_intersecting_final", true, true, "Allow to split parts ranges into intersecting and non intersecting during FINAL optimization"},
|
|
||||||
{"split_intersecting_parts_ranges_into_layers_final", true, true, "Allow to split intersecting parts ranges into layers during FINAL optimization"},
|
|
||||||
{"azure_max_single_part_copy_size", 256*1024*1024, 256*1024*1024, "The maximum size of object to copy using single part copy to Azure blob storage."},
|
|
||||||
{"min_external_table_block_size_rows", DEFAULT_INSERT_BLOCK_SIZE, DEFAULT_INSERT_BLOCK_SIZE, "Squash blocks passed to external table to specified size in rows, if blocks are not big enough"},
|
|
||||||
{"min_external_table_block_size_bytes", DEFAULT_INSERT_BLOCK_SIZE * 256, DEFAULT_INSERT_BLOCK_SIZE * 256, "Squash blocks passed to external table to specified size in bytes, if blocks are not big enough."},
|
|
||||||
{"parallel_replicas_prefer_local_join", true, true, "If true, and JOIN can be executed with parallel replicas algorithm, and all storages of right JOIN part are *MergeTree, local JOIN will be used instead of GLOBAL JOIN."},
|
|
||||||
{"optimize_time_filter_with_preimage", true, true, "Optimize Date and DateTime predicates by converting functions into equivalent comparisons without conversions (e.g. toYear(col) = 2023 -> col >= '2023-01-01' AND col <= '2023-12-31')"},
|
|
||||||
{"extract_key_value_pairs_max_pairs_per_row", 0, 0, "Max number of pairs that can be produced by the `extractKeyValuePairs` function. Used as a safeguard against consuming too much memory."},
|
|
||||||
{"default_view_definer", "CURRENT_USER", "CURRENT_USER", "Allows to set default `DEFINER` option while creating a view"},
|
|
||||||
{"default_materialized_view_sql_security", "DEFINER", "DEFINER", "Allows to set a default value for SQL SECURITY option when creating a materialized view"},
|
|
||||||
{"default_normal_view_sql_security", "INVOKER", "INVOKER", "Allows to set default `SQL SECURITY` option while creating a normal view"},
|
|
||||||
{"mysql_map_string_to_text_in_show_columns", false, true, "Reduce the configuration effort to connect ClickHouse with BI tools."},
|
|
||||||
{"mysql_map_fixed_string_to_text_in_show_columns", false, true, "Reduce the configuration effort to connect ClickHouse with BI tools."},
|
|
||||||
}},
|
|
||||||
{"24.1", {{"print_pretty_type_names", false, true, "Better user experience."},
|
|
||||||
{"input_format_json_read_bools_as_strings", false, true, "Allow to read bools as strings in JSON formats by default"},
|
|
||||||
{"output_format_arrow_use_signed_indexes_for_dictionary", false, true, "Use signed indexes type for Arrow dictionaries by default as it's recommended"},
|
|
||||||
{"allow_experimental_variant_type", false, false, "Add new experimental Variant type"},
|
|
||||||
{"use_variant_as_common_type", false, false, "Allow to use Variant in if/multiIf if there is no common type"},
|
|
||||||
{"output_format_arrow_use_64_bit_indexes_for_dictionary", false, false, "Allow to use 64 bit indexes type in Arrow dictionaries"},
|
|
||||||
{"parallel_replicas_mark_segment_size", 128, 128, "Add new setting to control segment size in new parallel replicas coordinator implementation"},
|
|
||||||
{"ignore_materialized_views_with_dropped_target_table", false, false, "Add new setting to allow to ignore materialized views with dropped target table"},
|
|
||||||
{"output_format_compression_level", 3, 3, "Allow to change compression level in the query output"},
|
|
||||||
{"output_format_compression_zstd_window_log", 0, 0, "Allow to change zstd window log in the query output when zstd compression is used"},
|
|
||||||
{"enable_zstd_qat_codec", false, false, "Add new ZSTD_QAT codec"},
|
|
||||||
{"enable_vertical_final", false, true, "Use vertical final by default"},
|
|
||||||
{"output_format_arrow_use_64_bit_indexes_for_dictionary", false, false, "Allow to use 64 bit indexes type in Arrow dictionaries"},
|
|
||||||
{"max_rows_in_set_to_optimize_join", 100000, 0, "Disable join optimization as it prevents from read in order optimization"},
|
|
||||||
{"output_format_pretty_color", true, "auto", "Setting is changed to allow also for auto value, disabling ANSI escapes if output is not a tty"},
|
|
||||||
{"function_visible_width_behavior", 0, 1, "We changed the default behavior of `visibleWidth` to be more precise"},
|
|
||||||
{"max_estimated_execution_time", 0, 0, "Separate max_execution_time and max_estimated_execution_time"},
|
|
||||||
{"iceberg_engine_ignore_schema_evolution", false, false, "Allow to ignore schema evolution in Iceberg table engine"},
|
|
||||||
{"optimize_injective_functions_in_group_by", false, true, "Replace injective functions by it's arguments in GROUP BY section in analyzer"},
|
|
||||||
{"update_insert_deduplication_token_in_dependent_materialized_views", false, false, "Allow to update insert deduplication token with table identifier during insert in dependent materialized views"},
|
|
||||||
{"azure_max_unexpected_write_error_retries", 4, 4, "The maximum number of retries in case of unexpected errors during Azure blob storage write"},
|
|
||||||
{"split_parts_ranges_into_intersecting_and_non_intersecting_final", false, true, "Allow to split parts ranges into intersecting and non intersecting during FINAL optimization"},
|
|
||||||
{"split_intersecting_parts_ranges_into_layers_final", true, true, "Allow to split intersecting parts ranges into layers during FINAL optimization"}}},
|
|
||||||
{"23.12", {{"allow_suspicious_ttl_expressions", true, false, "It is a new setting, and in previous versions the behavior was equivalent to allowing."},
|
|
||||||
{"input_format_parquet_allow_missing_columns", false, true, "Allow missing columns in Parquet files by default"},
|
|
||||||
{"input_format_orc_allow_missing_columns", false, true, "Allow missing columns in ORC files by default"},
|
|
||||||
{"input_format_arrow_allow_missing_columns", false, true, "Allow missing columns in Arrow files by default"}}},
|
|
||||||
{"23.9", {{"optimize_group_by_constant_keys", false, true, "Optimize group by constant keys by default"},
|
|
||||||
{"input_format_json_try_infer_named_tuples_from_objects", false, true, "Try to infer named Tuples from JSON objects by default"},
|
|
||||||
{"input_format_json_read_numbers_as_strings", false, true, "Allow to read numbers as strings in JSON formats by default"},
|
|
||||||
{"input_format_json_read_arrays_as_strings", false, true, "Allow to read arrays as strings in JSON formats by default"},
|
|
||||||
{"input_format_json_infer_incomplete_types_as_strings", false, true, "Allow to infer incomplete types as Strings in JSON formats by default"},
|
|
||||||
{"input_format_json_try_infer_numbers_from_strings", true, false, "Don't infer numbers from strings in JSON formats by default to prevent possible parsing errors"},
|
|
||||||
{"http_write_exception_in_output_format", false, true, "Output valid JSON/XML on exception in HTTP streaming."}}},
|
|
||||||
{"23.8", {{"rewrite_count_distinct_if_with_count_distinct_implementation", false, true, "Rewrite countDistinctIf with count_distinct_implementation configuration"}}},
|
|
||||||
{"23.7", {{"function_sleep_max_microseconds_per_block", 0, 3000000, "In previous versions, the maximum sleep time of 3 seconds was applied only for `sleep`, but not for `sleepEachRow` function. In the new version, we introduce this setting. If you set compatibility with the previous versions, we will disable the limit altogether."}}},
|
|
||||||
{"23.6", {{"http_send_timeout", 180, 30, "3 minutes seems crazy long. Note that this is timeout for a single network write call, not for the whole upload operation."},
|
|
||||||
{"http_receive_timeout", 180, 30, "See http_send_timeout."}}},
|
|
||||||
{"23.5", {{"input_format_parquet_preserve_order", true, false, "Allow Parquet reader to reorder rows for better parallelism."},
|
|
||||||
{"parallelize_output_from_storages", false, true, "Allow parallelism when executing queries that read from file/url/s3/etc. This may reorder rows."},
|
|
||||||
{"use_with_fill_by_sorting_prefix", false, true, "Columns preceding WITH FILL columns in ORDER BY clause form sorting prefix. Rows with different values in sorting prefix are filled independently"},
|
|
||||||
{"output_format_parquet_compliant_nested_types", false, true, "Change an internal field name in output Parquet file schema."}}},
|
|
||||||
{"23.4", {{"allow_suspicious_indices", true, false, "If true, index can defined with identical expressions"},
|
|
||||||
{"allow_nonconst_timezone_arguments", true, false, "Allow non-const timezone arguments in certain time-related functions like toTimeZone(), fromUnixTimestamp*(), snowflakeToDateTime*()."},
|
|
||||||
{"connect_timeout_with_failover_ms", 50, 1000, "Increase default connect timeout because of async connect"},
|
|
||||||
{"connect_timeout_with_failover_secure_ms", 100, 1000, "Increase default secure connect timeout because of async connect"},
|
|
||||||
{"hedged_connection_timeout_ms", 100, 50, "Start new connection in hedged requests after 50 ms instead of 100 to correspond with previous connect timeout"}}},
|
|
||||||
{"23.3", {{"output_format_parquet_version", "1.0", "2.latest", "Use latest Parquet format version for output format"},
|
|
||||||
{"input_format_json_ignore_unknown_keys_in_named_tuple", false, true, "Improve parsing JSON objects as named tuples"},
|
|
||||||
{"input_format_native_allow_types_conversion", false, true, "Allow types conversion in Native input forma"},
|
|
||||||
{"output_format_arrow_compression_method", "none", "lz4_frame", "Use lz4 compression in Arrow output format by default"},
|
|
||||||
{"output_format_parquet_compression_method", "snappy", "lz4", "Use lz4 compression in Parquet output format by default"},
|
|
||||||
{"output_format_orc_compression_method", "none", "lz4_frame", "Use lz4 compression in ORC output format by default"},
|
|
||||||
{"async_query_sending_for_remote", false, true, "Create connections and send query async across shards"}}},
|
|
||||||
{"23.2", {{"output_format_parquet_fixed_string_as_fixed_byte_array", false, true, "Use Parquet FIXED_LENGTH_BYTE_ARRAY type for FixedString by default"},
|
|
||||||
{"output_format_arrow_fixed_string_as_fixed_byte_array", false, true, "Use Arrow FIXED_SIZE_BINARY type for FixedString by default"},
|
|
||||||
{"query_plan_remove_redundant_distinct", false, true, "Remove redundant Distinct step in query plan"},
|
|
||||||
{"optimize_duplicate_order_by_and_distinct", true, false, "Remove duplicate ORDER BY and DISTINCT if it's possible"},
|
|
||||||
{"insert_keeper_max_retries", 0, 20, "Enable reconnections to Keeper on INSERT, improve reliability"}}},
|
|
||||||
{"23.1", {{"input_format_json_read_objects_as_strings", 0, 1, "Enable reading nested json objects as strings while object type is experimental"},
|
|
||||||
{"input_format_json_defaults_for_missing_elements_in_named_tuple", false, true, "Allow missing elements in JSON objects while reading named tuples by default"},
|
|
||||||
{"input_format_csv_detect_header", false, true, "Detect header in CSV format by default"},
|
|
||||||
{"input_format_tsv_detect_header", false, true, "Detect header in TSV format by default"},
|
|
||||||
{"input_format_custom_detect_header", false, true, "Detect header in CustomSeparated format by default"},
|
|
||||||
{"query_plan_remove_redundant_sorting", false, true, "Remove redundant sorting in query plan. For example, sorting steps related to ORDER BY clauses in subqueries"}}},
|
|
||||||
{"22.12", {{"max_size_to_preallocate_for_aggregation", 10'000'000, 100'000'000, "This optimizes performance"},
|
|
||||||
{"query_plan_aggregation_in_order", 0, 1, "Enable some refactoring around query plan"},
|
|
||||||
{"format_binary_max_string_size", 0, 1_GiB, "Prevent allocating large amount of memory"}}},
|
|
||||||
{"22.11", {{"use_structure_from_insertion_table_in_table_functions", 0, 2, "Improve using structure from insertion table in table functions"}}},
|
|
||||||
{"23.4", {{"formatdatetime_f_prints_single_zero", true, false, "Improved compatibility with MySQL DATE_FORMAT()/STR_TO_DATE()"}}},
|
|
||||||
{"23.4", {{"formatdatetime_parsedatetime_m_is_month_name", false, true, "Improved compatibility with MySQL DATE_FORMAT/STR_TO_DATE"}}},
|
|
||||||
{"23.11", {{"parsedatetime_parse_without_leading_zeros", false, true, "Improved compatibility with MySQL DATE_FORMAT/STR_TO_DATE"}}},
|
|
||||||
{"22.9", {{"force_grouping_standard_compatibility", false, true, "Make GROUPING function output the same as in SQL standard and other DBMS"}}},
|
|
||||||
{"22.7", {{"cross_to_inner_join_rewrite", 1, 2, "Force rewrite comma join to inner"},
|
|
||||||
{"enable_positional_arguments", false, true, "Enable positional arguments feature by default"},
|
|
||||||
{"format_csv_allow_single_quotes", true, false, "Most tools don't treat single quote in CSV specially, don't do it by default too"}}},
|
|
||||||
{"22.6", {{"output_format_json_named_tuples_as_objects", false, true, "Allow to serialize named tuples as JSON objects in JSON formats by default"},
|
|
||||||
{"input_format_skip_unknown_fields", false, true, "Optimize reading subset of columns for some input formats"}}},
|
|
||||||
{"22.5", {{"memory_overcommit_ratio_denominator", 0, 1073741824, "Enable memory overcommit feature by default"},
|
|
||||||
{"memory_overcommit_ratio_denominator_for_user", 0, 1073741824, "Enable memory overcommit feature by default"}}},
|
|
||||||
{"22.4", {{"allow_settings_after_format_in_insert", true, false, "Do not allow SETTINGS after FORMAT for INSERT queries because ClickHouse interpret SETTINGS as some values, which is misleading"}}},
|
|
||||||
{"22.3", {{"cast_ipv4_ipv6_default_on_conversion_error", true, false, "Make functions cast(value, 'IPv4') and cast(value, 'IPv6') behave same as toIPv4 and toIPv6 functions"}}},
|
|
||||||
{"21.12", {{"stream_like_engine_allow_direct_select", true, false, "Do not allow direct select for Kafka/RabbitMQ/FileLog by default"}}},
|
|
||||||
{"21.9", {{"output_format_decimal_trailing_zeros", true, false, "Do not output trailing zeros in text representation of Decimal types by default for better looking output"},
|
|
||||||
{"use_hedged_requests", false, true, "Enable Hedged Requests feature by default"}}},
|
|
||||||
{"21.7", {{"legacy_column_name_of_tuple_literal", true, false, "Add this setting only for compatibility reasons. It makes sense to set to 'true', while doing rolling update of cluster from version lower than 21.7 to higher"}}},
|
|
||||||
{"21.5", {{"async_socket_for_remote", false, true, "Fix all problems and turn on asynchronous reads from socket for remote queries by default again"}}},
|
|
||||||
{"21.3", {{"async_socket_for_remote", true, false, "Turn off asynchronous reads from socket for remote queries because of some problems"},
|
|
||||||
{"optimize_normalize_count_variants", false, true, "Rewrite aggregate functions that semantically equals to count() as count() by default"},
|
|
||||||
{"normalize_function_names", false, true, "Normalize function names to their canonical names, this was needed for projection query routing"}}},
|
|
||||||
{"21.2", {{"enable_global_with_statement", false, true, "Propagate WITH statements to UNION queries and all subqueries by default"}}},
|
|
||||||
{"21.1", {{"insert_quorum_parallel", false, true, "Use parallel quorum inserts by default. It is significantly more convenient to use than sequential quorum inserts"},
|
|
||||||
{"input_format_null_as_default", false, true, "Allow to insert NULL as default for input formats by default"},
|
|
||||||
{"optimize_on_insert", false, true, "Enable data optimization on INSERT by default for better user experience"},
|
|
||||||
{"use_compact_format_in_distributed_parts_names", false, true, "Use compact format for async INSERT into Distributed tables by default"}}},
|
|
||||||
{"20.10", {{"format_regexp_escaping_rule", "Escaped", "Raw", "Use Raw as default escaping rule for Regexp format to male the behaviour more like to what users expect"}}},
|
|
||||||
{"20.7", {{"show_table_uuid_in_table_create_query_if_not_nil", true, false, "Stop showing UID of the table in its CREATE query for Engine=Atomic"}}},
|
|
||||||
{"20.5", {{"input_format_with_names_use_header", false, true, "Enable using header with names for formats with WithNames/WithNamesAndTypes suffixes"},
|
|
||||||
{"allow_suspicious_codecs", true, false, "Don't allow to specify meaningless compression codecs"}}},
|
|
||||||
{"20.4", {{"validate_polygons", false, true, "Throw exception if polygon is invalid in function pointInPolygon by default instead of returning possibly wrong results"}}},
|
|
||||||
{"19.18", {{"enable_scalar_subquery_optimization", false, true, "Prevent scalar subqueries from (de)serializing large scalar values and possibly avoid running the same subquery more than once"}}},
|
|
||||||
{"19.14", {{"any_join_distinct_right_table_keys", true, false, "Disable ANY RIGHT and ANY FULL JOINs by default to avoid inconsistency"}}},
|
|
||||||
{"19.12", {{"input_format_defaults_for_omitted_fields", false, true, "Enable calculation of complex default expressions for omitted fields for some input formats, because it should be the expected behaviour"}}},
|
|
||||||
{"19.5", {{"max_partitions_per_insert_block", 0, 100, "Add a limit for the number of partitions in one block"}}},
|
|
||||||
{"18.12.17", {{"enable_optimize_predicate_expression", 0, 1, "Optimize predicates to subqueries by default"}}},
|
|
||||||
};
|
|
||||||
|
|
||||||
}
|
}
|
||||||
|
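The list above is the per-release history of changed setting defaults. As an illustration of how such a history can drive a compatibility mode, here is a small self-contained C++ sketch (the history entries, version keys, and helper names below are made up for the example; this is not the ClickHouse implementation): every change introduced after the requested target release is rolled back to its previous default.

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

// One entry of the history: setting name, previous default, new default.
struct SettingChange { std::string name, previous_value, new_value; };

// Hypothetical, heavily trimmed history keyed by release version.
// NOTE: lexicographic ordering of version strings is good enough for this toy example only.
const std::map<std::string, std::vector<SettingChange>> settings_changes_history =
{
    {"24.5", {{"allow_suspicious_primary_key", "true", "false"}}},
    {"24.6", {{"allow_experimental_analyzer", "false", "true"}}},
};

// Defaults to apply for a compatibility target: every change made after `target` is rolled back.
// Iterating from the newest release down and overwriting lets the oldest previous default win.
std::map<std::string, std::string> compatibilityOverrides(const std::string & target)
{
    std::map<std::string, std::string> overrides;
    for (auto it = settings_changes_history.rbegin(); it != settings_changes_history.rend(); ++it)
    {
        if (it->first <= target)
            break;
        for (const auto & change : it->second)
            overrides[change.name] = change.previous_value;
    }
    return overrides;
}

int main()
{
    for (const auto & [name, value] : compatibilityOverrides("24.5"))
        std::cout << name << " = " << value << '\n';   // allow_experimental_analyzer = false
}
```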
@@ -201,13 +201,13 @@ IMPLEMENT_SETTING_ENUM(ORCCompression, ErrorCodes::BAD_ARGUMENTS,
{"zlib", FormatSettings::ORCCompression::ZLIB},
{"lz4", FormatSettings::ORCCompression::LZ4}})

-IMPLEMENT_SETTING_ENUM(S3QueueMode, ErrorCodes::BAD_ARGUMENTS,
+IMPLEMENT_SETTING_ENUM(ObjectStorageQueueMode, ErrorCodes::BAD_ARGUMENTS,
-{{"ordered", S3QueueMode::ORDERED},
+{{"ordered", ObjectStorageQueueMode::ORDERED},
-{"unordered", S3QueueMode::UNORDERED}})
+{"unordered", ObjectStorageQueueMode::UNORDERED}})

-IMPLEMENT_SETTING_ENUM(S3QueueAction, ErrorCodes::BAD_ARGUMENTS,
+IMPLEMENT_SETTING_ENUM(ObjectStorageQueueAction, ErrorCodes::BAD_ARGUMENTS,
-{{"keep", S3QueueAction::KEEP},
+{{"keep", ObjectStorageQueueAction::KEEP},
-{"delete", S3QueueAction::DELETE}})
+{"delete", ObjectStorageQueueAction::DELETE}})

IMPLEMENT_SETTING_ENUM(ExternalCommandStderrReaction, ErrorCodes::BAD_ARGUMENTS,
{{"none", ExternalCommandStderrReaction::NONE},
@@ -341,21 +341,21 @@ DECLARE_SETTING_ENUM(ParallelReplicasCustomKeyFilterType)

DECLARE_SETTING_ENUM(LocalFSReadMethod)

-enum class S3QueueMode : uint8_t
+enum class ObjectStorageQueueMode : uint8_t
{
ORDERED,
UNORDERED,
};

-DECLARE_SETTING_ENUM(S3QueueMode)
+DECLARE_SETTING_ENUM(ObjectStorageQueueMode)

-enum class S3QueueAction : uint8_t
+enum class ObjectStorageQueueAction : uint8_t
{
KEEP,
DELETE,
};

-DECLARE_SETTING_ENUM(S3QueueAction)
+DECLARE_SETTING_ENUM(ObjectStorageQueueAction)

DECLARE_SETTING_ENUM(ExternalCommandStderrReaction)
@@ -30,8 +30,8 @@ namespace
{
friend void tryVisitNestedSelect(const String & query, DDLDependencyVisitorData & data);
public:
-DDLDependencyVisitorData(const ContextPtr & context_, const QualifiedTableName & table_name_, const ASTPtr & ast_)
+DDLDependencyVisitorData(const ContextPtr & global_context_, const QualifiedTableName & table_name_, const ASTPtr & ast_, const String & current_database_)
-: create_query(ast_), table_name(table_name_), current_database(context_->getCurrentDatabase()), context(context_)
+: create_query(ast_), table_name(table_name_), default_database(global_context_->getCurrentDatabase()), current_database(current_database_), global_context(global_context_)
{
}

@@ -71,8 +71,9 @@ namespace
ASTPtr create_query;
std::unordered_set<const IAST *> skip_asts;
QualifiedTableName table_name;
+String default_database;
String current_database;
-ContextPtr context;
+ContextPtr global_context;
TableNamesSet dependencies;

/// CREATE TABLE or CREATE DICTIONARY or CREATE VIEW or CREATE TEMPORARY TABLE or CREATE DATABASE query.
@@ -95,6 +96,11 @@ namespace
as_table.database = current_database;
dependencies.emplace(as_table);
}

+/// Visit nested select query only for views, for other cases it's not
+/// an actual dependency as it will be executed only once to fill the table.
+if (create.select && !create.isView())
+skip_asts.insert(create.select);
}

/// The definition of a dictionary: SOURCE(CLICKHOUSE(...)) LAYOUT(...) LIFETIME(...)
@@ -103,8 +109,8 @@ namespace
if (!dictionary.source || dictionary.source->name != "clickhouse" || !dictionary.source->elements)
return;

-auto config = getDictionaryConfigurationFromAST(create_query->as<ASTCreateQuery &>(), context);
+auto config = getDictionaryConfigurationFromAST(create_query->as<ASTCreateQuery &>(), global_context);
-auto info = getInfoIfClickHouseDictionarySource(config, context);
+auto info = getInfoIfClickHouseDictionarySource(config, global_context);

/// We consider only dependencies on local tables.
if (!info || !info->is_local)
@@ -112,14 +118,21 @@ namespace

if (!info->table_name.table.empty())
{
+/// If database is not specified in dictionary source, use database of the dictionary itself, not the current/default database.
if (info->table_name.database.empty())
-info->table_name.database = current_database;
+info->table_name.database = table_name.database;
dependencies.emplace(std::move(info->table_name));
}
else
{
-/// We don't have a table name, we have a select query instead
+/// We don't have a table name, we have a select query instead.
+/// All tables from select query in dictionary definition won't
+/// use current database, as this query is executed with global context.
+/// Use default database from global context while visiting select query.
+String current_database_ = current_database;
+current_database = default_database;
tryVisitNestedSelect(info->query, *this);
+current_database = current_database_;
}
}
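The hunk above changes how a dictionary's ClickHouse source table is qualified: when the source omits a database, the dependency now resolves to the dictionary's own database instead of the session's current database. A standalone C++ sketch of that rule (hypothetical helper and types, not the actual visitor code):

```cpp
#include <iostream>
#include <string>

struct QualifiedTableName { std::string database, table; };

// A source table without an explicit database is attributed to the dictionary's own
// database, not to whatever database the session happens to be using.
QualifiedTableName resolveDictionarySourceTable(QualifiedTableName source, const QualifiedTableName & dictionary)
{
    if (source.database.empty())
        source.database = dictionary.database;
    return source;
}

int main()
{
    QualifiedTableName dictionary{"db_for_dict", "dict"};
    QualifiedTableName dependency = resolveDictionarySourceTable({"", "lookup_table"}, dictionary);
    std::cout << dependency.database << "." << dependency.table << '\n';   // db_for_dict.lookup_table
}
```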
@@ -176,7 +189,7 @@ namespace

if (auto cluster_name = tryGetClusterNameFromArgument(table_engine, 0))
{
-auto cluster = context->tryGetCluster(*cluster_name);
+auto cluster = global_context->tryGetCluster(*cluster_name);
if (cluster && cluster->getLocalShardCount())
has_local_replicas = true;
}
@@ -231,7 +244,7 @@ namespace
{
if (auto cluster_name = tryGetClusterNameFromArgument(function, 0))
{
-if (auto cluster = context->tryGetCluster(*cluster_name))
+if (auto cluster = global_context->tryGetCluster(*cluster_name))
{
if (cluster->getLocalShardCount())
has_local_replicas = true;
@@ -303,7 +316,10 @@ namespace
try
{
/// We're just searching for dependencies here, it's not safe to execute subqueries now.
-auto evaluated = evaluateConstantExpressionOrIdentifierAsLiteral(arg, context);
+/// Use copy of the global_context and set current database, because expressions can contain currentDatabase() function.
+ContextMutablePtr global_context_copy = Context::createCopy(global_context);
+global_context_copy->setCurrentDatabase(current_database);
+auto evaluated = evaluateConstantExpressionOrIdentifierAsLiteral(arg, global_context_copy);
const auto * literal = evaluated->as<ASTLiteral>();
if (!literal || (literal->value.getType() != Field::Types::String))
return {};
@@ -444,7 +460,7 @@ namespace
ParserSelectWithUnionQuery parser;
String description = fmt::format("Query for ClickHouse dictionary {}", data.table_name);
String fixed_query = removeWhereConditionPlaceholder(query);
-const Settings & settings = data.context->getSettingsRef();
+const Settings & settings = data.global_context->getSettingsRef();
ASTPtr select = parseQuery(parser, fixed_query, description,
settings.max_query_size, settings.max_parser_depth, settings.max_parser_backtracks);

@@ -459,12 +475,19 @@ namespace
}

-TableNamesSet getDependenciesFromCreateQuery(const ContextPtr & context, const QualifiedTableName & table_name, const ASTPtr & ast)
+TableNamesSet getDependenciesFromCreateQuery(const ContextPtr & global_global_context, const QualifiedTableName & table_name, const ASTPtr & ast, const String & current_database)
{
-DDLDependencyVisitor::Data data{context, table_name, ast};
+DDLDependencyVisitor::Data data{global_global_context, table_name, ast, current_database};
DDLDependencyVisitor::Visitor visitor{data};
visitor.visit(ast);
return std::move(data).getDependencies();
}

+TableNamesSet getDependenciesFromDictionaryNestedSelectQuery(const ContextPtr & global_context, const QualifiedTableName & table_name, const ASTPtr & ast, const String & select_query, const String & current_database)
+{
+DDLDependencyVisitor::Data data{global_context, table_name, ast, current_database};
+tryVisitNestedSelect(select_query, data);
+return std::move(data).getDependencies();
+}

}
@@ -13,6 +13,9 @@ using TableNamesSet = std::unordered_set<QualifiedTableName>;
/// Returns a list of all tables explicitly referenced in the create query of a specified table.
/// For example, a column default expression can use dictGet() and thus reference a dictionary.
/// Does not validate AST, works a best-effort way.
-TableNamesSet getDependenciesFromCreateQuery(const ContextPtr & context, const QualifiedTableName & table_name, const ASTPtr & ast);
+TableNamesSet getDependenciesFromCreateQuery(const ContextPtr & global_context, const QualifiedTableName & table_name, const ASTPtr & ast, const String & current_database);

+/// Returns a list of all tables explicitly referenced in the select query specified as a dictionary source.
+TableNamesSet getDependenciesFromDictionaryNestedSelectQuery(const ContextPtr & global_context, const QualifiedTableName & table_name, const ASTPtr & ast, const String & select_query, const String & current_database);

}
@@ -110,19 +110,30 @@ void DDLLoadingDependencyVisitor::visit(const ASTFunctionWithKeyValueArguments &
auto config = getDictionaryConfigurationFromAST(data.create_query->as<ASTCreateQuery &>(), data.global_context);
auto info = getInfoIfClickHouseDictionarySource(config, data.global_context);

-if (!info || !info->is_local || info->table_name.table.empty())
+if (!info || !info->is_local)
return;

-if (info->table_name.database.empty())
+if (!info->table_name.table.empty())
-info->table_name.database = data.default_database;
+{
-data.dependencies.emplace(std::move(info->table_name));
+/// If database is not specified in dictionary source, use database of the dictionary itself, not the current/default database.
+if (info->table_name.database.empty())
+info->table_name.database = data.table_name.database;
+data.dependencies.emplace(std::move(info->table_name));
+}
+else
+{
+/// We don't have a table name, we have a select query instead that will be executed during dictionary loading.
+/// We need to find all tables used in this select query and add them to dependencies.
+auto select_query_dependencies = getDependenciesFromDictionaryNestedSelectQuery(data.global_context, data.table_name, data.create_query, info->query, data.default_database);
+data.dependencies.merge(select_query_dependencies);
+}
}

void DDLLoadingDependencyVisitor::visit(const ASTStorage & storage, Data & data)
{
if (storage.ttl_table)
{
-auto ttl_dependensies = getDependenciesFromCreateQuery(data.global_context, data.table_name, storage.ttl_table->ptr());
+auto ttl_dependensies = getDependenciesFromCreateQuery(data.global_context, data.table_name, storage.ttl_table->ptr(), data.default_database);
data.dependencies.merge(ttl_dependensies);
}
@@ -154,7 +154,7 @@ void DatabaseMemory::alterTable(ContextPtr local_context, const StorageID & tabl
applyMetadataChangesToCreateQuery(it->second, metadata);

/// The create query of the table has been just changed, we need to update dependencies too.
-auto ref_dependencies = getDependenciesFromCreateQuery(local_context->getGlobalContext(), table_id.getQualifiedName(), it->second);
+auto ref_dependencies = getDependenciesFromCreateQuery(local_context->getGlobalContext(), table_id.getQualifiedName(), it->second, local_context->getCurrentDatabase());
auto loading_dependencies = getLoadingDependenciesFromCreateQuery(local_context->getGlobalContext(), table_id.getQualifiedName(), it->second);
DatabaseCatalog::instance().updateDependencies(table_id, ref_dependencies, loading_dependencies);
}
@@ -539,7 +539,7 @@ void DatabaseOrdinary::alterTable(ContextPtr local_context, const StorageID & ta
}

/// The create query of the table has been just changed, we need to update dependencies too.
-auto ref_dependencies = getDependenciesFromCreateQuery(local_context->getGlobalContext(), table_id.getQualifiedName(), ast);
+auto ref_dependencies = getDependenciesFromCreateQuery(local_context->getGlobalContext(), table_id.getQualifiedName(), ast, local_context->getCurrentDatabase());
auto loading_dependencies = getLoadingDependenciesFromCreateQuery(local_context->getGlobalContext(), table_id.getQualifiedName(), ast);
DatabaseCatalog::instance().updateDependencies(table_id, ref_dependencies, loading_dependencies);

@@ -1165,7 +1165,7 @@ void DatabaseReplicated::recoverLostReplica(const ZooKeeperPtr & current_zookeep
/// And QualifiedTableName::parseFromString doesn't handle this.
auto qualified_name = QualifiedTableName{.database = getDatabaseName(), .table = table_name};
auto query_ast = parseQueryFromMetadataInZooKeeper(table_name, create_table_query);
-tables_dependencies.addDependencies(qualified_name, getDependenciesFromCreateQuery(getContext(), qualified_name, query_ast));
+tables_dependencies.addDependencies(qualified_name, getDependenciesFromCreateQuery(getContext()->getGlobalContext(), qualified_name, query_ast, getContext()->getCurrentDatabase()));
}

tables_dependencies.checkNoCyclicDependencies();
@@ -137,7 +137,7 @@ void TablesLoader::buildDependencyGraph()
{
for (const auto & [table_name, table_metadata] : metadata.parsed_tables)
{
-auto new_ref_dependencies = getDependenciesFromCreateQuery(global_context, table_name, table_metadata.ast);
+auto new_ref_dependencies = getDependenciesFromCreateQuery(global_context, table_name, table_metadata.ast, global_context->getCurrentDatabase());
auto new_loading_dependencies = getLoadingDependenciesFromCreateQuery(global_context, table_name, table_metadata.ast);

if (!new_ref_dependencies.empty())
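buildDependencyGraph() above now passes the server's current database when extracting referenced tables from each CREATE query. A minimal standalone sketch of the surrounding idea, collecting a per-table set of referenced tables into a graph (hypothetical types and a stubbed extractor, not the TablesLoader code):

```cpp
#include <iostream>
#include <map>
#include <set>
#include <string>
#include <vector>

using TableName = std::string;
using DependencyGraph = std::map<TableName, std::set<TableName>>;

// Stand-in for getDependenciesFromCreateQuery(): the real code extracts the set from the
// parsed CREATE query, qualifying unqualified names against the current database.
std::set<TableName> dependenciesOf(const TableName & table)
{
    if (table == "default.mv")
        return {"default.source", "default.target"};
    return {};
}

int main()
{
    const std::vector<TableName> tables = {"default.source", "default.target", "default.mv"};

    DependencyGraph graph;
    for (const TableName & table : tables)
        graph[table] = dependenciesOf(table);

    for (const auto & [table, deps] : graph)
        std::cout << table << " depends on " << deps.size() << " table(s)\n";
}
```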
@@ -149,6 +149,7 @@ FormatSettings getFormatSettings(const ContextPtr & context, const Settings & se
format_settings.json.try_infer_objects_as_tuples = settings.input_format_json_try_infer_named_tuples_from_objects;
format_settings.json.throw_on_bad_escape_sequence = settings.input_format_json_throw_on_bad_escape_sequence;
format_settings.json.ignore_unnecessary_fields = settings.input_format_json_ignore_unnecessary_fields;
+format_settings.json.ignore_key_case = settings.input_format_json_ignore_key_case;
format_settings.null_as_default = settings.input_format_null_as_default;
format_settings.force_null_for_omitted_fields = settings.input_format_force_null_for_omitted_fields;
format_settings.decimal_trailing_zeros = settings.output_format_decimal_trailing_zeros;
@@ -228,6 +228,7 @@ struct FormatSettings
bool infer_incomplete_types_as_strings = true;
bool throw_on_bad_escape_sequence = true;
bool ignore_unnecessary_fields = true;
+bool ignore_key_case = false;
} json{};

struct
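The new `ignore_key_case` flag (default false) is wired from the `input_format_json_ignore_key_case` setting into FormatSettings here. A standalone C++ sketch of what such a flag implies for key lookup, exact match first with a case-insensitive fallback only when the flag is enabled (illustrative only, not the JSON reader's real code):

```cpp
#include <algorithm>
#include <cctype>
#include <iostream>
#include <map>
#include <optional>
#include <string>

static std::string lowerAscii(std::string s)
{
    std::transform(s.begin(), s.end(), s.begin(), [](unsigned char c) { return static_cast<char>(std::tolower(c)); });
    return s;
}

// Look up a column value among parsed JSON keys, optionally ignoring ASCII case.
std::optional<std::string> findColumn(const std::map<std::string, std::string> & row, const std::string & column, bool ignore_key_case)
{
    if (auto it = row.find(column); it != row.end())
        return it->second;            // exact match always wins
    if (!ignore_key_case)
        return std::nullopt;
    for (const auto & [key, value] : row)
        if (lowerAscii(key) == lowerAscii(column))
            return value;             // case-insensitive fallback
    return std::nullopt;
}

int main()
{
    std::map<std::string, std::string> row{{"UserId", "42"}};
    std::cout << findColumn(row, "userid", /*ignore_key_case=*/ true).value_or("<missing>") << '\n';   // 42
}
```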
@@ -978,8 +978,7 @@ namespace
[[nodiscard]]
static PosOrError mysqlAmericanDate(Pos cur, Pos end, const String & fragment, DateTime<error_handling> & date)
{
-if (auto status = checkSpace(cur, end, 8, "mysqlAmericanDate requires size >= 8", fragment))
+RETURN_ERROR_IF_FAILED(checkSpace(cur, end, 8, "mysqlAmericanDate requires size >= 8", fragment))
-return tl::unexpected(status.error());

Int32 month;
ASSIGN_RESULT_OR_RETURN_ERROR(cur, (readNumber2<Int32, NeedCheckSpace::No>(cur, end, fragment, month)))
@@ -993,7 +992,7 @@ namespace

Int32 year;
ASSIGN_RESULT_OR_RETURN_ERROR(cur, (readNumber2<Int32, NeedCheckSpace::No>(cur, end, fragment, year)))
-RETURN_ERROR_IF_FAILED(date.setYear(year))
+RETURN_ERROR_IF_FAILED(date.setYear(year + 2000))
return cur;
}

@@ -1015,8 +1014,7 @@ namespace
[[nodiscard]]
static PosOrError mysqlISO8601Date(Pos cur, Pos end, const String & fragment, DateTime<error_handling> & date)
{
-if (auto status = checkSpace(cur, end, 10, "mysqlISO8601Date requires size >= 10", fragment))
+RETURN_ERROR_IF_FAILED(checkSpace(cur, end, 10, "mysqlISO8601Date requires size >= 10", fragment))
-return tl::unexpected(status.error());

Int32 year;
Int32 month;
@@ -1462,8 +1460,7 @@ namespace
[[nodiscard]]
static PosOrError jodaDayOfWeekText(size_t /*min_represent_digits*/, Pos cur, Pos end, const String & fragment, DateTime<error_handling> & date)
{
-if (auto result= checkSpace(cur, end, 3, "jodaDayOfWeekText requires size >= 3", fragment); !result.has_value())
+RETURN_ERROR_IF_FAILED(checkSpace(cur, end, 3, "jodaDayOfWeekText requires size >= 3", fragment))
-return tl::unexpected(result.error());

String text1(cur, 3);
boost::to_lower(text1);
@@ -1556,8 +1553,8 @@ namespace
Int32 day_of_month;
ASSIGN_RESULT_OR_RETURN_ERROR(cur, (readNumberWithVariableLength(
cur, end, false, false, false, repetitions, std::max(repetitions, 2uz), fragment, day_of_month)))
-if (auto res = date.setDayOfMonth(day_of_month); !res.has_value())
+RETURN_ERROR_IF_FAILED(date.setDayOfMonth(day_of_month))
-return tl::unexpected(res.error());
return cur;
}
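The change to mysqlAmericanDate above makes the %D parser store the two-digit year as 2000 + yy instead of the raw yy. A tiny standalone illustration of that mapping (assumption for this sketch: %D years always land in 2000-2099, which is what `date.setYear(year + 2000)` implies):

```cpp
#include <cassert>

// Two-digit years from the %D (mm/dd/yy) format are mapped into the 21st century.
int resolveTwoDigitYear(int yy)
{
    return 2000 + yy;
}

int main()
{
    assert(resolveTwoDigitYear(24) == 2024);   // "06/27/24" -> 2024-06-27
    assert(resolveTwoDigitYear(0) == 2000);
}
```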
@@ -9,6 +9,21 @@ namespace ErrorCodes
extern const int UNSUPPORTED_METHOD;
}

+namespace S3
+{
+std::string tryGetRunningAvailabilityZone()
+{
+try
+{
+return getRunningAvailabilityZone();
+}
+catch (...)
+{
+tryLogCurrentException("tryGetRunningAvailabilityZone");
+return "";
+}
+}
+}
}

#if USE_AWS_S3
@@ -24,6 +24,7 @@ static inline constexpr char GCP_METADATA_SERVICE_ENDPOINT[] = "http://metadata.

/// getRunningAvailabilityZone returns the availability zone of the underlying compute resources where the current process runs.
std::string getRunningAvailabilityZone();
+std::string tryGetRunningAvailabilityZone();

class AWSEC2MetadataClient : public Aws::Internal::AWSHttpResourceClient
{
@@ -195,6 +196,7 @@ namespace DB
namespace S3
{
std::string getRunningAvailabilityZone();
+std::string tryGetRunningAvailabilityZone();
}

}
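tryGetRunningAvailabilityZone(), added above, is a non-throwing wrapper: it returns an empty string and logs if zone detection fails. A standalone C++ sketch of the same wrapper shape (the metadata call is stubbed out to always fail; this is not the AWS client code):

```cpp
#include <iostream>
#include <stdexcept>
#include <string>

// Stand-in for the real zone lookup: throws when no cloud metadata endpoint is reachable.
std::string getRunningAvailabilityZone()
{
    throw std::runtime_error("no instance metadata available");
}

// Same shape as the added tryGetRunningAvailabilityZone(): never throws,
// returns an empty string when the zone cannot be detected.
std::string tryGetRunningAvailabilityZone() noexcept
{
    try
    {
        return getRunningAvailabilityZone();
    }
    catch (...)
    {
        // The real code logs via tryLogCurrentException(); this sketch just swallows the error.
        return "";
    }
}

int main()
{
    std::cout << "zone: '" << tryGetRunningAvailabilityZone() << "'\n";
}
```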
@@ -187,13 +187,6 @@ size_t FileSegment::getDownloadedSize() const
return downloaded_size;
}

-void FileSegment::setDownloadedSize(size_t delta)
-{
-auto lk = lock();
-downloaded_size += delta;
-assert(downloaded_size == std::filesystem::file_size(getPath()));
-}

bool FileSegment::isDownloaded() const
{
auto lk = lock();
@@ -311,6 +304,11 @@ FileSegment::RemoteFileReaderPtr FileSegment::getRemoteFileReader()
return remote_file_reader;
}

+FileSegment::LocalCacheWriterPtr FileSegment::getLocalCacheWriter()
+{
+return cache_writer;
+}

void FileSegment::resetRemoteFileReader()
{
auto lk = lock();
@@ -340,33 +338,31 @@ void FileSegment::setRemoteFileReader(RemoteFileReaderPtr remote_file_reader_)
remote_file_reader = remote_file_reader_;
}

-void FileSegment::write(char * from, size_t size, size_t offset)
+void FileSegment::write(char * from, size_t size, size_t offset_in_file)
{
ProfileEventTimeIncrement<Microseconds> watch(ProfileEvents::FileSegmentWriteMicroseconds);
+auto file_segment_path = getPath();
-if (!size)
-throw Exception(ErrorCodes::LOGICAL_ERROR, "Writing zero size is not allowed");

{
-auto lk = lock();
+if (!size)
-assertIsDownloaderUnlocked("write", lk);
+throw Exception(ErrorCodes::LOGICAL_ERROR, "Writing zero size is not allowed");
-assertNotDetachedUnlocked(lk);
-}

-const auto file_segment_path = getPath();
+{
+auto lk = lock();
+assertIsDownloaderUnlocked("write", lk);
+assertNotDetachedUnlocked(lk);
+}

-{
if (download_state != State::DOWNLOADING)
throw Exception(
ErrorCodes::LOGICAL_ERROR,
"Expected DOWNLOADING state, got {}", stateToString(download_state));

const size_t first_non_downloaded_offset = getCurrentWriteOffset();
-if (offset != first_non_downloaded_offset)
+if (offset_in_file != first_non_downloaded_offset)
throw Exception(
ErrorCodes::LOGICAL_ERROR,
"Attempt to write {} bytes to offset: {}, but current write offset is {}",
-size, offset, first_non_downloaded_offset);
+size, offset_in_file, first_non_downloaded_offset);

const size_t current_downloaded_size = getDownloadedSize();
chassert(reserved_size >= current_downloaded_size);
|
|||||||
#endif
|
#endif
|
||||||
|
|
||||||
if (!cache_writer)
|
if (!cache_writer)
|
||||||
cache_writer = std::make_unique<WriteBufferFromFile>(file_segment_path, /* buf_size */0);
|
cache_writer = std::make_unique<WriteBufferFromFile>(getPath(), /* buf_size */0);
|
||||||
|
|
||||||
/// Size is equal to offset as offset for write buffer points to data end.
|
/// Size is equal to offset as offset for write buffer points to data end.
|
||||||
cache_writer->set(from, size, /* offset */size);
|
cache_writer->set(from, /* size */size, /* offset */size);
|
||||||
/// Reset the buffer when finished.
|
/// Reset the buffer when finished.
|
||||||
SCOPE_EXIT({ cache_writer->set(nullptr, 0); });
|
SCOPE_EXIT({ cache_writer->set(nullptr, 0); });
|
||||||
/// Flush the buffer.
|
/// Flush the buffer.
|
||||||
@ -435,7 +431,6 @@ void FileSegment::write(char * from, size_t size, size_t offset)
|
|||||||
}
|
}
|
||||||
|
|
||||||
throw;
|
throw;
|
||||||
|
|
||||||
}
|
}
|
||||||
catch (Exception & e)
|
catch (Exception & e)
|
||||||
{
|
{
|
||||||
@ -445,7 +440,7 @@ void FileSegment::write(char * from, size_t size, size_t offset)
|
|||||||
throw;
|
throw;
|
||||||
}
|
}
|
||||||
|
|
||||||
chassert(getCurrentWriteOffset() == offset + size);
|
chassert(getCurrentWriteOffset() == offset_in_file + size);
|
||||||
}
|
}
|
||||||
|
|
||||||
FileSegment::State FileSegment::wait(size_t offset)
|
FileSegment::State FileSegment::wait(size_t offset)
|
||||||
@ -828,7 +823,7 @@ bool FileSegment::assertCorrectnessUnlocked(const FileSegmentGuard::Lock & lock)
|
|||||||
};
|
};
|
||||||
|
|
||||||
const auto file_path = getPath();
|
const auto file_path = getPath();
|
||||||
if (segment_kind != FileSegmentKind::Temporary)
|
|
||||||
{
|
{
|
||||||
std::lock_guard lk(write_mutex);
|
std::lock_guard lk(write_mutex);
|
||||||
if (downloaded_size == 0)
|
if (downloaded_size == 0)
|
||||||
|
@@ -48,7 +48,7 @@ friend class FileCache; /// Because of reserved_size in tryReserve().
public:
using Key = FileCacheKey;
using RemoteFileReaderPtr = std::shared_ptr<ReadBufferFromFileBase>;
-using LocalCacheWriterPtr = std::unique_ptr<WriteBufferFromFile>;
+using LocalCacheWriterPtr = std::shared_ptr<WriteBufferFromFile>;
using Downloader = std::string;
using DownloaderId = std::string;
using Priority = IFileCachePriority;
@@ -204,7 +204,7 @@ public:
bool reserve(size_t size_to_reserve, size_t lock_wait_timeout_milliseconds, FileCacheReserveStat * reserve_stat = nullptr);

/// Write data into reserved space.
-void write(char * from, size_t size, size_t offset);
+void write(char * from, size_t size, size_t offset_in_file);

// Invariant: if state() != DOWNLOADING and remote file reader is present, the reader's
// available() == 0, and getFileOffsetOfBufferEnd() == our getCurrentWriteOffset().
@@ -212,6 +212,7 @@ public:
// The reader typically requires its internal_buffer to be assigned from the outside before
// calling next().
RemoteFileReaderPtr getRemoteFileReader();
+LocalCacheWriterPtr getLocalCacheWriter();

RemoteFileReaderPtr extractRemoteFileReader();

@@ -219,8 +220,6 @@ public:

void setRemoteFileReader(RemoteFileReaderPtr remote_file_reader_);

-void setDownloadedSize(size_t delta);

void setDownloadFailed();

private:
@@ -944,14 +944,7 @@ KeyMetadata::iterator LockedKey::removeFileSegmentImpl(
 try
 {
 const auto path = key_metadata->getFileSegmentPath(*file_segment);
-if (file_segment->segment_kind == FileSegmentKind::Temporary)
-{
-/// FIXME: For temporary file segment the requirement is not as strong because
-/// the implementation of "temporary data in cache" creates files in advance.
-if (fs::exists(path))
-fs::remove(path);
-}
-else if (file_segment->downloaded_size == 0)
+if (file_segment->downloaded_size == 0)
 {
 chassert(!fs::exists(path));
 }
@@ -4,6 +4,7 @@
 #include <Interpreters/Context.h>
 #include <IO/SwapHelper.h>
 #include <IO/ReadBufferFromFile.h>
+#include <IO/EmptyReadBuffer.h>

 #include <base/scope_guard.h>

@@ -33,21 +34,20 @@ namespace
 }

 WriteBufferToFileSegment::WriteBufferToFileSegment(FileSegment * file_segment_)
-: WriteBufferFromFileDecorator(std::make_unique<WriteBufferFromFile>(file_segment_->getPath()))
+: WriteBufferFromFileBase(DBMS_DEFAULT_BUFFER_SIZE, nullptr, 0)
 , file_segment(file_segment_)
 , reserve_space_lock_wait_timeout_milliseconds(getCacheLockWaitTimeout())
 {
 }

 WriteBufferToFileSegment::WriteBufferToFileSegment(FileSegmentsHolderPtr segment_holder_)
-: WriteBufferFromFileDecorator(
-segment_holder_->size() == 1
-? std::make_unique<WriteBufferFromFile>(segment_holder_->front().getPath())
-: throw Exception(ErrorCodes::LOGICAL_ERROR, "WriteBufferToFileSegment can be created only from single segment"))
+: WriteBufferFromFileBase(DBMS_DEFAULT_BUFFER_SIZE, nullptr, 0)
 , file_segment(&segment_holder_->front())
 , segment_holder(std::move(segment_holder_))
 , reserve_space_lock_wait_timeout_milliseconds(getCacheLockWaitTimeout())
 {
+if (segment_holder->size() != 1)
+throw Exception(ErrorCodes::LOGICAL_ERROR, "WriteBufferToFileSegment can be created only from single segment");
 }

 /// If it throws an exception, the file segment will be incomplete, so you should not use it in the future.
@@ -82,9 +82,6 @@ void WriteBufferToFileSegment::nextImpl()
 reserve_stat_msg += fmt::format("{} hold {}, can release {}; ",
 toString(kind), ReadableSize(stat.non_releasable_size), ReadableSize(stat.releasable_size));

-if (std::filesystem::exists(file_segment->getPath()))
-std::filesystem::remove(file_segment->getPath());

 throw Exception(ErrorCodes::NOT_ENOUGH_SPACE, "Failed to reserve {} bytes for {}: {}(segment info: {})",
 bytes_to_write,
 file_segment->getKind() == FileSegmentKind::Temporary ? "temporary file" : "the file in cache",
@@ -95,17 +92,37 @@ void WriteBufferToFileSegment::nextImpl()

 try
 {
-SwapHelper swap(*this, *impl);
 /// Write data to the underlying buffer.
-impl->next();
+file_segment->write(working_buffer.begin(), bytes_to_write, written_bytes);
+written_bytes += bytes_to_write;
 }
 catch (...)
 {
 LOG_WARNING(getLogger("WriteBufferToFileSegment"), "Failed to write to the underlying buffer ({})", file_segment->getInfoForLog());
 throw;
 }
+}

-file_segment->setDownloadedSize(bytes_to_write);
+void WriteBufferToFileSegment::finalizeImpl()
+{
+next();
+auto cache_writer = file_segment->getLocalCacheWriter();
+if (cache_writer)
+{
+SwapHelper swap(*this, *cache_writer);
+cache_writer->finalize();
+}
+}
+
+void WriteBufferToFileSegment::sync()
+{
+next();
+auto cache_writer = file_segment->getLocalCacheWriter();
+if (cache_writer)
+{
+SwapHelper swap(*this, *cache_writer);
+cache_writer->sync();
+}
 }

 std::unique_ptr<ReadBuffer> WriteBufferToFileSegment::getReadBufferImpl()
@@ -114,7 +131,10 @@ std::unique_ptr<ReadBuffer> WriteBufferToFileSegment::getReadBufferImpl()
 * because in case destructor called without `getReadBufferImpl` called, data won't be read.
 */
 finalize();
-return std::make_unique<ReadBufferFromFile>(file_segment->getPath());
+if (file_segment->getDownloadedSize() > 0)
+return std::make_unique<ReadBufferFromFile>(file_segment->getPath());
+else
+return std::make_unique<EmptyReadBuffer>();
 }

 }
@@ -9,7 +9,7 @@ namespace DB

 class FileSegment;

-class WriteBufferToFileSegment : public WriteBufferFromFileDecorator, public IReadableWriteBuffer
+class WriteBufferToFileSegment : public WriteBufferFromFileBase, public IReadableWriteBuffer
 {
 public:
 explicit WriteBufferToFileSegment(FileSegment * file_segment_);
@@ -17,6 +17,13 @@ public:

 void nextImpl() override;

+std::string getFileName() const override { return file_segment->getPath(); }
+
+void sync() override;
+
+protected:
+void finalizeImpl() override;
+
 private:

 std::unique_ptr<ReadBuffer> getReadBufferImpl() override;
@@ -29,6 +36,7 @@ private:
 FileSegmentsHolderPtr segment_holder;

 const size_t reserve_space_lock_wait_timeout_milliseconds;
+size_t written_bytes = 0;
 };

@@ -3402,8 +3402,6 @@ zkutil::ZooKeeperPtr Context::getZooKeeper() const
 const auto & config = shared->zookeeper_config ? *shared->zookeeper_config : getConfigRef();
 if (!shared->zookeeper)
 shared->zookeeper = zkutil::ZooKeeper::create(config, zkutil::getZooKeeperConfigName(config), getZooKeeperLog());
-else if (shared->zookeeper->hasReachedDeadline())
-shared->zookeeper->finalize("ZooKeeper session has reached its deadline");

 if (shared->zookeeper->expired())
 {
@@ -4135,7 +4133,7 @@ std::shared_ptr<FilesystemCacheLog> Context::getFilesystemCacheLog() const
 return shared->system_logs->filesystem_cache_log;
 }

-std::shared_ptr<S3QueueLog> Context::getS3QueueLog() const
+std::shared_ptr<ObjectStorageQueueLog> Context::getS3QueueLog() const
 {
 SharedLockGuard lock(shared->mutex);
 if (!shared->system_logs)
@@ -4144,6 +4142,15 @@ std::shared_ptr<S3QueueLog> Context::getS3QueueLog() const
 return shared->system_logs->s3_queue_log;
 }

+std::shared_ptr<ObjectStorageQueueLog> Context::getAzureQueueLog() const
+{
+SharedLockGuard lock(shared->mutex);
+if (!shared->system_logs)
+return {};
+
+return shared->system_logs->azure_queue_log;
+}
+
 std::shared_ptr<FilesystemReadPrefetchesLog> Context::getFilesystemReadPrefetchesLog() const
 {
 SharedLockGuard lock(shared->mutex);
@@ -107,7 +107,7 @@ class TransactionsInfoLog;
 class ProcessorsProfileLog;
 class FilesystemCacheLog;
 class FilesystemReadPrefetchesLog;
-class S3QueueLog;
+class ObjectStorageQueueLog;
 class AsynchronousInsertLog;
 class BackupLog;
 class BlobStorageLog;
@@ -1133,7 +1133,8 @@ public:
 std::shared_ptr<TransactionsInfoLog> getTransactionsInfoLog() const;
 std::shared_ptr<ProcessorsProfileLog> getProcessorsProfileLog() const;
 std::shared_ptr<FilesystemCacheLog> getFilesystemCacheLog() const;
-std::shared_ptr<S3QueueLog> getS3QueueLog() const;
+std::shared_ptr<ObjectStorageQueueLog> getS3QueueLog() const;
+std::shared_ptr<ObjectStorageQueueLog> getAzureQueueLog() const;
 std::shared_ptr<FilesystemReadPrefetchesLog> getFilesystemReadPrefetchesLog() const;
 std::shared_ptr<AsynchronousInsertLog> getAsynchronousInsertLog() const;
 std::shared_ptr<BackupLog> getBackupLog() const;
@@ -63,6 +63,7 @@ namespace ErrorCodes
 extern const int LOGICAL_ERROR;
 extern const int HAVE_DEPENDENT_OBJECTS;
 extern const int UNFINISHED;
+extern const int INFINITE_LOOP;
 }

 class DatabaseNameHints : public IHints<>
@@ -1473,6 +1474,114 @@ void DatabaseCatalog::checkTableCanBeRemovedOrRenamedUnlocked(
 removing_table, fmt::join(from_other_databases, ", "));
 }

+void DatabaseCatalog::checkTableCanBeAddedWithNoCyclicDependencies(
+const QualifiedTableName & table_name,
+const TableNamesSet & new_referential_dependencies,
+const TableNamesSet & new_loading_dependencies)
+{
+std::lock_guard lock{databases_mutex};
+
+StorageID table_id = StorageID{table_name};
+
+auto check = [&](TablesDependencyGraph & dependencies, const TableNamesSet & new_dependencies)
+{
+auto old_dependencies = dependencies.removeDependencies(table_id);
+dependencies.addDependencies(table_name, new_dependencies);
+auto restore_dependencies = [&]()
+{
+dependencies.removeDependencies(table_id);
+if (!old_dependencies.empty())
+dependencies.addDependencies(table_id, old_dependencies);
+};
+
+if (dependencies.hasCyclicDependencies())
+{
+auto cyclic_dependencies_description = dependencies.describeCyclicDependencies();
+restore_dependencies();
+throw Exception(
+ErrorCodes::INFINITE_LOOP,
+"Cannot add dependencies for '{}', because it will lead to cyclic dependencies: {}",
+table_name.getFullName(),
+cyclic_dependencies_description);
+}
+
+restore_dependencies();
+};
+
+check(referential_dependencies, new_referential_dependencies);
+check(loading_dependencies, new_loading_dependencies);
+}
+
+void DatabaseCatalog::checkTableCanBeRenamedWithNoCyclicDependencies(const StorageID & from_table_id, const StorageID & to_table_id)
+{
+std::lock_guard lock{databases_mutex};
+
+auto check = [&](TablesDependencyGraph & dependencies)
+{
+auto old_dependencies = dependencies.removeDependencies(from_table_id);
+dependencies.addDependencies(to_table_id, old_dependencies);
+auto restore_dependencies = [&]()
+{
+dependencies.removeDependencies(to_table_id);
+dependencies.addDependencies(from_table_id, old_dependencies);
+};
+
+if (dependencies.hasCyclicDependencies())
+{
+auto cyclic_dependencies_description = dependencies.describeCyclicDependencies();
+restore_dependencies();
+throw Exception(
+ErrorCodes::INFINITE_LOOP,
+"Cannot rename '{}' to '{}', because it will lead to cyclic dependencies: {}",
+from_table_id.getFullTableName(),
+to_table_id.getFullTableName(),
+cyclic_dependencies_description);
+}
+
+restore_dependencies();
+};
+
+check(referential_dependencies);
+check(loading_dependencies);
+}
+
+void DatabaseCatalog::checkTablesCanBeExchangedWithNoCyclicDependencies(const StorageID & table_id_1, const StorageID & table_id_2)
+{
+std::lock_guard lock{databases_mutex};
+
+auto check = [&](TablesDependencyGraph & dependencies)
+{
+auto old_dependencies_1 = dependencies.removeDependencies(table_id_1);
+auto old_dependencies_2 = dependencies.removeDependencies(table_id_2);
+dependencies.addDependencies(table_id_1, old_dependencies_2);
+dependencies.addDependencies(table_id_2, old_dependencies_1);
+auto restore_dependencies = [&]()
+{
+dependencies.removeDependencies(table_id_1);
+dependencies.removeDependencies(table_id_2);
+dependencies.addDependencies(table_id_1, old_dependencies_1);
+dependencies.addDependencies(table_id_2, old_dependencies_2);
+};
+
+if (dependencies.hasCyclicDependencies())
+{
+auto cyclic_dependencies_description = dependencies.describeCyclicDependencies();
+restore_dependencies();
+throw Exception(
+ErrorCodes::INFINITE_LOOP,
+"Cannot exchange '{}' and '{}', because it will lead to cyclic dependencies: {}",
+table_id_1.getFullTableName(),
+table_id_2.getFullTableName(),
+cyclic_dependencies_description);
+}
+
+restore_dependencies();
+};
+
+check(referential_dependencies);
+check(loading_dependencies);
+}
+
 void DatabaseCatalog::cleanupStoreDirectoryTask()
 {
 for (const auto & [disk_name, disk] : getContext()->getDisksMap())
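Note: the three checkTableCanBe*WithNoCyclicDependencies helpers added above all follow the same check-then-restore pattern: tentatively apply the new edges to the dependency graph, test for a cycle, and put the previous edges back before either throwing or returning. The following is a minimal, self-contained C++ sketch of that idea on a toy string-keyed graph; TinyGraph and its DFS cycle test are illustrative stand-ins, not ClickHouse's TablesDependencyGraph API.

#include <iostream>
#include <map>
#include <set>
#include <stdexcept>
#include <string>

// Toy dependency graph: node -> set of nodes it depends on.
struct TinyGraph
{
    std::map<std::string, std::set<std::string>> edges;

    bool hasCycleFrom(const std::string & node, std::set<std::string> & visiting, std::set<std::string> & done) const
    {
        if (done.count(node))
            return false;
        if (!visiting.insert(node).second)
            return true;                      // node is already on the current DFS path
        auto it = edges.find(node);
        if (it != edges.end())
            for (const auto & dep : it->second)
                if (hasCycleFrom(dep, visiting, done))
                    return true;
        visiting.erase(node);
        done.insert(node);
        return false;
    }

    bool hasCycle() const
    {
        std::set<std::string> visiting, done;
        for (const auto & entry : edges)
            if (hasCycleFrom(entry.first, visiting, done))
                return true;
        return false;
    }
};

// Check-then-restore: add the proposed dependencies, test for a cycle, always roll back.
void checkCanAddDependencies(TinyGraph & graph, const std::string & table, const std::set<std::string> & new_deps)
{
    auto old_deps = graph.edges[table];       // remember the previous edges
    graph.edges[table] = new_deps;            // tentatively apply the new ones
    const bool cyclic = graph.hasCycle();
    graph.edges[table] = old_deps;            // restore before reporting the result
    if (cyclic)
        throw std::runtime_error("adding dependencies for '" + table + "' would create a cycle");
}

int main()
{
    TinyGraph graph;
    graph.edges["b"] = {"a"};                       // b depends on a
    checkCanAddDependencies(graph, "c", {"b"});     // fine: a <- b <- c
    try
    {
        checkCanAddDependencies(graph, "a", {"b"}); // would close the loop a <- b <- a
    }
    catch (const std::exception & e)
    {
        std::cout << e.what() << '\n';
    }
}

Because the graph is always restored before throwing, a rejected CREATE, RENAME, or EXCHANGE leaves the catalog's dependency state exactly as it was.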
@@ -245,6 +245,9 @@ public:

 void checkTableCanBeRemovedOrRenamed(const StorageID & table_id, bool check_referential_dependencies, bool check_loading_dependencies, bool is_drop_database = false) const;

+void checkTableCanBeAddedWithNoCyclicDependencies(const QualifiedTableName & table_name, const TableNamesSet & new_referential_dependencies, const TableNamesSet & new_loading_dependencies);
+void checkTableCanBeRenamedWithNoCyclicDependencies(const StorageID & from_table_id, const StorageID & to_table_id);
+void checkTablesCanBeExchangedWithNoCyclicDependencies(const StorageID & table_id_1, const StorageID & table_id_2);

 struct TableMarkedAsDropped
 {
@@ -195,6 +195,10 @@ static void setLazyExecutionInfo(
 }

 lazy_execution_info.short_circuit_ancestors_info[parent].insert(indexes.begin(), indexes.end());
+/// After checking arguments_with_disabled_lazy_execution, if there is no relation with parent,
+/// disable the current node.
+if (indexes.empty())
+lazy_execution_info.can_be_lazy_executed = false;
 }
 else
 /// If lazy execution is disabled for one of parents, we should disable it for current node.
@@ -292,9 +296,9 @@ static std::unordered_set<const ActionsDAG::Node *> processShortCircuitFunctions

 /// Firstly, find all short-circuit functions and get their settings.
 std::unordered_map<const ActionsDAG::Node *, IFunctionBase::ShortCircuitSettings> short_circuit_nodes;
-IFunctionBase::ShortCircuitSettings short_circuit_settings;
 for (const auto & node : nodes)
 {
+IFunctionBase::ShortCircuitSettings short_circuit_settings;
 if (node.type == ActionsDAG::ActionType::FUNCTION && node.function_base->isShortCircuit(short_circuit_settings, node.children.size()) && !node.children.empty())
 short_circuit_nodes[&node] = short_circuit_settings;
 }
@@ -898,6 +898,8 @@ InterpreterCreateQuery::TableProperties InterpreterCreateQuery::getTableProperti
 assert(as_database_saved.empty() && as_table_saved.empty());
 std::swap(create.as_database, as_database_saved);
 std::swap(create.as_table, as_table_saved);
+if (!as_table_saved.empty())
+create.is_create_empty = false;

 return properties;
 }
@@ -1109,6 +1111,27 @@ void InterpreterCreateQuery::assertOrSetUUID(ASTCreateQuery & create, const Data
 }


+namespace
+{
+
+void addTableDependencies(const ASTCreateQuery & create, const ASTPtr & query_ptr, const ContextPtr & context)
+{
+QualifiedTableName qualified_name{create.getDatabase(), create.getTable()};
+auto ref_dependencies = getDependenciesFromCreateQuery(context->getGlobalContext(), qualified_name, query_ptr, context->getCurrentDatabase());
+auto loading_dependencies = getLoadingDependenciesFromCreateQuery(context->getGlobalContext(), qualified_name, query_ptr);
+DatabaseCatalog::instance().addDependencies(qualified_name, ref_dependencies, loading_dependencies);
+}
+
+void checkTableCanBeAddedWithNoCyclicDependencies(const ASTCreateQuery & create, const ASTPtr & query_ptr, const ContextPtr & context)
+{
+QualifiedTableName qualified_name{create.getDatabase(), create.getTable()};
+auto ref_dependencies = getDependenciesFromCreateQuery(context->getGlobalContext(), qualified_name, query_ptr, context->getCurrentDatabase());
+auto loading_dependencies = getLoadingDependenciesFromCreateQuery(context->getGlobalContext(), qualified_name, query_ptr);
+DatabaseCatalog::instance().checkTableCanBeAddedWithNoCyclicDependencies(qualified_name, ref_dependencies, loading_dependencies);
+}
+
+}
+
 BlockIO InterpreterCreateQuery::createTable(ASTCreateQuery & create)
 {
 /// Temporary tables are created out of databases.
@@ -1354,11 +1377,7 @@ BlockIO InterpreterCreateQuery::createTable(ASTCreateQuery & create)
 return {};

 /// If table has dependencies - add them to the graph
-QualifiedTableName qualified_name{database_name, create.getTable()};
-auto ref_dependencies = getDependenciesFromCreateQuery(getContext()->getGlobalContext(), qualified_name, query_ptr);
-auto loading_dependencies = getLoadingDependenciesFromCreateQuery(getContext()->getGlobalContext(), qualified_name, query_ptr);
-DatabaseCatalog::instance().addDependencies(qualified_name, ref_dependencies, loading_dependencies);
+addTableDependencies(create, query_ptr, getContext());

 return fillTableIfNeeded(create);
 }
@@ -1510,6 +1529,9 @@ bool InterpreterCreateQuery::doCreateTable(ASTCreateQuery & create,
 throw Exception(ErrorCodes::LOGICAL_ERROR, "Cannot find UUID mapping for {}, it's a bug", create.uuid);
 }

+/// Before actually creating the table, check if it will lead to cyclic dependencies.
+checkTableCanBeAddedWithNoCyclicDependencies(create, query_ptr, getContext());
+
 StoragePtr res;
 /// NOTE: CREATE query may be rewritten by Storage creator or table function
 if (create.as_table_function)
@@ -1621,6 +1643,9 @@ BlockIO InterpreterCreateQuery::doCreateOrReplaceTable(ASTCreateQuery & create,
 ContextMutablePtr create_context = Context::createCopy(current_context);
 create_context->setQueryContext(std::const_pointer_cast<Context>(current_context));

+/// Before actually creating/replacing the table, check if it will lead to cyclic dependencies.
+checkTableCanBeAddedWithNoCyclicDependencies(create, query_ptr, create_context);
+
 auto make_drop_context = [&]() -> ContextMutablePtr
 {
 ContextMutablePtr drop_context = Context::createCopy(current_context);
@@ -1667,6 +1692,9 @@ BlockIO InterpreterCreateQuery::doCreateOrReplaceTable(ASTCreateQuery & create,
 assert(done);
 created = true;

+/// If table has dependencies - add them to the graph
+addTableDependencies(create, query_ptr, getContext());
+
 /// Try fill temporary table
 BlockIO fill_io = fillTableIfNeeded(create);
 executeTrivialBlockIO(fill_io, getContext());
@@ -127,14 +127,23 @@ BlockIO InterpreterRenameQuery::executeToTables(const ASTRenameQuery & rename, c
 {
 StorageID from_table_id{elem.from_database_name, elem.from_table_name};
 StorageID to_table_id{elem.to_database_name, elem.to_table_name};
-std::vector<StorageID> ref_dependencies;
-std::vector<StorageID> loading_dependencies;
+std::vector<StorageID> from_ref_dependencies;
+std::vector<StorageID> from_loading_dependencies;
+std::vector<StorageID> to_ref_dependencies;
+std::vector<StorageID> to_loading_dependencies;

-if (!exchange_tables)
+if (exchange_tables)
 {
+DatabaseCatalog::instance().checkTablesCanBeExchangedWithNoCyclicDependencies(from_table_id, to_table_id);
+std::tie(from_ref_dependencies, from_loading_dependencies) = database_catalog.removeDependencies(from_table_id, false, false);
+std::tie(to_ref_dependencies, to_loading_dependencies) = database_catalog.removeDependencies(to_table_id, false, false);
+}
+else
+{
+DatabaseCatalog::instance().checkTableCanBeRenamedWithNoCyclicDependencies(from_table_id, to_table_id);
 bool check_ref_deps = getContext()->getSettingsRef().check_referential_table_dependencies;
 bool check_loading_deps = !check_ref_deps && getContext()->getSettingsRef().check_table_dependencies;
-std::tie(ref_dependencies, loading_dependencies) = database_catalog.removeDependencies(from_table_id, check_ref_deps, check_loading_deps);
+std::tie(from_ref_dependencies, from_loading_dependencies) = database_catalog.removeDependencies(from_table_id, check_ref_deps, check_loading_deps);
 }

 try
@@ -147,12 +156,17 @@ BlockIO InterpreterRenameQuery::executeToTables(const ASTRenameQuery & rename, c
 exchange_tables,
 rename.dictionary);

-DatabaseCatalog::instance().addDependencies(to_table_id, ref_dependencies, loading_dependencies);
+DatabaseCatalog::instance().addDependencies(to_table_id, from_ref_dependencies, from_loading_dependencies);
+if (!to_ref_dependencies.empty() || !to_loading_dependencies.empty())
+DatabaseCatalog::instance().addDependencies(from_table_id, to_ref_dependencies, to_loading_dependencies);
+
 }
 catch (...)
 {
 /// Restore dependencies if RENAME fails
-DatabaseCatalog::instance().addDependencies(from_table_id, ref_dependencies, loading_dependencies);
+DatabaseCatalog::instance().addDependencies(from_table_id, from_ref_dependencies, from_loading_dependencies);
+if (!to_ref_dependencies.empty() || !to_loading_dependencies.empty())
+DatabaseCatalog::instance().addDependencies(to_table_id, to_ref_dependencies, to_loading_dependencies);
 throw;
 }
 }
@@ -910,7 +910,7 @@ bool InterpreterSelectQuery::adjustParallelReplicasAfterAnalysis()
 UInt64 max_rows = maxBlockSizeByLimit();
 if (settings.max_rows_to_read)
 max_rows = max_rows ? std::min(max_rows, settings.max_rows_to_read.value) : settings.max_rows_to_read;
-query_info_copy.limit = max_rows;
+query_info_copy.trivial_limit = max_rows;

 /// Apply filters to prewhere and add them to the query_info so we can filter out parts efficiently during row estimation
 applyFiltersToPrewhereInAnalysis(analysis_copy);
@@ -2445,13 +2445,13 @@ void InterpreterSelectQuery::executeFetchColumns(QueryProcessingStage::Enum proc
 if (local_limits.local_limits.size_limits.max_rows != 0)
 {
 if (max_block_limited < local_limits.local_limits.size_limits.max_rows)
-query_info.limit = max_block_limited;
+query_info.trivial_limit = max_block_limited;
 else if (local_limits.local_limits.size_limits.max_rows < std::numeric_limits<UInt64>::max()) /// Ask to read just enough rows to make the max_rows limit effective (so it has a chance to be triggered).
-query_info.limit = 1 + local_limits.local_limits.size_limits.max_rows;
+query_info.trivial_limit = 1 + local_limits.local_limits.size_limits.max_rows;
 }
 else
 {
-query_info.limit = max_block_limited;
+query_info.trivial_limit = max_block_limited;
 }
 }

@@ -8,19 +8,19 @@
 #include <DataTypes/DataTypeMap.h>
 #include <Interpreters/ProfileEventsExt.h>
 #include <DataTypes/DataTypeEnum.h>
-#include <Interpreters/S3QueueLog.h>
+#include <Interpreters/ObjectStorageQueueLog.h>


 namespace DB
 {

-ColumnsDescription S3QueueLogElement::getColumnsDescription()
+ColumnsDescription ObjectStorageQueueLogElement::getColumnsDescription()
 {
 auto status_datatype = std::make_shared<DataTypeEnum8>(
 DataTypeEnum8::Values
 {
-{"Processed", static_cast<Int8>(S3QueueLogElement::S3QueueStatus::Processed)},
+{"Processed", static_cast<Int8>(ObjectStorageQueueLogElement::ObjectStorageQueueStatus::Processed)},
-{"Failed", static_cast<Int8>(S3QueueLogElement::S3QueueStatus::Failed)},
+{"Failed", static_cast<Int8>(ObjectStorageQueueLogElement::ObjectStorageQueueStatus::Failed)},
 });

 return ColumnsDescription
@@ -41,7 +41,7 @@ ColumnsDescription S3QueueLogElement::getColumnsDescription()
 };
 }

-void S3QueueLogElement::appendToBlock(MutableColumns & columns) const
+void ObjectStorageQueueLogElement::appendToBlock(MutableColumns & columns) const
 {
 size_t i = 0;
 columns[i++]->insert(getFQDNOrHostName());
@@ -9,7 +9,7 @@
 namespace DB
 {

-struct S3QueueLogElement
+struct ObjectStorageQueueLogElement
 {
 time_t event_time{};

@@ -20,18 +20,18 @@ struct S3QueueLogElement
 std::string file_name;
 size_t rows_processed = 0;

-enum class S3QueueStatus : uint8_t
+enum class ObjectStorageQueueStatus : uint8_t
 {
 Processed,
 Failed,
 };
-S3QueueStatus status;
+ObjectStorageQueueStatus status;
 ProfileEvents::Counters::Snapshot counters_snapshot;
 time_t processing_start_time;
 time_t processing_end_time;
 std::string exception;

-static std::string name() { return "S3QueueLog"; }
+static std::string name() { return "ObjectStorageQueueLog"; }

 static ColumnsDescription getColumnsDescription();
 static NamesAndAliases getNamesAndAliases() { return {}; }
@@ -39,9 +39,9 @@ struct S3QueueLogElement
 void appendToBlock(MutableColumns & columns) const;
 };

-class S3QueueLog : public SystemLog<S3QueueLogElement>
+class ObjectStorageQueueLog : public SystemLog<ObjectStorageQueueLogElement>
 {
-using SystemLog<S3QueueLogElement>::SystemLog;
+using SystemLog<ObjectStorageQueueLogElement>::SystemLog;
 };

 }
@@ -25,7 +25,7 @@
 #include <Interpreters/QueryLog.h>
 #include <Interpreters/QueryThreadLog.h>
 #include <Interpreters/QueryViewsLog.h>
-#include <Interpreters/S3QueueLog.h>
+#include <Interpreters/ObjectStorageQueueLog.h>
 #include <Interpreters/SessionLog.h>
 #include <Interpreters/TextLog.h>
 #include <Interpreters/TraceLog.h>
@@ -306,7 +306,8 @@ SystemLogs::SystemLogs(ContextPtr global_context, const Poco::Util::AbstractConf
 processors_profile_log = createSystemLog<ProcessorsProfileLog>(global_context, "system", "processors_profile_log", config, "processors_profile_log", "Contains profiling information on processors level (building blocks for a pipeline for query execution.");
 asynchronous_insert_log = createSystemLog<AsynchronousInsertLog>(global_context, "system", "asynchronous_insert_log", config, "asynchronous_insert_log", "Contains a history for all asynchronous inserts executed on current server.");
 backup_log = createSystemLog<BackupLog>(global_context, "system", "backup_log", config, "backup_log", "Contains logging entries with the information about BACKUP and RESTORE operations.");
-s3_queue_log = createSystemLog<S3QueueLog>(global_context, "system", "s3queue_log", config, "s3queue_log", "Contains logging entries with the information files processes by S3Queue engine.");
+s3_queue_log = createSystemLog<ObjectStorageQueueLog>(global_context, "system", "s3queue_log", config, "s3queue_log", "Contains logging entries with the information files processes by S3Queue engine.");
+azure_queue_log = createSystemLog<ObjectStorageQueueLog>(global_context, "system", "azure_queue_log", config, "azure_queue_log", "Contains logging entries with the information files processes by S3Queue engine.");
 blob_storage_log = createSystemLog<BlobStorageLog>(global_context, "system", "blob_storage_log", config, "blob_storage_log", "Contains logging entries with information about various blob storage operations such as uploads and deletes.");

 if (query_log)
@@ -53,7 +53,7 @@ class FilesystemCacheLog;
 class FilesystemReadPrefetchesLog;
 class AsynchronousInsertLog;
 class BackupLog;
-class S3QueueLog;
+class ObjectStorageQueueLog;
 class BlobStorageLog;

 /// System logs should be destroyed in destructor of the last Context and before tables,
@@ -76,7 +76,8 @@ struct SystemLogs
 std::shared_ptr<ErrorLog> error_log; /// Used to log errors.
 std::shared_ptr<FilesystemCacheLog> filesystem_cache_log;
 std::shared_ptr<FilesystemReadPrefetchesLog> filesystem_read_prefetches_log;
-std::shared_ptr<S3QueueLog> s3_queue_log;
+std::shared_ptr<ObjectStorageQueueLog> s3_queue_log;
+std::shared_ptr<ObjectStorageQueueLog> azure_queue_log;
 /// Metrics from system.asynchronous_metrics.
 std::shared_ptr<AsynchronousMetricLog> asynchronous_metric_log;
 /// OpenTelemetry trace spans.
@@ -3,6 +3,8 @@
 #include <Interpreters/TemporaryDataOnDisk.h>

 #include <IO/WriteBufferFromFile.h>
+#include <IO/ReadBufferFromFile.h>
+#include <IO/ReadBufferFromEmptyFile.h>
 #include <Compression/CompressedWriteBuffer.h>
 #include <Interpreters/Cache/FileCache.h>
 #include <Formats/NativeWriter.h>
@@ -224,25 +226,37 @@ struct TemporaryFileStream::OutputWriter
 bool finalized = false;
 };

-TemporaryFileStream::Reader::Reader(const String & path, const Block & header_, size_t size)
-: in_file_buf(path, size ? std::min<size_t>(DBMS_DEFAULT_BUFFER_SIZE, size) : DBMS_DEFAULT_BUFFER_SIZE)
-, in_compressed_buf(in_file_buf)
-, in_reader(in_compressed_buf, header_, DBMS_TCP_PROTOCOL_VERSION)
+TemporaryFileStream::Reader::Reader(const String & path_, const Block & header_, size_t size_)
+: path(path_)
+, size(size_ ? std::min<size_t>(size_, DBMS_DEFAULT_BUFFER_SIZE) : DBMS_DEFAULT_BUFFER_SIZE)
+, header(header_)
 {
 LOG_TEST(getLogger("TemporaryFileStream"), "Reading {} from {}", header_.dumpStructure(), path);
 }

-TemporaryFileStream::Reader::Reader(const String & path, size_t size)
-: in_file_buf(path, size ? std::min<size_t>(DBMS_DEFAULT_BUFFER_SIZE, size) : DBMS_DEFAULT_BUFFER_SIZE)
-, in_compressed_buf(in_file_buf)
-, in_reader(in_compressed_buf, DBMS_TCP_PROTOCOL_VERSION)
+TemporaryFileStream::Reader::Reader(const String & path_, size_t size_)
+: path(path_)
+, size(size_ ? std::min<size_t>(size_, DBMS_DEFAULT_BUFFER_SIZE) : DBMS_DEFAULT_BUFFER_SIZE)
 {
 LOG_TEST(getLogger("TemporaryFileStream"), "Reading from {}", path);
 }

 Block TemporaryFileStream::Reader::read()
 {
-return in_reader.read();
+if (!in_reader)
+{
+if (fs::exists(path))
+in_file_buf = std::make_unique<ReadBufferFromFile>(path, size);
+else
+in_file_buf = std::make_unique<ReadBufferFromEmptyFile>();
+
+in_compressed_buf = std::make_unique<CompressedReadBuffer>(*in_file_buf);
+if (header.has_value())
+in_reader = std::make_unique<NativeReader>(*in_compressed_buf, header.value(), DBMS_TCP_PROTOCOL_VERSION);
+else
+in_reader = std::make_unique<NativeReader>(*in_compressed_buf, DBMS_TCP_PROTOCOL_VERSION);
+}
+return in_reader->read();
 }

 TemporaryFileStream::TemporaryFileStream(TemporaryFileOnDiskHolder file_, const Block & header_, TemporaryDataOnDisk * parent_)
@@ -151,9 +151,13 @@ public:

 Block read();

-ReadBufferFromFile in_file_buf;
-CompressedReadBuffer in_compressed_buf;
-NativeReader in_reader;
+const std::string path;
+const size_t size;
+const std::optional<Block> header;
+
+std::unique_ptr<ReadBufferFromFileBase> in_file_buf;
+std::unique_ptr<CompressedReadBuffer> in_compressed_buf;
+std::unique_ptr<NativeReader> in_reader;
 };

 struct Stat
|
|||||||
<< quoteString(toString(to_inner_uuid));
|
<< quoteString(toString(to_inner_uuid));
|
||||||
}
|
}
|
||||||
|
|
||||||
|
bool should_add_empty = is_create_empty;
|
||||||
|
auto add_empty_if_needed = [&]
|
||||||
|
{
|
||||||
|
if (!should_add_empty)
|
||||||
|
return;
|
||||||
|
should_add_empty = false;
|
||||||
|
settings.ostr << (settings.hilite ? hilite_keyword : "") << " EMPTY" << (settings.hilite ? hilite_none : "");
|
||||||
|
};
|
||||||
|
|
||||||
if (!as_table.empty())
|
if (!as_table.empty())
|
||||||
{
|
{
|
||||||
|
add_empty_if_needed();
|
||||||
settings.ostr
|
settings.ostr
|
||||||
<< (settings.hilite ? hilite_keyword : "") << " AS " << (settings.hilite ? hilite_none : "")
|
<< (settings.hilite ? hilite_keyword : "") << " AS " << (settings.hilite ? hilite_none : "")
|
||||||
<< (!as_database.empty() ? backQuoteIfNeed(as_database) + "." : "") << backQuoteIfNeed(as_table);
|
<< (!as_database.empty() ? backQuoteIfNeed(as_database) + "." : "") << backQuoteIfNeed(as_table);
|
||||||
@ -423,6 +433,7 @@ void ASTCreateQuery::formatQueryImpl(const FormatSettings & settings, FormatStat
|
|||||||
frame.expression_list_always_start_on_new_line = false;
|
frame.expression_list_always_start_on_new_line = false;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
add_empty_if_needed();
|
||||||
settings.ostr << (settings.hilite ? hilite_keyword : "") << " AS " << (settings.hilite ? hilite_none : "");
|
settings.ostr << (settings.hilite ? hilite_keyword : "") << " AS " << (settings.hilite ? hilite_none : "");
|
||||||
as_table_function->formatImpl(settings, state, frame);
|
as_table_function->formatImpl(settings, state, frame);
|
||||||
}
|
}
|
||||||
@ -484,8 +495,8 @@ void ASTCreateQuery::formatQueryImpl(const FormatSettings & settings, FormatStat
|
|||||||
|
|
||||||
if (is_populate)
|
if (is_populate)
|
||||||
settings.ostr << (settings.hilite ? hilite_keyword : "") << " POPULATE" << (settings.hilite ? hilite_none : "");
|
settings.ostr << (settings.hilite ? hilite_keyword : "") << " POPULATE" << (settings.hilite ? hilite_none : "");
|
||||||
else if (is_create_empty)
|
|
||||||
settings.ostr << (settings.hilite ? hilite_keyword : "") << " EMPTY" << (settings.hilite ? hilite_none : "");
|
add_empty_if_needed();
|
||||||
|
|
||||||
if (sql_security && supportSQLSecurity() && sql_security->as<ASTSQLSecurity &>().type.has_value())
|
if (sql_security && supportSQLSecurity() && sql_security->as<ASTSQLSecurity &>().type.has_value())
|
||||||
{
|
{
|
||||||
|
@@ -693,14 +693,14 @@ JoinTreeQueryPlan buildQueryPlanForTableExpression(QueryTreeNodePtr table_expres
 if (select_query_info.local_storage_limits.local_limits.size_limits.max_rows != 0)
 {
 if (max_block_size_limited < select_query_info.local_storage_limits.local_limits.size_limits.max_rows)
-table_expression_query_info.limit = max_block_size_limited;
+table_expression_query_info.trivial_limit = max_block_size_limited;
 /// Ask to read just enough rows to make the max_rows limit effective (so it has a chance to be triggered).
 else if (select_query_info.local_storage_limits.local_limits.size_limits.max_rows < std::numeric_limits<UInt64>::max())
-table_expression_query_info.limit = 1 + select_query_info.local_storage_limits.local_limits.size_limits.max_rows;
+table_expression_query_info.trivial_limit = 1 + select_query_info.local_storage_limits.local_limits.size_limits.max_rows;
 }
 else
 {
-table_expression_query_info.limit = max_block_size_limited;
+table_expression_query_info.trivial_limit = max_block_size_limited;
 }
 }

@@ -913,8 +913,8 @@ JoinTreeQueryPlan buildQueryPlanForTableExpression(QueryTreeNodePtr table_expres
 auto result_ptr = reading->selectRangesToRead();

 UInt64 rows_to_read = result_ptr->selected_rows;
-if (table_expression_query_info.limit > 0 && table_expression_query_info.limit < rows_to_read)
-rows_to_read = table_expression_query_info.limit;
+if (table_expression_query_info.trivial_limit > 0 && table_expression_query_info.trivial_limit < rows_to_read)
+rows_to_read = table_expression_query_info.trivial_limit;

 if (max_block_size_limited && (max_block_size_limited < rows_to_read))
 rows_to_read = max_block_size_limited;
@@ -802,13 +802,12 @@ static std::shared_ptr<IJoin> tryCreateJoin(JoinAlgorithm algorithm,
 algorithm == JoinAlgorithm::PARALLEL_HASH ||
 algorithm == JoinAlgorithm::DEFAULT)
 {
-if (table_join->allowParallelHashJoin())
-{
-auto query_context = planner_context->getQueryContext();
-return std::make_shared<ConcurrentHashJoin>(query_context, table_join, query_context->getSettings().max_threads, right_table_expression_header);
-}
+auto query_context = planner_context->getQueryContext();

-return std::make_shared<HashJoin>(table_join, right_table_expression_header);
+if (table_join->allowParallelHashJoin())
+return std::make_shared<ConcurrentHashJoin>(query_context, table_join, query_context->getSettings().max_threads, right_table_expression_header);
+
+return std::make_shared<HashJoin>(table_join, right_table_expression_header, query_context->getSettingsRef().join_any_take_last_row);
 }

 if (algorithm == JoinAlgorithm::FULL_SORTING_MERGE)
@@ -46,6 +46,15 @@ JSONEachRowRowInputFormat::JSONEachRowRowInputFormat(
 {
 const auto & header = getPort().getHeader();
 name_map = header.getNamesToIndexesMap();
+if (format_settings_.json.ignore_key_case)
+{
+for (auto & it : name_map)
+{
+StringRef key = it.first;
+String lower_case_key = transformFieldNameToLowerCase(key);
+lower_case_name_map[lower_case_key] = key;
+}
+}
 if (format_settings_.import_nested_json)
 {
 for (size_t i = 0; i != header.columns(); ++i)
@@ -171,7 +180,15 @@ void JSONEachRowRowInputFormat::readJSONObject(MutableColumns & columns)
 skipUnknownField(name_ref);
 continue;
 }
-const size_t column_index = columnIndex(name_ref, key_index);
+size_t column_index = 0;
+if (format_settings.json.ignore_key_case)
+{
+String lower_case_name = transformFieldNameToLowerCase(name_ref);
+StringRef field_name_ref = lower_case_name_map[lower_case_name];
+column_index = columnIndex(field_name_ref, key_index);
+}
+else
+column_index = columnIndex(name_ref, key_index);

 if (unlikely(ssize_t(column_index) < 0))
 {
@@ -55,7 +55,13 @@ private:

 virtual void readRowStart(MutableColumns &) {}
 virtual void skipRowStart() {}
+String transformFieldNameToLowerCase(const StringRef & field_name)
+{
+String field_name_str = field_name.toString();
+std::transform(field_name_str.begin(), field_name_str.end(), field_name_str.begin(),
+[](unsigned char c) { return std::tolower(c); });
+return field_name_str;
+}
 /// Buffer for the read from the stream field name. Used when you have to copy it.
 /// Also, if processing of Nested data is in progress, it holds the common prefix
 /// of the nested column names (so that appending the field name to it produces
@@ -74,7 +80,8 @@ private:

 /// Hash table match `field name -> position in the block`. NOTE You can use perfect hash map.
 Block::NameMap name_map;
+/// Hash table match `lower_case field name -> field name in the block`.
+std::unordered_map<String, StringRef> lower_case_name_map;
 /// Cached search results for previous row (keyed as index in JSON object) - used as a hint.
 std::vector<Block::NameMap::const_iterator> prev_positions;

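Note: the ignore_key_case path added above works by building, once per format instance, a map from lower-cased field names to the declared column names, and then lower-casing every incoming JSON key before the lookup. A small stand-alone C++ sketch of that idea follows; the names and the plain std::unordered_map are illustrative, not the format's actual Block::NameMap machinery.

#include <algorithm>
#include <cctype>
#include <iostream>
#include <string>
#include <unordered_map>

std::string toLower(std::string s)
{
    std::transform(s.begin(), s.end(), s.begin(), [](unsigned char c) { return std::tolower(c); });
    return s;
}

int main()
{
    // Column name -> position in the block, as the header declares it.
    std::unordered_map<std::string, size_t> name_map = {{"UserID", 0}, {"EventDate", 1}};

    // Built once when case-insensitive matching is enabled: lower-cased name -> declared name.
    std::unordered_map<std::string, std::string> lower_case_name_map;
    for (const auto & [name, index] : name_map)
        lower_case_name_map[toLower(name)] = name;

    // An incoming JSON key in arbitrary case is normalised before the lookup.
    const std::string incoming = "userid";
    auto it = lower_case_name_map.find(toLower(incoming));
    if (it != lower_case_name_map.end())
        std::cout << "'" << incoming << "' resolves to column #" << name_map[it->second] << '\n';
    else
        std::cout << "unknown key '" << incoming << "'\n";
}

Building the lowercase map up front keeps the per-row cost to a single extra string normalisation and hash lookup.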
@@ -250,9 +250,9 @@ void ReadFromMergeTree::AnalysisResult::checkLimits(const Settings & settings, c
 {
 /// Fail fast if estimated number of rows to read exceeds the limit
 size_t total_rows_estimate = selected_rows;
-if (query_info_.limit > 0 && total_rows_estimate > query_info_.limit)
+if (query_info_.trivial_limit > 0 && total_rows_estimate > query_info_.trivial_limit)
 {
-total_rows_estimate = query_info_.limit;
+total_rows_estimate = query_info_.trivial_limit;
 }
 limits.check(total_rows_estimate, 0, "rows (controlled by 'max_rows_to_read' setting)", ErrorCodes::TOO_MANY_ROWS);
 leaf_limits.check(
@@ -398,8 +398,8 @@ Pipe ReadFromMergeTree::readFromPool(
 {
 size_t total_rows = parts_with_range.getRowsCountAllParts();

-if (query_info.limit > 0 && query_info.limit < total_rows)
-total_rows = query_info.limit;
+if (query_info.trivial_limit > 0 && query_info.trivial_limit < total_rows)
+total_rows = query_info.trivial_limit;

 const auto & settings = context->getSettingsRef();

@@ -436,7 +436,7 @@ Pipe ReadFromMergeTree::readFromPool(
 * Because time spend during filling per thread tasks can be greater than whole query
 * execution for big tables with small limit.
 */
-bool use_prefetched_read_pool = query_info.limit == 0 && (allow_prefetched_remote || allow_prefetched_local);
+bool use_prefetched_read_pool = query_info.trivial_limit == 0 && (allow_prefetched_remote || allow_prefetched_local);

 if (use_prefetched_read_pool)
 {
@@ -563,9 +563,8 @@ Pipe ReadFromMergeTree::readInOrder(
 /// Actually it means that parallel reading from replicas enabled
 /// and we have to collaborate with initiator.
 /// In this case we won't set approximate rows, because it will be accounted multiple times.
-/// Also do not count amount of read rows if we read in order of sorting key,
-/// because we don't know actual amount of read rows in case when limit is set.
-bool set_rows_approx = !is_parallel_reading_from_replicas && !reader_settings.read_in_order;
+const auto in_order_limit = query_info.input_order_info ? query_info.input_order_info->limit : 0;
+const bool set_total_rows_approx = !is_parallel_reading_from_replicas;

 Pipes pipes;
 for (size_t i = 0; i < parts_with_ranges.size(); ++i)
@@ -573,8 +572,10 @@ Pipe ReadFromMergeTree::readInOrder(
 const auto & part_with_ranges = parts_with_ranges[i];

 UInt64 total_rows = part_with_ranges.getRowsCount();
-if (query_info.limit > 0 && query_info.limit < total_rows)
-total_rows = query_info.limit;
+if (query_info.trivial_limit > 0 && query_info.trivial_limit < total_rows)
+total_rows = query_info.trivial_limit;
+else if (in_order_limit > 0 && in_order_limit < total_rows)
+total_rows = in_order_limit;

 LOG_TRACE(log, "Reading {} ranges in{}order from part {}, approx. {} rows starting from {}",
 part_with_ranges.ranges.size(),
@@ -595,7 +596,7 @@ Pipe ReadFromMergeTree::readInOrder(
 processor->addPartLevelToChunk(isQueryWithFinal());

 auto source = std::make_shared<MergeTreeSource>(std::move(processor));
-if (set_rows_approx)
+if (set_total_rows_approx)
 source->addTotalRowsApprox(total_rows);

 pipes.emplace_back(std::move(source));
@@ -393,7 +393,7 @@ ReadFromSystemNumbersStep::ReadFromSystemNumbersStep(
 , num_streams{num_streams_}
 , limit_length_and_offset(InterpreterSelectQuery::getLimitLengthAndOffset(query_info.query->as<ASTSelectQuery &>(), context))
 , should_pushdown_limit(shouldPushdownLimit(query_info, limit_length_and_offset.first))
-, query_info_limit(query_info.limit)
+, query_info_limit(query_info.trivial_limit)
 , storage_limits(query_info.storage_limits)
 {
 storage_snapshot->check(column_names);
Some files were not shown because too many files have changed in this diff.