Mirror of https://github.com/ClickHouse/ClickHouse.git
Merge remote-tracking branch 'origin/master' into distinct_in_order_wo_order_by
commit 6f7d0fec52

CHANGELOG.md
@ -1,4 +1,5 @@
### Table of Contents
**[ClickHouse release v22.9, 2022-09-22](#229)**<br/>
**[ClickHouse release v22.8, 2022-08-18](#228)**<br/>
**[ClickHouse release v22.7, 2022-07-21](#227)**<br/>
**[ClickHouse release v22.6, 2022-06-16](#226)**<br/>
@ -10,6 +11,213 @@
**[Changelog for 2021](https://clickhouse.com/docs/en/whats-new/changelog/2021/)**<br/>
### <a id="229"></a> ClickHouse release 22.9, 2022-09-22
#### Backward Incompatible Change
* Upgrade from 20.3 and older to 22.9 and newer should be done through an intermediate version if there are any `ReplicatedMergeTree` tables; otherwise the server with the new version will not start. [#40641](https://github.com/ClickHouse/ClickHouse/pull/40641) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Remove the functions `accurate_Cast` and `accurate_CastOrNull` (they differed from `accurateCast` and `accurateCastOrNull` only by the underscore in the name and were not affected by the value of the `cast_keep_nullable` setting). These functions were undocumented, untested, unused, and unneeded. They appeared to be alive due to code generalization. The surviving functions are shown in the sketch after this list. [#40682](https://github.com/ClickHouse/ClickHouse/pull/40682) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Add a test to ensure that every new table function will be documented. See [#40649](https://github.com/ClickHouse/ClickHouse/issues/40649). Rename table function `MeiliSearch` to `meilisearch`. [#40709](https://github.com/ClickHouse/ClickHouse/pull/40709) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Add a test to ensure that every new function will be documented. See [#40649](https://github.com/ClickHouse/ClickHouse/pull/40649). The functions `lemmatize`, `synonyms`, `stem` were case-insensitive by mistake. Now they are case-sensitive. [#40711](https://github.com/ClickHouse/ClickHouse/pull/40711) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Make interpretation of YAML configs more conventional. [#41044](https://github.com/ClickHouse/ClickHouse/pull/41044) ([Vitaly Baranov](https://github.com/vitlibar)).
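A minimal sketch of the camelCase functions that remain after the `accurate_Cast` / `accurate_CastOrNull` removal above; `accurateCastOrNull` returns `NULL` instead of throwing when the value does not fit.

```sql
SELECT accurateCast(42, 'UInt8');            -- 42
SELECT accurateCastOrNull('abc', 'UInt32');  -- NULL instead of an exception
```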
#### New Feature
* Support `insert_quorum = 'auto'` to use the majority of replicas as the quorum (see the sketch after this list). [#39970](https://github.com/ClickHouse/ClickHouse/pull/39970) ([Sachin](https://github.com/SachinSetiya)).
* Add embedded dashboards to ClickHouse server. This is a demo project about how to achieve 90% results with 1% effort using ClickHouse features. [#40461](https://github.com/ClickHouse/ClickHouse/pull/40461) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Added new settings constraint writability kind `changeable_in_readonly`. [#40631](https://github.com/ClickHouse/ClickHouse/pull/40631) ([Sergei Trifonov](https://github.com/serxa)).
* Add support for `INTERSECT DISTINCT` and `EXCEPT DISTINCT` (see the sketch after this list). [#40792](https://github.com/ClickHouse/ClickHouse/pull/40792) ([Duc Canh Le](https://github.com/canhld94)).
* Add new input/output format `JSONObjectEachRow` (see the sketch after this list). Support import for formats `JSON/JSONCompact/JSONColumnsWithMetadata`. Add new setting `input_format_json_validate_types_from_metadata` that controls whether we should check if data types from metadata match data types from the header. Add new setting `input_format_json_validate_utf8`: when it is enabled, all `JSON` input formats validate UTF-8 sequences; it is disabled by default. Note that this setting does not influence the output formats `JSON/JSONCompact/JSONColumnsWithMetadata`, which always validate UTF-8 sequences (this exception was made for compatibility reasons). Add new setting `input_format_json_read_numbers_as_strings` that allows parsing numbers into a String column; the setting is disabled by default. Add new setting `output_format_json_quote_decimals` that allows outputting decimals in double quotes; disabled by default. Allow parsing decimals in double quotes during data import. [#40910](https://github.com/ClickHouse/ClickHouse/pull/40910) ([Kruglov Pavel](https://github.com/Avogar)).
* Query parameters supported in DESCRIBE TABLE query. [#40952](https://github.com/ClickHouse/ClickHouse/pull/40952) ([Nikita Taranov](https://github.com/nickitat)).
* Add support for Parquet Time32/64 by converting it into DateTime64. Parquet Time32/64 represents time elapsed since midnight, while DateTime32/64 represents an actual Unix timestamp. The conversion simply offsets from `0`. [#41333](https://github.com/ClickHouse/ClickHouse/pull/41333) ([Arthur Passos](https://github.com/arthurpassos)).
* Implement set operations on Apache Datasketches. [#39919](https://github.com/ClickHouse/ClickHouse/pull/39919) ([Fangyuan Deng](https://github.com/pzhdfy)). Note: there is no point in using Apache Datasketches; they are inferior to ClickHouse and only make sense for integration with other systems.
* Allow recording errors to a specified file while reading text formats (`CSV`, `TSV`). [#40516](https://github.com/ClickHouse/ClickHouse/pull/40516) ([zjial](https://github.com/zjial)).
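A short sketch of two of the new features above: `insert_quorum = 'auto'` and the `JSONObjectEachRow` output format. The replicated table `t` is hypothetical.

```sql
-- 'auto' makes the quorum the majority of replicas of the (hypothetical) replicated table t.
SET insert_quorum = 'auto';
INSERT INTO t VALUES (1);

-- Each result row becomes one key/value pair of a single JSON object.
SELECT number, number * 2 AS doubled FROM numbers(2) FORMAT JSONObjectEachRow;
```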
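A sketch of `INTERSECT DISTINCT` and `EXCEPT DISTINCT` from the entry above; `numbers(5)` produces 0 to 4 and `numbers(3, 5)` produces 3 to 7.

```sql
-- Intersection: 3 and 4.
SELECT number FROM numbers(5)
INTERSECT DISTINCT
SELECT number FROM numbers(3, 5);

-- Difference: 0, 1 and 2.
SELECT number FROM numbers(5)
EXCEPT DISTINCT
SELECT number FROM numbers(3, 5);
```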
#### Experimental Feature
* Add ANN (approximate nearest neighbor) index based on `Annoy`. [#40818](https://github.com/ClickHouse/ClickHouse/pull/40818) ([Filatenkov Artur](https://github.com/FArthur-cmd)). [#37215](https://github.com/ClickHouse/ClickHouse/pull/37215) ([VVMak](https://github.com/VVMak)).
* Add new storage engine `KeeperMap`, which uses ClickHouse Keeper or ZooKeeper as a key-value store (see the sketch after this list). [#39976](https://github.com/ClickHouse/ClickHouse/pull/39976) ([Antonio Andelic](https://github.com/antonio2368)). This storage engine is intended to store a small amount of metadata.
* Improvement for in-memory data parts: remove completely processed WAL files. [#40592](https://github.com/ClickHouse/ClickHouse/pull/40592) ([Azat Khuzhin](https://github.com/azat)).
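A sketch of the `KeeperMap` engine mentioned above, assuming it takes a Keeper path argument and requires a `PRIMARY KEY`; the path and table are made up, and a Keeper/ZooKeeper setup (plus enabling the experimental feature) is assumed on the server side.

```sql
-- Small key-value table backed by ClickHouse Keeper / ZooKeeper (assumed syntax).
CREATE TABLE kv_demo (key String, value String)
ENGINE = KeeperMap('/kv_demo')
PRIMARY KEY key;

INSERT INTO kv_demo VALUES ('schema_version', '42');
SELECT value FROM kv_demo WHERE key = 'schema_version';
```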
#### Performance Improvement
* Implement compression of marks and primary key. Close [#34437](https://github.com/ClickHouse/ClickHouse/issues/34437). [#37693](https://github.com/ClickHouse/ClickHouse/pull/37693) ([zhongyuankai](https://github.com/zhongyuankai)).
* Allow loading marks in advance with a thread pool. Regulated by the setting `load_marks_asynchronously` (default: 0). [#40821](https://github.com/ClickHouse/ClickHouse/pull/40821) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Virtual filesystem over s3 will use random object names split into multiple path prefixes for better performance on AWS. [#40968](https://github.com/ClickHouse/ClickHouse/pull/40968) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Account for the `max_block_size` value while producing single-level aggregation results. This allows executing the following query plan steps using more threads. [#39138](https://github.com/ClickHouse/ClickHouse/pull/39138) ([Nikita Taranov](https://github.com/nickitat)).
* Software prefetching is used in aggregation to speed up operations with hash tables. Controlled by the setting `enable_software_prefetch_in_aggregation`, enabled by default. [#39304](https://github.com/ClickHouse/ClickHouse/pull/39304) ([Nikita Taranov](https://github.com/nickitat)).
* Better support of `optimize_read_in_order` in the case when some of the sorting key columns are always constant after applying the `WHERE` clause. E.g. a query like `SELECT ... FROM table WHERE a = 'x' ORDER BY a, b`, where `table` has the storage definition `MergeTree ORDER BY (a, b)` (see the sketch after this list). [#38715](https://github.com/ClickHouse/ClickHouse/pull/38715) ([Anton Popov](https://github.com/CurtizJ)).
* Filter joined streams for `full_sorting_join` by each other before sorting. [#39418](https://github.com/ClickHouse/ClickHouse/pull/39418) ([Vladimir C](https://github.com/vdimir)).
* LZ4 decompression optimised by skipping empty literals processing. [#40142](https://github.com/ClickHouse/ClickHouse/pull/40142) ([Nikita Taranov](https://github.com/nickitat)).
* Speedup backup process using native `copy` when possible instead of copying through `clickhouse-server` memory. [#40395](https://github.com/ClickHouse/ClickHouse/pull/40395) ([alesapin](https://github.com/alesapin)).
* Do not obtain storage snapshot for each INSERT block (slightly improves performance). [#40638](https://github.com/ClickHouse/ClickHouse/pull/40638) ([Azat Khuzhin](https://github.com/azat)).
* Implement batch processing for aggregate functions with multiple nullable arguments. [#41058](https://github.com/ClickHouse/ClickHouse/pull/41058) ([Raúl Marín](https://github.com/Algunenano)).
* Speed up reading UniquesHashSet (`uniqState` from disk for example). [#41089](https://github.com/ClickHouse/ClickHouse/pull/41089) ([Raúl Marín](https://github.com/Algunenano)).
* Fixed high memory usage while executing mutations of compact parts in tables with huge number of columns. [#41122](https://github.com/ClickHouse/ClickHouse/pull/41122) ([lthaooo](https://github.com/lthaooo)).
* Enable the vectorscan library on ARM, this speeds up regexp evaluation. [#41033](https://github.com/ClickHouse/ClickHouse/pull/41033) ([Robert Schulze](https://github.com/rschu1ze)).
* Upgrade vectorscan to 5.4.8 which has many performance optimizations to speed up regexp evaluation. [#41270](https://github.com/ClickHouse/ClickHouse/pull/41270) ([Robert Schulze](https://github.com/rschu1ze)).
* Fix incorrect fallback to skip the local filesystem cache for VFS (like S3) which happened on very high concurrency level. [#40420](https://github.com/ClickHouse/ClickHouse/pull/40420) ([Kseniia Sumarokova](https://github.com/kssenii)).
* If row policy filter is always false, return empty result immediately without reading any data. This closes [#24012](https://github.com/ClickHouse/ClickHouse/issues/24012). [#40740](https://github.com/ClickHouse/ClickHouse/pull/40740) ([Amos Bird](https://github.com/amosbird)).
* Parallel hash JOIN for Float data types might be suboptimal. Make it better. [#41183](https://github.com/ClickHouse/ClickHouse/pull/41183) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
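A runnable version of the `optimize_read_in_order` case described in the entry above (hypothetical table `events`): `a` is fixed by the `WHERE` clause, so reading in `(a, b)` order can serve `ORDER BY a, b` without a full sort.

```sql
CREATE TABLE events (a String, b UInt32) ENGINE = MergeTree ORDER BY (a, b);

-- 'a' is constant after WHERE, so the table's (a, b) order can be reused for ORDER BY.
SELECT * FROM events WHERE a = 'x' ORDER BY a, b
SETTINGS optimize_read_in_order = 1;
```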
#### Improvement
* During startup and ATTACH call, `ReplicatedMergeTree` tables will be readonly until the ZooKeeper connection is made and the setup is finished. [#40148](https://github.com/ClickHouse/ClickHouse/pull/40148) ([Antonio Andelic](https://github.com/antonio2368)).
* Add the `enable_extended_results_for_datetime_functions` option to return results of type `Date32` for the functions `toStartOfYear`, `toStartOfISOYear`, `toStartOfQuarter`, `toStartOfMonth`, `toStartOfWeek`, `toMonday` and `toLastDayOfMonth` when the argument is `Date32` or `DateTime64`; otherwise results of type `Date` are returned. For compatibility reasons the default value is `0` (see the sketch after this list). [#41214](https://github.com/ClickHouse/ClickHouse/pull/41214) ([Roman Vasin](https://github.com/rvasin)).
* For security and stability reasons, CatBoost models are no longer evaluated within the ClickHouse server. Instead, the evaluation is now done in the clickhouse-library-bridge, a separate process that loads the catboost library and communicates with the server process via HTTP. [#40897](https://github.com/ClickHouse/ClickHouse/pull/40897) ([Robert Schulze](https://github.com/rschu1ze)). [#39629](https://github.com/ClickHouse/ClickHouse/pull/39629) ([Robert Schulze](https://github.com/rschu1ze)).
* Add more metrics for on-disk temporary data, close [#40206](https://github.com/ClickHouse/ClickHouse/issues/40206). [#40239](https://github.com/ClickHouse/ClickHouse/pull/40239) ([Vladimir C](https://github.com/vdimir)).
* Add config option `warning_supress_regexp`, close [#40330](https://github.com/ClickHouse/ClickHouse/issues/40330). [#40548](https://github.com/ClickHouse/ClickHouse/pull/40548) ([Vladimir C](https://github.com/vdimir)).
* Add setting to disable limit on kafka_num_consumers. Closes [#40331](https://github.com/ClickHouse/ClickHouse/issues/40331). [#40670](https://github.com/ClickHouse/ClickHouse/pull/40670) ([Kruglov Pavel](https://github.com/Avogar)).
* Support `SETTINGS` in `DELETE ...` query. [#41533](https://github.com/ClickHouse/ClickHouse/pull/41533) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Add detailed `DiskS3*` profile events, split per S3 API call, for S3 ObjectStorage. [#41532](https://github.com/ClickHouse/ClickHouse/pull/41532) ([Sergei Trifonov](https://github.com/serxa)).
* Two new metrics in `system.asynchronous_metrics`: `NumberOfDetachedParts` and `NumberOfDetachedByUserParts`. [#40779](https://github.com/ClickHouse/ClickHouse/pull/40779) ([Sema Checherinda](https://github.com/CheSema)).
* Allow CONSTRAINTs for ODBC and JDBC tables. [#34551](https://github.com/ClickHouse/ClickHouse/pull/34551) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Don't print `SETTINGS` more than once during query formatting if it didn't appear multiple times in the original query. [#38900](https://github.com/ClickHouse/ClickHouse/pull/38900) ([Raúl Marín](https://github.com/Algunenano)).
* Improve the tracing (OpenTelemetry) context propagation across threads. [#39010](https://github.com/ClickHouse/ClickHouse/pull/39010) ([Frank Chen](https://github.com/FrankChen021)).
* ClickHouse Keeper: add listeners for `interserver_listen_host` only in Keeper if specified. [#39973](https://github.com/ClickHouse/ClickHouse/pull/39973) ([Antonio Andelic](https://github.com/antonio2368)).
* Improve recovery of Replicated user access storage after errors. [#39977](https://github.com/ClickHouse/ClickHouse/pull/39977) ([Vitaly Baranov](https://github.com/vitlibar)).
* Add support for TTL in `EmbeddedRocksDB`. [#39986](https://github.com/ClickHouse/ClickHouse/pull/39986) ([Lloyd-Pottiger](https://github.com/Lloyd-Pottiger)).
* Add schema inference to `clickhouse-obfuscator`, so the `--structure` argument is no longer required. [#40120](https://github.com/ClickHouse/ClickHouse/pull/40120) ([Nikolay Degterinsky](https://github.com/evillique)).
* Improve and fix dictionaries in `Arrow` format. [#40173](https://github.com/ClickHouse/ClickHouse/pull/40173) ([Kruglov Pavel](https://github.com/Avogar)).
* More natural conversion of `Date32`, `DateTime64`, `Date` to narrower types: values outside the normal range are clamped to the upper or lower bound of that range. [#40217](https://github.com/ClickHouse/ClickHouse/pull/40217) ([Andrey Zvonov](https://github.com/zvonand)).
* Fix the case when `Merge` table over `View` cannot use index. [#40233](https://github.com/ClickHouse/ClickHouse/pull/40233) ([Duc Canh Le](https://github.com/canhld94)).
* Custom key names for JSON server logs. [#40251](https://github.com/ClickHouse/ClickHouse/pull/40251) ([Mallik Hassan](https://github.com/SadiHassan)).
* It is now possible to set a custom error code for the exception thrown by function `throwIf`. [#40319](https://github.com/ClickHouse/ClickHouse/pull/40319) ([Robert Schulze](https://github.com/rschu1ze)).
* Improve schema inference cache, respect format settings that can change the schema. [#40414](https://github.com/ClickHouse/ClickHouse/pull/40414) ([Kruglov Pavel](https://github.com/Avogar)).
* Allow parsing `Date` as `DateTime` and `DateTime64`. This implements the enhancement proposed in [#36949](https://github.com/ClickHouse/ClickHouse/issues/36949). [#40474](https://github.com/ClickHouse/ClickHouse/pull/40474) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Allow conversion from `String` with `DateTime64` like `2022-08-22 01:02:03.456` to `Date` and `Date32`. Allow conversion from String with DateTime like `2022-08-22 01:02:03` to `Date32`. This closes [#39598](https://github.com/ClickHouse/ClickHouse/issues/39598). [#40475](https://github.com/ClickHouse/ClickHouse/pull/40475) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Better support for nested data structures in Parquet format [#40485](https://github.com/ClickHouse/ClickHouse/pull/40485) ([Arthur Passos](https://github.com/arthurpassos)).
* Support reading `Array(Record)` into a flattened `Nested` table in Avro. [#40534](https://github.com/ClickHouse/ClickHouse/pull/40534) ([Kruglov Pavel](https://github.com/Avogar)).
* Add read-only support for `EmbeddedRocksDB`. [#40543](https://github.com/ClickHouse/ClickHouse/pull/40543) ([Lloyd-Pottiger](https://github.com/Lloyd-Pottiger)).
* Validate the compression method parameter of URL table engine. [#40600](https://github.com/ClickHouse/ClickHouse/pull/40600) ([Frank Chen](https://github.com/FrankChen021)).
* Better format detection for url table function/engine in presence of a query string after a file name. Closes [#40315](https://github.com/ClickHouse/ClickHouse/issues/40315). [#40636](https://github.com/ClickHouse/ClickHouse/pull/40636) ([Kruglov Pavel](https://github.com/Avogar)).
* Disable projections when grouping sets are used, because they generated wrong results. This fixes [#40635](https://github.com/ClickHouse/ClickHouse/issues/40635). [#40726](https://github.com/ClickHouse/ClickHouse/pull/40726) ([Amos Bird](https://github.com/amosbird)).
* Fix incorrect format of `APPLY` column transformer which can break metadata if used in table definition. This fixes [#37590](https://github.com/ClickHouse/ClickHouse/issues/37590). [#40727](https://github.com/ClickHouse/ClickHouse/pull/40727) ([Amos Bird](https://github.com/amosbird)).
* Support the `%z` descriptor for formatting the timezone offset in `formatDateTime` (see the sketch after this list). [#40736](https://github.com/ClickHouse/ClickHouse/pull/40736) ([Cory Levy](https://github.com/LevyCory)).
* The interactive mode in `clickhouse-client` now interprets `.` and `/` as "run the last command". [#40750](https://github.com/ClickHouse/ClickHouse/pull/40750) ([Robert Schulze](https://github.com/rschu1ze)).
* Fix issue with passing MySQL timeouts for MySQL database engine and MySQL table function. Closes [#34168](https://github.com/ClickHouse/ClickHouse/issues/34168). [#40751](https://github.com/ClickHouse/ClickHouse/pull/40751) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Create status file for filesystem cache directory to make sure that cache directories are not shared between different servers or caches. [#40820](https://github.com/ClickHouse/ClickHouse/pull/40820) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Add support for `DELETE` and `UPDATE` for `EmbeddedRocksDB` storage. [#40853](https://github.com/ClickHouse/ClickHouse/pull/40853) ([Antonio Andelic](https://github.com/antonio2368)).
* ClickHouse Keeper: fix shutdown during long commit and increase allowed request size. [#40941](https://github.com/ClickHouse/ClickHouse/pull/40941) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix race in WriteBufferFromS3, add TSA annotations. [#40950](https://github.com/ClickHouse/ClickHouse/pull/40950) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Grouping sets with group_by_use_nulls should only convert key columns to nullable. [#40997](https://github.com/ClickHouse/ClickHouse/pull/40997) ([Duc Canh Le](https://github.com/canhld94)).
* Improve the observability of INSERT on distributed table. [#41034](https://github.com/ClickHouse/ClickHouse/pull/41034) ([Frank Chen](https://github.com/FrankChen021)).
* More low-level metrics for S3 interaction. [#41039](https://github.com/ClickHouse/ClickHouse/pull/41039) ([mateng915](https://github.com/mateng0915)).
* Support relative path in Location header after HTTP redirect. Closes [#40985](https://github.com/ClickHouse/ClickHouse/issues/40985). [#41162](https://github.com/ClickHouse/ClickHouse/pull/41162) ([Kruglov Pavel](https://github.com/Avogar)).
* Apply changes to HTTP handlers on the fly without a server restart. [#41177](https://github.com/ClickHouse/ClickHouse/pull/41177) ([Azat Khuzhin](https://github.com/azat)).
* ClickHouse Keeper: properly close active sessions during shutdown. [#41215](https://github.com/ClickHouse/ClickHouse/pull/41215) ([Antonio Andelic](https://github.com/antonio2368)). This lowers the period of "table is read-only" errors.
* Add ability to automatically comment SQL queries in clickhouse-client/local (with `Alt-#`, like in readline). [#41224](https://github.com/ClickHouse/ClickHouse/pull/41224) ([Azat Khuzhin](https://github.com/azat)).
* Fix incompatibility of the cache after switching the setting `do_no_evict_index_and_mark_files` from 1 to 0 or from 0 to 1. [#41330](https://github.com/ClickHouse/ClickHouse/pull/41330) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Add a setting `allow_suspicious_fixed_string_types` to prevent users from creating columns of type FixedString with size > 256. [#41495](https://github.com/ClickHouse/ClickHouse/pull/41495) ([Duc Canh Le](https://github.com/canhld94)).
* Add `has_lightweight_delete` to system.parts. [#41564](https://github.com/ClickHouse/ClickHouse/pull/41564) ([Kseniia Sumarokova](https://github.com/kssenii)).
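A sketch of the extended datetime results described in the `enable_extended_results_for_datetime_functions` entry above: with the setting enabled, `toStartOfMonth` of a `Date32` argument returns `Date32` instead of `Date`.

```sql
SET enable_extended_results_for_datetime_functions = 1;

-- 2200-01-15 is beyond the Date range, so the Date32 result type preserves it.
SELECT toStartOfMonth(toDate32('2200-01-15')) AS start, toTypeName(start) AS type;
```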
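A sketch of the new `%z` specifier from the `formatDateTime` entry above; the `Europe/Berlin` time zone is just an example.

```sql
-- %z renders the UTC offset, e.g. +0200 for this date in Europe/Berlin.
SELECT formatDateTime(toDateTime('2022-09-22 12:00:00', 'Europe/Berlin'), '%Y-%m-%d %z');
```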
#### Build/Testing/Packaging Improvement
* Enforce documentation for every setting. [#40644](https://github.com/ClickHouse/ClickHouse/pull/40644) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Enforce documentation for every current metric. [#40645](https://github.com/ClickHouse/ClickHouse/pull/40645) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Enforce documentation for every profile event counter. Write the documentation where it was missing. [#40646](https://github.com/ClickHouse/ClickHouse/pull/40646) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Allow minimal `clickhouse-local` build by correcting some dependencies. [#40460](https://github.com/ClickHouse/ClickHouse/pull/40460) ([Alexey Milovidov](https://github.com/alexey-milovidov)). It is less than 50 MiB.
* Calculate and report SQL function coverage in tests. [#40593](https://github.com/ClickHouse/ClickHouse/issues/40593). [#40647](https://github.com/ClickHouse/ClickHouse/pull/40647) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Enforce documentation for every MergeTree setting. [#40648](https://github.com/ClickHouse/ClickHouse/pull/40648) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* A prototype of embedded reference documentation for high-level uniform server components. [#40649](https://github.com/ClickHouse/ClickHouse/pull/40649) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* We will check all queries from the changed perf tests to ensure that all changed queries were tested. [#40322](https://github.com/ClickHouse/ClickHouse/pull/40322) ([Nikita Taranov](https://github.com/nickitat)).
* Fix TGZ packages. [#40681](https://github.com/ClickHouse/ClickHouse/pull/40681) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Fix debug symbols. [#40873](https://github.com/ClickHouse/ClickHouse/pull/40873) ([Azat Khuzhin](https://github.com/azat)).
* Extended the CI configuration to create an x86 SSE2-only build. Useful for old or embedded hardware. [#40999](https://github.com/ClickHouse/ClickHouse/pull/40999) ([Robert Schulze](https://github.com/rschu1ze)).
* Switch to llvm/clang 15. [#41046](https://github.com/ClickHouse/ClickHouse/pull/41046) ([Azat Khuzhin](https://github.com/azat)).
* Continuation of [#40938](https://github.com/ClickHouse/ClickHouse/issues/40938). Fix ODR violation for `Loggers` class. Fixes [#40398](https://github.com/ClickHouse/ClickHouse/issues/40398), [#40937](https://github.com/ClickHouse/ClickHouse/issues/40937). [#41060](https://github.com/ClickHouse/ClickHouse/pull/41060) ([Dmitry Novik](https://github.com/novikd)).
* Add macOS binaries to GitHub release assets, it fixes [#37718](https://github.com/ClickHouse/ClickHouse/issues/37718). [#41088](https://github.com/ClickHouse/ClickHouse/pull/41088) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* The c-ares library is now bundled with ClickHouse's build system. [#41239](https://github.com/ClickHouse/ClickHouse/pull/41239) ([Robert Schulze](https://github.com/rschu1ze)).
* Get rid of `dlopen` from the main ClickHouse code. It remains in the library-bridge and odbc-bridge. [#41428](https://github.com/ClickHouse/ClickHouse/pull/41428) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Don't allow `dlopen` in the main ClickHouse binary, because it is harmful and insecure. We don't use it. But it can be used by some libraries for the implementation of "plugins". We absolutely discourage the ancient technique of loading 3rd-party uncontrolled dangerous libraries into the process address space, because it is insane. [#41429](https://github.com/ClickHouse/ClickHouse/pull/41429) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Add `source` field to deb packages, update `nfpm`. [#41531](https://github.com/ClickHouse/ClickHouse/pull/41531) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Support for DWARF-5 in the in-house DWARF parser. [#40710](https://github.com/ClickHouse/ClickHouse/pull/40710) ([Azat Khuzhin](https://github.com/azat)).
* Add fault injection in ZooKeeper client for testing [#30498](https://github.com/ClickHouse/ClickHouse/pull/30498) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Add stateless tests with s3 storage with debug and tsan [#35262](https://github.com/ClickHouse/ClickHouse/pull/35262) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Trying stress on top of S3 [#36837](https://github.com/ClickHouse/ClickHouse/pull/36837) ([alesapin](https://github.com/alesapin)).
* Enable `concurrency-mt-unsafe` in `clang-tidy` [#40224](https://github.com/ClickHouse/ClickHouse/pull/40224) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
#### Bug Fix
* Fix potential data loss due to [a bug in the AWS SDK](https://github.com/aws/aws-sdk-cpp/issues/658). The bug can be triggered only when ClickHouse is used over S3. [#40506](https://github.com/ClickHouse/ClickHouse/pull/40506) ([alesapin](https://github.com/alesapin)). This bug had been open for 5 years in the AWS SDK and was closed after our report.
* Malicious data in Native format might cause a crash. [#41441](https://github.com/ClickHouse/ClickHouse/pull/41441) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* The aggregate function `categorialInformationValue` had incorrectly defined properties, which might cause a null pointer dereference at runtime. This closes [#41443](https://github.com/ClickHouse/ClickHouse/issues/41443). [#41449](https://github.com/ClickHouse/ClickHouse/pull/41449) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Writing data in Apache `ORC` format might lead to a buffer overrun. [#41458](https://github.com/ClickHouse/ClickHouse/pull/41458) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix memory safety issues with functions `encrypt` and `contingency` if Array of Nullable is used as an argument. This fixes [#41004](https://github.com/ClickHouse/ClickHouse/issues/41004). [#40195](https://github.com/ClickHouse/ClickHouse/pull/40195) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix bugs in MergeJoin when 'not_processed' is not null. [#40335](https://github.com/ClickHouse/ClickHouse/pull/40335) ([liql2007](https://github.com/liql2007)).
* Fix incorrect result in case of decimal precision loss in IN operator, ref [#41125](https://github.com/ClickHouse/ClickHouse/issues/41125). [#41130](https://github.com/ClickHouse/ClickHouse/pull/41130) ([Vladimir C](https://github.com/vdimir)).
* Fix filling of missed `Nested` columns with multiple levels. [#37152](https://github.com/ClickHouse/ClickHouse/pull/37152) ([Anton Popov](https://github.com/CurtizJ)).
* Fix SYSTEM UNFREEZE query for Ordinary (deprecated) database. Fix for https://github.com/ClickHouse/ClickHouse/pull/36424. [#38262](https://github.com/ClickHouse/ClickHouse/pull/38262) ([Vadim Volodin](https://github.com/PolyProgrammist)).
* Fix unused unknown columns introduced by WITH statement. This fixes [#37812](https://github.com/ClickHouse/ClickHouse/issues/37812) . [#39131](https://github.com/ClickHouse/ClickHouse/pull/39131) ([Amos Bird](https://github.com/amosbird)).
* Fix query analysis for ORDER BY in presence of window functions. Fixes [#38741](https://github.com/ClickHouse/ClickHouse/issues/38741) Fixes [#24892](https://github.com/ClickHouse/ClickHouse/issues/24892). [#39354](https://github.com/ClickHouse/ClickHouse/pull/39354) ([Dmitry Novik](https://github.com/novikd)).
* Fixed `Unknown identifier (aggregate-function)` exception which appears when a user tries to calculate WINDOW ORDER BY/PARTITION BY expressions over aggregate functions. [#39762](https://github.com/ClickHouse/ClickHouse/pull/39762) ([Vladimir Chebotaryov](https://github.com/quickhouse)).
* Limit the number of analysis passes for one query with the setting `max_analyze_depth`. It prevents an exponential blow-up of analysis time for queries with an extraordinarily large number of subqueries. [#40334](https://github.com/ClickHouse/ClickHouse/pull/40334) ([Vladimir C](https://github.com/vdimir)).
* Fix rare bug with column TTL for MergeTree engines family: In case of repeated vertical merge the error `Cannot unlink file ColumnName.bin ... No such file or directory.` could happen. [#40346](https://github.com/ClickHouse/ClickHouse/pull/40346) ([alesapin](https://github.com/alesapin)).
* Use DNS entries for both IPv4 and IPv6 if present. [#40353](https://github.com/ClickHouse/ClickHouse/pull/40353) ([Maksim Kita](https://github.com/kitaisreal)).
* Allow to read snappy compressed files from Hadoop. [#40482](https://github.com/ClickHouse/ClickHouse/pull/40482) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix crash while parsing values of type `Object` (experimental feature) that contains arrays of variadic dimension. [#40483](https://github.com/ClickHouse/ClickHouse/pull/40483) ([Duc Canh Le](https://github.com/canhld94)).
* Fix settings `input_format_tsv_skip_first_lines`. [#40491](https://github.com/ClickHouse/ClickHouse/pull/40491) ([mini4](https://github.com/mini4)).
* Fix bug (race condition) when starting up MaterializedPostgreSQL database/table engine. [#40262](https://github.com/ClickHouse/ClickHouse/issues/40262). Fix error with reaching limit of relcache_callback_list slots. [#40511](https://github.com/ClickHouse/ClickHouse/pull/40511) ([Maksim Buren](https://github.com/maks-buren630501)).
* Fix possible error 'Decimal math overflow' while parsing DateTime64. [#40546](https://github.com/ClickHouse/ClickHouse/pull/40546) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix vertical merge of parts with lightweight deleted rows. [#40559](https://github.com/ClickHouse/ClickHouse/pull/40559) ([Alexander Gololobov](https://github.com/davenger)).
* Fix a segmentation fault when writing data to the URL table engine with compression enabled. [#40565](https://github.com/ClickHouse/ClickHouse/pull/40565) ([Frank Chen](https://github.com/FrankChen021)).
* Fix possible logical error `'Invalid Field get from type UInt64 to type String'` in arrayElement function with Map. [#40572](https://github.com/ClickHouse/ClickHouse/pull/40572) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix possible race in filesystem cache. [#40586](https://github.com/ClickHouse/ClickHouse/pull/40586) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Removed skipping of mutations in unaffected partitions of `MergeTree` tables, because this feature never worked correctly and might cause resurrection of finished mutations. [#40589](https://github.com/ClickHouse/ClickHouse/pull/40589) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Fix a crash of the ClickHouse server when a gRPC port that is already occupied is added to the configuration at runtime. [#40597](https://github.com/ClickHouse/ClickHouse/pull/40597) ([何李夫](https://github.com/helifu)).
* Fix `base58Encode / base58Decode` handling leading 0 / '1'. [#40620](https://github.com/ClickHouse/ClickHouse/pull/40620) ([Andrey Zvonov](https://github.com/zvonand)).
* ClickHouse Keeper: fix a race in accessing logs while a snapshot is being installed. [#40627](https://github.com/ClickHouse/ClickHouse/pull/40627) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix short circuit execution of toFixedString function. Solves (partially) [#40622](https://github.com/ClickHouse/ClickHouse/issues/40622). [#40628](https://github.com/ClickHouse/ClickHouse/pull/40628) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix SQLite int8 column conversion to an int64 column in ClickHouse. Fixes [#40639](https://github.com/ClickHouse/ClickHouse/issues/40639). [#40642](https://github.com/ClickHouse/ClickHouse/pull/40642) ([Barum Rho](https://github.com/barumrho)).
* Fix stack overflow in recursive `Buffer` tables. This closes [#40637](https://github.com/ClickHouse/ClickHouse/issues/40637). [#40643](https://github.com/ClickHouse/ClickHouse/pull/40643) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* During insertion of a new query to the `ProcessList` allocations happen. If we reach the memory limit during these allocations we can not use `OvercommitTracker`, because `ProcessList::mutex` is already acquired. Fixes [#40611](https://github.com/ClickHouse/ClickHouse/issues/40611). [#40677](https://github.com/ClickHouse/ClickHouse/pull/40677) ([Dmitry Novik](https://github.com/novikd)).
* Fix LOGICAL_ERROR with max_read_buffer_size=0 during reading marks. [#40705](https://github.com/ClickHouse/ClickHouse/pull/40705) ([Azat Khuzhin](https://github.com/azat)).
* Fix memory leak while pushing to MVs w/o query context (from Kafka/...). [#40732](https://github.com/ClickHouse/ClickHouse/pull/40732) ([Azat Khuzhin](https://github.com/azat)).
* Fix possible error Attempt to read after eof in CSV schema inference. [#40746](https://github.com/ClickHouse/ClickHouse/pull/40746) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix logical error in write-through cache "File segment completion can be done only by downloader". Closes [#40748](https://github.com/ClickHouse/ClickHouse/issues/40748). [#40759](https://github.com/ClickHouse/ClickHouse/pull/40759) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Make the result of GROUPING function the same as in SQL and other DBMS. [#40762](https://github.com/ClickHouse/ClickHouse/pull/40762) ([Dmitry Novik](https://github.com/novikd)).
* In [#40595](https://github.com/ClickHouse/ClickHouse/issues/40595) it was reported that the `host_regexp` functionality was not working properly with a name to address resolution in `/etc/hosts`. It's fixed. [#40769](https://github.com/ClickHouse/ClickHouse/pull/40769) ([Arthur Passos](https://github.com/arthurpassos)).
* Fix incremental backups for Log family. [#40827](https://github.com/ClickHouse/ClickHouse/pull/40827) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix extremely rare bug which can lead to potential data loss in zero-copy replication. [#40844](https://github.com/ClickHouse/ClickHouse/pull/40844) ([alesapin](https://github.com/alesapin)).
* Fix key condition analysis crashes when the same set expression is built from different columns. [#40850](https://github.com/ClickHouse/ClickHouse/pull/40850) ([Duc Canh Le](https://github.com/canhld94)).
* Fix nested JSON Objects schema inference. [#40851](https://github.com/ClickHouse/ClickHouse/pull/40851) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix 3-digit prefix directory for filesystem cache files not being deleted if empty. Closes [#40797](https://github.com/ClickHouse/ClickHouse/issues/40797). [#40867](https://github.com/ClickHouse/ClickHouse/pull/40867) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix uncaught DNS_ERROR on failed connection to replicas. [#40881](https://github.com/ClickHouse/ClickHouse/pull/40881) ([Robert Coelho](https://github.com/coelho)).
* Fix bug when removing unneeded columns in subquery. [#40884](https://github.com/ClickHouse/ClickHouse/pull/40884) ([luocongkai](https://github.com/TKaxe)).
* Fix extra memory allocation for remote read buffers. [#40896](https://github.com/ClickHouse/ClickHouse/pull/40896) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fixed a behaviour where a user with an explicitly revoked grant for dropping databases could still drop them. [#40906](https://github.com/ClickHouse/ClickHouse/pull/40906) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* A fix for ClickHouse Keeper: correctly compare paths in write requests to Keeper internal system node paths. [#40918](https://github.com/ClickHouse/ClickHouse/pull/40918) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix deadlock in WriteBufferFromS3. [#40943](https://github.com/ClickHouse/ClickHouse/pull/40943) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix access rights for `DESCRIBE TABLE url()` and some other `DESCRIBE TABLE <table_function>()`. [#40975](https://github.com/ClickHouse/ClickHouse/pull/40975) ([Vitaly Baranov](https://github.com/vitlibar)).
* Remove wrong parser logic for `WITH GROUPING SETS` which may lead to nullptr dereference. [#41049](https://github.com/ClickHouse/ClickHouse/pull/41049) ([Duc Canh Le](https://github.com/canhld94)).
* A fix for ClickHouse Keeper: fix possible segfault during Keeper shutdown. [#41075](https://github.com/ClickHouse/ClickHouse/pull/41075) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix possible segfaults, heap-use-after-free and memory leaks in aggregate function combinators. Closes [#40848](https://github.com/ClickHouse/ClickHouse/issues/40848). [#41083](https://github.com/ClickHouse/ClickHouse/pull/41083) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix query_views_log with Window views. [#41132](https://github.com/ClickHouse/ClickHouse/pull/41132) ([Raúl Marín](https://github.com/Algunenano)).
* Disable `optimize_monotonous_functions_in_order_by` by default; mitigates [#40094](https://github.com/ClickHouse/ClickHouse/issues/40094). [#41136](https://github.com/ClickHouse/ClickHouse/pull/41136) ([Denny Crane](https://github.com/den-crane)).
* Fixed "possible deadlock avoided" error on automatic conversion of database engine from Ordinary to Atomic. [#41146](https://github.com/ClickHouse/ClickHouse/pull/41146) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Fix SIGSEGV in SortedBlocksWriter in case of empty block (possible to get with `optimize_aggregation_in_order` and `join_algorithm=auto`). [#41154](https://github.com/ClickHouse/ClickHouse/pull/41154) ([Azat Khuzhin](https://github.com/azat)).
* Fix incorrect query result when trivial count optimization is in effect with array join. This fixes [#39431](https://github.com/ClickHouse/ClickHouse/issues/39431). [#41158](https://github.com/ClickHouse/ClickHouse/pull/41158) ([Denny Crane](https://github.com/den-crane)).
* Fix stack-use-after-return in GetPriorityForLoadBalancing::getPriorityFunc(). [#41159](https://github.com/ClickHouse/ClickHouse/pull/41159) ([Azat Khuzhin](https://github.com/azat)).
* Fix positional arguments exception Positional argument out of bounds. Closes [#40634](https://github.com/ClickHouse/ClickHouse/issues/40634). [#41189](https://github.com/ClickHouse/ClickHouse/pull/41189) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix background clean up of broken detached parts. [#41190](https://github.com/ClickHouse/ClickHouse/pull/41190) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix exponential query rewrite in case of lots of cross joins with where, close [#21557](https://github.com/ClickHouse/ClickHouse/issues/21557). [#41223](https://github.com/ClickHouse/ClickHouse/pull/41223) ([Vladimir C](https://github.com/vdimir)).
* Fix possible logical error in write-through cache, which happened because not all types of exception were handled as needed. Closes [#41208](https://github.com/ClickHouse/ClickHouse/issues/41208). [#41232](https://github.com/ClickHouse/ClickHouse/pull/41232) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix String log entry in system.filesystem_cache_log. [#41233](https://github.com/ClickHouse/ClickHouse/pull/41233) ([jmimbrero](https://github.com/josemimbrero-tinybird)).
* Queries with `OFFSET` clause in subquery and `WHERE` clause in outer query might return incorrect result, it's fixed. Fixes [#40416](https://github.com/ClickHouse/ClickHouse/issues/40416). [#41280](https://github.com/ClickHouse/ClickHouse/pull/41280) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Fix possible wrong query result with `query_plan_optimize_primary_key` enabled. Fixes [#40599](https://github.com/ClickHouse/ClickHouse/issues/40599). [#41281](https://github.com/ClickHouse/ClickHouse/pull/41281) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Do not allow invalid sequences influence other rows in lowerUTF8/upperUTF8. [#41286](https://github.com/ClickHouse/ClickHouse/pull/41286) ([Azat Khuzhin](https://github.com/azat)).
* Fix `ALTER <table> ADD COLUMN` queries with columns of type `Object`. [#41290](https://github.com/ClickHouse/ClickHouse/pull/41290) ([Anton Popov](https://github.com/CurtizJ)).
* Fixed "No node" error when selecting from `system.distributed_ddl_queue` when there's no `distributed_ddl.path` in config. Fixes [#41096](https://github.com/ClickHouse/ClickHouse/issues/41096). [#41296](https://github.com/ClickHouse/ClickHouse/pull/41296) ([young scott](https://github.com/young-scott)).
* Fix incorrect logical error `Expected relative path` in disk object storage. Related to [#41246](https://github.com/ClickHouse/ClickHouse/issues/41246). [#41297](https://github.com/ClickHouse/ClickHouse/pull/41297) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Add column type check before UUID insertion in MsgPack format. [#41309](https://github.com/ClickHouse/ClickHouse/pull/41309) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix possible crash after inserting asynchronously (with enabled setting `async_insert`) malformed data to columns of type `Object`. It could happen, if JSONs in all batches of async inserts were invalid and could not be parsed. [#41336](https://github.com/ClickHouse/ClickHouse/pull/41336) ([Anton Popov](https://github.com/CurtizJ)).
* Fix possible deadlock with async_socket_for_remote/use_hedged_requests and parallel KILL. [#41343](https://github.com/ClickHouse/ClickHouse/pull/41343) ([Azat Khuzhin](https://github.com/azat)).
* Disable `optimize_rewrite_sum_if_to_count_if` by default; mitigates [#38605](https://github.com/ClickHouse/ClickHouse/issues/38605) [#38683](https://github.com/ClickHouse/ClickHouse/issues/38683). [#41388](https://github.com/ClickHouse/ClickHouse/pull/41388) ([Denny Crane](https://github.com/den-crane)).
* Since 22.8, the `ON CLUSTER` clause is ignored if the database is `Replicated` and the cluster name and database name are the same. Because of this, `DROP PARTITION ON CLUSTER` worked in an unexpected way with `Replicated`. It's fixed: now the `ON CLUSTER` clause is ignored only for queries that are replicated on the database level. Fixes [#41299](https://github.com/ClickHouse/ClickHouse/issues/41299). [#41390](https://github.com/ClickHouse/ClickHouse/pull/41390) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Fix possible hung/deadlock on query cancellation (`KILL QUERY` or server shutdown). [#41467](https://github.com/ClickHouse/ClickHouse/pull/41467) ([Azat Khuzhin](https://github.com/azat)).
* Fix possible server crash when using the JBOD feature. This fixes [#41365](https://github.com/ClickHouse/ClickHouse/issues/41365). [#41483](https://github.com/ClickHouse/ClickHouse/pull/41483) ([Amos Bird](https://github.com/amosbird)).
* Fix conversion from nullable fixed string to string. [#41541](https://github.com/ClickHouse/ClickHouse/pull/41541) ([Duc Canh Le](https://github.com/canhld94)).
* Prevent crash when passing wrong aggregation states to groupBitmap*. [#41563](https://github.com/ClickHouse/ClickHouse/pull/41563) ([Raúl Marín](https://github.com/Algunenano)).
* Queries with `ORDER BY` and `1500 <= LIMIT <= max_block_size` could return incorrect result with missing rows from top. Fixes [#41182](https://github.com/ClickHouse/ClickHouse/issues/41182). [#41576](https://github.com/ClickHouse/ClickHouse/pull/41576) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix read bytes/rows in X-ClickHouse-Summary with materialized views. [#41586](https://github.com/ClickHouse/ClickHouse/pull/41586) ([Raúl Marín](https://github.com/Algunenano)).
* Fix possible `pipeline stuck` exception for queries with `OFFSET`. The error was found with `enable_optimize_predicate_expression = 0` and always false condition in `WHERE`. Fixes [#41383](https://github.com/ClickHouse/ClickHouse/issues/41383). [#41588](https://github.com/ClickHouse/ClickHouse/pull/41588) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
### <a id="228"></a> ClickHouse release 22.8, 2022-08-18
#### Backward Incompatible Change
@ -1,12 +1,12 @@
|
||||
# This variables autochanged by release_lib.sh:
|
||||
# This variables autochanged by tests/ci/version_helper.py:
|
||||
|
||||
# NOTE: has nothing common with DBMS_TCP_PROTOCOL_VERSION,
|
||||
# only DBMS_TCP_PROTOCOL_VERSION should be incremented on protocol changes.
|
||||
SET(VERSION_REVISION 54466)
|
||||
SET(VERSION_REVISION 54467)
|
||||
SET(VERSION_MAJOR 22)
|
||||
SET(VERSION_MINOR 9)
|
||||
SET(VERSION_MINOR 10)
|
||||
SET(VERSION_PATCH 1)
|
||||
SET(VERSION_GITHASH 09a2ff88435f79e5279745bbe1dc0e5e401df38d)
|
||||
SET(VERSION_DESCRIBE v22.9.1.1-testing)
|
||||
SET(VERSION_STRING 22.9.1.1)
|
||||
SET(VERSION_GITHASH 3030d4c7ff09ec44ab07d0a8069ea923227288a1)
|
||||
SET(VERSION_DESCRIBE v22.10.1.1-testing)
|
||||
SET(VERSION_STRING 22.10.1.1)
|
||||
# end of autochange
|
||||
|
@ -106,8 +106,8 @@ fi
|
||||
|
||||
if [ -n "$(ls /docker-entrypoint-initdb.d/)" ] || [ -n "$CLICKHOUSE_DB" ]; then
|
||||
# port is needed to check if clickhouse-server is ready for connections
|
||||
HTTP_PORT="$(clickhouse extract-from-config --config-file "$CLICKHOUSE_CONFIG" --key=http_port)"
|
||||
HTTPS_PORT="$(clickhouse extract-from-config --config-file "$CLICKHOUSE_CONFIG" --key=https_port)"
|
||||
HTTP_PORT="$(clickhouse extract-from-config --config-file "$CLICKHOUSE_CONFIG" --key=http_port --try)"
|
||||
HTTPS_PORT="$(clickhouse extract-from-config --config-file "$CLICKHOUSE_CONFIG" --key=https_port --try)"
|
||||
|
||||
if [ -n "$HTTP_PORT" ]; then
|
||||
URL="http://127.0.0.1:$HTTP_PORT/ping"
|
||||
|
@ -213,9 +213,31 @@ Cache **commands**:
|
||||
|
||||
- `SYSTEM DROP FILESYSTEM CACHE (<path>) (ON CLUSTER)`
|
||||
|
||||
- `SHOW CACHES` -- show list of caches which were configured on the server.
|
||||
- `SHOW FILESYSTEM CACHES` -- show list of filesystem caches which were configured on the server. (For versions <= `22.8` the command is named `SHOW CACHES`)
|
||||
|
||||
- `DESCRIBE CACHE '<cache_name>'` - show cache configuration and some general statistics for a specific cache. Cache name can be taken from `SHOW CACHES` command.
|
||||
```sql
|
||||
SHOW FILESYSTEM CACHES
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
``` text
|
||||
┌─Caches────┐
|
||||
│ s3_cache │
|
||||
└───────────┘
|
||||
```
|
||||
|
||||
- `DESCRIBE CACHE '<cache_name>'` - show cache configuration and some general statistics for a specific cache. Cache name can be taken from `SHOW CACHES` command. (For versions <= `22.8` the command is named `DESCRIBE CACHE`)
|
||||
|
||||
```sql
|
||||
DESCRIBE CACHE 's3_cache'
|
||||
```
|
||||
|
||||
``` text
|
||||
┌────max_size─┬─max_elements─┬─max_file_segment_size─┬─cache_on_write_operations─┬─enable_cache_hits_threshold─┬─current_size─┬─current_elements─┬─path────────┬─do_not_evict_index_and_mark_files─┐
|
||||
│ 10000000000 │ 1048576 │ 104857600 │ 1 │ 0 │ 3276 │ 54 │ /s3_cache/ │ 1 │
|
||||
└─────────────┴──────────────┴───────────────────────┴───────────────────────────┴─────────────────────────────┴──────────────┴──────────────────┴─────────────┴───────────────────────────────────┘
|
||||
```
|
||||
|
||||
Cache current metrics:
|
||||
|
||||
|
@ -362,7 +362,7 @@ SHOW ACCESS
|
||||
|
||||
Returns a list of clusters. All available clusters are listed in the [system.clusters](../../operations/system-tables/clusters.md) table.
|
||||
|
||||
:::note
|
||||
:::note
|
||||
`SHOW CLUSTER name` query displays the contents of system.clusters table for this cluster.
|
||||
:::
|
||||
|
||||
@ -493,6 +493,20 @@ Result:
|
||||
└──────────────────┴────────┴─────────────┘
|
||||
```
|
||||
|
||||
## SHOW FILESYSTEM CACHES
|
||||
|
||||
```sql
|
||||
SHOW FILESYSTEM CACHES
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
``` text
|
||||
┌─Caches────┐
|
||||
│ s3_cache │
|
||||
└───────────┘
|
||||
```
|
||||
|
||||
**See Also**
|
||||
|
||||
- [system.settings](../../operations/system-tables/settings.md) table
|
||||
|
@ -159,7 +159,7 @@ Provides possibility to stop background merges for tables in the MergeTree famil
|
||||
SYSTEM STOP MERGES [ON VOLUME <volume_name> | [db.]merge_tree_family_table_name]
|
||||
```
|
||||
|
||||
:::note
|
||||
:::note
|
||||
`DETACH / ATTACH` table will start background merges for the table even in case when merges have been stopped for all MergeTree tables before.
|
||||
:::
|
||||
|
||||
@ -303,7 +303,7 @@ One may execute query after:
|
||||
Replica attaches locally found parts and sends info about them to Zookeeper.
|
||||
Parts present on a replica before metadata loss are not re-fetched from other ones if not being outdated (so replica restoration does not mean re-downloading all data over the network).
|
||||
|
||||
:::warning
|
||||
:::warning
|
||||
Parts in all states are moved to `detached/` folder. Parts active before data loss (committed) are attached.
|
||||
:::
|
||||
|
||||
@ -345,3 +345,11 @@ SYSTEM RESTORE REPLICA test ON CLUSTER cluster;
|
||||
### RESTART REPLICAS
|
||||
|
||||
Provides possibility to reinitialize Zookeeper sessions state for all `ReplicatedMergeTree` tables, will compare current state with Zookeeper as source of true and add tasks to Zookeeper queue if needed
|
||||
|
||||
### DROP FILESYSTEM CACHE
|
||||
|
||||
Allows to drop filesystem cache.
|
||||
|
||||
```sql
|
||||
SYSTEM DROP FILESYSTEM CACHE
|
||||
```
|
||||
|
@ -1474,23 +1474,6 @@ int Server::main(const std::vector<std::string> & /*args*/)
|
||||
/// try set up encryption. There are some errors in config, error will be printed and server wouldn't start.
|
||||
CompressionCodecEncrypted::Configuration::instance().load(config(), "encryption_codecs");
|
||||
|
||||
std::unique_ptr<DNSCacheUpdater> dns_cache_updater;
|
||||
if (config().has("disable_internal_dns_cache") && config().getInt("disable_internal_dns_cache"))
|
||||
{
|
||||
/// Disable DNS caching at all
|
||||
DNSResolver::instance().setDisableCacheFlag();
|
||||
LOG_DEBUG(log, "DNS caching disabled");
|
||||
}
|
||||
else
|
||||
{
|
||||
/// Initialize a watcher periodically updating DNS cache
|
||||
dns_cache_updater = std::make_unique<DNSCacheUpdater>(
|
||||
global_context, config().getInt("dns_cache_update_period", 15), config().getUInt("dns_max_consecutive_failures", 5));
|
||||
}
|
||||
|
||||
if (dns_cache_updater)
|
||||
dns_cache_updater->start();
|
||||
|
||||
SCOPE_EXIT({
|
||||
/// Stop reloading of the main config. This must be done before `global_context->shutdown()` because
|
||||
/// otherwise the reloading may pass a changed config to some destroyed parts of ContextSharedPart.
|
||||
@ -1547,6 +1530,27 @@ int Server::main(const std::vector<std::string> & /*args*/)
|
||||
LOG_DEBUG(log, "Destroyed global context.");
|
||||
});
|
||||
|
||||
/// DNSCacheUpdater uses BackgroundSchedulePool which lives in shared context
|
||||
/// and thus this object must be created after the SCOPE_EXIT object where shared
|
||||
/// context is destroyed.
|
||||
/// In addition this object has to be created before the loading of the tables.
|
||||
std::unique_ptr<DNSCacheUpdater> dns_cache_updater;
|
||||
if (config().has("disable_internal_dns_cache") && config().getInt("disable_internal_dns_cache"))
|
||||
{
|
||||
/// Disable DNS caching at all
|
||||
DNSResolver::instance().setDisableCacheFlag();
|
||||
LOG_DEBUG(log, "DNS caching disabled");
|
||||
}
|
||||
else
|
||||
{
|
||||
/// Initialize a watcher periodically updating DNS cache
|
||||
dns_cache_updater = std::make_unique<DNSCacheUpdater>(
|
||||
global_context, config().getInt("dns_cache_update_period", 15), config().getUInt("dns_max_consecutive_failures", 5));
|
||||
}
|
||||
|
||||
if (dns_cache_updater)
|
||||
dns_cache_updater->start();
|
||||
|
||||
/// Set current database name before loading tables and databases because
|
||||
/// system logs may copy global context.
|
||||
global_context->setCurrentDatabaseNameInGlobalContext(default_database);
|
||||
|
@ -462,8 +462,9 @@
|
||||
<tmp_path>/var/lib/clickhouse/tmp/</tmp_path>
|
||||
|
||||
<!-- Disable AuthType plaintext_password and no_password for ACL. -->
|
||||
<!-- <allow_plaintext_password>0</allow_plaintext_password> -->
|
||||
<!-- <allow_no_password>0</allow_no_password> -->`
|
||||
<allow_plaintext_password>1</allow_plaintext_password>
|
||||
<allow_no_password>1</allow_no_password>
|
||||
<allow_implicit_no_password>1</allow_implicit_no_password>
|
||||
|
||||
<!-- Policy from the <storage_configuration> for the temporary files.
|
||||
If not set <tmp_path> is used, otherwise <tmp_path> is ignored.
|
||||
|
@ -162,6 +162,7 @@ void AccessControl::setUpFromMainConfig(const Poco::Util::AbstractConfiguration
|
||||
if (config_.has("custom_settings_prefixes"))
|
||||
setCustomSettingsPrefixes(config_.getString("custom_settings_prefixes"));
|
||||
|
||||
setImplicitNoPasswordAllowed(config_.getBool("allow_implicit_no_password", true));
|
||||
setNoPasswordAllowed(config_.getBool("allow_no_password", true));
|
||||
setPlaintextPasswordAllowed(config_.getBool("allow_plaintext_password", true));
|
||||
|
||||
@ -499,6 +500,15 @@ void AccessControl::checkSettingNameIsAllowed(const std::string_view setting_nam
|
||||
custom_settings_prefixes->checkSettingNameIsAllowed(setting_name);
|
||||
}
|
||||
|
||||
void AccessControl::setImplicitNoPasswordAllowed(bool allow_implicit_no_password_)
|
||||
{
|
||||
allow_implicit_no_password = allow_implicit_no_password_;
|
||||
}
|
||||
|
||||
bool AccessControl::isImplicitNoPasswordAllowed() const
|
||||
{
|
||||
return allow_implicit_no_password;
|
||||
}
|
||||
|
||||
void AccessControl::setNoPasswordAllowed(bool allow_no_password_)
|
||||
{
|
||||
|
@@ -134,6 +134,11 @@ public:
bool isSettingNameAllowed(const std::string_view name) const;
void checkSettingNameIsAllowed(const std::string_view name) const;

/// Allows implicit user creation without password (by default it's allowed).
/// In other words, allow 'CREATE USER' queries without 'IDENTIFIED WITH' clause.
void setImplicitNoPasswordAllowed(const bool allow_implicit_no_password_);
bool isImplicitNoPasswordAllowed() const;

/// Allows users without password (by default it's allowed).
void setNoPasswordAllowed(const bool allow_no_password_);
bool isNoPasswordAllowed() const;
@@ -222,6 +227,7 @@ private:
std::unique_ptr<AccessChangesNotifier> changes_notifier;
std::atomic_bool allow_plaintext_password = true;
std::atomic_bool allow_no_password = true;
std::atomic_bool allow_implicit_no_password = true;
std::atomic_bool users_without_row_policies_can_read_rows = false;
std::atomic_bool on_cluster_queries_require_cluster_grant = false;
std::atomic_bool select_from_system_db_requires_grant = false;
@@ -25,7 +25,7 @@ enum class AccessType
M(SHOW_DICTIONARIES, "", DICTIONARY, SHOW) /* allows to execute SHOW DICTIONARIES, SHOW CREATE DICTIONARY, EXISTS <dictionary>;
implicitly enabled by any grant on the dictionary */\
M(SHOW, "", GROUP, ALL) /* allows to execute SHOW, USE, EXISTS, CHECK, DESCRIBE */\
M(SHOW_CACHES, "", GROUP, ALL) \
M(SHOW_FILESYSTEM_CACHES, "", GROUP, ALL) \
\
M(SELECT, "", COLUMN, ALL) \
M(INSERT, "", COLUMN, ALL) \
@@ -92,7 +92,6 @@ static constexpr UInt64 operator""_GiB(unsigned long long value)
M(Bool, s3_truncate_on_insert, false, "Enables or disables truncate before insert in s3 engine tables.", 0) \
M(Bool, s3_create_new_file_on_insert, false, "Enables or disables creating a new file on each insert in s3 engine tables", 0) \
M(Bool, s3_check_objects_after_upload, false, "Check each uploaded object to s3 with head request to be sure that upload was successful", 0) \
M(Bool, s3_allow_parallel_part_upload, true, "Use multiple threads for s3 multipart upload. It may lead to slightly higher memory usage", 0) \
M(Bool, enable_s3_requests_logging, false, "Enable very explicit logging of S3 requests. Makes sense for debug only.", 0) \
M(UInt64, hdfs_replication, 0, "The actual number of replications can be specified when the hdfs file is created.", 0) \
M(Bool, hdfs_truncate_on_insert, false, "Enables or disables truncate before insert in s3 engine tables", 0) \
@@ -24,13 +24,13 @@ bool IDisk::isDirectoryEmpty(const String & path) const
return !iterateDirectory(path)->isValid();
}

void IDisk::copyFile(const String & from_file_path, IDisk & to_disk, const String & to_file_path, const WriteSettings & settings) /// NOLINT
void IDisk::copyFile(const String & from_file_path, IDisk & to_disk, const String & to_file_path)
{
LOG_DEBUG(&Poco::Logger::get("IDisk"), "Copying from {} (path: {}) {} to {} (path: {}) {}.",
getName(), getPath(), from_file_path, to_disk.getName(), to_disk.getPath(), to_file_path);

auto in = readFile(from_file_path);
auto out = to_disk.writeFile(to_file_path, DBMS_DEFAULT_BUFFER_SIZE, WriteMode::Rewrite, settings);
auto out = to_disk.writeFile(to_file_path);
copyData(*in, *out);
out->finalize();
}
@@ -56,15 +56,15 @@ void IDisk::removeSharedFiles(const RemoveBatchRequest & files, bool keep_all_ba

using ResultsCollector = std::vector<std::future<void>>;

void asyncCopy(IDisk & from_disk, String from_path, IDisk & to_disk, String to_path, Executor & exec, ResultsCollector & results, bool copy_root_dir, const WriteSettings & settings)
void asyncCopy(IDisk & from_disk, String from_path, IDisk & to_disk, String to_path, Executor & exec, ResultsCollector & results, bool copy_root_dir)
{
if (from_disk.isFile(from_path))
{
auto result = exec.execute(
[&from_disk, from_path, &to_disk, to_path, &settings]()
[&from_disk, from_path, &to_disk, to_path]()
{
setThreadName("DiskCopier");
from_disk.copyFile(from_path, to_disk, fs::path(to_path) / fileName(from_path), settings);
from_disk.copyFile(from_path, to_disk, fs::path(to_path) / fileName(from_path));
});

results.push_back(std::move(result));
@@ -80,7 +80,7 @@ void asyncCopy(IDisk & from_disk, String from_path, IDisk & to_disk, String to_p
}

for (auto it = from_disk.iterateDirectory(from_path); it->isValid(); it->next())
asyncCopy(from_disk, it->path(), to_disk, dest, exec, results, true, settings);
asyncCopy(from_disk, it->path(), to_disk, dest, exec, results, true);
}
}

@@ -89,12 +89,7 @@ void IDisk::copyThroughBuffers(const String & from_path, const std::shared_ptr<I
auto & exec = to_disk->getExecutor();
ResultsCollector results;

WriteSettings settings;
/// Disable parallel write. We already copy in parallel.
/// Avoid high memory usage. See test_s3_zero_copy_ttl/test.py::test_move_and_s3_memory_usage
settings.s3_allow_parallel_part_upload = false;

asyncCopy(*this, from_path, *to_disk, to_path, exec, results, copy_root_dir, settings);
asyncCopy(*this, from_path, *to_disk, to_path, exec, results, copy_root_dir);

for (auto & result : results)
result.wait();
@@ -174,11 +174,7 @@ public:
virtual void copyDirectoryContent(const String & from_dir, const std::shared_ptr<IDisk> & to_disk, const String & to_dir);

/// Copy file `from_file_path` to `to_file_path` located at `to_disk`.
virtual void copyFile( /// NOLINT
const String & from_file_path,
IDisk & to_disk,
const String & to_file_path,
const WriteSettings & settings = {});
virtual void copyFile(const String & from_file_path, IDisk & to_disk, const String & to_file_path);

/// List files at `path` and add their names to `file_names`
virtual void listFiles(const String & path, std::vector<String> & file_names) const = 0;
@@ -285,7 +285,7 @@ bool DiskObjectStorage::checkUniqueId(const String & id) const
{
if (!id.starts_with(object_storage_root_path))
{
LOG_DEBUG(log, "Blob with id {} doesn't start with blob storage prefix {}", id, object_storage_root_path);
LOG_DEBUG(log, "Blob with id {} doesn't start with blob storage prefix {}, Stack {}", id, object_storage_root_path, StackTrace().toString());
return false;
}
@@ -230,10 +230,6 @@ std::unique_ptr<WriteBufferFromFileBase> S3ObjectStorage::writeObject( /// NOLIN
throw Exception(ErrorCodes::BAD_ARGUMENTS, "S3 doesn't support append to files");

auto settings_ptr = s3_settings.get();
ScheduleFunc scheduler;
if (write_settings.s3_allow_parallel_part_upload)
scheduler = threadPoolCallbackRunner(getThreadPoolWriter());

auto s3_buffer = std::make_unique<WriteBufferFromS3>(
client.get(),
bucket,
@@ -241,7 +237,7 @@ std::unique_ptr<WriteBufferFromFileBase> S3ObjectStorage::writeObject( /// NOLIN
settings_ptr->s3_settings,
attributes,
buf_size,
std::move(scheduler),
threadPoolCallbackRunner(getThreadPoolWriter()),
disk_write_settings);

return std::make_unique<WriteIndirectBufferFromRemoteFS>(
@@ -71,12 +71,14 @@ const char * S3_LOGGER_TAG_NAMES[][2] = {

const std::pair<DB::LogsLevel, Poco::Message::Priority> & convertLogLevel(Aws::Utils::Logging::LogLevel log_level)
{
/// We map levels to our own logger 1 to 1 except WARN+ levels. In most cases we failover such errors with retries
/// and don't want to see them as Errors in our logs.
static const std::unordered_map<Aws::Utils::Logging::LogLevel, std::pair<DB::LogsLevel, Poco::Message::Priority>> mapping =
{
{Aws::Utils::Logging::LogLevel::Off, {DB::LogsLevel::none, Poco::Message::PRIO_FATAL}},
{Aws::Utils::Logging::LogLevel::Fatal, {DB::LogsLevel::error, Poco::Message::PRIO_FATAL}},
{Aws::Utils::Logging::LogLevel::Error, {DB::LogsLevel::error, Poco::Message::PRIO_ERROR}},
{Aws::Utils::Logging::LogLevel::Warn, {DB::LogsLevel::warning, Poco::Message::PRIO_WARNING}},
{Aws::Utils::Logging::LogLevel::Off, {DB::LogsLevel::none, Poco::Message::PRIO_INFORMATION}},
{Aws::Utils::Logging::LogLevel::Fatal, {DB::LogsLevel::information, Poco::Message::PRIO_INFORMATION}},
{Aws::Utils::Logging::LogLevel::Error, {DB::LogsLevel::information, Poco::Message::PRIO_INFORMATION}},
{Aws::Utils::Logging::LogLevel::Warn, {DB::LogsLevel::information, Poco::Message::PRIO_INFORMATION}},
{Aws::Utils::Logging::LogLevel::Info, {DB::LogsLevel::information, Poco::Message::PRIO_INFORMATION}},
{Aws::Utils::Logging::LogLevel::Debug, {DB::LogsLevel::debug, Poco::Message::PRIO_TEST}},
{Aws::Utils::Logging::LogLevel::Trace, {DB::LogsLevel::trace, Poco::Message::PRIO_TEST}},
@@ -15,7 +15,6 @@ struct WriteSettings
bool enable_filesystem_cache_on_write_operations = false;
bool enable_filesystem_cache_log = false;
bool is_file_cache_persistent = false;
bool s3_allow_parallel_part_upload = true;

/// Monitoring
bool for_object_storage = false; // to choose which profile events should be incremented
@@ -100,9 +100,14 @@ BlockIO InterpreterCreateUserQuery::execute()
auto & access_control = getContext()->getAccessControl();
auto access = getContext()->getAccess();
access->checkAccess(query.alter ? AccessType::ALTER_USER : AccessType::CREATE_USER);
bool implicit_no_password_allowed = access_control.isImplicitNoPasswordAllowed();
bool no_password_allowed = access_control.isNoPasswordAllowed();
bool plaintext_password_allowed = access_control.isPlaintextPasswordAllowed();

if (!query.attach && !query.alter && !query.auth_data && !implicit_no_password_allowed)
throw Exception(ErrorCodes::BAD_ARGUMENTS,
"Authentication type NO_PASSWORD must be explicitly specified, check the setting allow_implicit_no_password in the server configuration");

std::optional<RolesOrUsersSet> default_roles_from_query;
if (query.default_roles)
{
@@ -181,19 +181,19 @@ public:
/// because each projection can provide some columns as inputs to substitute certain sub-DAGs
/// (expressions). Consider the following example:
/// CREATE TABLE tbl (dt DateTime, val UInt64,
/// PROJECTION p_hour (SELECT SUM(val) GROUP BY toStartOfHour(dt)));
/// PROJECTION p_hour (SELECT sum(val) GROUP BY toStartOfHour(dt)));
///
/// Query: SELECT toStartOfHour(dt), SUM(val) FROM tbl GROUP BY toStartOfHour(dt);
/// Query: SELECT toStartOfHour(dt), sum(val) FROM tbl GROUP BY toStartOfHour(dt);
///
/// We will have an ActionsDAG like this:
/// FUNCTION: toStartOfHour(dt) SUM(val)
/// FUNCTION: toStartOfHour(dt) sum(val)
/// ^ ^
/// | |
/// INPUT: dt val
///
/// Now we traverse the DAG and see if any FUNCTION node can be replaced by projection's INPUT node.
/// The result DAG will be:
/// INPUT: toStartOfHour(dt) SUM(val)
/// INPUT: toStartOfHour(dt) sum(val)
///
/// We don't need aggregate columns from projection because they are matched after DAG.
/// Currently we use canonical names of each node to find matches. It can be improved after we
@@ -14,7 +14,7 @@ namespace ErrorCodes
void FileCacheSettings::loadFromConfig(const Poco::Util::AbstractConfiguration & config, const std::string & config_prefix)
{
if (!config.has(config_prefix + ".max_size"))
throw Exception(ErrorCodes::BAD_ARGUMENTS, "Expected cache size (`size`) in configuration");
throw Exception(ErrorCodes::BAD_ARGUMENTS, "Expected cache size (`max_size`) in configuration");

max_size = config.getUInt64(config_prefix + ".max_size", 0);
if (max_size == 0)
@@ -3443,7 +3443,6 @@ WriteSettings Context::getWriteSettings() const

res.enable_filesystem_cache_on_write_operations = settings.enable_filesystem_cache_on_write_operations;
res.enable_filesystem_cache_log = settings.enable_filesystem_cache_log;
res.s3_allow_parallel_part_upload = settings.s3_allow_parallel_part_upload;

res.remote_throttler = getRemoteWriteThrottler();
@@ -5,6 +5,13 @@
namespace DB
{

/// Rewrite function names to their canonical forms.
/// For example, rewrite (1) to (2)
/// (1) SELECT suM(1), AVG(2);
/// (2) SELECT sum(1), avg(2);
///
/// It's used to help projection query analysis matching function nodes by their canonical names.
/// See the comment of ActionsDAG::foldActionsByProjection for details.
struct FunctionNameNormalizer
{
static void visit(IAST *);
@@ -31,7 +31,7 @@ static Block getSampleBlock()

BlockIO InterpreterDescribeCacheQuery::execute()
{
getContext()->checkAccess(AccessType::SHOW_CACHES);
getContext()->checkAccess(AccessType::SHOW_FILESYSTEM_CACHES);

const auto & ast = query_ptr->as<ASTDescribeCacheQuery &>();
Block sample_block = getSampleBlock();
@@ -150,7 +150,7 @@ BlockIO InterpreterShowTablesQuery::execute()
const auto & query = query_ptr->as<ASTShowTablesQuery &>();
if (query.caches)
{
getContext()->checkAccess(AccessType::SHOW_CACHES);
getContext()->checkAccess(AccessType::SHOW_FILESYSTEM_CACHES);

Block sample_block{ColumnWithTypeAndName(std::make_shared<DataTypeString>(), "Caches")};
MutableColumns res_columns = sample_block.cloneEmptyColumns();
@@ -21,7 +21,7 @@ public:
protected:
void formatQueryImpl(const FormatSettings & settings, FormatState &, FormatStateStacked) const override
{
settings.ostr << (settings.hilite ? hilite_keyword : "") << "DESCRIBE CACHE" << (settings.hilite ? hilite_none : "") << " " << cache_name;
settings.ostr << (settings.hilite ? hilite_keyword : "") << "DESCRIBE FILESYSTEM CACHE" << (settings.hilite ? hilite_none : "") << " " << cache_name;
}
};

@@ -57,7 +57,7 @@ void ASTShowTablesQuery::formatQueryImpl(const FormatSettings & settings, Format
}
else if (caches)
{
settings.ostr << (settings.hilite ? hilite_keyword : "") << "SHOW CACHES" << (settings.hilite ? hilite_none : "");
settings.ostr << (settings.hilite ? hilite_keyword : "") << "SHOW FILESYSTEM CACHES" << (settings.hilite ? hilite_none : "");
formatLike(settings);
formatLimit(settings, state, frame);
}
@@ -11,7 +11,7 @@ bool ParserDescribeCacheQuery::parseImpl(Pos & pos, ASTPtr & node, Expected & ex
{
ParserKeyword p_describe("DESCRIBE");
ParserKeyword p_desc("DESC");
ParserKeyword p_cache("CACHE");
ParserKeyword p_cache("FILESYSTEM CACHE");
ParserLiteral p_cache_name;

if ((!p_describe.ignore(pos, expected) && !p_desc.ignore(pos, expected))
@@ -8,12 +8,12 @@
namespace DB
{

/** Query (DESCRIBE | DESC) CACHE 'cache_name'
/** Query (DESCRIBE | DESC) FILESYSTEM CACHE 'cache_name'
*/
class ParserDescribeCacheQuery : public IParserBase
{
protected:
const char * getName() const override { return "DESCRIBE CACHE query"; }
const char * getName() const override { return "DESCRIBE FILESYSTEM CACHE query"; }
bool parseImpl(Pos & pos, ASTPtr & node, Expected & expected) override;
};

@@ -24,7 +24,7 @@ bool ParserShowTablesQuery::parseImpl(Pos & pos, ASTPtr & node, Expected & expec
ParserKeyword s_clusters("CLUSTERS");
ParserKeyword s_cluster("CLUSTER");
ParserKeyword s_dictionaries("DICTIONARIES");
ParserKeyword s_caches("CACHES");
ParserKeyword s_caches("FILESYSTEM CACHES");
ParserKeyword s_settings("SETTINGS");
ParserKeyword s_changed("CHANGED");
ParserKeyword s_from("FROM");
@ -32,10 +32,10 @@ void JSONCompactEachRowRowOutputFormat::writeField(const IColumn & column, const
|
||||
WriteBufferFromOwnString buf;
|
||||
|
||||
serialization.serializeText(column, row_num, buf, settings);
|
||||
writeJSONString(buf.str(), out, settings);
|
||||
writeJSONString(buf.str(), *ostr, settings);
|
||||
}
|
||||
else
|
||||
serialization.serializeTextJSON(column, row_num, out, settings);
|
||||
serialization.serializeTextJSON(column, row_num, *ostr, settings);
|
||||
}
|
||||
|
||||
|
||||
@ -47,7 +47,7 @@ void JSONCompactEachRowRowOutputFormat::writeFieldDelimiter()
|
||||
|
||||
void JSONCompactEachRowRowOutputFormat::writeRowStartDelimiter()
|
||||
{
|
||||
writeChar('[', out);
|
||||
writeChar('[', *ostr);
|
||||
}
|
||||
|
||||
|
||||
@ -77,9 +77,9 @@ void JSONCompactEachRowRowOutputFormat::writeLine(const std::vector<String> & va
|
||||
writeRowStartDelimiter();
|
||||
for (size_t i = 0; i < values.size(); ++i)
|
||||
{
|
||||
writeChar('\"', out);
|
||||
writeString(values[i], out);
|
||||
writeChar('\"', out);
|
||||
writeChar('\"', *ostr);
|
||||
writeString(values[i], *ostr);
|
||||
writeChar('\"', *ostr);
|
||||
if (i != values.size() - 1)
|
||||
writeFieldDelimiter();
|
||||
}
|
||||
|
@ -30,13 +30,13 @@ void JSONEachRowRowOutputFormat::writeField(const IColumn & column, const ISeria
|
||||
|
||||
void JSONEachRowRowOutputFormat::writeFieldDelimiter()
|
||||
{
|
||||
writeChar(',', out);
|
||||
writeChar(',', *ostr);
|
||||
}
|
||||
|
||||
|
||||
void JSONEachRowRowOutputFormat::writeRowStartDelimiter()
|
||||
{
|
||||
writeChar('{', out);
|
||||
writeChar('{', *ostr);
|
||||
}
|
||||
|
||||
|
||||
@ -92,7 +92,7 @@ void JSONEachRowRowOutputFormat::writePrefix()
|
||||
{
|
||||
if (settings.json.array_of_rows)
|
||||
{
|
||||
writeCString("[\n", out);
|
||||
writeCString("[\n", *ostr);
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -187,7 +187,13 @@ void Service::processQuery(const HTMLForm & params, ReadBuffer & /*body*/, Write
|
||||
{
|
||||
/// Send metadata if the receiver's capability covers the source disk type.
|
||||
response.addCookie({"remote_fs_metadata", disk_type});
|
||||
sendPartFromDiskRemoteMeta(part, out);
|
||||
if (client_protocol_version >= REPLICATION_PROTOCOL_VERSION_WITH_PARTS_PROJECTION)
|
||||
{
|
||||
const auto & projections = part->getProjectionParts();
|
||||
writeBinary(projections.size(), out);
|
||||
}
|
||||
|
||||
sendPartFromDiskRemoteMeta(part, out, true, part->getProjectionParts());
|
||||
return;
|
||||
}
|
||||
}
|
||||
@ -274,7 +280,6 @@ MergeTreeData::DataPart::Checksums Service::sendPartFromDisk(
|
||||
checksums.files[file_name] = {};
|
||||
}
|
||||
|
||||
//auto disk = part->volume->getDisk();
|
||||
MergeTreeData::DataPart::Checksums data_checksums;
|
||||
for (const auto & [name, projection] : part->getProjectionParts())
|
||||
{
|
||||
@ -329,7 +334,11 @@ MergeTreeData::DataPart::Checksums Service::sendPartFromDisk(
|
||||
return data_checksums;
|
||||
}
|
||||
|
||||
void Service::sendPartFromDiskRemoteMeta(const MergeTreeData::DataPartPtr & part, WriteBuffer & out)
|
||||
MergeTreeData::DataPart::Checksums Service::sendPartFromDiskRemoteMeta(
|
||||
const MergeTreeData::DataPartPtr & part,
|
||||
WriteBuffer & out,
|
||||
bool send_part_id,
|
||||
const std::map<String, std::shared_ptr<IMergeTreeDataPart>> & projections)
|
||||
{
|
||||
const auto * data_part_storage_on_disk = dynamic_cast<const DataPartStorageOnDisk *>(part->data_part_storage.get());
|
||||
if (!data_part_storage_on_disk)
|
||||
@ -345,6 +354,12 @@ void Service::sendPartFromDiskRemoteMeta(const MergeTreeData::DataPartPtr & part
|
||||
for (const auto & file_name : file_names_without_checksums)
|
||||
checksums.files[file_name] = {};
|
||||
|
||||
for (const auto & [name, projection] : part->getProjectionParts())
|
||||
{
|
||||
// Get rid of projection files
|
||||
checksums.files.erase(name + ".proj");
|
||||
}
|
||||
|
||||
std::vector<std::string> paths;
|
||||
paths.reserve(checksums.files.size());
|
||||
for (const auto & it : checksums.files)
|
||||
@ -353,8 +368,30 @@ void Service::sendPartFromDiskRemoteMeta(const MergeTreeData::DataPartPtr & part
|
||||
/// Serialized metadatadatas with zero ref counts.
|
||||
auto metadatas = data_part_storage_on_disk->getSerializedMetadata(paths);
|
||||
|
||||
String part_id = data_part_storage_on_disk->getUniqueId();
|
||||
writeStringBinary(part_id, out);
|
||||
if (send_part_id)
|
||||
{
|
||||
String part_id = data_part_storage_on_disk->getUniqueId();
|
||||
writeStringBinary(part_id, out);
|
||||
}
|
||||
|
||||
MergeTreeData::DataPart::Checksums data_checksums;
|
||||
for (const auto & [name, projection] : part->getProjectionParts())
|
||||
{
|
||||
auto it = projections.find(name);
|
||||
if (it != projections.end())
|
||||
{
|
||||
|
||||
writeStringBinary(name, out);
|
||||
MergeTreeData::DataPart::Checksums projection_checksum = sendPartFromDiskRemoteMeta(it->second, out, false);
|
||||
data_checksums.addFile(name + ".proj", projection_checksum.getTotalSizeOnDisk(), projection_checksum.getTotalChecksumUInt128());
|
||||
}
|
||||
else if (part->checksums.has(name + ".proj"))
|
||||
{
|
||||
// We don't send this projection, just add out checksum to bypass the following check
|
||||
const auto & our_checksum = part->checksums.files.find(name + ".proj")->second;
|
||||
data_checksums.addFile(name + ".proj", our_checksum.file_size, our_checksum.file_hash);
|
||||
}
|
||||
}
|
||||
|
||||
writeBinary(checksums.files.size(), out);
|
||||
for (const auto & it : checksums.files)
|
||||
@ -387,7 +424,12 @@ void Service::sendPartFromDiskRemoteMeta(const MergeTreeData::DataPartPtr & part
|
||||
throw Exception(ErrorCodes::BAD_SIZE_OF_FILE_IN_DATA_PART, "Unexpected size of file {}", metadata_file_path);
|
||||
|
||||
writePODBinary(hashing_out.getHash(), out);
|
||||
|
||||
if (!file_names_without_checksums.contains(file_name))
|
||||
data_checksums.addFile(file_name, hashing_out.count(), hashing_out.getHash());
|
||||
}
|
||||
|
||||
return data_checksums;
|
||||
}
|
||||
|
||||
MergeTreeData::DataPartPtr Service::findPart(const String & name)
|
||||
@ -560,6 +602,12 @@ MergeTreeData::MutableDataPartPtr Fetcher::fetchSelectedPart(
|
||||
readUUIDText(part_uuid, *in);
|
||||
|
||||
String remote_fs_metadata = parse<String>(in->getResponseCookie("remote_fs_metadata", ""));
|
||||
|
||||
size_t projections = 0;
|
||||
if (server_protocol_version >= REPLICATION_PROTOCOL_VERSION_WITH_PARTS_PROJECTION)
|
||||
readBinary(projections, *in);
|
||||
|
||||
MergeTreeData::DataPart::Checksums checksums;
|
||||
if (!remote_fs_metadata.empty())
|
||||
{
|
||||
if (!try_zero_copy)
|
||||
@ -573,7 +621,7 @@ MergeTreeData::MutableDataPartPtr Fetcher::fetchSelectedPart(
|
||||
|
||||
try
|
||||
{
|
||||
return downloadPartToDiskRemoteMeta(part_name, replica_path, to_detached, tmp_prefix, disk, *in, throttler);
|
||||
return downloadPartToDiskRemoteMeta(part_name, replica_path, to_detached, tmp_prefix, disk, *in, projections, checksums, throttler);
|
||||
}
|
||||
|
||||
catch (const Exception & e)
|
||||
@ -624,11 +672,6 @@ MergeTreeData::MutableDataPartPtr Fetcher::fetchSelectedPart(
|
||||
|
||||
in->setNextCallback(ReplicatedFetchReadCallback(*entry));
|
||||
|
||||
size_t projections = 0;
|
||||
if (server_protocol_version >= REPLICATION_PROTOCOL_VERSION_WITH_PARTS_PROJECTION)
|
||||
readBinary(projections, *in);
|
||||
|
||||
MergeTreeData::DataPart::Checksums checksums;
|
||||
return part_type == "InMemory"
|
||||
? downloadPartToMemory(part_name, part_uuid, metadata_snapshot, context, disk, *in, projections, throttler)
|
||||
: downloadPartToDisk(part_name, replica_path, to_detached, tmp_prefix, sync, disk, *in, projections, checksums, throttler);
|
||||
@ -725,6 +768,63 @@ MergeTreeData::MutableDataPartPtr Fetcher::downloadPartToMemory(
|
||||
return new_data_part;
|
||||
}
|
||||
|
||||
void Fetcher::downloadBasePartOrProjectionPartToDiskRemoteMeta(
|
||||
const String & replica_path,
|
||||
DataPartStorageBuilderPtr & data_part_storage_builder,
|
||||
PooledReadWriteBufferFromHTTP & in,
|
||||
MergeTreeData::DataPart::Checksums & checksums,
|
||||
ThrottlerPtr throttler) const
|
||||
{
|
||||
size_t files;
|
||||
readBinary(files, in);
|
||||
|
||||
for (size_t i = 0; i < files; ++i)
|
||||
{
|
||||
String file_name;
|
||||
UInt64 file_size;
|
||||
|
||||
readStringBinary(file_name, in);
|
||||
readBinary(file_size, in);
|
||||
|
||||
String metadata_file = fs::path(data_part_storage_builder->getFullPath()) / file_name;
|
||||
|
||||
{
|
||||
auto file_out = std::make_unique<WriteBufferFromFile>(metadata_file, DBMS_DEFAULT_BUFFER_SIZE, -1, 0666, nullptr, 0);
|
||||
|
||||
HashingWriteBuffer hashing_out(*file_out);
|
||||
|
||||
copyDataWithThrottler(in, hashing_out, file_size, blocker.getCounter(), throttler);
|
||||
|
||||
if (blocker.isCancelled())
|
||||
{
|
||||
/// NOTE The is_cancelled flag also makes sense to check every time you read over the network,
|
||||
/// performing a poll with a not very large timeout.
|
||||
/// And now we check it only between read chunks (in the `copyData` function).
|
||||
data_part_storage_builder->removeSharedRecursive(true);
|
||||
data_part_storage_builder->commit();
|
||||
throw Exception("Fetching of part was cancelled", ErrorCodes::ABORTED);
|
||||
}
|
||||
|
||||
MergeTreeDataPartChecksum::uint128 expected_hash;
|
||||
readPODBinary(expected_hash, in);
|
||||
|
||||
if (expected_hash != hashing_out.getHash())
|
||||
{
|
||||
throw Exception(ErrorCodes::CHECKSUM_DOESNT_MATCH,
|
||||
"Checksum mismatch for file {} transferred from {}",
|
||||
metadata_file, replica_path);
|
||||
}
|
||||
|
||||
if (file_name != "checksums.txt" &&
|
||||
file_name != "columns.txt" &&
|
||||
file_name != IMergeTreeDataPart::DEFAULT_COMPRESSION_CODEC_FILE_NAME)
|
||||
checksums.addFile(file_name, file_size, expected_hash);
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
|
||||
void Fetcher::downloadBaseOrProjectionPartToDisk(
|
||||
const String & replica_path,
|
||||
DataPartStorageBuilderPtr & data_part_storage_builder,
|
||||
@ -880,6 +980,8 @@ MergeTreeData::MutableDataPartPtr Fetcher::downloadPartToDiskRemoteMeta(
|
||||
const String & tmp_prefix,
|
||||
DiskPtr disk,
|
||||
PooledReadWriteBufferFromHTTP & in,
|
||||
size_t projections,
|
||||
MergeTreeData::DataPart::Checksums & checksums,
|
||||
ThrottlerPtr throttler)
|
||||
{
|
||||
String part_id;
|
||||
@ -919,57 +1021,53 @@ MergeTreeData::MutableDataPartPtr Fetcher::downloadPartToDiskRemoteMeta(
|
||||
|
||||
volume->getDisk()->createDirectories(data_part_storage->getFullPath());
|
||||
|
||||
size_t files;
|
||||
readBinary(files, in);
|
||||
|
||||
for (size_t i = 0; i < files; ++i)
|
||||
for (auto i = 0ul; i < projections; ++i)
|
||||
{
|
||||
String file_name;
|
||||
UInt64 file_size;
|
||||
String projection_name;
|
||||
readStringBinary(projection_name, in);
|
||||
MergeTreeData::DataPart::Checksums projection_checksum;
|
||||
|
||||
readStringBinary(file_name, in);
|
||||
readBinary(file_size, in);
|
||||
auto projection_part_storage = data_part_storage->getProjection(projection_name + ".proj");
|
||||
auto projection_part_storage_builder = data_part_storage_builder->getProjection(projection_name + ".proj");
|
||||
|
||||
String metadata_file = fs::path(data_part_storage->getFullPath()) / file_name;
|
||||
projection_part_storage_builder->createDirectories();
|
||||
downloadBasePartOrProjectionPartToDiskRemoteMeta(
|
||||
replica_path, projection_part_storage_builder, in, projection_checksum, throttler);
|
||||
|
||||
{
|
||||
auto file_out = std::make_unique<WriteBufferFromFile>(metadata_file, DBMS_DEFAULT_BUFFER_SIZE, -1, 0666, nullptr, 0);
|
||||
|
||||
HashingWriteBuffer hashing_out(*file_out);
|
||||
|
||||
copyDataWithThrottler(in, hashing_out, file_size, blocker.getCounter(), throttler);
|
||||
|
||||
if (blocker.isCancelled())
|
||||
{
|
||||
/// NOTE The is_cancelled flag also makes sense to check every time you read over the network,
|
||||
/// performing a poll with a not very large timeout.
|
||||
/// And now we check it only between read chunks (in the `copyData` function).
|
||||
data_part_storage_builder->removeSharedRecursive(true);
|
||||
data_part_storage_builder->commit();
|
||||
throw Exception("Fetching of part was cancelled", ErrorCodes::ABORTED);
|
||||
}
|
||||
|
||||
MergeTreeDataPartChecksum::uint128 expected_hash;
|
||||
readPODBinary(expected_hash, in);
|
||||
|
||||
if (expected_hash != hashing_out.getHash())
|
||||
{
|
||||
throw Exception(ErrorCodes::CHECKSUM_DOESNT_MATCH,
|
||||
"Checksum mismatch for file {} transferred from {}",
|
||||
metadata_file, replica_path);
|
||||
}
|
||||
}
|
||||
checksums.addFile(
|
||||
projection_name + ".proj", projection_checksum.getTotalSizeOnDisk(), projection_checksum.getTotalChecksumUInt128());
|
||||
}
|
||||
|
||||
downloadBasePartOrProjectionPartToDiskRemoteMeta(
|
||||
replica_path, data_part_storage_builder, in, checksums, throttler);
|
||||
|
||||
assertEOF(in);
|
||||
MergeTreeData::MutableDataPartPtr new_data_part;
|
||||
try
|
||||
{
|
||||
data_part_storage_builder->commit();
|
||||
|
||||
data_part_storage_builder->commit();
|
||||
new_data_part = data.createPart(part_name, data_part_storage);
|
||||
new_data_part->version.setCreationTID(Tx::PrehistoricTID, nullptr);
|
||||
new_data_part->is_temp = true;
|
||||
new_data_part->modification_time = time(nullptr);
|
||||
|
||||
MergeTreeData::MutableDataPartPtr new_data_part = data.createPart(part_name, data_part_storage);
|
||||
new_data_part->version.setCreationTID(Tx::PrehistoricTID, nullptr);
|
||||
new_data_part->is_temp = true;
|
||||
new_data_part->modification_time = time(nullptr);
|
||||
new_data_part->loadColumnsChecksumsIndexes(true, false);
|
||||
new_data_part->loadColumnsChecksumsIndexes(true, false);
|
||||
}
|
||||
#if USE_AWS_S3
|
||||
catch (const S3Exception & ex)
|
||||
{
|
||||
if (ex.getS3ErrorCode() == Aws::S3::S3Errors::NO_SUCH_KEY)
|
||||
{
|
||||
throw Exception(ErrorCodes::S3_ERROR, "Cannot fetch part {} because we lost lock and it was concurrently removed", part_name);
|
||||
}
|
||||
throw;
|
||||
}
|
||||
#endif
|
||||
catch (...) /// Redundant catch, just to be able to add first one with #if
|
||||
{
|
||||
throw;
|
||||
}
|
||||
|
||||
data.lockSharedData(*new_data_part, /* replace_existing_lock = */ true, {});
|
||||
|
||||
|
@ -50,7 +50,11 @@ private:
|
||||
int client_protocol_version,
|
||||
const std::map<String, std::shared_ptr<IMergeTreeDataPart>> & projections = {});
|
||||
|
||||
void sendPartFromDiskRemoteMeta(const MergeTreeData::DataPartPtr & part, WriteBuffer & out);
|
||||
MergeTreeData::DataPart::Checksums sendPartFromDiskRemoteMeta(
|
||||
const MergeTreeData::DataPartPtr & part,
|
||||
WriteBuffer & out,
|
||||
bool send_part_id,
|
||||
const std::map<String, std::shared_ptr<IMergeTreeDataPart>> & projections = {});
|
||||
|
||||
/// StorageReplicatedMergeTree::shutdown() waits for all parts exchange handlers to finish,
|
||||
/// so Service will never access dangling reference to storage
|
||||
@ -89,44 +93,53 @@ public:
|
||||
|
||||
private:
|
||||
void downloadBaseOrProjectionPartToDisk(
|
||||
const String & replica_path,
|
||||
DataPartStorageBuilderPtr & data_part_storage_builder,
|
||||
bool sync,
|
||||
PooledReadWriteBufferFromHTTP & in,
|
||||
MergeTreeData::DataPart::Checksums & checksums,
|
||||
ThrottlerPtr throttler) const;
|
||||
const String & replica_path,
|
||||
DataPartStorageBuilderPtr & data_part_storage_builder,
|
||||
bool sync,
|
||||
PooledReadWriteBufferFromHTTP & in,
|
||||
MergeTreeData::DataPart::Checksums & checksums,
|
||||
ThrottlerPtr throttler) const;
|
||||
|
||||
void downloadBasePartOrProjectionPartToDiskRemoteMeta(
|
||||
const String & replica_path,
|
||||
DataPartStorageBuilderPtr & data_part_storage_builder,
|
||||
PooledReadWriteBufferFromHTTP & in,
|
||||
MergeTreeData::DataPart::Checksums & checksums,
|
||||
ThrottlerPtr throttler) const;
|
||||
|
||||
|
||||
MergeTreeData::MutableDataPartPtr downloadPartToDisk(
|
||||
const String & part_name,
|
||||
const String & replica_path,
|
||||
bool to_detached,
|
||||
const String & tmp_prefix_,
|
||||
bool sync,
|
||||
DiskPtr disk,
|
||||
PooledReadWriteBufferFromHTTP & in,
|
||||
size_t projections,
|
||||
MergeTreeData::DataPart::Checksums & checksums,
|
||||
ThrottlerPtr throttler);
|
||||
const String & part_name,
|
||||
const String & replica_path,
|
||||
bool to_detached,
|
||||
const String & tmp_prefix_,
|
||||
bool sync,
|
||||
DiskPtr disk,
|
||||
PooledReadWriteBufferFromHTTP & in,
|
||||
size_t projections,
|
||||
MergeTreeData::DataPart::Checksums & checksums,
|
||||
ThrottlerPtr throttler);
|
||||
|
||||
MergeTreeData::MutableDataPartPtr downloadPartToMemory(
|
||||
const String & part_name,
|
||||
const UUID & part_uuid,
|
||||
const StorageMetadataPtr & metadata_snapshot,
|
||||
ContextPtr context,
|
||||
DiskPtr disk,
|
||||
PooledReadWriteBufferFromHTTP & in,
|
||||
size_t projections,
|
||||
ThrottlerPtr throttler);
|
||||
const String & part_name,
|
||||
const UUID & part_uuid,
|
||||
const StorageMetadataPtr & metadata_snapshot,
|
||||
ContextPtr context,
|
||||
DiskPtr disk,
|
||||
PooledReadWriteBufferFromHTTP & in,
|
||||
size_t projections,
|
||||
ThrottlerPtr throttler);
|
||||
|
||||
MergeTreeData::MutableDataPartPtr downloadPartToDiskRemoteMeta(
|
||||
const String & part_name,
|
||||
const String & replica_path,
|
||||
bool to_detached,
|
||||
const String & tmp_prefix_,
|
||||
DiskPtr disk,
|
||||
PooledReadWriteBufferFromHTTP & in,
|
||||
ThrottlerPtr throttler);
|
||||
const String & part_name,
|
||||
const String & replica_path,
|
||||
bool to_detached,
|
||||
const String & tmp_prefix_,
|
||||
DiskPtr disk,
|
||||
PooledReadWriteBufferFromHTTP & in,
|
||||
size_t projections,
|
||||
MergeTreeData::DataPart::Checksums & checksums,
|
||||
ThrottlerPtr throttler);
|
||||
|
||||
StorageReplicatedMergeTree & data;
|
||||
Poco::Logger * log;
|
||||
|
@ -2490,25 +2490,11 @@ void MergeTreeData::checkAlterIsPossible(const AlterCommands & commands, Context
|
||||
ErrorCodes::BAD_ARGUMENTS);
|
||||
}
|
||||
if (command.type == AlterCommand::ADD_PROJECTION)
|
||||
|
||||
{
|
||||
if (!is_custom_partitioned)
|
||||
throw Exception(
|
||||
"ALTER ADD PROJECTION is not supported for tables with the old syntax",
|
||||
ErrorCodes::BAD_ARGUMENTS);
|
||||
|
||||
/// TODO: implement it the main issue in DataPartsExchange (not able to send directories metadata)
|
||||
if (supportsReplication() && getSettings()->allow_remote_fs_zero_copy_replication)
|
||||
{
|
||||
auto storage_policy = getStoragePolicy();
|
||||
auto disks = storage_policy->getDisks();
|
||||
for (const auto & disk : disks)
|
||||
{
|
||||
if (disk->supportZeroCopyReplication())
|
||||
throw Exception(ErrorCodes::NOT_IMPLEMENTED, "ALTER ADD PROJECTION is not supported when zero-copy replication is enabled for table. "
|
||||
"Currently disk '{}' supports zero copy replication", disk->getName());
|
||||
}
|
||||
}
|
||||
}
|
||||
if (command.type == AlterCommand::RENAME_COLUMN)
|
||||
{
|
||||
|
@ -84,7 +84,7 @@ struct Settings;
|
||||
M(Seconds, remote_fs_execute_merges_on_single_replica_time_threshold, 3 * 60 * 60, "When greater than zero only a single replica starts the merge immediately if merged part on shared storage and 'allow_remote_fs_zero_copy_replication' is enabled.", 0) \
|
||||
M(Seconds, try_fetch_recompressed_part_timeout, 7200, "Recompression works slow in most cases, so we don't start merge with recompression until this timeout and trying to fetch recompressed part from replica which assigned this merge with recompression.", 0) \
|
||||
M(Bool, always_fetch_merged_part, false, "If true, replica never merge parts and always download merged parts from other replicas.", 0) \
|
||||
M(UInt64, max_suspicious_broken_parts, 10, "Max broken parts, if more - deny automatic deletion.", 0) \
|
||||
M(UInt64, max_suspicious_broken_parts, 100, "Max broken parts, if more - deny automatic deletion.", 0) \
|
||||
M(UInt64, max_suspicious_broken_parts_bytes, 1ULL * 1024 * 1024 * 1024, "Max size of all broken parts, if more - deny automatic deletion.", 0) \
|
||||
M(UInt64, max_files_to_modify_in_alter_columns, 75, "Not apply ALTER if number of files for modification(deletion, addition) more than this.", 0) \
|
||||
M(UInt64, max_files_to_remove_in_alter_columns, 50, "Not apply ALTER, if number of files for deletion more than this.", 0) \
|
||||
|
@ -5,6 +5,7 @@
|
||||
#include <Parsers/ASTFunction.h>
|
||||
#include <Parsers/ExpressionListParsers.h>
|
||||
#include <IO/Operators.h>
|
||||
#include <Interpreters/FunctionNameNormalizer.h>
|
||||
|
||||
|
||||
namespace DB
|
||||
@ -24,6 +25,17 @@ static String formattedAST(const ASTPtr & ast)
|
||||
return buf.str();
|
||||
}
|
||||
|
||||
static String formattedASTNormalized(const ASTPtr & ast)
|
||||
{
|
||||
if (!ast)
|
||||
return "";
|
||||
auto ast_normalized = ast->clone();
|
||||
FunctionNameNormalizer().visit(ast_normalized.get());
|
||||
WriteBufferFromOwnString buf;
|
||||
formatAST(*ast_normalized, buf, false, true);
|
||||
return buf.str();
|
||||
}
|
||||
|
||||
ReplicatedMergeTreeTableMetadata::ReplicatedMergeTreeTableMetadata(const MergeTreeData & data, const StorageMetadataPtr & metadata_snapshot)
|
||||
{
|
||||
if (data.format_version < MERGE_TREE_DATA_MIN_FORMAT_VERSION_WITH_CUSTOM_PARTITIONING)
|
||||
@ -33,7 +45,7 @@ ReplicatedMergeTreeTableMetadata::ReplicatedMergeTreeTableMetadata(const MergeTr
|
||||
}
|
||||
|
||||
const auto data_settings = data.getSettings();
|
||||
sampling_expression = formattedAST(metadata_snapshot->getSamplingKeyAST());
|
||||
sampling_expression = formattedASTNormalized(metadata_snapshot->getSamplingKeyAST());
|
||||
index_granularity = data_settings->index_granularity;
|
||||
merging_params_mode = static_cast<int>(data.merging_params.mode);
|
||||
sign_column = data.merging_params.sign_column;
|
||||
@ -45,7 +57,7 @@ ReplicatedMergeTreeTableMetadata::ReplicatedMergeTreeTableMetadata(const MergeTr
|
||||
/// - When we have only ORDER BY, than store it in "primary key:" row of /metadata
|
||||
/// - When we have both, than store PRIMARY KEY in "primary key:" row and ORDER BY in "sorting key:" row of /metadata
|
||||
|
||||
primary_key = formattedAST(metadata_snapshot->getPrimaryKey().expression_list_ast);
|
||||
primary_key = formattedASTNormalized(metadata_snapshot->getPrimaryKey().expression_list_ast);
|
||||
if (metadata_snapshot->isPrimaryKeyDefined())
|
||||
{
|
||||
/// We don't use preparsed AST `sorting_key.expression_list_ast` because
|
||||
@ -54,15 +66,15 @@ ReplicatedMergeTreeTableMetadata::ReplicatedMergeTreeTableMetadata(const MergeTr
|
||||
/// compatible way is just to convert definition_ast to list and
|
||||
/// serialize it. In all other places key.expression_list_ast should be
|
||||
/// used.
|
||||
sorting_key = formattedAST(extractKeyExpressionList(metadata_snapshot->getSortingKey().definition_ast));
|
||||
sorting_key = formattedASTNormalized(extractKeyExpressionList(metadata_snapshot->getSortingKey().definition_ast));
|
||||
}
|
||||
|
||||
data_format_version = data.format_version;
|
||||
|
||||
if (data.format_version >= MERGE_TREE_DATA_MIN_FORMAT_VERSION_WITH_CUSTOM_PARTITIONING)
|
||||
partition_key = formattedAST(metadata_snapshot->getPartitionKey().expression_list_ast);
|
||||
partition_key = formattedASTNormalized(metadata_snapshot->getPartitionKey().expression_list_ast);
|
||||
|
||||
ttl_table = formattedAST(metadata_snapshot->getTableTTLs().definition_ast);
|
||||
ttl_table = formattedASTNormalized(metadata_snapshot->getTableTTLs().definition_ast);
|
||||
|
||||
skip_indices = metadata_snapshot->getSecondaryIndices().toString();
|
||||
|
||||
|
@ -21,6 +21,7 @@
|
||||
#include <AggregateFunctions/parseAggregateFunctionParameters.h>
|
||||
|
||||
#include <Interpreters/Context.h>
|
||||
#include <Interpreters/FunctionNameNormalizer.h>
|
||||
#include <Interpreters/evaluateConstantExpression.h>
|
||||
|
||||
|
||||
@ -33,7 +34,6 @@ namespace ErrorCodes
|
||||
extern const int UNKNOWN_STORAGE;
|
||||
extern const int NO_REPLICA_NAME_GIVEN;
|
||||
extern const int CANNOT_EXTRACT_TABLE_STRUCTURE;
|
||||
extern const int NOT_IMPLEMENTED;
|
||||
}
|
||||
|
||||
|
||||
@ -566,9 +566,11 @@ static StoragePtr create(const StorageFactory::Arguments & args)
|
||||
}
|
||||
|
||||
auto minmax_columns = metadata.getColumnsRequiredForPartitionKey();
|
||||
auto partition_key = metadata.partition_key.expression_list_ast->clone();
|
||||
FunctionNameNormalizer().visit(partition_key.get());
|
||||
auto primary_key_asts = metadata.primary_key.expression_list_ast->children;
|
||||
metadata.minmax_count_projection.emplace(ProjectionDescription::getMinMaxCountProjection(
|
||||
args.columns, metadata.partition_key.expression_list_ast, minmax_columns, primary_key_asts, args.getContext()));
|
||||
args.columns, partition_key, minmax_columns, primary_key_asts, args.getContext()));
|
||||
|
||||
if (args.storage_def->sample_by)
|
||||
metadata.sampling_key = KeyDescription::getKeyFromAST(args.storage_def->sample_by->ptr(), metadata.columns, args.getContext());
|
||||
@ -648,9 +650,11 @@ static StoragePtr create(const StorageFactory::Arguments & args)
|
||||
++arg_num;
|
||||
|
||||
auto minmax_columns = metadata.getColumnsRequiredForPartitionKey();
|
||||
auto partition_key = metadata.partition_key.expression_list_ast->clone();
|
||||
FunctionNameNormalizer().visit(partition_key.get());
|
||||
auto primary_key_asts = metadata.primary_key.expression_list_ast->children;
|
||||
metadata.minmax_count_projection.emplace(ProjectionDescription::getMinMaxCountProjection(
|
||||
args.columns, metadata.partition_key.expression_list_ast, minmax_columns, primary_key_asts, args.getContext()));
|
||||
args.columns, partition_key, minmax_columns, primary_key_asts, args.getContext()));
|
||||
|
||||
const auto * ast = engine_args[arg_num]->as<ASTLiteral>();
|
||||
if (ast && ast->value.getType() == Field::Types::UInt64)
|
||||
@ -681,17 +685,6 @@ static StoragePtr create(const StorageFactory::Arguments & args)
|
||||
{
|
||||
auto storage_policy = args.getContext()->getStoragePolicy(storage_settings->storage_policy);
|
||||
|
||||
for (const auto & disk : storage_policy->getDisks())
|
||||
{
|
||||
/// TODO: implement it the main issue in DataPartsExchange (not able to send directories metadata)
|
||||
if (storage_settings->allow_remote_fs_zero_copy_replication
|
||||
&& disk->supportZeroCopyReplication() && metadata.hasProjections())
|
||||
{
|
||||
throw Exception(ErrorCodes::NOT_IMPLEMENTED, "Projections are not supported when zero-copy replication is enabled for table. "
|
||||
"Currently disk '{}' supports zero copy replication", disk->getName());
|
||||
}
|
||||
}
|
||||
|
||||
return std::make_shared<StorageReplicatedMergeTree>(
|
||||
zookeeper_path,
|
||||
replica_name,
|
||||
|
@ -494,9 +494,18 @@ void StorageKeeperMap::drop()
|
||||
checkTable<true>();
|
||||
auto client = getClient();
|
||||
|
||||
client->remove(table_path);
|
||||
// we allow ZNONODE in case we got hardware error on previous drop
|
||||
if (auto code = client->tryRemove(table_path); code == Coordination::Error::ZNOTEMPTY)
|
||||
{
|
||||
throw zkutil::KeeperException(
|
||||
code, "{} contains children which shouldn't happen. Please DETACH the table if you want to delete it", table_path);
|
||||
}
|
||||
|
||||
if (!client->getChildren(tables_path).empty())
|
||||
std::vector<std::string> children;
|
||||
// if the tables_path is not found, some other table removed it
|
||||
// if there are children, some other tables are still using this path as storage
|
||||
if (auto code = client->tryGetChildren(tables_path, children);
|
||||
code != Coordination::Error::ZOK || !children.empty())
|
||||
return;
|
||||
|
||||
Coordination::Requests ops;
|
||||
|
@ -20,7 +20,9 @@ const char * auto_contributors[] {
|
||||
"Alain BERRIER",
|
||||
"Albert Kidrachev",
|
||||
"Alberto",
|
||||
"Aleksandr",
|
||||
"Aleksandr Karo",
|
||||
"Aleksandr Musorin",
|
||||
"Aleksandr Razumov",
|
||||
"Aleksandr Shalimov",
|
||||
"Aleksandra (Ася)",
|
||||
@ -167,6 +169,7 @@ const char * auto_contributors[] {
|
||||
"Babacar Diassé",
|
||||
"Bakhtiyor Ruziev",
|
||||
"BanyRule",
|
||||
"Barum Rho",
|
||||
"Baudouin Giard",
|
||||
"BayoNet",
|
||||
"Ben",
|
||||
@ -213,6 +216,7 @@ const char * auto_contributors[] {
|
||||
"Constantin S. Pan",
|
||||
"Constantine Peresypkin",
|
||||
"CoolT2",
|
||||
"Cory Levy",
|
||||
"CurtizJ",
|
||||
"DF5HSE",
|
||||
"DIAOZHAFENG",
|
||||
@ -236,7 +240,9 @@ const char * auto_contributors[] {
|
||||
"Denis Krivak",
|
||||
"Denis Zhuravlev",
|
||||
"Denny Crane",
|
||||
"Derek Chia",
|
||||
"Derek Perkins",
|
||||
"Diego Nieto (lesandie)",
|
||||
"DimaAmega",
|
||||
"Ding Xiang Fei",
|
||||
"Dmitriev Mikhail",
|
||||
@ -340,6 +346,7 @@ const char * auto_contributors[] {
|
||||
"Haavard Kvaalen",
|
||||
"Habibullah Oladepo",
|
||||
"HaiBo Li",
|
||||
"Hakob Saghatelyan",
|
||||
"Hamoon",
|
||||
"Han Fei",
|
||||
"Harry Lee",
|
||||
@ -417,7 +424,9 @@ const char * auto_contributors[] {
|
||||
"John Hummel",
|
||||
"John Skopis",
|
||||
"Jonatas Freitas",
|
||||
"Jonathan-Ackerman",
|
||||
"Jordi Villar",
|
||||
"Jose",
|
||||
"Josh Taylor",
|
||||
"João Figueiredo",
|
||||
"Julian Gilyadov",
|
||||
@ -444,6 +453,7 @@ const char * auto_contributors[] {
|
||||
"Konstantin Ilchenko",
|
||||
"Konstantin Lebedev",
|
||||
"Konstantin Malanchev",
|
||||
"Konstantin Morozov",
|
||||
"Konstantin Podshumok",
|
||||
"Konstantin Rudenskii",
|
||||
"Korenevskiy Denis",
|
||||
@ -472,18 +482,22 @@ const char * auto_contributors[] {
|
||||
"Liu Cong",
|
||||
"LiuCong",
|
||||
"LiuYangkuan",
|
||||
"Lloyd-Pottiger",
|
||||
"Lopatin Konstantin",
|
||||
"Lorenzo Mangani",
|
||||
"Loud_Scream",
|
||||
"Lucid Dreams",
|
||||
"Luck-Chang",
|
||||
"Luis Bosque",
|
||||
"Lv Feng",
|
||||
"Léo Ercolanelli",
|
||||
"M0r64n",
|
||||
"MEX7",
|
||||
"MaceWindu",
|
||||
"MagiaGroz",
|
||||
"Maks Skorokhod",
|
||||
"Maksim",
|
||||
"Maksim Buren",
|
||||
"Maksim Fedotov",
|
||||
"Maksim Kita",
|
||||
"Mallik Hassan",
|
||||
@ -682,6 +696,7 @@ const char * auto_contributors[] {
|
||||
"Reto Kromer",
|
||||
"Ri",
|
||||
"Rich Raposa",
|
||||
"Robert Coelho",
|
||||
"Robert Hodges",
|
||||
"Robert Schulze",
|
||||
"RogerYK",
|
||||
@ -708,6 +723,7 @@ const char * auto_contributors[] {
|
||||
"S.M.A. Djawadi",
|
||||
"Saad Ur Rahman",
|
||||
"Sabyanin Maxim",
|
||||
"Sachin",
|
||||
"Safronov Michail",
|
||||
"SaltTan",
|
||||
"Sami Kerola",
|
||||
@ -807,6 +823,7 @@ const char * auto_contributors[] {
|
||||
"UnamedRus",
|
||||
"V",
|
||||
"VDimir",
|
||||
"VVMak",
|
||||
"Vadim",
|
||||
"Vadim Plakhtinskiy",
|
||||
"Vadim Skipin",
|
||||
@ -828,6 +845,7 @@ const char * auto_contributors[] {
|
||||
"Victor",
|
||||
"Victor Tarnavsky",
|
||||
"Viktor Taranenko",
|
||||
"Vincent Bernat",
|
||||
"Vitalii S",
|
||||
"Vitaliy Fedorchenko",
|
||||
"Vitaliy Karnienko",
|
||||
@ -855,6 +873,8 @@ const char * auto_contributors[] {
|
||||
"Vladimir Kolobaev",
|
||||
"Vladimir Kopysov",
|
||||
"Vladimir Kozbin",
|
||||
"Vladimir Makarov",
|
||||
"Vladimir Mihailenco",
|
||||
"Vladimir Smirnov",
|
||||
"Vladislav Rassokhin",
|
||||
"Vladislav Smirnov",
|
||||
@ -1115,17 +1135,20 @@ const char * auto_contributors[] {
|
||||
"jianmei zhang",
|
||||
"jinjunzh",
|
||||
"jkuklis",
|
||||
"jthmath",
|
||||
"jus1096",
|
||||
"jyz0309",
|
||||
"karnevil13",
|
||||
"kashwy",
|
||||
"keenwolf",
|
||||
"kevin wan",
|
||||
"kgurjev",
|
||||
"khamadiev",
|
||||
"kirillikoff",
|
||||
"kmeaw",
|
||||
"koloshmet",
|
||||
"kolsys",
|
||||
"konnectr",
|
||||
"koshachy",
|
||||
"kreuzerkrieg",
|
||||
"ks1322",
|
||||
@ -1156,6 +1179,7 @@ const char * auto_contributors[] {
|
||||
"lincion",
|
||||
"lingo-xp",
|
||||
"lingpeng0314",
|
||||
"liql2007",
|
||||
"lirulei",
|
||||
"listar",
|
||||
"litao91",
|
||||
@ -1176,14 +1200,17 @@ const char * auto_contributors[] {
|
||||
"ltybc-coder",
|
||||
"luc1ph3r",
|
||||
"lulichao",
|
||||
"luocongkai",
|
||||
"m-ves",
|
||||
"madianjun",
|
||||
"maiha",
|
||||
"maks-buren630501",
|
||||
"malkfilipp",
|
||||
"manmitya",
|
||||
"maqroll",
|
||||
"martincholuj",
|
||||
"mastertheknife",
|
||||
"mateng0915",
|
||||
"maxim",
|
||||
"maxim-babenko",
|
||||
"maxkuzn",
|
||||
@ -1316,6 +1343,7 @@ const char * auto_contributors[] {
|
||||
"tcoyvwac",
|
||||
"tekeri",
|
||||
"templarzq",
|
||||
"teng.ma",
|
||||
"terrylin",
|
||||
"tesw yew isal",
|
||||
"tianzhou",
|
||||
@ -1342,6 +1370,7 @@ const char * auto_contributors[] {
|
||||
"vivarum",
|
||||
"vladimir golovchenko",
|
||||
"vsrsvas",
|
||||
"vvbufetov",
|
||||
"vxider",
|
||||
"vzakaznikov",
|
||||
"wangchao",
|
||||
@ -1365,10 +1394,12 @@ const char * auto_contributors[] {
|
||||
"yhgcn",
|
||||
"yiguolei",
|
||||
"yingjinghan",
|
||||
"yinpeiqi",
|
||||
"yjant",
|
||||
"ylchou",
|
||||
"yonesko",
|
||||
"youenn lebras",
|
||||
"young scott",
|
||||
"yuchuansun",
|
||||
"yuefoo",
|
||||
"yulu86",
|
||||
@ -1386,6 +1417,7 @@ const char * auto_contributors[] {
|
||||
"zhangyuli1",
|
||||
"zhao zhou",
|
||||
"zhen ni",
|
||||
"zhenjial",
|
||||
"zhifeng",
|
||||
"zhongyuankai",
|
||||
"zhoubintao",
|
||||
@ -1400,6 +1432,7 @@ const char * auto_contributors[] {
|
||||
"zxealous",
|
||||
"zzsmdfj",
|
||||
"Šimon Podlipský",
|
||||
"Александр",
|
||||
"Артем Стрельцов",
|
||||
"Владислав Тихонов",
|
||||
"Георгий Кондратьев",
|
||||
|
@ -794,12 +794,12 @@ def test_cache_setting_compatibility(cluster, node_name):
|
||||
"<do_not_evict_index_and_mark_files>0</do_not_evict_index_and_mark_files>",
|
||||
)
|
||||
|
||||
result = node.query("DESCRIBE CACHE 's3_cache_r'")
|
||||
result = node.query("DESCRIBE FILESYSTEM CACHE 's3_cache_r'")
|
||||
assert result.strip().endswith("1")
|
||||
|
||||
node.restart_clickhouse()
|
||||
|
||||
result = node.query("DESCRIBE CACHE 's3_cache_r'")
|
||||
result = node.query("DESCRIBE FILESYSTEM CACHE 's3_cache_r'")
|
||||
assert result.strip().endswith("0")
|
||||
|
||||
result = node.query(
|
||||
@ -824,7 +824,7 @@ def test_cache_setting_compatibility(cluster, node_name):
|
||||
|
||||
node.restart_clickhouse()
|
||||
|
||||
result = node.query("DESCRIBE CACHE 's3_cache_r'")
|
||||
result = node.query("DESCRIBE FILESYSTEM CACHE 's3_cache_r'")
|
||||
assert result.strip().endswith("1")
|
||||
|
||||
node.query("SELECT * FROM s3_test FORMAT Null")
|
||||
|
@ -15,11 +15,6 @@ node3 = cluster.add_instance(
|
||||
"node3", main_configs=["configs/s3.xml"], with_minio=True, with_zookeeper=True
|
||||
)
|
||||
|
||||
single_node_cluster = ClickHouseCluster(__file__)
|
||||
small_node = single_node_cluster.add_instance(
|
||||
"small_node", main_configs=["configs/s3.xml"], with_minio=True
|
||||
)
|
||||
|
||||
|
||||
@pytest.fixture(scope="module")
|
||||
def started_cluster():
|
||||
@ -97,52 +92,3 @@ def test_ttl_move_and_s3(started_cluster):
|
||||
print(f"Attempts remaining: {attempt}")
|
||||
|
||||
assert counter == 300
|
||||
|
||||
|
||||
@pytest.fixture(scope="module")
|
||||
def started_single_node_cluster():
|
||||
try:
|
||||
single_node_cluster.start()
|
||||
|
||||
yield single_node_cluster
|
||||
finally:
|
||||
single_node_cluster.shutdown()
|
||||
|
||||
|
||||
def test_move_and_s3_memory_usage(started_single_node_cluster):
|
||||
if small_node.is_built_with_sanitizer() or small_node.is_debug_build():
|
||||
pytest.skip("Disabled for debug and sanitizers. Too slow.")
|
||||
|
||||
small_node.query(
|
||||
"CREATE TABLE s3_test_with_ttl (x UInt32, a String codec(NONE), b String codec(NONE), c String codec(NONE), d String codec(NONE), e String codec(NONE)) engine = MergeTree order by x partition by x SETTINGS storage_policy='s3_and_default'"
|
||||
)
|
||||
|
||||
for _ in range(10):
|
||||
small_node.query(
|
||||
"insert into s3_test_with_ttl select 0, repeat('a', 100), repeat('b', 100), repeat('c', 100), repeat('d', 100), repeat('e', 100) from zeros(400000) settings max_block_size = 8192, max_insert_block_size=10000000, min_insert_block_size_rows=10000000"
|
||||
)
|
||||
|
||||
# After this, we should have 5 columns per 10 * 100 * 400000 ~ 400 MB; total ~2G data in partition
|
||||
small_node.query("optimize table s3_test_with_ttl final")
|
||||
|
||||
small_node.query("system flush logs")
|
||||
# Will take memory usage from metric_log.
|
||||
# It is easier then specifying total memory limit (insert queries can hit this limit).
|
||||
small_node.query("truncate table system.metric_log")
|
||||
|
||||
small_node.query(
|
||||
"alter table s3_test_with_ttl move partition 0 to volume 'external'",
|
||||
settings={"send_logs_level": "error"},
|
||||
)
|
||||
small_node.query("system flush logs")
|
||||
max_usage = small_node.query(
|
||||
"select max(CurrentMetric_MemoryTracking) from system.metric_log"
|
||||
)
|
||||
# 3G limit is a big one. However, we can hit it anyway with parallel s3 writes enabled.
|
||||
# Also actual value can be bigger because of memory drift.
|
||||
# Increase it a little bit if test fails.
|
||||
assert int(max_usage) < 3e9
|
||||
res = small_node.query(
|
||||
"select * from system.errors where last_error_message like '%Memory limit%' limit 1"
|
||||
)
|
||||
assert res == ""
|
||||
|
@ -3,7 +3,7 @@ SHOW TABLES [] TABLE SHOW
|
||||
SHOW COLUMNS [] COLUMN SHOW
|
||||
SHOW DICTIONARIES [] DICTIONARY SHOW
|
||||
SHOW [] \N ALL
|
||||
SHOW CACHES [] \N ALL
|
||||
SHOW FILESYSTEM CACHES [] \N ALL
|
||||
SELECT [] COLUMN ALL
|
||||
INSERT [] COLUMN ALL
|
||||
ALTER UPDATE ['UPDATE'] COLUMN ALTER TABLE
|
||||
|
@ -1,4 +1,3 @@
|
||||
-- Tags: no-s3-storage
|
||||
set allow_experimental_projection_optimization = 1, force_optimize_projection = 1;
|
||||
|
||||
drop table if exists tp;
|
||||
|
@ -1,5 +1,4 @@
|
||||
#!/usr/bin/env bash
|
||||
# Tags: no-s3-storage
|
||||
|
||||
CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
|
||||
# shellcheck source=../shell_config.sh
|
||||
|
@ -1,4 +1,3 @@
|
||||
-- Tags: no-s3-storage
|
||||
drop table if exists tp;
|
||||
|
||||
create table tp (d1 Int32, d2 Int32, eventcnt Int64, projection p (select sum(eventcnt) group by d1)) engine = MergeTree order by (d1, d2);
|
||||
|
@ -1,4 +1,3 @@
|
||||
-- Tags: no-s3-storage
|
||||
drop table if exists d;
|
||||
|
||||
create table d (i int, j int) engine MergeTree partition by i % 2 order by tuple() settings index_granularity = 1;
|
||||
|
@ -1,4 +1,3 @@
|
||||
-- Tags: no-s3-storage
|
||||
drop table if exists t;
|
||||
|
||||
create table t (i int, j int) engine MergeTree order by i;
|
||||
|
@ -1,6 +1,4 @@
|
||||
#!/usr/bin/env bash
|
||||
# Tags: no-s3-storage
|
||||
|
||||
|
||||
CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
|
||||
# shellcheck source=../shell_config.sh
|
||||
|
@ -1,4 +1,3 @@
|
||||
-- Tags: no-s3-storage
|
||||
set allow_experimental_projection_optimization = 1;
|
||||
|
||||
drop table if exists x;
|
||||
|
@ -1,4 +1,3 @@
|
||||
-- Tags: no-s3-storage
|
||||
set allow_experimental_projection_optimization = 1;
|
||||
|
||||
drop table if exists t;
|
||||
|
@ -1,5 +1,3 @@
|
||||
-- Tags: no-s3-storage
|
||||
|
||||
drop table if exists tp;
|
||||
|
||||
create table tp (x Int32, y Int32, projection p (select x, y order by x)) engine = MergeTree order by y;
|
||||
|
@ -1,4 +1,4 @@
|
||||
-- Tags: long, no-s3-storage
|
||||
-- Tags: long
|
||||
|
||||
drop table if exists tp_1;
|
||||
drop table if exists tp_2;
|
||||
|
@ -1,5 +1,3 @@
|
||||
-- Tags: no-s3-storage
|
||||
|
||||
DROP TABLE IF EXISTS t;
|
||||
drop table if exists tp;
|
||||
|
||||
|
@ -1,4 +1,3 @@
|
||||
-- Tags: no-s3-storage
|
||||
drop table if exists t;
|
||||
|
||||
create table t (i int, j int, k int, projection p (select * order by j)) engine MergeTree order by i settings index_granularity = 1;
|
||||
|
@ -1,4 +1,3 @@
|
||||
-- Tags: no-s3-storage
|
||||
drop table if exists x;
|
||||
create table x (i UInt64, j UInt64, k UInt64, projection agg (select sum(j), avg(k) group by i), projection norm (select j, k order by i)) engine MergeTree order by tuple();
|
||||
|
||||
|
@ -1,4 +1,3 @@
|
||||
-- Tags: no-s3-storage
|
||||
drop table if exists x;
|
||||
|
||||
create table x (i int) engine MergeTree order by tuple();
|
||||
|
@ -1,4 +1,3 @@
|
||||
-- Tags: no-s3-storage
|
||||
DROP TABLE IF EXISTS t;
|
||||
|
||||
CREATE TABLE t (`key` UInt32, `created_at` Date, `value` UInt32, PROJECTION xxx (SELECT key, created_at, sum(value) GROUP BY key, created_at)) ENGINE = MergeTree PARTITION BY toYYYYMM(created_at) ORDER BY key;
|
||||
|
@ -1,4 +1,3 @@
|
||||
-- Tags: no-s3-storage
|
||||
drop table if exists z;
|
||||
|
||||
create table z (pk Int64, d Date, id UInt64, c UInt64) Engine MergeTree partition by d order by pk ;
|
||||
|
@ -1,4 +1,3 @@
|
||||
-- Tags: no-s3-storage
|
||||
drop table if exists tp;
|
||||
|
||||
create table tp (x Int32, y Int32, projection p (select x, y order by x)) engine = MergeTree order by y settings min_rows_for_compact_part = 2, min_rows_for_wide_part = 4, min_bytes_for_compact_part = 16, min_bytes_for_wide_part = 32;
|
||||
|
@ -1,4 +1,4 @@
-- Tags: long, no-parallel, no-s3-storage
-- Tags: long, no-parallel

drop table if exists t;

@ -1,4 +1,3 @@
-- Tags: no-s3-storage
drop table if exists t;

create table t (s UInt16, l UInt16, projection p (select s, l order by l)) engine MergeTree order by s;

@ -1,4 +1,3 @@
-- Tags: no-s3-storage
drop table if exists t;

create table t (x UInt32) engine = MergeTree order by tuple() settings index_granularity = 8;

@ -1,4 +1,3 @@
-- Tags: no-s3-storage
drop table if exists projection_test;

create table projection_test (`sum(block_count)` UInt64, domain_alias UInt64 alias length(domain), datetime DateTime, domain LowCardinality(String), x_id String, y_id String, block_count Int64, retry_count Int64, duration Int64, kbytes Int64, buffer_time Int64, first_time Int64, total_bytes Nullable(UInt64), valid_bytes Nullable(UInt64), completed_bytes Nullable(UInt64), fixed_bytes Nullable(UInt64), force_bytes Nullable(UInt64), projection p (select toStartOfMinute(datetime) dt_m, countIf(first_time = 0) / count(), avg((kbytes * 8) / duration), count(), sum(block_count) / sum(duration), avg(block_count / duration), sum(buffer_time) / sum(duration), avg(buffer_time / duration), sum(valid_bytes) / sum(total_bytes), sum(completed_bytes) / sum(total_bytes), sum(fixed_bytes) / sum(total_bytes), sum(force_bytes) / sum(total_bytes), sum(valid_bytes) / sum(total_bytes), sum(retry_count) / sum(duration), avg(retry_count / duration), countIf(block_count > 0) / count(), countIf(first_time = 0) / count(), uniqHLL12(x_id), uniqHLL12(y_id) group by dt_m, domain)) engine MergeTree partition by toDate(datetime) order by (toStartOfTenMinutes(datetime), domain);

@ -1,4 +1,3 @@
-- Tags: no-s3-storage
drop table if exists projection_without_key;

create table projection_without_key (key UInt32, PROJECTION x (SELECT sum(key) group by key % 3)) engine MergeTree order by key;

@ -1,4 +1,4 @@
-- Tags: distributed, no-s3-storage
-- Tags: distributed

drop table if exists projection_test;

@ -279,7 +279,7 @@ CREATE TABLE system.grants
(
`user_name` Nullable(String),
`role_name` Nullable(String),
`access_type` Enum16('SHOW DATABASES' = 0, 'SHOW TABLES' = 1, 'SHOW COLUMNS' = 2, 'SHOW DICTIONARIES' = 3, 'SHOW' = 4, 'SHOW CACHES' = 5, 'SELECT' = 6, 'INSERT' = 7, 'ALTER UPDATE' = 8, 'ALTER DELETE' = 9, 'ALTER ADD COLUMN' = 10, 'ALTER MODIFY COLUMN' = 11, 'ALTER DROP COLUMN' = 12, 'ALTER COMMENT COLUMN' = 13, 'ALTER CLEAR COLUMN' = 14, 'ALTER RENAME COLUMN' = 15, 'ALTER MATERIALIZE COLUMN' = 16, 'ALTER COLUMN' = 17, 'ALTER MODIFY COMMENT' = 18, 'ALTER ORDER BY' = 19, 'ALTER SAMPLE BY' = 20, 'ALTER ADD INDEX' = 21, 'ALTER DROP INDEX' = 22, 'ALTER MATERIALIZE INDEX' = 23, 'ALTER CLEAR INDEX' = 24, 'ALTER INDEX' = 25, 'ALTER ADD PROJECTION' = 26, 'ALTER DROP PROJECTION' = 27, 'ALTER MATERIALIZE PROJECTION' = 28, 'ALTER CLEAR PROJECTION' = 29, 'ALTER PROJECTION' = 30, 'ALTER ADD CONSTRAINT' = 31, 'ALTER DROP CONSTRAINT' = 32, 'ALTER CONSTRAINT' = 33, 'ALTER TTL' = 34, 'ALTER MATERIALIZE TTL' = 35, 'ALTER SETTINGS' = 36, 'ALTER MOVE PARTITION' = 37, 'ALTER FETCH PARTITION' = 38, 'ALTER FREEZE PARTITION' = 39, 'ALTER DATABASE SETTINGS' = 40, 'ALTER TABLE' = 41, 'ALTER DATABASE' = 42, 'ALTER VIEW REFRESH' = 43, 'ALTER VIEW MODIFY QUERY' = 44, 'ALTER VIEW' = 45, 'ALTER' = 46, 'CREATE DATABASE' = 47, 'CREATE TABLE' = 48, 'CREATE VIEW' = 49, 'CREATE DICTIONARY' = 50, 'CREATE TEMPORARY TABLE' = 51, 'CREATE FUNCTION' = 52, 'CREATE' = 53, 'DROP DATABASE' = 54, 'DROP TABLE' = 55, 'DROP VIEW' = 56, 'DROP DICTIONARY' = 57, 'DROP FUNCTION' = 58, 'DROP' = 59, 'TRUNCATE' = 60, 'OPTIMIZE' = 61, 'BACKUP' = 62, 'KILL QUERY' = 63, 'KILL TRANSACTION' = 64, 'MOVE PARTITION BETWEEN SHARDS' = 65, 'CREATE USER' = 66, 'ALTER USER' = 67, 'DROP USER' = 68, 'CREATE ROLE' = 69, 'ALTER ROLE' = 70, 'DROP ROLE' = 71, 'ROLE ADMIN' = 72, 'CREATE ROW POLICY' = 73, 'ALTER ROW POLICY' = 74, 'DROP ROW POLICY' = 75, 'CREATE QUOTA' = 76, 'ALTER QUOTA' = 77, 'DROP QUOTA' = 78, 'CREATE SETTINGS PROFILE' = 79, 'ALTER SETTINGS PROFILE' = 80, 'DROP SETTINGS PROFILE' = 81, 'SHOW USERS' = 82, 'SHOW ROLES' = 83, 'SHOW ROW POLICIES' = 84, 'SHOW QUOTAS' = 85, 'SHOW SETTINGS PROFILES' = 86, 'SHOW ACCESS' = 87, 'ACCESS MANAGEMENT' = 88, 'SYSTEM SHUTDOWN' = 89, 'SYSTEM DROP DNS CACHE' = 90, 'SYSTEM DROP MARK CACHE' = 91, 'SYSTEM DROP UNCOMPRESSED CACHE' = 92, 'SYSTEM DROP MMAP CACHE' = 93, 'SYSTEM DROP COMPILED EXPRESSION CACHE' = 94, 'SYSTEM DROP FILESYSTEM CACHE' = 95, 'SYSTEM DROP SCHEMA CACHE' = 96, 'SYSTEM DROP CACHE' = 97, 'SYSTEM RELOAD CONFIG' = 98, 'SYSTEM RELOAD USERS' = 99, 'SYSTEM RELOAD SYMBOLS' = 100, 'SYSTEM RELOAD DICTIONARY' = 101, 'SYSTEM RELOAD MODEL' = 102, 'SYSTEM RELOAD FUNCTION' = 103, 'SYSTEM RELOAD EMBEDDED DICTIONARIES' = 104, 'SYSTEM RELOAD' = 105, 'SYSTEM RESTART DISK' = 106, 'SYSTEM MERGES' = 107, 'SYSTEM TTL MERGES' = 108, 'SYSTEM FETCHES' = 109, 'SYSTEM MOVES' = 110, 'SYSTEM DISTRIBUTED SENDS' = 111, 'SYSTEM REPLICATED SENDS' = 112, 'SYSTEM SENDS' = 113, 'SYSTEM REPLICATION QUEUES' = 114, 'SYSTEM DROP REPLICA' = 115, 'SYSTEM SYNC REPLICA' = 116, 'SYSTEM RESTART REPLICA' = 117, 'SYSTEM RESTORE REPLICA' = 118, 'SYSTEM SYNC DATABASE REPLICA' = 119, 'SYSTEM SYNC TRANSACTION LOG' = 120, 'SYSTEM FLUSH DISTRIBUTED' = 121, 'SYSTEM FLUSH LOGS' = 122, 'SYSTEM FLUSH' = 123, 'SYSTEM THREAD FUZZER' = 124, 'SYSTEM UNFREEZE' = 125, 'SYSTEM' = 126, 'dictGet' = 127, 'addressToLine' = 128, 'addressToLineWithInlines' = 129, 'addressToSymbol' = 130, 'demangle' = 131, 'INTROSPECTION' = 132, 'FILE' = 133, 'URL' = 134, 'REMOTE' = 135, 'MONGO' = 136, 'MEILISEARCH' = 137, 'MYSQL' = 138, 'POSTGRES' = 139, 'SQLITE' = 140, 'ODBC' = 141, 
'JDBC' = 142, 'HDFS' = 143, 'S3' = 144, 'HIVE' = 145, 'SOURCES' = 146, 'CLUSTER' = 147, 'ALL' = 148, 'NONE' = 149),
`access_type` Enum16('SHOW DATABASES' = 0, 'SHOW TABLES' = 1, 'SHOW COLUMNS' = 2, 'SHOW DICTIONARIES' = 3, 'SHOW' = 4, 'SHOW FILESYSTEM CACHES' = 5, 'SELECT' = 6, 'INSERT' = 7, 'ALTER UPDATE' = 8, 'ALTER DELETE' = 9, 'ALTER ADD COLUMN' = 10, 'ALTER MODIFY COLUMN' = 11, 'ALTER DROP COLUMN' = 12, 'ALTER COMMENT COLUMN' = 13, 'ALTER CLEAR COLUMN' = 14, 'ALTER RENAME COLUMN' = 15, 'ALTER MATERIALIZE COLUMN' = 16, 'ALTER COLUMN' = 17, 'ALTER MODIFY COMMENT' = 18, 'ALTER ORDER BY' = 19, 'ALTER SAMPLE BY' = 20, 'ALTER ADD INDEX' = 21, 'ALTER DROP INDEX' = 22, 'ALTER MATERIALIZE INDEX' = 23, 'ALTER CLEAR INDEX' = 24, 'ALTER INDEX' = 25, 'ALTER ADD PROJECTION' = 26, 'ALTER DROP PROJECTION' = 27, 'ALTER MATERIALIZE PROJECTION' = 28, 'ALTER CLEAR PROJECTION' = 29, 'ALTER PROJECTION' = 30, 'ALTER ADD CONSTRAINT' = 31, 'ALTER DROP CONSTRAINT' = 32, 'ALTER CONSTRAINT' = 33, 'ALTER TTL' = 34, 'ALTER MATERIALIZE TTL' = 35, 'ALTER SETTINGS' = 36, 'ALTER MOVE PARTITION' = 37, 'ALTER FETCH PARTITION' = 38, 'ALTER FREEZE PARTITION' = 39, 'ALTER DATABASE SETTINGS' = 40, 'ALTER TABLE' = 41, 'ALTER DATABASE' = 42, 'ALTER VIEW REFRESH' = 43, 'ALTER VIEW MODIFY QUERY' = 44, 'ALTER VIEW' = 45, 'ALTER' = 46, 'CREATE DATABASE' = 47, 'CREATE TABLE' = 48, 'CREATE VIEW' = 49, 'CREATE DICTIONARY' = 50, 'CREATE TEMPORARY TABLE' = 51, 'CREATE FUNCTION' = 52, 'CREATE' = 53, 'DROP DATABASE' = 54, 'DROP TABLE' = 55, 'DROP VIEW' = 56, 'DROP DICTIONARY' = 57, 'DROP FUNCTION' = 58, 'DROP' = 59, 'TRUNCATE' = 60, 'OPTIMIZE' = 61, 'BACKUP' = 62, 'KILL QUERY' = 63, 'KILL TRANSACTION' = 64, 'MOVE PARTITION BETWEEN SHARDS' = 65, 'CREATE USER' = 66, 'ALTER USER' = 67, 'DROP USER' = 68, 'CREATE ROLE' = 69, 'ALTER ROLE' = 70, 'DROP ROLE' = 71, 'ROLE ADMIN' = 72, 'CREATE ROW POLICY' = 73, 'ALTER ROW POLICY' = 74, 'DROP ROW POLICY' = 75, 'CREATE QUOTA' = 76, 'ALTER QUOTA' = 77, 'DROP QUOTA' = 78, 'CREATE SETTINGS PROFILE' = 79, 'ALTER SETTINGS PROFILE' = 80, 'DROP SETTINGS PROFILE' = 81, 'SHOW USERS' = 82, 'SHOW ROLES' = 83, 'SHOW ROW POLICIES' = 84, 'SHOW QUOTAS' = 85, 'SHOW SETTINGS PROFILES' = 86, 'SHOW ACCESS' = 87, 'ACCESS MANAGEMENT' = 88, 'SYSTEM SHUTDOWN' = 89, 'SYSTEM DROP DNS CACHE' = 90, 'SYSTEM DROP MARK CACHE' = 91, 'SYSTEM DROP UNCOMPRESSED CACHE' = 92, 'SYSTEM DROP MMAP CACHE' = 93, 'SYSTEM DROP COMPILED EXPRESSION CACHE' = 94, 'SYSTEM DROP FILESYSTEM CACHE' = 95, 'SYSTEM DROP SCHEMA CACHE' = 96, 'SYSTEM DROP CACHE' = 97, 'SYSTEM RELOAD CONFIG' = 98, 'SYSTEM RELOAD USERS' = 99, 'SYSTEM RELOAD SYMBOLS' = 100, 'SYSTEM RELOAD DICTIONARY' = 101, 'SYSTEM RELOAD MODEL' = 102, 'SYSTEM RELOAD FUNCTION' = 103, 'SYSTEM RELOAD EMBEDDED DICTIONARIES' = 104, 'SYSTEM RELOAD' = 105, 'SYSTEM RESTART DISK' = 106, 'SYSTEM MERGES' = 107, 'SYSTEM TTL MERGES' = 108, 'SYSTEM FETCHES' = 109, 'SYSTEM MOVES' = 110, 'SYSTEM DISTRIBUTED SENDS' = 111, 'SYSTEM REPLICATED SENDS' = 112, 'SYSTEM SENDS' = 113, 'SYSTEM REPLICATION QUEUES' = 114, 'SYSTEM DROP REPLICA' = 115, 'SYSTEM SYNC REPLICA' = 116, 'SYSTEM RESTART REPLICA' = 117, 'SYSTEM RESTORE REPLICA' = 118, 'SYSTEM SYNC DATABASE REPLICA' = 119, 'SYSTEM SYNC TRANSACTION LOG' = 120, 'SYSTEM FLUSH DISTRIBUTED' = 121, 'SYSTEM FLUSH LOGS' = 122, 'SYSTEM FLUSH' = 123, 'SYSTEM THREAD FUZZER' = 124, 'SYSTEM UNFREEZE' = 125, 'SYSTEM' = 126, 'dictGet' = 127, 'addressToLine' = 128, 'addressToLineWithInlines' = 129, 'addressToSymbol' = 130, 'demangle' = 131, 'INTROSPECTION' = 132, 'FILE' = 133, 'URL' = 134, 'REMOTE' = 135, 'MONGO' = 136, 'MEILISEARCH' = 137, 'MYSQL' = 138, 'POSTGRES' = 139, 'SQLITE' = 140, 
'ODBC' = 141, 'JDBC' = 142, 'HDFS' = 143, 'S3' = 144, 'HIVE' = 145, 'SOURCES' = 146, 'CLUSTER' = 147, 'ALL' = 148, 'NONE' = 149),
`database` Nullable(String),
`table` Nullable(String),
`column` Nullable(String),

@ -542,10 +542,10 @@ ENGINE = SystemPartsColumns
COMMENT 'SYSTEM TABLE is built on the fly.'
CREATE TABLE system.privileges
(
`privilege` Enum16('SHOW DATABASES' = 0, 'SHOW TABLES' = 1, 'SHOW COLUMNS' = 2, 'SHOW DICTIONARIES' = 3, 'SHOW' = 4, 'SHOW CACHES' = 5, 'SELECT' = 6, 'INSERT' = 7, 'ALTER UPDATE' = 8, 'ALTER DELETE' = 9, 'ALTER ADD COLUMN' = 10, 'ALTER MODIFY COLUMN' = 11, 'ALTER DROP COLUMN' = 12, 'ALTER COMMENT COLUMN' = 13, 'ALTER CLEAR COLUMN' = 14, 'ALTER RENAME COLUMN' = 15, 'ALTER MATERIALIZE COLUMN' = 16, 'ALTER COLUMN' = 17, 'ALTER MODIFY COMMENT' = 18, 'ALTER ORDER BY' = 19, 'ALTER SAMPLE BY' = 20, 'ALTER ADD INDEX' = 21, 'ALTER DROP INDEX' = 22, 'ALTER MATERIALIZE INDEX' = 23, 'ALTER CLEAR INDEX' = 24, 'ALTER INDEX' = 25, 'ALTER ADD PROJECTION' = 26, 'ALTER DROP PROJECTION' = 27, 'ALTER MATERIALIZE PROJECTION' = 28, 'ALTER CLEAR PROJECTION' = 29, 'ALTER PROJECTION' = 30, 'ALTER ADD CONSTRAINT' = 31, 'ALTER DROP CONSTRAINT' = 32, 'ALTER CONSTRAINT' = 33, 'ALTER TTL' = 34, 'ALTER MATERIALIZE TTL' = 35, 'ALTER SETTINGS' = 36, 'ALTER MOVE PARTITION' = 37, 'ALTER FETCH PARTITION' = 38, 'ALTER FREEZE PARTITION' = 39, 'ALTER DATABASE SETTINGS' = 40, 'ALTER TABLE' = 41, 'ALTER DATABASE' = 42, 'ALTER VIEW REFRESH' = 43, 'ALTER VIEW MODIFY QUERY' = 44, 'ALTER VIEW' = 45, 'ALTER' = 46, 'CREATE DATABASE' = 47, 'CREATE TABLE' = 48, 'CREATE VIEW' = 49, 'CREATE DICTIONARY' = 50, 'CREATE TEMPORARY TABLE' = 51, 'CREATE FUNCTION' = 52, 'CREATE' = 53, 'DROP DATABASE' = 54, 'DROP TABLE' = 55, 'DROP VIEW' = 56, 'DROP DICTIONARY' = 57, 'DROP FUNCTION' = 58, 'DROP' = 59, 'TRUNCATE' = 60, 'OPTIMIZE' = 61, 'BACKUP' = 62, 'KILL QUERY' = 63, 'KILL TRANSACTION' = 64, 'MOVE PARTITION BETWEEN SHARDS' = 65, 'CREATE USER' = 66, 'ALTER USER' = 67, 'DROP USER' = 68, 'CREATE ROLE' = 69, 'ALTER ROLE' = 70, 'DROP ROLE' = 71, 'ROLE ADMIN' = 72, 'CREATE ROW POLICY' = 73, 'ALTER ROW POLICY' = 74, 'DROP ROW POLICY' = 75, 'CREATE QUOTA' = 76, 'ALTER QUOTA' = 77, 'DROP QUOTA' = 78, 'CREATE SETTINGS PROFILE' = 79, 'ALTER SETTINGS PROFILE' = 80, 'DROP SETTINGS PROFILE' = 81, 'SHOW USERS' = 82, 'SHOW ROLES' = 83, 'SHOW ROW POLICIES' = 84, 'SHOW QUOTAS' = 85, 'SHOW SETTINGS PROFILES' = 86, 'SHOW ACCESS' = 87, 'ACCESS MANAGEMENT' = 88, 'SYSTEM SHUTDOWN' = 89, 'SYSTEM DROP DNS CACHE' = 90, 'SYSTEM DROP MARK CACHE' = 91, 'SYSTEM DROP UNCOMPRESSED CACHE' = 92, 'SYSTEM DROP MMAP CACHE' = 93, 'SYSTEM DROP COMPILED EXPRESSION CACHE' = 94, 'SYSTEM DROP FILESYSTEM CACHE' = 95, 'SYSTEM DROP SCHEMA CACHE' = 96, 'SYSTEM DROP CACHE' = 97, 'SYSTEM RELOAD CONFIG' = 98, 'SYSTEM RELOAD USERS' = 99, 'SYSTEM RELOAD SYMBOLS' = 100, 'SYSTEM RELOAD DICTIONARY' = 101, 'SYSTEM RELOAD MODEL' = 102, 'SYSTEM RELOAD FUNCTION' = 103, 'SYSTEM RELOAD EMBEDDED DICTIONARIES' = 104, 'SYSTEM RELOAD' = 105, 'SYSTEM RESTART DISK' = 106, 'SYSTEM MERGES' = 107, 'SYSTEM TTL MERGES' = 108, 'SYSTEM FETCHES' = 109, 'SYSTEM MOVES' = 110, 'SYSTEM DISTRIBUTED SENDS' = 111, 'SYSTEM REPLICATED SENDS' = 112, 'SYSTEM SENDS' = 113, 'SYSTEM REPLICATION QUEUES' = 114, 'SYSTEM DROP REPLICA' = 115, 'SYSTEM SYNC REPLICA' = 116, 'SYSTEM RESTART REPLICA' = 117, 'SYSTEM RESTORE REPLICA' = 118, 'SYSTEM SYNC DATABASE REPLICA' = 119, 'SYSTEM SYNC TRANSACTION LOG' = 120, 'SYSTEM FLUSH DISTRIBUTED' = 121, 'SYSTEM FLUSH LOGS' = 122, 'SYSTEM FLUSH' = 123, 'SYSTEM THREAD FUZZER' = 124, 'SYSTEM UNFREEZE' = 125, 'SYSTEM' = 126, 'dictGet' = 127, 'addressToLine' = 128, 'addressToLineWithInlines' = 129, 'addressToSymbol' = 130, 'demangle' = 131, 'INTROSPECTION' = 132, 'FILE' = 133, 'URL' = 134, 'REMOTE' = 135, 'MONGO' = 136, 'MEILISEARCH' = 137, 'MYSQL' = 138, 'POSTGRES' = 139, 'SQLITE' = 140, 'ODBC' = 141, 
'JDBC' = 142, 'HDFS' = 143, 'S3' = 144, 'HIVE' = 145, 'SOURCES' = 146, 'CLUSTER' = 147, 'ALL' = 148, 'NONE' = 149),
`privilege` Enum16('SHOW DATABASES' = 0, 'SHOW TABLES' = 1, 'SHOW COLUMNS' = 2, 'SHOW DICTIONARIES' = 3, 'SHOW' = 4, 'SHOW FILESYSTEM CACHES' = 5, 'SELECT' = 6, 'INSERT' = 7, 'ALTER UPDATE' = 8, 'ALTER DELETE' = 9, 'ALTER ADD COLUMN' = 10, 'ALTER MODIFY COLUMN' = 11, 'ALTER DROP COLUMN' = 12, 'ALTER COMMENT COLUMN' = 13, 'ALTER CLEAR COLUMN' = 14, 'ALTER RENAME COLUMN' = 15, 'ALTER MATERIALIZE COLUMN' = 16, 'ALTER COLUMN' = 17, 'ALTER MODIFY COMMENT' = 18, 'ALTER ORDER BY' = 19, 'ALTER SAMPLE BY' = 20, 'ALTER ADD INDEX' = 21, 'ALTER DROP INDEX' = 22, 'ALTER MATERIALIZE INDEX' = 23, 'ALTER CLEAR INDEX' = 24, 'ALTER INDEX' = 25, 'ALTER ADD PROJECTION' = 26, 'ALTER DROP PROJECTION' = 27, 'ALTER MATERIALIZE PROJECTION' = 28, 'ALTER CLEAR PROJECTION' = 29, 'ALTER PROJECTION' = 30, 'ALTER ADD CONSTRAINT' = 31, 'ALTER DROP CONSTRAINT' = 32, 'ALTER CONSTRAINT' = 33, 'ALTER TTL' = 34, 'ALTER MATERIALIZE TTL' = 35, 'ALTER SETTINGS' = 36, 'ALTER MOVE PARTITION' = 37, 'ALTER FETCH PARTITION' = 38, 'ALTER FREEZE PARTITION' = 39, 'ALTER DATABASE SETTINGS' = 40, 'ALTER TABLE' = 41, 'ALTER DATABASE' = 42, 'ALTER VIEW REFRESH' = 43, 'ALTER VIEW MODIFY QUERY' = 44, 'ALTER VIEW' = 45, 'ALTER' = 46, 'CREATE DATABASE' = 47, 'CREATE TABLE' = 48, 'CREATE VIEW' = 49, 'CREATE DICTIONARY' = 50, 'CREATE TEMPORARY TABLE' = 51, 'CREATE FUNCTION' = 52, 'CREATE' = 53, 'DROP DATABASE' = 54, 'DROP TABLE' = 55, 'DROP VIEW' = 56, 'DROP DICTIONARY' = 57, 'DROP FUNCTION' = 58, 'DROP' = 59, 'TRUNCATE' = 60, 'OPTIMIZE' = 61, 'BACKUP' = 62, 'KILL QUERY' = 63, 'KILL TRANSACTION' = 64, 'MOVE PARTITION BETWEEN SHARDS' = 65, 'CREATE USER' = 66, 'ALTER USER' = 67, 'DROP USER' = 68, 'CREATE ROLE' = 69, 'ALTER ROLE' = 70, 'DROP ROLE' = 71, 'ROLE ADMIN' = 72, 'CREATE ROW POLICY' = 73, 'ALTER ROW POLICY' = 74, 'DROP ROW POLICY' = 75, 'CREATE QUOTA' = 76, 'ALTER QUOTA' = 77, 'DROP QUOTA' = 78, 'CREATE SETTINGS PROFILE' = 79, 'ALTER SETTINGS PROFILE' = 80, 'DROP SETTINGS PROFILE' = 81, 'SHOW USERS' = 82, 'SHOW ROLES' = 83, 'SHOW ROW POLICIES' = 84, 'SHOW QUOTAS' = 85, 'SHOW SETTINGS PROFILES' = 86, 'SHOW ACCESS' = 87, 'ACCESS MANAGEMENT' = 88, 'SYSTEM SHUTDOWN' = 89, 'SYSTEM DROP DNS CACHE' = 90, 'SYSTEM DROP MARK CACHE' = 91, 'SYSTEM DROP UNCOMPRESSED CACHE' = 92, 'SYSTEM DROP MMAP CACHE' = 93, 'SYSTEM DROP COMPILED EXPRESSION CACHE' = 94, 'SYSTEM DROP FILESYSTEM CACHE' = 95, 'SYSTEM DROP SCHEMA CACHE' = 96, 'SYSTEM DROP CACHE' = 97, 'SYSTEM RELOAD CONFIG' = 98, 'SYSTEM RELOAD USERS' = 99, 'SYSTEM RELOAD SYMBOLS' = 100, 'SYSTEM RELOAD DICTIONARY' = 101, 'SYSTEM RELOAD MODEL' = 102, 'SYSTEM RELOAD FUNCTION' = 103, 'SYSTEM RELOAD EMBEDDED DICTIONARIES' = 104, 'SYSTEM RELOAD' = 105, 'SYSTEM RESTART DISK' = 106, 'SYSTEM MERGES' = 107, 'SYSTEM TTL MERGES' = 108, 'SYSTEM FETCHES' = 109, 'SYSTEM MOVES' = 110, 'SYSTEM DISTRIBUTED SENDS' = 111, 'SYSTEM REPLICATED SENDS' = 112, 'SYSTEM SENDS' = 113, 'SYSTEM REPLICATION QUEUES' = 114, 'SYSTEM DROP REPLICA' = 115, 'SYSTEM SYNC REPLICA' = 116, 'SYSTEM RESTART REPLICA' = 117, 'SYSTEM RESTORE REPLICA' = 118, 'SYSTEM SYNC DATABASE REPLICA' = 119, 'SYSTEM SYNC TRANSACTION LOG' = 120, 'SYSTEM FLUSH DISTRIBUTED' = 121, 'SYSTEM FLUSH LOGS' = 122, 'SYSTEM FLUSH' = 123, 'SYSTEM THREAD FUZZER' = 124, 'SYSTEM UNFREEZE' = 125, 'SYSTEM' = 126, 'dictGet' = 127, 'addressToLine' = 128, 'addressToLineWithInlines' = 129, 'addressToSymbol' = 130, 'demangle' = 131, 'INTROSPECTION' = 132, 'FILE' = 133, 'URL' = 134, 'REMOTE' = 135, 'MONGO' = 136, 'MEILISEARCH' = 137, 'MYSQL' = 138, 'POSTGRES' = 139, 'SQLITE' = 140, 'ODBC' 
= 141, 'JDBC' = 142, 'HDFS' = 143, 'S3' = 144, 'HIVE' = 145, 'SOURCES' = 146, 'CLUSTER' = 147, 'ALL' = 148, 'NONE' = 149),
`aliases` Array(String),
`level` Nullable(Enum8('GLOBAL' = 0, 'DATABASE' = 1, 'TABLE' = 2, 'DICTIONARY' = 3, 'VIEW' = 4, 'COLUMN' = 5)),
`parent_group` Nullable(Enum16('SHOW DATABASES' = 0, 'SHOW TABLES' = 1, 'SHOW COLUMNS' = 2, 'SHOW DICTIONARIES' = 3, 'SHOW' = 4, 'SHOW CACHES' = 5, 'SELECT' = 6, 'INSERT' = 7, 'ALTER UPDATE' = 8, 'ALTER DELETE' = 9, 'ALTER ADD COLUMN' = 10, 'ALTER MODIFY COLUMN' = 11, 'ALTER DROP COLUMN' = 12, 'ALTER COMMENT COLUMN' = 13, 'ALTER CLEAR COLUMN' = 14, 'ALTER RENAME COLUMN' = 15, 'ALTER MATERIALIZE COLUMN' = 16, 'ALTER COLUMN' = 17, 'ALTER MODIFY COMMENT' = 18, 'ALTER ORDER BY' = 19, 'ALTER SAMPLE BY' = 20, 'ALTER ADD INDEX' = 21, 'ALTER DROP INDEX' = 22, 'ALTER MATERIALIZE INDEX' = 23, 'ALTER CLEAR INDEX' = 24, 'ALTER INDEX' = 25, 'ALTER ADD PROJECTION' = 26, 'ALTER DROP PROJECTION' = 27, 'ALTER MATERIALIZE PROJECTION' = 28, 'ALTER CLEAR PROJECTION' = 29, 'ALTER PROJECTION' = 30, 'ALTER ADD CONSTRAINT' = 31, 'ALTER DROP CONSTRAINT' = 32, 'ALTER CONSTRAINT' = 33, 'ALTER TTL' = 34, 'ALTER MATERIALIZE TTL' = 35, 'ALTER SETTINGS' = 36, 'ALTER MOVE PARTITION' = 37, 'ALTER FETCH PARTITION' = 38, 'ALTER FREEZE PARTITION' = 39, 'ALTER DATABASE SETTINGS' = 40, 'ALTER TABLE' = 41, 'ALTER DATABASE' = 42, 'ALTER VIEW REFRESH' = 43, 'ALTER VIEW MODIFY QUERY' = 44, 'ALTER VIEW' = 45, 'ALTER' = 46, 'CREATE DATABASE' = 47, 'CREATE TABLE' = 48, 'CREATE VIEW' = 49, 'CREATE DICTIONARY' = 50, 'CREATE TEMPORARY TABLE' = 51, 'CREATE FUNCTION' = 52, 'CREATE' = 53, 'DROP DATABASE' = 54, 'DROP TABLE' = 55, 'DROP VIEW' = 56, 'DROP DICTIONARY' = 57, 'DROP FUNCTION' = 58, 'DROP' = 59, 'TRUNCATE' = 60, 'OPTIMIZE' = 61, 'BACKUP' = 62, 'KILL QUERY' = 63, 'KILL TRANSACTION' = 64, 'MOVE PARTITION BETWEEN SHARDS' = 65, 'CREATE USER' = 66, 'ALTER USER' = 67, 'DROP USER' = 68, 'CREATE ROLE' = 69, 'ALTER ROLE' = 70, 'DROP ROLE' = 71, 'ROLE ADMIN' = 72, 'CREATE ROW POLICY' = 73, 'ALTER ROW POLICY' = 74, 'DROP ROW POLICY' = 75, 'CREATE QUOTA' = 76, 'ALTER QUOTA' = 77, 'DROP QUOTA' = 78, 'CREATE SETTINGS PROFILE' = 79, 'ALTER SETTINGS PROFILE' = 80, 'DROP SETTINGS PROFILE' = 81, 'SHOW USERS' = 82, 'SHOW ROLES' = 83, 'SHOW ROW POLICIES' = 84, 'SHOW QUOTAS' = 85, 'SHOW SETTINGS PROFILES' = 86, 'SHOW ACCESS' = 87, 'ACCESS MANAGEMENT' = 88, 'SYSTEM SHUTDOWN' = 89, 'SYSTEM DROP DNS CACHE' = 90, 'SYSTEM DROP MARK CACHE' = 91, 'SYSTEM DROP UNCOMPRESSED CACHE' = 92, 'SYSTEM DROP MMAP CACHE' = 93, 'SYSTEM DROP COMPILED EXPRESSION CACHE' = 94, 'SYSTEM DROP FILESYSTEM CACHE' = 95, 'SYSTEM DROP SCHEMA CACHE' = 96, 'SYSTEM DROP CACHE' = 97, 'SYSTEM RELOAD CONFIG' = 98, 'SYSTEM RELOAD USERS' = 99, 'SYSTEM RELOAD SYMBOLS' = 100, 'SYSTEM RELOAD DICTIONARY' = 101, 'SYSTEM RELOAD MODEL' = 102, 'SYSTEM RELOAD FUNCTION' = 103, 'SYSTEM RELOAD EMBEDDED DICTIONARIES' = 104, 'SYSTEM RELOAD' = 105, 'SYSTEM RESTART DISK' = 106, 'SYSTEM MERGES' = 107, 'SYSTEM TTL MERGES' = 108, 'SYSTEM FETCHES' = 109, 'SYSTEM MOVES' = 110, 'SYSTEM DISTRIBUTED SENDS' = 111, 'SYSTEM REPLICATED SENDS' = 112, 'SYSTEM SENDS' = 113, 'SYSTEM REPLICATION QUEUES' = 114, 'SYSTEM DROP REPLICA' = 115, 'SYSTEM SYNC REPLICA' = 116, 'SYSTEM RESTART REPLICA' = 117, 'SYSTEM RESTORE REPLICA' = 118, 'SYSTEM SYNC DATABASE REPLICA' = 119, 'SYSTEM SYNC TRANSACTION LOG' = 120, 'SYSTEM FLUSH DISTRIBUTED' = 121, 'SYSTEM FLUSH LOGS' = 122, 'SYSTEM FLUSH' = 123, 'SYSTEM THREAD FUZZER' = 124, 'SYSTEM UNFREEZE' = 125, 'SYSTEM' = 126, 'dictGet' = 127, 'addressToLine' = 128, 'addressToLineWithInlines' = 129, 'addressToSymbol' = 130, 'demangle' = 131, 'INTROSPECTION' = 132, 'FILE' = 133, 'URL' = 134, 'REMOTE' = 135, 'MONGO' = 136, 'MEILISEARCH' = 137, 'MYSQL' = 138, 'POSTGRES' = 139, 'SQLITE' = 140, 'ODBC' 
= 141, 'JDBC' = 142, 'HDFS' = 143, 'S3' = 144, 'HIVE' = 145, 'SOURCES' = 146, 'CLUSTER' = 147, 'ALL' = 148, 'NONE' = 149))
`parent_group` Nullable(Enum16('SHOW DATABASES' = 0, 'SHOW TABLES' = 1, 'SHOW COLUMNS' = 2, 'SHOW DICTIONARIES' = 3, 'SHOW' = 4, 'SHOW FILESYSTEM CACHES' = 5, 'SELECT' = 6, 'INSERT' = 7, 'ALTER UPDATE' = 8, 'ALTER DELETE' = 9, 'ALTER ADD COLUMN' = 10, 'ALTER MODIFY COLUMN' = 11, 'ALTER DROP COLUMN' = 12, 'ALTER COMMENT COLUMN' = 13, 'ALTER CLEAR COLUMN' = 14, 'ALTER RENAME COLUMN' = 15, 'ALTER MATERIALIZE COLUMN' = 16, 'ALTER COLUMN' = 17, 'ALTER MODIFY COMMENT' = 18, 'ALTER ORDER BY' = 19, 'ALTER SAMPLE BY' = 20, 'ALTER ADD INDEX' = 21, 'ALTER DROP INDEX' = 22, 'ALTER MATERIALIZE INDEX' = 23, 'ALTER CLEAR INDEX' = 24, 'ALTER INDEX' = 25, 'ALTER ADD PROJECTION' = 26, 'ALTER DROP PROJECTION' = 27, 'ALTER MATERIALIZE PROJECTION' = 28, 'ALTER CLEAR PROJECTION' = 29, 'ALTER PROJECTION' = 30, 'ALTER ADD CONSTRAINT' = 31, 'ALTER DROP CONSTRAINT' = 32, 'ALTER CONSTRAINT' = 33, 'ALTER TTL' = 34, 'ALTER MATERIALIZE TTL' = 35, 'ALTER SETTINGS' = 36, 'ALTER MOVE PARTITION' = 37, 'ALTER FETCH PARTITION' = 38, 'ALTER FREEZE PARTITION' = 39, 'ALTER DATABASE SETTINGS' = 40, 'ALTER TABLE' = 41, 'ALTER DATABASE' = 42, 'ALTER VIEW REFRESH' = 43, 'ALTER VIEW MODIFY QUERY' = 44, 'ALTER VIEW' = 45, 'ALTER' = 46, 'CREATE DATABASE' = 47, 'CREATE TABLE' = 48, 'CREATE VIEW' = 49, 'CREATE DICTIONARY' = 50, 'CREATE TEMPORARY TABLE' = 51, 'CREATE FUNCTION' = 52, 'CREATE' = 53, 'DROP DATABASE' = 54, 'DROP TABLE' = 55, 'DROP VIEW' = 56, 'DROP DICTIONARY' = 57, 'DROP FUNCTION' = 58, 'DROP' = 59, 'TRUNCATE' = 60, 'OPTIMIZE' = 61, 'BACKUP' = 62, 'KILL QUERY' = 63, 'KILL TRANSACTION' = 64, 'MOVE PARTITION BETWEEN SHARDS' = 65, 'CREATE USER' = 66, 'ALTER USER' = 67, 'DROP USER' = 68, 'CREATE ROLE' = 69, 'ALTER ROLE' = 70, 'DROP ROLE' = 71, 'ROLE ADMIN' = 72, 'CREATE ROW POLICY' = 73, 'ALTER ROW POLICY' = 74, 'DROP ROW POLICY' = 75, 'CREATE QUOTA' = 76, 'ALTER QUOTA' = 77, 'DROP QUOTA' = 78, 'CREATE SETTINGS PROFILE' = 79, 'ALTER SETTINGS PROFILE' = 80, 'DROP SETTINGS PROFILE' = 81, 'SHOW USERS' = 82, 'SHOW ROLES' = 83, 'SHOW ROW POLICIES' = 84, 'SHOW QUOTAS' = 85, 'SHOW SETTINGS PROFILES' = 86, 'SHOW ACCESS' = 87, 'ACCESS MANAGEMENT' = 88, 'SYSTEM SHUTDOWN' = 89, 'SYSTEM DROP DNS CACHE' = 90, 'SYSTEM DROP MARK CACHE' = 91, 'SYSTEM DROP UNCOMPRESSED CACHE' = 92, 'SYSTEM DROP MMAP CACHE' = 93, 'SYSTEM DROP COMPILED EXPRESSION CACHE' = 94, 'SYSTEM DROP FILESYSTEM CACHE' = 95, 'SYSTEM DROP SCHEMA CACHE' = 96, 'SYSTEM DROP CACHE' = 97, 'SYSTEM RELOAD CONFIG' = 98, 'SYSTEM RELOAD USERS' = 99, 'SYSTEM RELOAD SYMBOLS' = 100, 'SYSTEM RELOAD DICTIONARY' = 101, 'SYSTEM RELOAD MODEL' = 102, 'SYSTEM RELOAD FUNCTION' = 103, 'SYSTEM RELOAD EMBEDDED DICTIONARIES' = 104, 'SYSTEM RELOAD' = 105, 'SYSTEM RESTART DISK' = 106, 'SYSTEM MERGES' = 107, 'SYSTEM TTL MERGES' = 108, 'SYSTEM FETCHES' = 109, 'SYSTEM MOVES' = 110, 'SYSTEM DISTRIBUTED SENDS' = 111, 'SYSTEM REPLICATED SENDS' = 112, 'SYSTEM SENDS' = 113, 'SYSTEM REPLICATION QUEUES' = 114, 'SYSTEM DROP REPLICA' = 115, 'SYSTEM SYNC REPLICA' = 116, 'SYSTEM RESTART REPLICA' = 117, 'SYSTEM RESTORE REPLICA' = 118, 'SYSTEM SYNC DATABASE REPLICA' = 119, 'SYSTEM SYNC TRANSACTION LOG' = 120, 'SYSTEM FLUSH DISTRIBUTED' = 121, 'SYSTEM FLUSH LOGS' = 122, 'SYSTEM FLUSH' = 123, 'SYSTEM THREAD FUZZER' = 124, 'SYSTEM UNFREEZE' = 125, 'SYSTEM' = 126, 'dictGet' = 127, 'addressToLine' = 128, 'addressToLineWithInlines' = 129, 'addressToSymbol' = 130, 'demangle' = 131, 'INTROSPECTION' = 132, 'FILE' = 133, 'URL' = 134, 'REMOTE' = 135, 'MONGO' = 136, 'MEILISEARCH' = 137, 'MYSQL' = 138, 'POSTGRES' = 139, 'SQLITE' = 
140, 'ODBC' = 141, 'JDBC' = 142, 'HDFS' = 143, 'S3' = 144, 'HIVE' = 145, 'SOURCES' = 146, 'CLUSTER' = 147, 'ALL' = 148, 'NONE' = 149))
)
ENGINE = SystemPrivileges
COMMENT 'SYSTEM TABLE is built on the fly.'

@ -1,5 +1,3 @@
-- Tags: no-s3-storage

drop table if exists test_agg_proj_02302;

create table test_agg_proj_02302 (x Int32, y Int32, PROJECTION x_plus_y (select sum(x - y), argMax(x, y) group by x + y)) ENGINE = MergeTree order by tuple() settings index_granularity = 1;

@ -1,3 +1,3 @@
-- Tags: no-fasttest

DESCRIBE CACHE 's3_cache';
DESCRIBE FILESYSTEM CACHE 's3_cache';

12
tests/queries/0_stateless/02344_show_caches.reference
Normal file
@ -0,0 +1,12 @@
cached_azure
s3_cache_2
s3_cache_4
s3_cache_5
local_cache
s3_cache_small
local_cache_2
local_cache_3
s3_cache_multi
s3_cache_3
s3_cache
s3_cache_multi_2

2
tests/queries/0_stateless/02344_show_caches.sql
Normal file
@ -0,0 +1,2 @@
-- Tags: no-fasttest, no-replicated-database, no-cpu-aarch64
SHOW FILESYSTEM CACHES;

@ -0,0 +1,22 @@
<clickhouse>
    <logger>
        <level>trace</level>
        <console>true</console>
    </logger>

    <tcp_port>9000</tcp_port>
    <allow_implicit_no_password>0</allow_implicit_no_password>
    <path>.</path>
    <mark_cache_size>0</mark_cache_size>
    <!-- Sources to read users, roles, access rights, profiles of settings, quotas. -->
    <user_directories>
        <users_xml>
            <!-- Path to configuration file with predefined users. -->
            <path>users.xml</path>
        </users_xml>
        <local_directory>
            <!-- Path to folder where users created by SQL commands are stored. -->
            <path>./</path>
        </local_directory>
    </user_directories>
</clickhouse>

85
tests/queries/0_stateless/02422_allow_implicit_no_password.sh
Executable file
@ -0,0 +1,85 @@
#!/usr/bin/env bash
# Tags: no-tsan, no-asan, no-ubsan, no-msan, no-parallel, no-fasttest
# Tag no-tsan: requires jemalloc to track small allocations
# Tag no-asan: requires jemalloc to track small allocations
# Tag no-ubsan: requires jemalloc to track small allocations
# Tag no-msan: requires jemalloc to track small allocations

CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
# shellcheck source=../shell_config.sh
. "$CURDIR"/../shell_config.sh

cp /etc/clickhouse-server/users.xml "$CURDIR"/users.xml
sed -i 's/<password><\/password>/<password_sha256_hex>c64c5e4e53ea1a9f1427d2713b3a22bbebe8940bc807adaf654744b1568c70ab<\/password_sha256_hex>/g' "$CURDIR"/users.xml
sed -i 's/<!-- <access_management>1<\/access_management> -->/<access_management>1<\/access_management>/g' "$CURDIR"/users.xml

server_opts=(
    "--config-file=$CURDIR/$(basename "${BASH_SOURCE[0]}" .sh).config.xml"
    "--"
    # to avoid multiple listen sockets (complexity for port discovering)
    "--listen_host=127.1"
    # we will discover the real port later.
    "--tcp_port=0"
    "--shutdown_wait_unfinished=0"
)

CLICKHOUSE_WATCHDOG_ENABLE=0 $CLICKHOUSE_SERVER_BINARY "${server_opts[@]}" &> clickhouse-server.stderr &
server_pid=$!

server_port=
i=0 retries=300
# wait until server will start to listen (max 30 seconds)
while [[ -z $server_port ]] && [[ $i -lt $retries ]]; do
    server_port=$(lsof -n -a -P -i tcp -s tcp:LISTEN -p $server_pid 2>/dev/null | awk -F'[ :]' '/LISTEN/ { print $(NF-1) }')
    ((++i))
    sleep 0.1
    if ! kill -0 $server_pid >& /dev/null; then
        echo "No server (pid $server_pid)"
        break
    fi
done
if [[ -z $server_port ]]; then
    echo "Cannot wait for LISTEN socket" >&2
    exit 1
fi

# wait for the server to start accepting tcp connections (max 30 seconds)
i=0 retries=300
while ! $CLICKHOUSE_CLIENT_BINARY -u default --password='1w2swhb1' --host 127.1 --port "$server_port" --format Null -q 'select 1' 2>/dev/null && [[ $i -lt $retries ]]; do
    sleep 0.1
    if ! kill -0 $server_pid >& /dev/null; then
        echo "No server (pid $server_pid)"
        break
    fi
done

if ! $CLICKHOUSE_CLIENT_BINARY -u default --password='1w2swhb1' --host 127.1 --port "$server_port" --format Null -q 'select 1'; then
    echo "Cannot wait until server will start accepting connections on <tcp_port>" >&2
    exit 1
fi

$CLICKHOUSE_CLIENT_BINARY -u default --password='1w2swhb1' --host 127.1 --port "$server_port" -q "DROP USER IF EXISTS u1_02422, u2_02422, u3_02422";

$CLICKHOUSE_CLIENT_BINARY -u default --password='1w2swhb1' --host 127.1 --port "$server_port" -q "CREATE USER u1_02422" " -- { serverError 516 } --" &> /dev/null ;

$CLICKHOUSE_CLIENT_BINARY -u default --password='1w2swhb1' --host 127.1 --port "$server_port" -q "CREATE USER u2_02422 IDENTIFIED WITH no_password "

$CLICKHOUSE_CLIENT_BINARY -u default --password='1w2swhb1' --host 127.1 --port "$server_port" -q "CREATE USER u3_02422 IDENTIFIED BY 'qwe123'";

$CLICKHOUSE_CLIENT_BINARY -u default --password='1w2swhb1' --host 127.1 --port "$server_port" -q "DROP USER u2_02422, u3_02422";

# no sleep, since flushing to stderr should not be buffered.
grep 'User is not allowed to Create users' clickhouse-server.stderr

# send TERM and save the error code to ensure that it is 0 (EXIT_SUCCESS)
kill $server_pid
wait $server_pid
return_code=$?

rm -f clickhouse-server.stderr
rm -f "$CURDIR"/users.xml

exit $return_code