### Table of Contents
**[ClickHouse release v24.2, 2024-02-29](#242)**<br/>
**[ClickHouse release v24.1, 2024-01-30](#241)**<br/>
**[Changelog for 2023](https://clickhouse.com/docs/en/whats-new/changelog/2023/)**<br/>
# 2024 Changelog
### <a id="242"></a> ClickHouse release 24.2, 2024-02-29
#### Backward Incompatible Change
* Validate suspicious/experimental types in nested types. Previously we didn't validate such types (except JSON) in nested types like Array/Tuple/Map. [#59385](https://github.com/ClickHouse/ClickHouse/pull/59385) ([Kruglov Pavel](https://github.com/Avogar)).
* Add sanity check for number of threads and block sizes. [#60138](https://github.com/ClickHouse/ClickHouse/pull/60138) ([Raúl Marín](https://github.com/Algunenano)).
* Don't infer floats in exponential notation by default. Add a setting `input_format_try_infer_exponent_floats` that will restore previous behaviour (disabled by default). Closes [#59476](https://github.com/ClickHouse/ClickHouse/issues/59476). [#59500](https://github.com/ClickHouse/ClickHouse/pull/59500) ([Kruglov Pavel](https://github.com/Avogar)).
* Allow alter operations to be surrounded by parentheses. The emission of parentheses can be controlled by the `format_alter_operations_with_parentheses` config. By default, in formatted queries the parentheses are emitted, as we store the formatted alter operations in some places as metadata (e.g. mutations). The new syntax clarifies queries where alter operations end in a list. E.g. `ALTER TABLE x MODIFY TTL date GROUP BY a, b, DROP COLUMN c` cannot be parsed properly with the old syntax, while in the new syntax the query `ALTER TABLE x (MODIFY TTL date GROUP BY a, b), (DROP COLUMN c)` is unambiguous. Older versions are not able to read the new syntax, so using it may cause issues if newer and older versions of ClickHouse are mixed in a single cluster. [#59532](https://github.com/ClickHouse/ClickHouse/pull/59532) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
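
A minimal sketch of the old and new `ALTER` syntax from the parentheses entry above (`x`, `a`, `b`, `c`, and `date` are the hypothetical names used in that entry):

```sql
-- Old syntax: it is ambiguous where the TTL GROUP BY list ends
-- and where the next alter operation begins.
ALTER TABLE x MODIFY TTL date GROUP BY a, b, DROP COLUMN c;

-- New syntax: wrapping each alter operation in parentheses
-- makes the boundary explicit.
ALTER TABLE x (MODIFY TTL date GROUP BY a, b), (DROP COLUMN c);
```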
#### New Feature
* Added new syntax that allows specifying a definer user in a View/Materialized View. This makes it possible to execute SELECTs/INSERTs from views without explicit grants on the underlying tables, so a view encapsulates the grants. [#54901](https://github.com/ClickHouse/ClickHouse/pull/54901) [#60439](https://github.com/ClickHouse/ClickHouse/pull/60439) ([pufit](https://github.com/pufit)).
* Try to detect file format automatically during schema inference if it's unknown in `file/s3/hdfs/url/azureBlobStorage` engines. Closes [#50576](https://github.com/ClickHouse/ClickHouse/issues/50576). [#59092](https://github.com/ClickHouse/ClickHouse/pull/59092) ([Kruglov Pavel](https://github.com/Avogar)).
* Implement auto-adjustment for asynchronous insert timeouts. The following settings are introduced: `async_insert_poll_timeout_ms`, `async_insert_use_adaptive_busy_timeout`, `async_insert_busy_timeout_min_ms`, `async_insert_busy_timeout_max_ms`, `async_insert_busy_timeout_increase_rate`, `async_insert_busy_timeout_decrease_rate`. [#58486](https://github.com/ClickHouse/ClickHouse/pull/58486) ([Julia Kartseva](https://github.com/jkartseva)).
* Allow to set up a quota for maximum sequential login failures. [#54737](https://github.com/ClickHouse/ClickHouse/pull/54737) ([Alexey Gerasimchuck](https://github.com/Demilivor)).
* A new aggregate function `groupArrayIntersect`. Follows up: [#49862](https://github.com/ClickHouse/ClickHouse/issues/49862). [#59598](https://github.com/ClickHouse/ClickHouse/pull/59598) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
* Backup & Restore support for `AzureBlobStorage`. Resolves [#50747](https://github.com/ClickHouse/ClickHouse/issues/50747). [#56988](https://github.com/ClickHouse/ClickHouse/pull/56988) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
* The user can now specify the template string directly in the query using `format_schema_rows_template` as an alternative to `format_template_row`. Closes [#31363](https://github.com/ClickHouse/ClickHouse/issues/31363). [#59088](https://github.com/ClickHouse/ClickHouse/pull/59088) ([Shaun Struwig](https://github.com/Blargian)).
* Implemented automatic conversion of merge tree tables of different kinds to replicated engine. Create empty `convert_to_replicated` file in table's data directory (`/clickhouse/store/xxx/xxxyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy/`) and that table will be converted automatically on next server start. [#57798](https://github.com/ClickHouse/ClickHouse/pull/57798) ([Kirill](https://github.com/kirillgarbar)).
* Added function `seriesOutliersTukey` to detect outliers in series data using Tukey's fences algorithm. [#58632](https://github.com/ClickHouse/ClickHouse/pull/58632) ([Bhavna Jindal](https://github.com/bhavnajindal)).
* Added query `ALTER TABLE table FORGET PARTITION partition` that removes ZooKeeper nodes, related to an empty partition. [#59507](https://github.com/ClickHouse/ClickHouse/pull/59507) ([Sergei Trifonov](https://github.com/serxa)). This is an expert-level feature.
* Support JWT credentials file for the NATS table engine. [#59543](https://github.com/ClickHouse/ClickHouse/pull/59543) ([Nickolaj Jepsen](https://github.com/nickolaj-jepsen)).
* Implemented `system.dns_cache` table, which can be useful for debugging DNS issues. [#59856](https://github.com/ClickHouse/ClickHouse/pull/59856) ([Kirill Nikiforov](https://github.com/allmazz)).
* The codec `LZ4HC` will accept a new level 2, which is faster than the previous minimum level 3, at the expense of less compression. In previous versions, `LZ4HC(2)` and less was the same as `LZ4HC(3)`. Author: [Cyan4973](https://github.com/Cyan4973). [#60090](https://github.com/ClickHouse/ClickHouse/pull/60090) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Implemented `system.dns_cache` table, which can be useful for debugging DNS issues. New server setting `dns_cache_max_size`. [#60257](https://github.com/ClickHouse/ClickHouse/pull/60257) ([Kirill Nikiforov](https://github.com/allmazz)).
* Support single-argument version for the `merge` table function, as `merge(['db_name', ] 'tables_regexp')`. [#60372](https://github.com/ClickHouse/ClickHouse/pull/60372) ([豪肥肥](https://github.com/HowePa)).
* Support negative positional arguments. Closes [#57736](https://github.com/ClickHouse/ClickHouse/issues/57736). [#58292](https://github.com/ClickHouse/ClickHouse/pull/58292) ([flynn](https://github.com/ucasfl)).
* Support specifying a set of permitted users for specific S3 settings in config using `user` key. [#60144](https://github.com/ClickHouse/ClickHouse/pull/60144) ([Antonio Andelic](https://github.com/antonio2368)).
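
A sketch of the single-argument `merge` table function described above (the `^hits_` regexp and the `default` database are hypothetical):

```sql
-- Single-argument form: matches tables in the current database.
SELECT count() FROM merge('^hits_');

-- Two-argument form, as before.
SELECT count() FROM merge('default', '^hits_');
```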
#### Experimental Feature
* Add function `variantType` that returns Enum with variant type name for each row. [#59398](https://github.com/ClickHouse/ClickHouse/pull/59398) ([Kruglov Pavel](https://github.com/Avogar)).
* Support `LEFT JOIN`, `ALL INNER JOIN`, and simple subqueries for parallel replicas (only with analyzer). New setting `parallel_replicas_prefer_local_join` chooses local `JOIN` execution (by default) vs `GLOBAL JOIN`. All tables should exist on every replica from `cluster_for_parallel_replicas`. New settings `min_external_table_block_size_rows` and `min_external_table_block_size_bytes` are used to squash small blocks that are sent for temporary tables (only with analyzer). [#58916](https://github.com/ClickHouse/ClickHouse/pull/58916) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Allow concurrent table creation in the `Replicated` database during adding or recovering a new replica. [#59277](https://github.com/ClickHouse/ClickHouse/pull/59277) ([Konstantin Bogdanov](https://github.com/thevar1able)).
* Implement comparison operators for `Variant` values and proper `Field` insertion into a `Variant` column. Don't allow creating a `Variant` type with similar variant types by default (allowed under the setting `allow_suspicious_variant_types`). Closes [#59996](https://github.com/ClickHouse/ClickHouse/issues/59996). Closes [#59850](https://github.com/ClickHouse/ClickHouse/issues/59850). [#60198](https://github.com/ClickHouse/ClickHouse/pull/60198) ([Kruglov Pavel](https://github.com/Avogar)).
* Disable parallel replicas JOIN with CTE (not analyzer) [#59239](https://github.com/ClickHouse/ClickHouse/pull/59239) ([Raúl Marín](https://github.com/Algunenano)).
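
A minimal sketch of the `variantType` function from the entry above, assuming the experimental `Variant` type is enabled via the `allow_experimental_variant_type` setting (the table and values are hypothetical):

```sql
SET allow_experimental_variant_type = 1;

CREATE TABLE t (v Variant(UInt64, String)) ENGINE = Memory;
INSERT INTO t VALUES (1), ('Hello');

-- Returns an Enum with the variant type name for each row,
-- e.g. 'UInt64' for 1 and 'String' for 'Hello'.
SELECT v, variantType(v) FROM t;
```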
#### Performance Improvement
* Primary key will use less amount of memory. [#60049](https://github.com/ClickHouse/ClickHouse/pull/60049) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Improve memory usage for primary key and some other operations. [#60050](https://github.com/ClickHouse/ClickHouse/pull/60050) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* The tables' primary keys will be loaded in memory lazily on first access. This is controlled by the new MergeTree setting `primary_key_lazy_load`, which is on by default. This provides several advantages: - it will not be loaded for tables that are not used; - if there is not enough memory, an exception will be thrown on first use instead of at server startup. This provides several disadvantages: - the latency of loading the primary key will be paid on the first query rather than before accepting connections; this theoretically may introduce a thundering-herd problem. This closes [#11188](https://github.com/ClickHouse/ClickHouse/issues/11188). [#60093](https://github.com/ClickHouse/ClickHouse/pull/60093) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Vectorized distance functions used in vector search. [#58866](https://github.com/ClickHouse/ClickHouse/pull/58866) ([Robert Schulze](https://github.com/rschu1ze)).
* Vectorized function `dotProduct` which is useful for vector search. [#60202](https://github.com/ClickHouse/ClickHouse/pull/60202) ([Robert Schulze](https://github.com/rschu1ze)).
* Add short-circuit ability for `dictGetOrDefault` function. Closes [#52098](https://github.com/ClickHouse/ClickHouse/issues/52098). [#57767](https://github.com/ClickHouse/ClickHouse/pull/57767) ([jsc0218](https://github.com/jsc0218)).
* Keeper improvement: cache only a certain amount of logs in-memory controlled by `latest_logs_cache_size_threshold` and `commit_logs_cache_size_threshold`. [#59460](https://github.com/ClickHouse/ClickHouse/pull/59460) ([Antonio Andelic](https://github.com/antonio2368)).
* Keeper improvement: reduce size of data node even more. [#59592](https://github.com/ClickHouse/ClickHouse/pull/59592) ([Antonio Andelic](https://github.com/antonio2368)).
* Continue optimizing branch miss of `if` function when result type is `Float*/Decimal*/*Int*`, follow up of https://github.com/ClickHouse/ClickHouse/pull/57885. [#59148](https://github.com/ClickHouse/ClickHouse/pull/59148) ([李扬](https://github.com/taiyang-li)).
* Optimize `if` function when the input type is `Map`, the speed-up is up to ~10x. [#59413](https://github.com/ClickHouse/ClickHouse/pull/59413) ([李扬](https://github.com/taiyang-li)).
* Improve performance of the `Int8` type by implementing strict aliasing (we already have it for `UInt8` and all other integer types). [#59485](https://github.com/ClickHouse/ClickHouse/pull/59485) ([Raúl Marín](https://github.com/Algunenano)).
* Optimize performance of sum/avg conditionally for bigint and big decimal types by reducing branch miss. [#59504](https://github.com/ClickHouse/ClickHouse/pull/59504) ([李扬](https://github.com/taiyang-li)).
* Improve performance of SELECTs with active mutations. [#59531](https://github.com/ClickHouse/ClickHouse/pull/59531) ([Azat Khuzhin](https://github.com/azat)).
* Optimized function `isNotNull` with AVX2. [#59621](https://github.com/ClickHouse/ClickHouse/pull/59621) ([李扬](https://github.com/taiyang-li)).
* Improve ASOF JOIN performance for sorted or almost sorted data. [#59731](https://github.com/ClickHouse/ClickHouse/pull/59731) ([Maksim Kita](https://github.com/kitaisreal)).
* The previous default value of 1 MB for `async_insert_max_data_size` turned out to be too small; the new default is 10 MiB. [#59536](https://github.com/ClickHouse/ClickHouse/pull/59536) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Use multiple threads while reading the metadata of tables from a backup while executing the RESTORE command. [#60040](https://github.com/ClickHouse/ClickHouse/pull/60040) ([Vitaly Baranov](https://github.com/vitlibar)).
* Now if `StorageBuffer` has more than 1 shard (`num_layers` > 1) background flush will happen simultaneously for all shards in multiple threads. [#60111](https://github.com/ClickHouse/ClickHouse/pull/60111) ([alesapin](https://github.com/alesapin)).
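
The `primary_key_lazy_load` MergeTree setting described above can be sketched as follows (the table definition is hypothetical; the setting is on by default, so setting it to 0 restores eager loading at server startup):

```sql
CREATE TABLE hits
(
    EventDate Date,
    UserID UInt64
)
ENGINE = MergeTree
ORDER BY (EventDate, UserID)
SETTINGS primary_key_lazy_load = 0;  -- load the primary key eagerly
```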
#### Improvement
* When the output format is one of the Pretty formats and a block consists of a single numeric value exceeding one million, a readable number will be printed on the right of the table. [#60379](https://github.com/ClickHouse/ClickHouse/pull/60379) ([rogeryk](https://github.com/rogeryk)).
* Added settings `split_parts_ranges_into_intersecting_and_non_intersecting_final` and `split_intersecting_parts_ranges_into_layers_final`. These settings disable optimizations for queries with `FINAL` and are intended mainly for debugging, though they can also lower memory usage at the expense of performance. [#59705](https://github.com/ClickHouse/ClickHouse/pull/59705) ([Maksim Kita](https://github.com/kitaisreal)).
* Rename the setting `extract_kvp_max_pairs_per_row` to `extract_key_value_pairs_max_pairs_per_row`. The issue (unnecessary abbreviation in the setting name) was introduced in https://github.com/ClickHouse/ClickHouse/pull/43606. Fix the documentation of this setting. [#59683](https://github.com/ClickHouse/ClickHouse/pull/59683) ([Alexey Milovidov](https://github.com/alexey-milovidov)). [#59960](https://github.com/ClickHouse/ClickHouse/pull/59960) ([jsc0218](https://github.com/jsc0218)).
* Running `ALTER COLUMN MATERIALIZE` on a column with `DEFAULT` or `MATERIALIZED` expression now precisely follows the semantics. [#58023](https://github.com/ClickHouse/ClickHouse/pull/58023) ([Duc Canh Le](https://github.com/canhld94)).
* Enabled an exponential backoff logic for errors during mutations. It will reduce the CPU usage, memory usage and log file sizes. [#58036](https://github.com/ClickHouse/ClickHouse/pull/58036) ([MikhailBurdukov](https://github.com/MikhailBurdukov)).
* Improve counting of the `InitialQuery` profile event. [#58195](https://github.com/ClickHouse/ClickHouse/pull/58195) ([Unalian](https://github.com/Unalian)).
* Allow to define `volume_priority` in `storage_configuration`. [#58533](https://github.com/ClickHouse/ClickHouse/pull/58533) ([Andrey Zvonov](https://github.com/zvonand)).
* Add support for the `Date32` type in the `T64` codec. [#58738](https://github.com/ClickHouse/ClickHouse/pull/58738) ([Hongbin Ma](https://github.com/binmahone)).
* Allow trailing commas in types with several items. [#59119](https://github.com/ClickHouse/ClickHouse/pull/59119) ([Aleksandr Musorin](https://github.com/AVMusorin)).
* Settings for the Distributed table engine can now be specified in the server configuration file (similar to MergeTree settings), e.g. `<distributed> <flush_on_detach>false</flush_on_detach> </distributed>`. [#59291](https://github.com/ClickHouse/ClickHouse/pull/59291) ([Azat Khuzhin](https://github.com/azat)).
* Retry disconnects and expired sessions when reading `system.zookeeper`. This is helpful when reading many rows from `system.zookeeper` table especially in the presence of fault-injected disconnects. [#59388](https://github.com/ClickHouse/ClickHouse/pull/59388) ([Alexander Gololobov](https://github.com/davenger)).
* Do not interpret numbers with leading zeroes as octals when `input_format_values_interpret_expressions=0`. [#59403](https://github.com/ClickHouse/ClickHouse/pull/59403) ([Joanna Hulboj](https://github.com/jh0x)).
* At startup and whenever config files are changed, ClickHouse updates the hard memory limits of its total memory tracker. These limits are computed based on various server settings and cgroups limits (on Linux). Previously, setting `/sys/fs/cgroup/memory.max` (for cgroups v2) was hard-coded. As a result, cgroup v2 memory limits configured for nested groups (hierarchies), e.g. `/sys/fs/cgroup/my/nested/group/memory.max` were ignored. This is now fixed. The behavior of v1 memory limits remains unchanged. [#59435](https://github.com/ClickHouse/ClickHouse/pull/59435) ([Robert Schulze](https://github.com/rschu1ze)).
* New profile events added to observe the time spent on calculating PK/projections/secondary indices during `INSERT`-s. [#59436](https://github.com/ClickHouse/ClickHouse/pull/59436) ([Nikita Taranov](https://github.com/nickitat)).
* Allow to define a starting point for S3Queue with Ordered mode at the creation using a setting `s3queue_last_processed_path`. [#59446](https://github.com/ClickHouse/ClickHouse/pull/59446) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Made comments for system tables also available in `system.tables` in `clickhouse-local`. [#59493](https://github.com/ClickHouse/ClickHouse/pull/59493) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* `system.zookeeper` table: previously the whole result was accumulated in memory and returned as one big chunk. This change should help to reduce memory consumption when reading many rows from `system.zookeeper`, allow showing intermediate progress (how many rows have been read so far) and avoid hitting connection timeout when result set is big. [#59545](https://github.com/ClickHouse/ClickHouse/pull/59545) ([Alexander Gololobov](https://github.com/davenger)).
* Now dashboard understands both compressed and uncompressed state of URL's #hash (backward compatibility). Continuation of [#59124](https://github.com/ClickHouse/ClickHouse/issues/59124) . [#59548](https://github.com/ClickHouse/ClickHouse/pull/59548) ([Amos Bird](https://github.com/amosbird)).
* Bumped Intel QPL (used by the `DEFLATE_QPL` codec) from v1.3.1 to v1.4.0. Also fixed a bug in the polling timeout mechanism: in some cases the timeout did not work properly, and when it fired, the IAA accelerator and the CPU could process the buffer concurrently. Now we make sure the IAA codec status is not `QPL_STS_BEING_PROCESSED` before falling back to the software codec. [#59551](https://github.com/ClickHouse/ClickHouse/pull/59551) ([jasperzhu](https://github.com/jinjunzh)).
* Do not show a warning about the server version in ClickHouse Cloud because ClickHouse Cloud handles seamless upgrades automatically. [#59657](https://github.com/ClickHouse/ClickHouse/pull/59657) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* After self-extraction, the temporary binary is moved instead of copied. [#59661](https://github.com/ClickHouse/ClickHouse/pull/59661) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
* Fix stack unwinding on Apple macOS. This closes [#53653](https://github.com/ClickHouse/ClickHouse/issues/53653). [#59690](https://github.com/ClickHouse/ClickHouse/pull/59690) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Check for stack overflow in parsers even if the user misconfigured the `max_parser_depth` setting to a very high value. This closes [#59622](https://github.com/ClickHouse/ClickHouse/issues/59622). [#59697](https://github.com/ClickHouse/ClickHouse/pull/59697) ([Alexey Milovidov](https://github.com/alexey-milovidov)). [#60434](https://github.com/ClickHouse/ClickHouse/pull/60434)
* Unify XML and SQL created named collection behaviour in Kafka storage. [#59710](https://github.com/ClickHouse/ClickHouse/pull/59710) ([Pervakov Grigorii](https://github.com/GrigoryPervakov)).
* Fixed an issue where background merges could get stuck in an endless loop when `merge_max_block_size_bytes` is small enough and tables contain wide rows (strings or tuples). Follow-up for https://github.com/ClickHouse/ClickHouse/pull/59340. [#59812](https://github.com/ClickHouse/ClickHouse/pull/59812) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Allow `uuid` in `replica_path` if `CREATE TABLE` explicitly has it. [#59908](https://github.com/ClickHouse/ClickHouse/pull/59908) ([Azat Khuzhin](https://github.com/azat)).
* Added the column `metadata_version` of `ReplicatedMergeTree` tables to the `system.tables` system table. [#59942](https://github.com/ClickHouse/ClickHouse/pull/59942) ([Maksim Kita](https://github.com/kitaisreal)).
* Keeper improvement: send only Keeper related metrics/events for Prometheus. [#59945](https://github.com/ClickHouse/ClickHouse/pull/59945) ([Antonio Andelic](https://github.com/antonio2368)).
* The dashboard will display metrics across different ClickHouse versions even if the structure of system tables has changed after the upgrade. [#59967](https://github.com/ClickHouse/ClickHouse/pull/59967) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Allow loading AZ info from a file. [#59976](https://github.com/ClickHouse/ClickHouse/pull/59976) ([Konstantin Bogdanov](https://github.com/thevar1able)).
* Keeper improvement: add retries on failures for Disk related operations. [#59980](https://github.com/ClickHouse/ClickHouse/pull/59980) ([Antonio Andelic](https://github.com/antonio2368)).
* Add new config setting `backups.remove_backup_files_after_failure`: `<clickhouse> <backups> <remove_backup_files_after_failure>true</remove_backup_files_after_failure> </backups> </clickhouse>`. [#60002](https://github.com/ClickHouse/ClickHouse/pull/60002) ([Vitaly Baranov](https://github.com/vitlibar)).
* Copying an S3 file to GCP now falls back to a buffered copy in case GCP returns an `Internal Error` with the `GATEWAY_TIMEOUT` HTTP error code. [#60164](https://github.com/ClickHouse/ClickHouse/pull/60164) ([Maksim Kita](https://github.com/kitaisreal)).
* Short circuit execution for `ULIDStringToDateTime`. [#60211](https://github.com/ClickHouse/ClickHouse/pull/60211) ([Juan Madurga](https://github.com/jlmadurga)).
* Added `query_id` column for tables `system.backups` and `system.backup_log`. Added error stacktrace to `error` column. [#60220](https://github.com/ClickHouse/ClickHouse/pull/60220) ([Maksim Kita](https://github.com/kitaisreal)).
* Connections through the MySQL port now automatically run with setting `prefer_column_name_to_alias = 1` to support QuickSight out-of-the-box. Also, settings `mysql_map_string_to_text_in_show_columns` and `mysql_map_fixed_string_to_text_in_show_columns` are now enabled by default, affecting also only MySQL connections. This increases compatibility with more BI tools. [#60365](https://github.com/ClickHouse/ClickHouse/pull/60365) ([Robert Schulze](https://github.com/rschu1ze)).
* Fix a race condition in JavaScript code leading to duplicate charts on top of each other. [#60392](https://github.com/ClickHouse/ClickHouse/pull/60392) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
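
A small sketch of the leading-zeroes behaviour described above for the `Values` format, with `input_format_values_interpret_expressions = 0` (the table is hypothetical):

```sql
SET input_format_values_interpret_expressions = 0;

CREATE TABLE nums (n UInt32) ENGINE = Memory;

-- Per the entry above, '0123' is now parsed as the decimal
-- number 123, not as the octal literal 0123 (decimal 83).
INSERT INTO nums VALUES (0123);

SELECT n FROM nums;
```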
#### Build/Testing/Packaging Improvement
* Added builds and tests with coverage collection with introspection. Continuation of [#56102](https://github.com/ClickHouse/ClickHouse/issues/56102). [#58792](https://github.com/ClickHouse/ClickHouse/pull/58792) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Update the Rust toolchain in `corrosion-cmake` when the CMake cross-compilation toolchain variable is set. [#59309](https://github.com/ClickHouse/ClickHouse/pull/59309) ([Aris Tritas](https://github.com/aris-aiven)).
* Add some fuzzing to ASTLiterals. [#59383](https://github.com/ClickHouse/ClickHouse/pull/59383) ([Raúl Marín](https://github.com/Algunenano)).
* If you want to run initdb scripts every time the ClickHouse container starts, set the environment variable `CLICKHOUSE_ALWAYS_RUN_INITDB_SCRIPTS`. [#59808](https://github.com/ClickHouse/ClickHouse/pull/59808) ([Alexander Nikolaev](https://github.com/AlexNik)).
* Removed the ability to disable generic ClickHouse components (like server/client/...), but kept it for components that require extra libraries (like ODBC or Keeper). [#59857](https://github.com/ClickHouse/ClickHouse/pull/59857) ([Azat Khuzhin](https://github.com/azat)).
* Query fuzzer will fuzz SETTINGS inside queries. [#60087](https://github.com/ClickHouse/ClickHouse/pull/60087) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Add support for building ClickHouse with clang-19 (master). [#60448](https://github.com/ClickHouse/ClickHouse/pull/60448) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
#### Bug Fix (user-visible misbehavior in an official stable release)
* Fix a "Non-ready set" error in TTL WHERE. [#57430](https://github.com/ClickHouse/ClickHouse/pull/57430) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix a bug in the `quantilesGK` function [#58216](https://github.com/ClickHouse/ClickHouse/pull/58216) ([李扬](https://github.com/taiyang-li)).
* Fix a wrong behavior with `intDiv` for Decimal arguments [#59243](https://github.com/ClickHouse/ClickHouse/pull/59243) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
* Fix `translate` with FixedString input [#59356](https://github.com/ClickHouse/ClickHouse/pull/59356) ([Raúl Marín](https://github.com/Algunenano)).
* Fix digest calculation in Keeper [#59439](https://github.com/ClickHouse/ClickHouse/pull/59439) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix stacktraces for binaries without debug symbols [#59444](https://github.com/ClickHouse/ClickHouse/pull/59444) ([Azat Khuzhin](https://github.com/azat)).
* Fix `ASTAlterCommand::formatImpl` in case of column specific settings… [#59445](https://github.com/ClickHouse/ClickHouse/pull/59445) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Fix `SELECT * FROM [...] ORDER BY ALL` with Analyzer [#59462](https://github.com/ClickHouse/ClickHouse/pull/59462) ([zhongyuankai](https://github.com/zhongyuankai)).
* Fix possible uncaught exception during distributed query cancellation [#59487](https://github.com/ClickHouse/ClickHouse/pull/59487) ([Azat Khuzhin](https://github.com/azat)).
* Make MAX use the same rules as permutation for complex types [#59498](https://github.com/ClickHouse/ClickHouse/pull/59498) ([Raúl Marín](https://github.com/Algunenano)).
* Fix corner case when passing `update_insert_deduplication_token_in_dependent_materialized_views` [#59544](https://github.com/ClickHouse/ClickHouse/pull/59544) ([Jordi Villar](https://github.com/jrdi)).
* Fix incorrect result of arrayElement / map on empty value [#59594](https://github.com/ClickHouse/ClickHouse/pull/59594) ([Raúl Marín](https://github.com/Algunenano)).
* Fix crash in topK when merging empty states [#59603](https://github.com/ClickHouse/ClickHouse/pull/59603) ([Raúl Marín](https://github.com/Algunenano)).
* Fix distributed table with a constant sharding key [#59606](https://github.com/ClickHouse/ClickHouse/pull/59606) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix KQL issue found by WingFuzz [#59626](https://github.com/ClickHouse/ClickHouse/pull/59626) ([Yong Wang](https://github.com/kashwy)).
* Fix error "Read beyond last offset" for AsynchronousBoundedReadBuffer [#59630](https://github.com/ClickHouse/ClickHouse/pull/59630) ([Vitaly Baranov](https://github.com/vitlibar)).
* Maintain function alias in RewriteSumFunctionWithSumAndCountVisitor [#59658](https://github.com/ClickHouse/ClickHouse/pull/59658) ([Raúl Marín](https://github.com/Algunenano)).
* Fix query start time on non initial queries [#59662](https://github.com/ClickHouse/ClickHouse/pull/59662) ([Raúl Marín](https://github.com/Algunenano)).
* Validate types of arguments for `minmax` skipping index [#59733](https://github.com/ClickHouse/ClickHouse/pull/59733) ([Anton Popov](https://github.com/CurtizJ)).
* Fix leftPad / rightPad function with FixedString input [#59739](https://github.com/ClickHouse/ClickHouse/pull/59739) ([Raúl Marín](https://github.com/Algunenano)).
* Fix AST fuzzer issue in function `countMatches` [#59752](https://github.com/ClickHouse/ClickHouse/pull/59752) ([Robert Schulze](https://github.com/rschu1ze)).
* RabbitMQ: fix having neither acked nor nacked messages [#59775](https://github.com/ClickHouse/ClickHouse/pull/59775) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix StorageURL doing some of the query execution in single thread [#59833](https://github.com/ClickHouse/ClickHouse/pull/59833) ([Michael Kolupaev](https://github.com/al13n321)).
* S3Queue: fix uninitialized value [#59897](https://github.com/ClickHouse/ClickHouse/pull/59897) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix parsing of partition expressions surrounded by parens [#59901](https://github.com/ClickHouse/ClickHouse/pull/59901) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Fix crash in JSONColumnsWithMetadata format over HTTP [#59925](https://github.com/ClickHouse/ClickHouse/pull/59925) ([Kruglov Pavel](https://github.com/Avogar)).
* Do not rewrite sum to count if the return value differs in Analyzer [#59926](https://github.com/ClickHouse/ClickHouse/pull/59926) ([Azat Khuzhin](https://github.com/azat)).
* UniqExactSet read crash fix [#59928](https://github.com/ClickHouse/ClickHouse/pull/59928) ([Maksim Kita](https://github.com/kitaisreal)).
* ReplicatedMergeTree invalid metadata_version fix [#59946](https://github.com/ClickHouse/ClickHouse/pull/59946) ([Maksim Kita](https://github.com/kitaisreal)).
* Fix data race in `StorageDistributed` [#59987](https://github.com/ClickHouse/ClickHouse/pull/59987) ([Nikita Taranov](https://github.com/nickitat)).
* Docker: run init scripts when option is enabled rather than disabled [#59991](https://github.com/ClickHouse/ClickHouse/pull/59991) ([jktng](https://github.com/jktng)).
* Fix INSERT into `SQLite` with single quote (by escaping single quotes with a quote instead of backslash) [#60015](https://github.com/ClickHouse/ClickHouse/pull/60015) ([Azat Khuzhin](https://github.com/azat)).
* Fix several logical errors in `arrayFold` [#60022](https://github.com/ClickHouse/ClickHouse/pull/60022) ([Raúl Marín](https://github.com/Algunenano)).
* Fix optimize_uniq_to_count removing the column alias [#60026](https://github.com/ClickHouse/ClickHouse/pull/60026) ([Raúl Marín](https://github.com/Algunenano)).
* Fix possible exception from S3Queue table on drop [#60036](https://github.com/ClickHouse/ClickHouse/pull/60036) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix formatting of NOT with single literals [#60042](https://github.com/ClickHouse/ClickHouse/pull/60042) ([Raúl Marín](https://github.com/Algunenano)).
* Use max_query_size from context in DDLLogEntry instead of hardcoded 4096 [#60083](https://github.com/ClickHouse/ClickHouse/pull/60083) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix inconsistent formatting of queries containing tables named `table`. Fix wrong formatting of queries with `UNION ALL`, `INTERSECT`, and `EXCEPT` when their structure wasn't linear. This closes #52349. Fix wrong formatting of `SYSTEM` queries, including `SYSTEM ... DROP FILESYSTEM CACHE`, `SYSTEM ... REFRESH/START/STOP/CANCEL/TEST VIEW`, `SYSTEM ENABLE/DISABLE FAILPOINT`. Fix formatting of parameterized DDL queries. Fix the formatting of the `DESCRIBE FILESYSTEM CACHE` query. Fix incorrect formatting of the `SET param_...` (a query setting a parameter). Fix incorrect formatting of `CREATE INDEX` queries. Fix inconsistent formatting of `CREATE USER` and similar queries. Fix inconsistent formatting of `CREATE SETTINGS PROFILE`. Fix incorrect formatting of `ALTER ... MODIFY REFRESH`. Fix inconsistent formatting of window functions if frame offsets were expressions. Fix inconsistent formatting of `RESPECT NULLS` and `IGNORE NULLS` if they were used after a function that implements an operator (such as `plus`). Fix idiotic formatting of `SYSTEM SYNC REPLICA ... LIGHTWEIGHT FROM ...`. Fix inconsistent formatting of invalid queries with `GROUP BY GROUPING SETS ... WITH ROLLUP/CUBE/TOTALS`. Fix inconsistent formatting of `GRANT CURRENT GRANTS`. Fix inconsistent formatting of `CREATE TABLE (... COLLATE)`. Additionally, I fixed the incorrect formatting of `EXPLAIN` in subqueries (#60102). Fixed incorrect formatting of lambda functions (#60012). Added a check so there is no way to miss these abominations in the future. [#60095](https://github.com/ClickHouse/ClickHouse/pull/60095) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||
* Fix inconsistent formatting of explain in subqueries [#60102](https://github.com/ClickHouse/ClickHouse/pull/60102) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||
* Fix cosineDistance crash with Nullable [#60150](https://github.com/ClickHouse/ClickHouse/pull/60150) ([Raúl Marín](https://github.com/Algunenano)).
|
||||
* Allow casting of bools in string representation to true bools [#60160](https://github.com/ClickHouse/ClickHouse/pull/60160) ([Robert Schulze](https://github.com/rschu1ze)).
|
||||
* Fix `system.s3queue_log` [#60166](https://github.com/ClickHouse/ClickHouse/pull/60166) ([Kseniia Sumarokova](https://github.com/kssenii)).
|
||||
* Fix arrayReduce with nullable aggregate function name [#60188](https://github.com/ClickHouse/ClickHouse/pull/60188) ([Raúl Marín](https://github.com/Algunenano)).
|
||||
* Hide sensitive info for `S3Queue` [#60233](https://github.com/ClickHouse/ClickHouse/pull/60233) ([Kseniia Sumarokova](https://github.com/kssenii)).
|
||||
* Fix http exception codes. [#60252](https://github.com/ClickHouse/ClickHouse/pull/60252) ([Austin Kothig](https://github.com/kothiga)).
|
||||
* S3Queue: fix a bug (also fixes flaky test_storage_s3_queue/test.py::test_shards_distributed) [#60282](https://github.com/ClickHouse/ClickHouse/pull/60282) ([Kseniia Sumarokova](https://github.com/kssenii)).
|
||||
* Fix use-of-uninitialized-value and invalid result in hashing functions with IPv6 [#60359](https://github.com/ClickHouse/ClickHouse/pull/60359) ([Kruglov Pavel](https://github.com/Avogar)).
|
||||
* Fix OptimizeDateOrDateTimeConverterWithPreimageVisitor with null arguments [#60453](https://github.com/ClickHouse/ClickHouse/pull/60453) ([Raúl Marín](https://github.com/Algunenano)).
|
||||
* Fixed a minor bug that prevented distributed table queries sent from either KQL or PRQL dialect clients to be executed on replicas. [#59674](https://github.com/ClickHouse/ClickHouse/issues/59674). [#60470](https://github.com/ClickHouse/ClickHouse/pull/60470) ([Alexey Milovidov](https://github.com/alexey-milovidov)) [#59674](https://github.com/ClickHouse/ClickHouse/pull/59674) ([Austin Kothig](https://github.com/kothiga)).
|
||||
|
||||
|
||||
### <a id="241"></a> ClickHouse release 24.1, 2024-01-30
|
||||
|
||||
#### Backward Incompatible Change
|
||||
|
@ -1656,6 +1656,33 @@ Result:
└─────────────────────────┴─────────┘
```

### output_format_pretty_single_large_number_tip_threshold {#output_format_pretty_single_large_number_tip_threshold}

Print a readable number tip on the right side of the table if the block consists of a single number that exceeds this value (except 0).

Possible values:

- 0 — The readable number tip will not be printed.
- Positive integer — The readable number tip will be printed if the single number exceeds this value.

Default value: `1000000`.

**Example**

Query:

```sql
SELECT 1000000000 AS a;
```

Result:
```text
┌──────────a─┐
│ 1000000000 │ -- 1.00 billion
└────────────┘
```
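The tip in the example above ("-- 1.00 billion") is just a human-readable rendering of the single value. As a rough standalone sketch of how such a tip can be derived (the function name, suffix table, and rounding here are illustrative assumptions, not ClickHouse's actual implementation):

```cpp
#include <cmath>
#include <cstdio>
#include <string>

// Format a readable tip for a large number, e.g. 1000000000 -> "1.00 billion".
// Returns an empty string when the value does not exceed the threshold,
// mirroring the setting's "except 0 / exceeds this value" behaviour.
std::string readableTip(double value, double threshold = 1000000)
{
    if (value == 0 || std::fabs(value) <= threshold)
        return ""; // below the threshold: no tip printed

    static const char * names[] = {"thousand", "million", "billion", "trillion"};
    int idx = -1;
    double v = value;
    while (std::fabs(v) >= 1000 && idx < 3)
    {
        v /= 1000;
        ++idx;
    }
    if (idx < 0)
        return ""; // above the threshold but below one thousand: nothing to abbreviate

    char buf[64];
    std::snprintf(buf, sizeof(buf), "%.2f %s", v, names[idx]);
    return buf;
}
```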

## Template format settings {#template-format-settings}

### format_template_resultset {#format_template_resultset}
@ -12,6 +12,11 @@ has a value of either type `T1` or `T2` or ... or `TN` or none of them (`NULL` v
The order of nested types doesn't matter: Variant(T1, T2) = Variant(T2, T1).
Nested types can be arbitrary types except Nullable(...), LowCardinality(Nullable(...)) and Variant(...) types.

:::note
It's not recommended to use similar types as variants (for example, different numeric types like `Variant(UInt32, Int64)` or different date types like `Variant(Date, DateTime)`),
because working with values of such types can lead to ambiguity. By default, creating such a `Variant` type will lead to an exception, but it can be enabled using the setting `allow_suspicious_variant_types`.
:::

:::note
The Variant data type is an experimental feature. To use it, set `allow_experimental_variant_type = 1`.
:::
@ -272,3 +277,121 @@ $$)
│ [1,2,3]             │ ᴺᵁᴸᴸ          │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ  │ ᴺᵁᴸᴸ                │ [1,2,3] │
└─────────────────────┴───────────────┴──────┴───────┴─────────────────────┴─────────┘
```

## Comparing values of Variant type

Values of a `Variant` type can be compared only with values of the same `Variant` type.

The result of operator `<` for values `v1` with underlying type `T1` and `v2` with underlying type `T2` of a type `Variant(..., T1, ..., T2, ...)` is defined as follows:
- If `T1 = T2 = T`, the result will be `v1.T < v2.T` (underlying values will be compared).
- If `T1 != T2`, the result will be `T1 < T2` (type names will be compared).

Examples:
```sql
CREATE TABLE test (v1 Variant(String, UInt64, Array(UInt32)), v2 Variant(String, UInt64, Array(UInt32))) ENGINE=Memory;
INSERT INTO test VALUES (42, 42), (42, 43), (42, 'abc'), (42, [1, 2, 3]), (42, []), (42, NULL);
```

```sql
SELECT v2, variantType(v2) AS v2_type FROM test ORDER BY v2;
```

```text
┌─v2──────┬─v2_type───────┐
│ []      │ Array(UInt32) │
│ [1,2,3] │ Array(UInt32) │
│ abc     │ String        │
│      42 │ UInt64        │
│      43 │ UInt64        │
│ ᴺᵁᴸᴸ    │ None          │
└─────────┴───────────────┘
```

```sql
SELECT v1, variantType(v1) AS v1_type, v2, variantType(v2) AS v2_type, v1 = v2, v1 < v2, v1 > v2 FROM test;
```

```text
┌─v1─┬─v1_type─┬─v2──────┬─v2_type───────┬─equals(v1, v2)─┬─less(v1, v2)─┬─greater(v1, v2)─┐
│ 42 │ UInt64  │      42 │ UInt64        │              1 │            0 │               0 │
│ 42 │ UInt64  │      43 │ UInt64        │              0 │            1 │               0 │
│ 42 │ UInt64  │ abc     │ String        │              0 │            0 │               1 │
│ 42 │ UInt64  │ [1,2,3] │ Array(UInt32) │              0 │            0 │               1 │
│ 42 │ UInt64  │ []      │ Array(UInt32) │              0 │            0 │               1 │
│ 42 │ UInt64  │ ᴺᵁᴸᴸ    │ None          │              0 │            1 │               0 │
└────┴─────────┴─────────┴───────────────┴────────────────┴──────────────┴─────────────────┘
```

If you need to find a row with a specific `Variant` value, you can do one of the following:

- Cast the value to the corresponding `Variant` type:

```sql
SELECT * FROM test WHERE v2 == [1,2,3]::Array(UInt32)::Variant(String, UInt64, Array(UInt32));
```

```text
┌─v1─┬─v2──────┐
│ 42 │ [1,2,3] │
└────┴─────────┘
```

- Compare the `Variant` subcolumn with the required type:

```sql
SELECT * FROM test WHERE v2.`Array(UInt32)` == [1,2,3] -- or using variantElement(v2, 'Array(UInt32)')
```

```text
┌─v1─┬─v2──────┐
│ 42 │ [1,2,3] │
└────┴─────────┘
```

Sometimes it can be useful to additionally check the variant type, because subcolumns of complex types like `Array/Map/Tuple` cannot be inside `Nullable` and will contain default values instead of `NULL` on rows with a different variant type:

```sql
SELECT v2, v2.`Array(UInt32)`, variantType(v2) FROM test WHERE v2.`Array(UInt32)` == [];
```

```text
┌─v2───┬─v2.Array(UInt32)─┬─variantType(v2)─┐
│   42 │ []               │ UInt64          │
│   43 │ []               │ UInt64          │
│ abc  │ []               │ String          │
│ []   │ []               │ Array(UInt32)   │
│ ᴺᵁᴸᴸ │ []               │ None            │
└──────┴──────────────────┴─────────────────┘
```

```sql
SELECT v2, v2.`Array(UInt32)`, variantType(v2) FROM test WHERE variantType(v2) == 'Array(UInt32)' AND v2.`Array(UInt32)` == [];
```

```text
┌─v2─┬─v2.Array(UInt32)─┬─variantType(v2)─┐
│ [] │ []               │ Array(UInt32)   │
└────┴──────────────────┴─────────────────┘
```

**Note:** values of variants with different numeric types are considered different variants and are not compared with each other; their type names are compared instead.

Example:

```sql
SET allow_suspicious_variant_types = 1;
CREATE TABLE test (v Variant(UInt32, Int64)) ENGINE=Memory;
INSERT INTO test VALUES (1::UInt32), (1::Int64), (100::UInt32), (100::Int64);
SELECT v, variantType(v) FROM test ORDER BY v;
```

```text
┌─v───┬─variantType(v)─┐
│   1 │ Int64          │
│ 100 │ Int64          │
│   1 │ UInt32         │
│ 100 │ UInt32         │
└─────┴────────────────┘
```
1
programs/server/config.d/filesystem_cache_log.xml
Symbolic link
@ -0,0 +1 @@
../../../tests/config/config.d/filesystem_cache_log.xml
4
programs/server/config.d/filesystem_caches_path.xml
Normal file
@ -0,0 +1,4 @@
<clickhouse>
    <filesystem_caches_path>/tmp/filesystem_caches/</filesystem_caches_path>
    <custom_cached_disks_base_directory replace="replace">/tmp/filesystem_caches/</custom_cached_disks_base_directory>
</clickhouse>
@ -1,7 +1,6 @@
#include <Access/Common/AccessFlags.h>
#include <Access/Common/AccessType.h>
#include <Common/Exception.h>
#include <base/types.h>
#include <boost/algorithm/string/case_conv.hpp>
#include <boost/algorithm/string/replace.hpp>
#include <boost/algorithm/string/split.hpp>
186
src/Analyzer/Passes/AggregateFunctionOfGroupByKeysPass.cpp
Normal file
@ -0,0 +1,186 @@
#include <Analyzer/Passes/AggregateFunctionOfGroupByKeysPass.h>

#include <AggregateFunctions/AggregateFunctionFactory.h>

#include <Analyzer/ArrayJoinNode.h>
#include <Analyzer/ColumnNode.h>
#include <Analyzer/FunctionNode.h>
#include <Analyzer/InDepthQueryTreeVisitor.h>
#include <Analyzer/QueryNode.h>
#include <Analyzer/TableNode.h>
#include <Analyzer/UnionNode.h>

namespace DB
{

namespace ErrorCodes
{
    extern const int LOGICAL_ERROR;
}

namespace
{

/// Try to eliminate min/max/any/anyLast.
class EliminateFunctionVisitor : public InDepthQueryTreeVisitorWithContext<EliminateFunctionVisitor>
{
public:
    using Base = InDepthQueryTreeVisitorWithContext<EliminateFunctionVisitor>;
    using Base::Base;

    using GroupByKeysStack = std::vector<QueryTreeNodePtrWithHashSet>;

    void enterImpl(QueryTreeNodePtr & node)
    {
        if (!getSettings().optimize_aggregators_of_group_by_keys)
            return;

        /// Collect group by keys.
        auto * query_node = node->as<QueryNode>();
        if (!query_node)
            return;

        if (!query_node->hasGroupBy())
        {
            group_by_keys_stack.push_back({});
        }
        else if (query_node->isGroupByWithTotals() || query_node->isGroupByWithCube() || query_node->isGroupByWithRollup())
        {
            /// Keep aggregator if group by is with totals/cube/rollup.
            group_by_keys_stack.push_back({});
        }
        else
        {
            QueryTreeNodePtrWithHashSet group_by_keys;
            for (auto & group_key : query_node->getGroupBy().getNodes())
            {
                /// For grouping sets case collect only keys that are presented in every set.
                if (auto * list = group_key->as<ListNode>())
                {
                    QueryTreeNodePtrWithHashSet common_keys_set;
                    for (auto & group_elem : list->getNodes())
                    {
                        if (group_by_keys.contains(group_elem))
                            common_keys_set.insert(group_elem);
                    }
                    group_by_keys = std::move(common_keys_set);
                }
                else
                {
                    group_by_keys.insert(group_key);
                }
            }
            group_by_keys_stack.push_back(std::move(group_by_keys));
        }
    }

    /// Now we visit all nodes in QueryNode, we should remove group_by_keys from stack.
    void leaveImpl(QueryTreeNodePtr & node)
    {
        if (!getSettings().optimize_aggregators_of_group_by_keys)
            return;

        if (node->getNodeType() == QueryTreeNodeType::FUNCTION)
        {
            if (aggregationCanBeEliminated(node, group_by_keys_stack.back()))
                node = node->as<FunctionNode>()->getArguments().getNodes()[0];
        }
        else if (node->getNodeType() == QueryTreeNodeType::QUERY)
        {
            group_by_keys_stack.pop_back();
        }
    }

    static bool needChildVisit(VisitQueryTreeNodeType & parent [[maybe_unused]], VisitQueryTreeNodeType & child)
    {
        /// Skip ArrayJoin.
        return !child->as<ArrayJoinNode>();
    }

private:

    struct NodeWithInfo
    {
        QueryTreeNodePtr node;
        bool parents_are_only_deterministic = false;
    };

    bool aggregationCanBeEliminated(QueryTreeNodePtr & node, const QueryTreeNodePtrWithHashSet & group_by_keys)
    {
        if (group_by_keys.empty())
            return false;

        auto * function = node->as<FunctionNode>();
        if (!function || !function->isAggregateFunction())
            return false;

        if (!(function->getFunctionName() == "min"
            || function->getFunctionName() == "max"
            || function->getFunctionName() == "any"
            || function->getFunctionName() == "anyLast"))
            return false;

        std::vector<NodeWithInfo> candidates;
        auto & function_arguments = function->getArguments().getNodes();
        if (function_arguments.size() != 1)
            throw Exception(ErrorCodes::LOGICAL_ERROR,
                "Expected a single argument of function '{}' but received {}",
                function->getFunctionName(), function_arguments.size());

        if (!function->getResultType()->equals(*function_arguments[0]->getResultType()))
            return false;

        candidates.push_back({ function_arguments[0], true });

        /// Using DFS we traverse function tree and try to find if it uses other keys as function arguments.
        while (!candidates.empty())
        {
            auto [candidate, parents_are_only_deterministic] = candidates.back();
            candidates.pop_back();

            bool found = group_by_keys.contains(candidate);

            switch (candidate->getNodeType())
            {
                case QueryTreeNodeType::FUNCTION:
                {
                    auto * func = candidate->as<FunctionNode>();
                    auto & arguments = func->getArguments().getNodes();
                    if (arguments.empty())
                        return false;

                    if (!found)
                    {
                        bool is_deterministic_function = parents_are_only_deterministic &&
                            func->getFunctionOrThrow()->isDeterministicInScopeOfQuery();
                        for (auto it = arguments.rbegin(); it != arguments.rend(); ++it)
                            candidates.push_back({ *it, is_deterministic_function });
                    }
                    break;
                }
                case QueryTreeNodeType::COLUMN:
                    if (!found)
                        return false;
                    break;
                case QueryTreeNodeType::CONSTANT:
                    if (!parents_are_only_deterministic)
                        return false;
                    break;
                default:
                    return false;
            }
        }

        return true;
    }

    GroupByKeysStack group_by_keys_stack;
};

}

void AggregateFunctionOfGroupByKeysPass::run(QueryTreeNodePtr & query_tree_node, ContextPtr context)
{
    EliminateFunctionVisitor eliminator(context);
    eliminator.visit(query_tree_node);
}

}
28
src/Analyzer/Passes/AggregateFunctionOfGroupByKeysPass.h
Normal file
@ -0,0 +1,28 @@
#pragma once

#include <Analyzer/IQueryTreePass.h>

namespace DB
{

/** Eliminates min/max/any/anyLast aggregators of GROUP BY keys in SELECT section.
  *
  * Example: SELECT max(column) FROM table GROUP BY column;
  * Result: SELECT column FROM table GROUP BY column;
  */
class AggregateFunctionOfGroupByKeysPass final : public IQueryTreePass
{
public:
    String getName() override { return "AggregateFunctionOfGroupByKeys"; }

    String getDescription() override
    {
        return "Eliminates min/max/any/anyLast aggregators of GROUP BY keys in SELECT section.";
    }

    void run(QueryTreeNodePtr & query_tree_node, ContextPtr context) override;

};

}
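The pass decides eliminability with a DFS over the aggregate's argument: every column it touches must be a GROUP BY key, and constants may appear only under deterministic functions. A minimal standalone sketch of that check over a toy expression tree (the `Expr` struct and `usesOnlyGroupByKeys` are illustrative stand-ins, not ClickHouse's query-tree API; the real pass also hashes whole subtrees so that function expressions can themselves be keys):

```cpp
#include <memory>
#include <set>
#include <string>
#include <utility>
#include <vector>

// Toy expression node standing in for ClickHouse's FunctionNode/ColumnNode/ConstantNode.
struct Expr
{
    enum Kind { Column, Constant, Function } kind;
    std::string name;                            // column or function name
    bool deterministic = true;                   // meaningful for Function only
    std::vector<std::shared_ptr<Expr>> children; // meaningful for Function only
};

// True if min/max/any/anyLast over `root` could be dropped: every column is a
// GROUP BY key, and constants occur only under deterministic functions.
bool usesOnlyGroupByKeys(const std::shared_ptr<Expr> & root, const std::set<std::string> & group_by_keys)
{
    if (group_by_keys.empty())
        return false;

    // Stack of (node, "all enclosing functions deterministic so far"), as in the pass.
    std::vector<std::pair<std::shared_ptr<Expr>, bool>> stack{{root, true}};
    while (!stack.empty())
    {
        auto [node, parents_deterministic] = stack.back();
        stack.pop_back();

        switch (node->kind)
        {
            case Expr::Function:
                if (node->children.empty())
                    return false;
                for (const auto & child : node->children)
                    stack.push_back({child, parents_deterministic && node->deterministic});
                break;
            case Expr::Column:
                if (!group_by_keys.count(node->name))
                    return false;
                break;
            case Expr::Constant:
                if (!parents_deterministic)
                    return false;
                break;
        }
    }
    return true;
}
```

So `max(k + 1)` with `GROUP BY k` qualifies, while anything touching a non-key column or a constant under a non-deterministic function does not.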
@ -92,7 +92,7 @@ private:
                    if (!found)
                    {
                        bool is_deterministic_function = parents_are_only_deterministic &&
                            function->getFunctionOrThrow()->isDeterministicInScopeOfQuery();
                            func->getFunctionOrThrow()->isDeterministicInScopeOfQuery();
                        for (auto it = arguments.rbegin(); it != arguments.rend(); ++it)
                            candidates.push_back({ *it, is_deterministic_function });
                    }
@ -6651,7 +6651,6 @@ void QueryAnalyzer::initializeTableExpressionData(const QueryTreeNodePtr & table
        if (column_default && column_default->kind == ColumnDefaultKind::Alias)
        {
            auto alias_expression = buildQueryTree(column_default->expression, scope.context);
            alias_expression = buildCastFunction(alias_expression, column_name_and_type.type, scope.context, false /*resolve*/);
            auto column_node = std::make_shared<ColumnNode>(column_name_and_type, std::move(alias_expression), table_expression_node);
            column_name_to_column_node.emplace(column_name_and_type.name, column_node);
            alias_columns_to_resolve.emplace_back(column_name_and_type.name, column_node);
@ -6684,7 +6683,9 @@ void QueryAnalyzer::initializeTableExpressionData(const QueryTreeNodePtr & table
            alias_column_resolve_scope,
            false /*allow_lambda_expression*/,
            false /*allow_table_expression*/);

        auto & resolved_expression = alias_column_to_resolve->getExpression();
        if (!resolved_expression->getResultType()->equals(*alias_column_to_resolve->getResultType()))
            resolved_expression = buildCastFunction(resolved_expression, alias_column_to_resolve->getResultType(), scope.context, true);
        column_name_to_column_node = std::move(alias_column_resolve_scope.column_name_to_column_node);
        column_name_to_column_node[alias_column_to_resolve_name] = alias_column_to_resolve;
    }
@ -5,6 +5,7 @@
#include <DataTypes/IDataType.h>
#include <DataTypes/DataTypeTuple.h>
#include <DataTypes/DataTypesNumber.h>
#include <DataTypes/FieldToDataType.h>
#include <Parsers/ParserSelectQuery.h>
#include <Parsers/ParserSelectWithUnionQuery.h>
#include <Parsers/ASTSelectWithUnionQuery.h>
@ -557,7 +558,10 @@ QueryTreeNodePtr QueryTreeBuilder::buildExpression(const ASTPtr & expression, co
    }
    else if (const auto * ast_literal = expression->as<ASTLiteral>())
    {
        result = std::make_shared<ConstantNode>(ast_literal->value);
        if (context->getSettingsRef().allow_experimental_variant_type && context->getSettingsRef().use_variant_as_common_type)
            result = std::make_shared<ConstantNode>(ast_literal->value, applyVisitor(FieldToDataType<LeastSupertypeOnError::Variant>(), ast_literal->value));
        else
            result = std::make_shared<ConstantNode>(ast_literal->value);
    }
    else if (const auto * function = expression->as<ASTFunction>())
    {
@ -46,6 +46,7 @@
#include <Analyzer/Passes/CrossToInnerJoinPass.h>
#include <Analyzer/Passes/ShardNumColumnToFunctionPass.h>
#include <Analyzer/Passes/ConvertQueryToCNFPass.h>
#include <Analyzer/Passes/AggregateFunctionOfGroupByKeysPass.h>
#include <Analyzer/Passes/OptimizeDateOrDateTimeConverterWithPreimagePass.h>


@ -164,7 +165,6 @@ private:

/** ClickHouse query tree pass manager.
  *
  * TODO: Support setting optimize_aggregators_of_group_by_keys.
  * TODO: Support setting optimize_monotonous_functions_in_order_by.
  * TODO: Add optimizations based on function semantics. Example: SELECT * FROM test_table WHERE id != id. (id is not nullable column).
  */
@ -264,6 +264,9 @@ void addQueryTreePasses(QueryTreePassManager & manager)
    manager.addPass(std::make_unique<RewriteArrayExistsToHasPass>());
    manager.addPass(std::make_unique<NormalizeCountVariantsPass>());

    /// Should be run before AggregateFunctionsArithmericOperationsPass.
    manager.addPass(std::make_unique<AggregateFunctionOfGroupByKeysPass>());

    manager.addPass(std::make_unique<AggregateFunctionsArithmericOperationsPass>());
    manager.addPass(std::make_unique<UniqInjectiveFunctionsEliminationPass>());
    manager.addPass(std::make_unique<OptimizeGroupByFunctionKeysPass>());
@ -1915,7 +1915,7 @@ void ClientBase::processParsedSingleQuery(const String & full_query, const Strin
        for (const auto & [name, value] : set_query->query_parameters)
            query_parameters.insert_or_assign(name, value);

        global_context->addQueryParameters(set_query->query_parameters);
        global_context->addQueryParameters(NameToNameMap{set_query->query_parameters.begin(), set_query->query_parameters.end()});
    }
    if (const auto * use_query = parsed_query->as<ASTUseQuery>())
    {
@ -518,6 +518,23 @@ void ColumnAggregateFunction::insert(const Field & x)
    func->deserialize(data.back(), read_buffer, version, &arena);
}

bool ColumnAggregateFunction::tryInsert(const DB::Field & x)
{
    if (x.getType() != Field::Types::AggregateFunctionState)
        return false;

    const auto & field_name = x.get<const AggregateFunctionStateData &>().name;
    if (type_string != field_name)
        return false;

    ensureOwnership();
    Arena & arena = createOrGetArena();
    pushBackAndCreateState(data, arena, func.get());
    ReadBufferFromString read_buffer(x.get<const AggregateFunctionStateData &>().data);
    func->deserialize(data.back(), read_buffer, version, &arena);
    return true;
}

void ColumnAggregateFunction::insertDefault()
{
    ensureOwnership();

@ -160,6 +160,8 @@ public:

    void insert(const Field & x) override;

    bool tryInsert(const Field & x) override;

    void insertDefault() override;

    StringRef serializeValueIntoArena(size_t n, Arena & arena, char const *& begin, const UInt8 *) const override;
@ -305,6 +305,25 @@ void ColumnArray::insert(const Field & x)
    getOffsets().push_back(getOffsets().back() + size);
}

bool ColumnArray::tryInsert(const Field & x)
{
    if (x.getType() != Field::Types::Which::Array)
        return false;

    const Array & array = x.get<const Array &>();
    size_t size = array.size();
    for (size_t i = 0; i < size; ++i)
    {
        if (!getData().tryInsert(array[i]))
        {
            getData().popBack(i);
            return false;
        }
    }

    getOffsets().push_back(getOffsets().back() + size);
    return true;
}

void ColumnArray::insertFrom(const IColumn & src_, size_t n)
{

@ -85,6 +85,7 @@ public:
    void updateHashFast(SipHash & hash) const override;
    void insertRangeFrom(const IColumn & src, size_t start, size_t length) override;
    void insert(const Field & x) override;
    bool tryInsert(const Field & x) override;
    void insertFrom(const IColumn & src_, size_t n) override;
    void insertDefault() override;
    void popBack(size_t n) override;

@ -84,6 +84,7 @@ public:
    StringRef getDataAt(size_t) const override { throwMustBeDecompressed(); }
    bool isDefaultAt(size_t) const override { throwMustBeDecompressed(); }
    void insert(const Field &) override { throwMustBeDecompressed(); }
    bool tryInsert(const Field &) override { throwMustBeDecompressed(); }
    void insertRangeFrom(const IColumn &, size_t, size_t) override { throwMustBeDecompressed(); }
    void insertData(const char *, size_t) override { throwMustBeDecompressed(); }
    void insertDefault() override { throwMustBeDecompressed(); }
@ -131,6 +131,15 @@ public:
        ++s;
    }

    bool tryInsert(const Field & field) override
    {
        auto tmp = data->cloneEmpty();
        if (!tmp->tryInsert(field))
            return false;
        ++s;
        return true;
    }

    void insertData(const char *, size_t) override
    {
        ++s;

@ -334,6 +334,16 @@ MutableColumnPtr ColumnDecimal<T>::cloneResized(size_t size) const
    return res;
}

template <is_decimal T>
bool ColumnDecimal<T>::tryInsert(const Field & x)
{
    DecimalField<T> value;
    if (!x.tryGet<DecimalField<T>>(value))
        return false;
    data.push_back(value);
    return true;
}

template <is_decimal T>
void ColumnDecimal<T>::insertData(const char * src, size_t /*length*/)
{

@ -62,6 +62,7 @@ public:
    void insertDefault() override { data.push_back(T()); }
    void insertManyDefaults(size_t length) override { data.resize_fill(data.size() + length); }
    void insert(const Field & x) override { data.push_back(x.get<T>()); }
    bool tryInsert(const Field & x) override;
    void insertRangeFrom(const IColumn & src, size_t start, size_t length) override;

    void popBack(size_t n) override
@ -63,6 +63,17 @@ void ColumnFixedString::insert(const Field & x)
    insertData(s.data(), s.size());
}

bool ColumnFixedString::tryInsert(const Field & x)
{
    if (x.getType() != Field::Types::Which::String)
        return false;
    const String & s = x.get<const String &>();
    if (s.size() > n)
        return false;
    insertData(s.data(), s.size());
    return true;
}

void ColumnFixedString::insertFrom(const IColumn & src_, size_t index)
{
    const ColumnFixedString & src = assert_cast<const ColumnFixedString &>(src_);

@ -96,6 +96,8 @@ public:

    void insert(const Field & x) override;

    bool tryInsert(const Field & x) override;

    void insertFrom(const IColumn & src_, size_t index) override;

    void insertData(const char * pos, size_t length) override;

@ -84,6 +84,11 @@ public:
        throw Exception(ErrorCodes::NOT_IMPLEMENTED, "Cannot insert into {}", getName());
    }

    bool tryInsert(const Field &) override
    {
        return false;
    }

    void insertDefault() override
    {
        throw Exception(ErrorCodes::NOT_IMPLEMENTED, "Cannot insert into {}", getName());
@ -140,6 +140,18 @@ void ColumnLowCardinality::insert(const Field & x)
    idx.insertPosition(dictionary.getColumnUnique().uniqueInsert(x));
}

bool ColumnLowCardinality::tryInsert(const Field & x)
{
    compactIfSharedDictionary();

    size_t index;
    if (!dictionary.getColumnUnique().tryUniqueInsert(x, index))
        return false;

    idx.insertPosition(index);
    return true;
}

void ColumnLowCardinality::insertDefault()
{
    idx.insertPosition(getDictionary().getDefaultValueIndex());

@ -74,6 +74,7 @@ public:
    }

    void insert(const Field & x) override;
    bool tryInsert(const Field & x) override;
    void insertDefault() override;

    void insertFrom(const IColumn & src, size_t n) override;

@ -102,6 +102,15 @@ void ColumnMap::insert(const Field & x)
    nested->insert(Array(map.begin(), map.end()));
}

bool ColumnMap::tryInsert(const Field & x)
{
    if (x.getType() != Field::Types::Which::Map)
        return false;

    const auto & map = x.get<const Map &>();
    return nested->tryInsert(Array(map.begin(), map.end()));
}

void ColumnMap::insertDefault()
{
    nested->insertDefault();

@ -56,6 +56,7 @@ public:
    StringRef getDataAt(size_t n) const override;
    void insertData(const char * pos, size_t length) override;
    void insert(const Field & x) override;
    bool tryInsert(const Field & x) override;
    void insertDefault() override;
    void popBack(size_t n) override;
    StringRef serializeValueIntoArena(size_t n, Arena & arena, char const *& begin, const UInt8 *) const override;
@ -256,6 +256,22 @@ void ColumnNullable::insert(const Field & x)
    }
}

bool ColumnNullable::tryInsert(const Field & x)
{
    if (x.isNull())
    {
        getNestedColumn().insertDefault();
        getNullMapData().push_back(1);
        return true;
    }

    if (!getNestedColumn().tryInsert(x))
        return false;

    getNullMapData().push_back(0);
    return true;
}

void ColumnNullable::insertFrom(const IColumn & src, size_t n)
{
    const ColumnNullable & src_concrete = assert_cast<const ColumnNullable &>(src);

@ -68,6 +68,7 @@ public:
    const char * skipSerializedInArena(const char * pos) const override;
    void insertRangeFrom(const IColumn & src, size_t start, size_t length) override;
    void insert(const Field & x) override;
    bool tryInsert(const Field & x) override;
    void insertFrom(const IColumn & src, size_t n) override;

    void insertFromNotNullable(const IColumn & src, size_t n);

@ -716,6 +716,15 @@ void ColumnObject::insert(const Field & field)
    ++num_rows;
}

bool ColumnObject::tryInsert(const Field & field)
{
    if (field.getType() != Field::Types::Which::Object)
        return false;

    insert(field);
    return true;
}

void ColumnObject::insertDefault()
{
    for (auto & entry : subcolumns)
@ -209,6 +209,7 @@ public:
|
||||
void forEachSubcolumn(MutableColumnCallback callback) override;
|
||||
void forEachSubcolumnRecursively(RecursiveMutableColumnCallback callback) override;
|
||||
void insert(const Field & field) override;
|
||||
bool tryInsert(const Field & field) override;
|
||||
void insertDefault() override;
|
||||
void insertFrom(const IColumn & src, size_t n) override;
|
||||
void insertRangeFrom(const IColumn & src, size_t start, size_t length) override;
|
||||
|
@ -234,6 +234,15 @@ void ColumnSparse::insert(const Field & x)
|
||||
insertSingleValue([&](IColumn & column) { column.insert(x); });
|
||||
}
|
||||
|
||||
bool ColumnSparse::tryInsert(const Field & x)
|
||||
{
|
||||
if (!values->tryInsert(x))
|
||||
return false;
|
||||
|
||||
insertSingleValue([&](IColumn &) {}); /// Value already inserted, use no-op inserter.
|
||||
return true;
|
||||
}
|
||||
|
||||
void ColumnSparse::insertFrom(const IColumn & src, size_t n)
|
||||
{
|
||||
if (const auto * src_sparse = typeid_cast<const ColumnSparse *>(&src))
|
||||
|
@ -83,6 +83,7 @@ public:
|
||||
const char * skipSerializedInArena(const char *) const override;
|
||||
void insertRangeFrom(const IColumn & src, size_t start, size_t length) override;
|
||||
void insert(const Field & x) override;
|
||||
bool tryInsert(const Field & x) override;
|
||||
void insertFrom(const IColumn & src, size_t n) override;
|
||||
void insertDefault() override;
|
||||
void insertManyDefaults(size_t length) override;
|
||||
|
@ -128,6 +128,15 @@ public:
|
||||
offsets.push_back(new_size);
|
||||
}
|
||||
|
||||
bool tryInsert(const Field & x) override
|
||||
{
|
||||
if (x.getType() != Field::Types::Which::String)
|
||||
return false;
|
||||
|
||||
insert(x);
|
||||
return true;
|
||||
}
|
||||
|
||||
void insertFrom(const IColumn & src_, size_t n) override
|
||||
{
|
||||
const ColumnString & src = assert_cast<const ColumnString &>(src_);
|
||||
|
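The hunks above give each column type a non-throwing `tryInsert` counterpart to `insert`: check the field's type first, insert only on a match, and report success as a `bool` instead of throwing. A minimal sketch of that contract, using hypothetical simplified types rather than the actual ClickHouse interfaces:

```cpp
#include <cassert>
#include <string>
#include <variant>
#include <vector>

// Hypothetical simplified "string column". tryInsert mirrors the pattern in
// the diff: type-check the field, leave the column untouched on mismatch.
using Field = std::variant<long, std::string>;

struct StringColumn
{
    std::vector<std::string> data;

    bool tryInsert(const Field & f)
    {
        if (!std::holds_alternative<std::string>(f))
            return false;            // wrong type: column is left untouched
        data.push_back(std::get<std::string>(f));
        return true;
    }
};

// Returns how many of the given fields were accepted by the column.
inline int acceptedCount(const std::vector<Field> & fields)
{
    StringColumn col;
    int ok = 0;
    for (const auto & f : fields)
        ok += col.tryInsert(f) ? 1 : 0;
    return ok;
}
```

The point of the non-throwing variant is that callers (such as a Variant column probing its alternatives) can attempt an insert speculatively without paying for exception handling.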
@@ -148,6 +148,31 @@ void ColumnTuple::insert(const Field & x)
         columns[i]->insert(tuple[i]);
 }
 
+bool ColumnTuple::tryInsert(const Field & x)
+{
+    if (x.getType() != Field::Types::Which::Tuple)
+        return false;
+
+    const auto & tuple = x.get<const Tuple &>();
+
+    const size_t tuple_size = columns.size();
+    if (tuple.size() != tuple_size)
+        return false;
+
+    for (size_t i = 0; i < tuple_size; ++i)
+    {
+        if (!columns[i]->tryInsert(tuple[i]))
+        {
+            for (size_t j = 0; j != i; ++j)
+                columns[j]->popBack(1);
+
+            return false;
+        }
+    }
+
+    return true;
+}
+
 void ColumnTuple::insertFrom(const IColumn & src_, size_t n)
 {
     const ColumnTuple & src = assert_cast<const ColumnTuple &>(src_);
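The tuple overload above is all-or-nothing: if element `i` fails its type check, the rollback loop pops the one value already inserted into each preceding sub-column `j < i`, so every sub-column keeps the same length. A sketch of that rollback pattern with hypothetical simplified columns:

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <variant>
#include <vector>

using Field = std::variant<long, std::string>;

// Toy sub-column holding longs only; names are ours, not ClickHouse's.
struct LongColumn
{
    std::vector<long> data;

    bool tryInsert(const Field & f)
    {
        if (!std::holds_alternative<long>(f))
            return false;
        data.push_back(std::get<long>(f));
        return true;
    }

    void popBack() { data.pop_back(); }
    std::size_t size() const { return data.size(); }
};

// Insert one "tuple" (one field per sub-column); on partial failure, undo
// every sub-insert that already succeeded so all columns stay the same size.
inline bool tryInsertTuple(std::vector<LongColumn> & columns, const std::vector<Field> & tuple)
{
    if (tuple.size() != columns.size())
        return false;

    for (std::size_t i = 0; i < columns.size(); ++i)
    {
        if (!columns[i].tryInsert(tuple[i]))
        {
            for (std::size_t j = 0; j != i; ++j)
                columns[j].popBack();   // roll back previous sub-inserts
            return false;
        }
    }
    return true;
}
```

Without the rollback, a failed tuple insert would leave the first sub-columns one row longer than the rest, corrupting the column's invariant.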
@@ -58,6 +58,7 @@ public:
     StringRef getDataAt(size_t n) const override;
     void insertData(const char * pos, size_t length) override;
     void insert(const Field & x) override;
+    bool tryInsert(const Field & x) override;
     void insertFrom(const IColumn & src_, size_t n) override;
     void insertDefault() override;
     void popBack(size_t n) override;
@@ -56,6 +56,7 @@ public:
     void nestedRemoveNullable() override;
 
     size_t uniqueInsert(const Field & x) override;
+    bool tryUniqueInsert(const Field & x, size_t & index) override;
     size_t uniqueInsertFrom(const IColumn & src, size_t n) override;
     MutableColumnPtr uniqueInsertRangeFrom(const IColumn & src, size_t start, size_t length) override;
     IColumnUnique::IndexesWithOverflow uniqueInsertRangeWithOverflow(const IColumn & src, size_t start, size_t length,
@@ -346,6 +347,26 @@ size_t ColumnUnique<ColumnType>::uniqueInsert(const Field & x)
     return uniqueInsertData(single_value_data.data, single_value_data.size);
 }
 
+template <typename ColumnType>
+bool ColumnUnique<ColumnType>::tryUniqueInsert(const Field & x, size_t & index)
+{
+    if (x.isNull())
+    {
+        if (!is_nullable)
+            return false;
+        index = getNullValueIndex();
+        return true;
+    }
+
+    auto single_value_column = column_holder->cloneEmpty();
+    if (!single_value_column->tryInsert(x))
+        return false;
+
+    auto single_value_data = single_value_column->getDataAt(0);
+    index = uniqueInsertData(single_value_data.data, single_value_data.size);
+    return true;
+}
+
 template <typename ColumnType>
 size_t ColumnUnique<ColumnType>::uniqueInsertFrom(const IColumn & src, size_t n)
 {
@@ -426,11 +426,30 @@ void ColumnVariant::insertData(const char *, size_t)
 }
 
 void ColumnVariant::insert(const Field & x)
 {
-    if (x.isNull())
-        insertDefault();
-    else
-        throw Exception(ErrorCodes::NOT_IMPLEMENTED, "Cannot insert field {} to column {}", toString(x), getName());
+    if (!tryInsert(x))
+        throw Exception(ErrorCodes::NOT_IMPLEMENTED, "Cannot insert field {} into column {}", toString(x), getName());
+}
+
+bool ColumnVariant::tryInsert(const DB::Field & x)
+{
+    if (x.isNull())
+    {
+        insertDefault();
+        return true;
+    }
+
+    for (size_t i = 0; i != variants.size(); ++i)
+    {
+        if (variants[i]->tryInsert(x))
+        {
+            getLocalDiscriminators().push_back(i);
+            getOffsets().push_back(variants[i]->size() - 1);
+            return true;
+        }
+    }
+
+    return false;
 }
 
 void ColumnVariant::insertFrom(const IColumn & src_, size_t n)
@@ -805,7 +824,13 @@ ColumnPtr ColumnVariant::permute(const Permutation & perm, size_t limit) const
 {
     /// If we have only NULLs, permutation will take no effect, just return resized column.
     if (hasOnlyNulls())
-        return cloneResized(limit);
+    {
+        if (limit)
+            return cloneResized(limit);
+
+        /// If no limit, we can just return current immutable column.
+        return this->getPtr();
+    }
 
     /// Optimization when we have only one non empty variant and no NULLs.
     /// In this case local_discriminators column is filled with identical values and offsets column
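`ColumnVariant::tryInsert` above uses a first-match rule: the variants are probed in order and the value lands in the first one whose `tryInsert` succeeds, with its index recorded as the discriminator; NULL takes a separate path. A sketch of that selection logic with hypothetical simplified types:

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <optional>
#include <string>
#include <variant>
#include <vector>

// monostate stands in for NULL; the predicates stand in for each variant's
// tryInsert. All names here are ours, not the ClickHouse API.
using Field = std::variant<std::monostate, long, std::string>;
using Accepts = std::function<bool(const Field &)>;

// Return the discriminator (index) of the first variant that accepts the
// value, or nullopt when no variant can hold it (tryInsert would fail).
inline std::optional<std::size_t> pickVariant(const Field & f, const std::vector<Accepts> & accepts)
{
    for (std::size_t i = 0; i != accepts.size(); ++i)
        if (accepts[i](f))
            return i;
    return std::nullopt;
}
```

Because probing stops at the first match, the declared order of the variants matters when two variants could both hold a value.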
@@ -1073,16 +1098,79 @@ bool ColumnVariant::hasEqualValues() const
     return local_discriminators->hasEqualValues() && variants[localDiscriminatorAt(0)]->hasEqualValues();
 }
 
-void ColumnVariant::getPermutation(IColumn::PermutationSortDirection, IColumn::PermutationSortStability, size_t, int, IColumn::Permutation & res) const
-{
-    size_t s = local_discriminators->size();
-    res.resize(s);
-    for (size_t i = 0; i < s; ++i)
-        res[i] = i;
-}
-
-void ColumnVariant::updatePermutation(IColumn::PermutationSortDirection, IColumn::PermutationSortStability, size_t, int, IColumn::Permutation &, DB::EqualRanges &) const
-{
-}
+int ColumnVariant::compareAt(size_t n, size_t m, const IColumn & rhs, int nan_direction_hint) const
+{
+    const auto & rhs_variant = assert_cast<const ColumnVariant &>(rhs);
+    Discriminator left_discr = globalDiscriminatorAt(n);
+    Discriminator right_discr = rhs_variant.globalDiscriminatorAt(m);
+
+    /// Check if we have NULLs and return result based on nan_direction_hint.
+    if (left_discr == NULL_DISCRIMINATOR && right_discr == NULL_DISCRIMINATOR)
+        return 0;
+    else if (left_discr == NULL_DISCRIMINATOR)
+        return nan_direction_hint;
+    else if (right_discr == NULL_DISCRIMINATOR)
+        return -nan_direction_hint;
+
+    /// If rows have different discriminators, row with least discriminator is considered the least.
+    if (left_discr != right_discr)
+        return left_discr < right_discr ? -1 : 1;
+
+    /// If rows have the same discriminators, compare actual values from corresponding variants.
+    return getVariantByGlobalDiscriminator(left_discr).compareAt(offsetAt(n), rhs_variant.offsetAt(m), rhs_variant.getVariantByGlobalDiscriminator(right_discr), nan_direction_hint);
+}
+
+void ColumnVariant::compareColumn(
+    const IColumn & rhs, size_t rhs_row_num,
+    PaddedPODArray<UInt64> * row_indexes, PaddedPODArray<Int8> & compare_results,
+    int direction, int nan_direction_hint) const
+{
+    return doCompareColumn<ColumnVariant>(assert_cast<const ColumnVariant &>(rhs), rhs_row_num, row_indexes,
+                                          compare_results, direction, nan_direction_hint);
+}
+
+struct ColumnVariant::ComparatorBase
+{
+    const ColumnVariant & parent;
+    int nan_direction_hint;
+
+    ComparatorBase(const ColumnVariant & parent_, int nan_direction_hint_)
+        : parent(parent_), nan_direction_hint(nan_direction_hint_)
+    {
+    }
+
+    ALWAYS_INLINE int compare(size_t lhs, size_t rhs) const
+    {
+        int res = parent.compareAt(lhs, rhs, parent, nan_direction_hint);
+
+        return res;
+    }
+};
+
+void ColumnVariant::getPermutation(PermutationSortDirection direction, PermutationSortStability stability, size_t limit, int nan_direction_hint, Permutation & res) const
+{
+    if (direction == IColumn::PermutationSortDirection::Ascending && stability == IColumn::PermutationSortStability::Unstable)
+        getPermutationImpl(limit, res, ComparatorAscendingUnstable(*this, nan_direction_hint), DefaultSort(), DefaultPartialSort());
+    else if (direction == IColumn::PermutationSortDirection::Ascending && stability == IColumn::PermutationSortStability::Stable)
+        getPermutationImpl(limit, res, ComparatorAscendingStable(*this, nan_direction_hint), DefaultSort(), DefaultPartialSort());
+    else if (direction == IColumn::PermutationSortDirection::Descending && stability == IColumn::PermutationSortStability::Unstable)
+        getPermutationImpl(limit, res, ComparatorDescendingUnstable(*this, nan_direction_hint), DefaultSort(), DefaultPartialSort());
+    else if (direction == IColumn::PermutationSortDirection::Descending && stability == IColumn::PermutationSortStability::Stable)
+        getPermutationImpl(limit, res, ComparatorDescendingStable(*this, nan_direction_hint), DefaultSort(), DefaultPartialSort());
+}
+
+void ColumnVariant::updatePermutation(IColumn::PermutationSortDirection direction, IColumn::PermutationSortStability stability, size_t limit, int nan_direction_hint, IColumn::Permutation & res, DB::EqualRanges & equal_ranges) const
+{
+    auto comparator_equal = ComparatorEqual(*this, nan_direction_hint);
+
+    if (direction == IColumn::PermutationSortDirection::Ascending && stability == IColumn::PermutationSortStability::Unstable)
+        updatePermutationImpl(limit, res, equal_ranges, ComparatorAscendingUnstable(*this, nan_direction_hint), comparator_equal, DefaultSort(), DefaultPartialSort());
+    else if (direction == IColumn::PermutationSortDirection::Ascending && stability == IColumn::PermutationSortStability::Stable)
+        updatePermutationImpl(limit, res, equal_ranges, ComparatorAscendingStable(*this, nan_direction_hint), comparator_equal, DefaultSort(), DefaultPartialSort());
+    else if (direction == IColumn::PermutationSortDirection::Descending && stability == IColumn::PermutationSortStability::Unstable)
+        updatePermutationImpl(limit, res, equal_ranges, ComparatorDescendingUnstable(*this, nan_direction_hint), comparator_equal, DefaultSort(), DefaultPartialSort());
+    else if (direction == IColumn::PermutationSortDirection::Descending && stability == IColumn::PermutationSortStability::Stable)
+        updatePermutationImpl(limit, res, equal_ranges, ComparatorDescendingStable(*this, nan_direction_hint), comparator_equal, DefaultSort(), DefaultPartialSort());
+}
 
 void ColumnVariant::reserve(size_t n)
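The new `compareAt` defines a total order over Variant rows in three steps: NULLs compare according to `nan_direction_hint`, rows with different discriminators order by discriminator, and only rows with equal discriminators compare their underlying values. A sketch of that rule (simplified `Row` type is ours; discriminator 255 plays the role of `NULL_DISCRIMINATOR`):

```cpp
#include <cassert>
#include <cstdint>

constexpr uint8_t NULL_DISCR = 255;   // stands in for NULL_DISCRIMINATOR

struct Row
{
    uint8_t discr;   // which variant the row belongs to
    long value;      // the value inside that variant (toy: always a long)
};

// Three-step comparison mirroring the diff: NULL handling first, then
// discriminator order, then the actual values.
inline int compareRows(Row a, Row b, int nan_direction_hint)
{
    if (a.discr == NULL_DISCR && b.discr == NULL_DISCR)
        return 0;
    if (a.discr == NULL_DISCR)
        return nan_direction_hint;
    if (b.discr == NULL_DISCR)
        return -nan_direction_hint;

    if (a.discr != b.discr)
        return a.discr < b.discr ? -1 : 1;

    return a.value < b.value ? -1 : (a.value == b.value ? 0 : 1);
}
```

Routing `nan_direction_hint` through the NULL branches is what lets `NULLS FIRST` / `NULLS LAST` work uniformly with the generic permutation machinery.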
@@ -7,11 +7,6 @@
 namespace DB
 {
 
-namespace ErrorCodes
-{
-    extern const int NOT_IMPLEMENTED;
-}
-
 /**
  * Column for storing Variant(...) type values.
  * Variant type represents a union of other data types.
@@ -70,6 +65,14 @@ public:
     static constexpr UInt8 NULL_DISCRIMINATOR = std::numeric_limits<Discriminator>::max(); /// 255
     static constexpr size_t MAX_NESTED_COLUMNS = std::numeric_limits<Discriminator>::max(); /// 255
 
+    struct ComparatorBase;
+
+    using ComparatorAscendingUnstable = ComparatorAscendingUnstableImpl<ComparatorBase>;
+    using ComparatorAscendingStable = ComparatorAscendingStableImpl<ComparatorBase>;
+    using ComparatorDescendingUnstable = ComparatorDescendingUnstableImpl<ComparatorBase>;
+    using ComparatorDescendingStable = ComparatorDescendingStableImpl<ComparatorBase>;
+    using ComparatorEqual = ComparatorEqualImpl<ComparatorBase>;
+
 private:
     friend class COWHelper<IColumn, ColumnVariant>;
 
@@ -174,6 +177,7 @@ public:
     StringRef getDataAt(size_t n) const override;
     void insertData(const char * pos, size_t length) override;
     void insert(const Field & x) override;
+    bool tryInsert(const Field & x) override;
     void insertIntoVariant(const Field & x, Discriminator global_discr);
     void insertFrom(const IColumn & src_, size_t n) override;
     void insertRangeFrom(const IColumn & src, size_t start, size_t length) override;
@@ -197,21 +201,16 @@ public:
     MutableColumns scatter(ColumnIndex num_columns, const Selector & selector) const override;
     void gather(ColumnGathererStream & gatherer_stream) override;
 
-    /// Variant type is not comparable.
-    int compareAt(size_t, size_t, const IColumn &, int) const override
-    {
-        return 0;
-    }
-
-    void compareColumn(const IColumn &, size_t, PaddedPODArray<UInt64> *, PaddedPODArray<Int8> &, int, int) const override
-    {
-        throw Exception(ErrorCodes::NOT_IMPLEMENTED, "Method compareColumn is not supported for ColumnVariant");
-    }
+    int compareAt(size_t n, size_t m, const IColumn & rhs, int nan_direction_hint) const override;
+    void compareColumn(const IColumn & rhs, size_t rhs_row_num,
+                       PaddedPODArray<UInt64> * row_indexes, PaddedPODArray<Int8> & compare_results,
+                       int direction, int nan_direction_hint) const override;
 
     bool hasEqualValues() const override;
     void getExtremes(Field & min, Field & max) const override;
     void getPermutation(IColumn::PermutationSortDirection direction, IColumn::PermutationSortStability stability,
                         size_t limit, int nan_direction_hint, IColumn::Permutation & res) const override;
 
     void updatePermutation(IColumn::PermutationSortDirection direction, IColumn::PermutationSortStability stability,
                            size_t limit, int nan_direction_hint, IColumn::Permutation & res, EqualRanges & equal_ranges) const override;
@@ -240,6 +240,14 @@ public:
         data.push_back(static_cast<T>(x.get<T>()));
     }
 
+    bool tryInsert(const DB::Field & x) override
+    {
+        NearestFieldType<T> value;
+        if (!x.tryGet<NearestFieldType<T>>(value))
+            return false;
+        data.push_back(static_cast<T>(value));
+        return true;
+    }
     void insertRangeFrom(const IColumn & src, size_t start, size_t length) override;
 
     ColumnPtr filter(const IColumn::Filter & filt, ssize_t result_size_hint) const override;
@@ -166,6 +166,10 @@ public:
     /// Is used to transform raw strings to Blocks (for example, inside input format parsers)
     virtual void insert(const Field & x) = 0;
 
+    /// Appends new value at the end of the column if it has appropriate type and
+    /// returns true if insert is successful and false otherwise.
+    virtual bool tryInsert(const Field & x) = 0;
+
     /// Appends n-th element from other column with the same type.
     /// Is used in merge-sort and merges. It could be implemented in inherited classes more optimally than default implementation.
     virtual void insertFrom(const IColumn & src, size_t n);
@@ -36,6 +36,7 @@ public:
     Field operator[](size_t) const override;
     void get(size_t, Field &) const override;
     void insert(const Field &) override;
+    bool tryInsert(const Field &) override { return false; }
     bool isDefaultAt(size_t) const override;
 
     StringRef getDataAt(size_t) const override
@@ -37,6 +37,10 @@ public:
     /// Is used to transform raw strings to Blocks (for example, inside input format parsers)
     virtual size_t uniqueInsert(const Field & x) = 0;
 
+    /// Appends new value at the end of column if value has appropriate type (column's size is increased by 1).
+    /// Return true if value is inserted and set @index to inserted value index and false otherwise.
+    virtual bool tryUniqueInsert(const Field & x, size_t & index) = 0;
+
     virtual size_t uniqueInsertFrom(const IColumn & src, size_t n) = 0;
     /// Appends range of elements from other column.
     /// Could be used to concatenate columns.
@@ -76,6 +80,11 @@ public:
         throw Exception(ErrorCodes::NOT_IMPLEMENTED, "Method insert is not supported for ColumnUnique.");
     }
 
+    bool tryInsert(const Field &) override
+    {
+        throw Exception(ErrorCodes::NOT_IMPLEMENTED, "Method tryInsert is not supported for ColumnUnique.");
+    }
+
     void insertRangeFrom(const IColumn &, size_t, size_t) override
     {
         throw Exception(ErrorCodes::NOT_IMPLEMENTED, "Method insertRangeFrom is not supported for ColumnUnique.");
@@ -203,7 +203,7 @@ uint64_t CgroupsMemoryUsageObserver::File::readMemoryUsage() const
     ReadBufferFromFileDescriptor buf(fd);
     buf.rewind();
 
-    uint64_t mem_usage;
+    uint64_t mem_usage = 0;
 
     switch (version)
     {
@@ -214,6 +214,8 @@ uint64_t CgroupsMemoryUsageObserver::File::readMemoryUsage() const
             /// rss 15
             /// [...]
             std::string key;
+            bool found_rss = false;
+
             while (!buf.eof())
             {
                 readStringUntilWhitespace(key, buf);
@@ -221,14 +223,20 @@ uint64_t CgroupsMemoryUsageObserver::File::readMemoryUsage() const
                 {
                     std::string dummy;
                     readStringUntilNewlineInto(dummy, buf);
                     buf.ignore();
                     continue;
                 }
 
                 assertChar(' ', buf);
                 readIntText(mem_usage, buf);
+                assertChar('\n', buf);
+                found_rss = true;
                 break;
             }
-            throw Exception(ErrorCodes::INCORRECT_DATA, "Cannot find 'rss' in '{}'", file_name);
+
+            if (!found_rss)
+                throw Exception(ErrorCodes::INCORRECT_DATA, "Cannot find 'rss' in '{}'", file_name);
+
             break;
         }
         case CgroupsVersion::V2:
         {
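The cgroups v1 fix above scans `memory.stat` key by key, remembers whether `rss` was seen, and only reports an error after the whole file has been scanned without finding it. A self-contained sketch of that scan (parsing simplified to an `istringstream`; the ClickHouse version reads from a file descriptor with its own buffer helpers):

```cpp
#include <cassert>
#include <cstdint>
#include <sstream>
#include <stdexcept>
#include <string>

// Scan "key value" lines of a memory.stat-style document for the "rss" key.
// The found_rss flag mirrors the fixed control flow: throw only when the key
// is genuinely absent, not merely when it is not the first line.
inline uint64_t readRss(const std::string & memory_stat)
{
    std::istringstream in(memory_stat);
    std::string key;
    uint64_t value = 0;
    bool found_rss = false;

    while (in >> key >> value)
    {
        if (key == "rss")
        {
            found_rss = true;
            break;
        }
    }

    if (!found_rss)
        throw std::runtime_error("Cannot find 'rss' in memory.stat");
    return value;
}
```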
@@ -58,9 +58,10 @@ void moveChangelogBetweenDisks(
     const std::string & path_to,
     const KeeperContextPtr & keeper_context)
 {
+    auto path_from = description->path;
     moveFileBetweenDisks(
         disk_from,
-        description->path,
+        path_from,
         disk_to,
         path_to,
         [&]
@@ -160,6 +160,7 @@ class IColumn;
     M(Bool, allow_suspicious_fixed_string_types, false, "In CREATE TABLE statement allows creating columns of type FixedString(n) with n > 256. FixedString with length >= 256 is suspicious and most likely indicates misuse", 0) \
     M(Bool, allow_suspicious_indices, false, "Reject primary/secondary indexes and sorting keys with identical expressions", 0) \
     M(Bool, allow_suspicious_ttl_expressions, false, "Reject TTL expressions that don't depend on any of table's columns. It indicates a user error most of the time.", 0) \
+    M(Bool, allow_suspicious_variant_types, false, "In CREATE TABLE statement allows specifying Variant type with similar variant types (for example, with different numeric or date types). Enabling this setting may introduce some ambiguity when working with values with similar types.", 0) \
     M(Bool, compile_expressions, false, "Compile some scalar functions and operators to native code.", 0) \
     M(UInt64, min_count_to_compile_expression, 3, "The number of identical expressions before they are JIT-compiled", 0) \
     M(Bool, compile_aggregate_expressions, true, "Compile aggregate functions to native code.", 0) \
@@ -1121,6 +1122,7 @@ class IColumn;
     M(Bool, output_format_enable_streaming, false, "Enable streaming in output formats that support it.", 0) \
     M(Bool, output_format_write_statistics, true, "Write statistics about read rows, bytes, time elapsed in suitable output formats.", 0) \
     M(Bool, output_format_pretty_row_numbers, false, "Add row numbers before each row for pretty output format", 0) \
+    M(UInt64, output_format_pretty_single_large_number_tip_threshold, 1'000'000, "Print a readable number tip on the right side of the table if the block consists of a single number which exceeds this value (except 0)", 0) \
     M(Bool, insert_distributed_one_random_shard, false, "If setting is enabled, inserting into distributed table will choose a random shard to write when there is no sharding key", 0) \
     \
     M(Bool, exact_rows_before_limit, false, "When enabled, ClickHouse will provide exact value for rows_before_limit_at_least statistic, but with the cost that the data before limit will have to be read completely", 0) \
@@ -86,8 +86,10 @@ namespace SettingsChangesHistory
 static std::map<ClickHouseVersion, SettingsChangesHistory::SettingsChanges> settings_changes_history =
 {
     {"24.2", {
+        {"allow_suspicious_variant_types", true, false, "Don't allow creating Variant type with suspicious variants by default"},
         {"validate_experimental_and_suspicious_types_inside_nested_types", false, true, "Validate usage of experimental and suspicious types inside nested types"},
         {"output_format_values_escape_quote_with_quote", false, false, "If true escape ' with '', otherwise quoted with \\'"},
+        {"output_format_pretty_single_large_number_tip_threshold", 0, 1'000'000, "Print a readable number tip on the right side of the table if the block consists of a single number which exceeds this value (except 0)"},
         {"input_format_try_infer_exponent_floats", true, false, "Don't infer floats in exponential notation by default"},
         {"query_plan_optimize_prewhere", true, true, "Allow to push down filter to PREWHERE expression for supported storages"},
         {"async_insert_max_data_size", 1000000, 10485760, "The previous value appeared to be too small."},
@@ -134,9 +134,9 @@ bool DataTypeVariant::hasDynamicSubcolumns() const
     return std::any_of(variants.begin(), variants.end(), [](auto && elem) { return elem->hasDynamicSubcolumns(); });
 }
 
-std::optional<ColumnVariant::Discriminator> DataTypeVariant::tryGetVariantDiscriminator(const DataTypePtr & type) const
+std::optional<ColumnVariant::Discriminator> DataTypeVariant::tryGetVariantDiscriminator(const IDataType & type) const
 {
-    String type_name = type->getName();
+    String type_name = type.getName();
     for (size_t i = 0; i != variants.size(); ++i)
     {
         /// We don't use equals here, because it doesn't respect custom type names.
@@ -52,7 +52,7 @@ public:
     const DataTypes & getVariants() const { return variants; }
 
     /// Check if Variant has provided type in the list of variants and return its discriminator.
-    std::optional<ColumnVariant::Discriminator> tryGetVariantDiscriminator(const DataTypePtr & type) const;
+    std::optional<ColumnVariant::Discriminator> tryGetVariantDiscriminator(const IDataType & type) const;
 
     void forEachChild(const ChildCallback & callback) const override;
 
@@ -208,5 +208,6 @@ DataTypePtr FieldToDataType<on_error>::operator()(const bool &) const
 template class FieldToDataType<LeastSupertypeOnError::Throw>;
 template class FieldToDataType<LeastSupertypeOnError::String>;
 template class FieldToDataType<LeastSupertypeOnError::Null>;
+template class FieldToDataType<LeastSupertypeOnError::Variant>;
 
 }
@@ -640,7 +640,8 @@ DataTypePtr getLeastSupertypeOrString(const DataTypes & types)
     return getLeastSupertype<LeastSupertypeOnError::String>(types);
 }
 
-DataTypePtr getLeastSupertypeOrVariant(const DataTypes & types)
+template<>
+DataTypePtr getLeastSupertype<LeastSupertypeOnError::Variant>(const DataTypes & types)
 {
     auto common_type = getLeastSupertype<LeastSupertypeOnError::Null>(types);
     if (common_type)
@@ -666,6 +667,11 @@ DataTypePtr getLeastSupertypeOrVariant(const DataTypes & types)
     return std::make_shared<DataTypeVariant>(variants);
 }
 
+DataTypePtr getLeastSupertypeOrVariant(const DataTypes & types)
+{
+    return getLeastSupertype<LeastSupertypeOnError::Variant>(types);
+}
+
 DataTypePtr tryGetLeastSupertype(const DataTypes & types)
 {
     return getLeastSupertype<LeastSupertypeOnError::Null>(types);
@@ -9,6 +9,7 @@ enum class LeastSupertypeOnError
     Throw,
     String,
     Null,
+    Variant,
 };
 
 /** Get data type that covers all possible values of passed data types.
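The `LeastSupertypeOnError::Variant` mode above changes the failure path of supertype resolution: first try to find an ordinary common type, and only when none exists fall back to wrapping the distinct types into a `Variant(...)`. A toy sketch of that fallback, with types modeled as strings and a deliberately tiny promotion rule (both are our stand-ins, not ClickHouse's real type lattice):

```cpp
#include <cassert>
#include <set>
#include <string>
#include <vector>

// Try a common supertype first; otherwise build a Variant of the distinct
// types. The single "Int32 + Int64 -> Int64" rule is a toy placeholder for
// the real getLeastSupertype logic.
inline std::string leastSupertypeOrVariant(const std::vector<std::string> & types)
{
    std::set<std::string> distinct(types.begin(), types.end());
    if (distinct.size() == 1)
        return *distinct.begin();                        // trivial common type
    if (distinct == std::set<std::string>{"Int32", "Int64"})
        return "Int64";                                  // toy numeric promotion

    // No common supertype: fall back to Variant of the distinct types.
    std::string variant = "Variant(";
    for (const auto & t : distinct)
        variant += (variant.back() == '(' ? "" : ", ") + t;
    variant += ")";
    return variant;
}
```

This mirrors why the `use_variant_as_common_type` setting later in this commit can type heterogeneous literals without throwing.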
@@ -975,7 +975,6 @@ void HashedArrayDictionary<dictionary_key_type, sharded>::loadData()
 {
     if (!source_ptr->hasUpdateField())
     {
-
         std::optional<DictionaryParallelLoaderType> parallel_loader;
         if constexpr (sharded)
             parallel_loader.emplace(*this);
@@ -988,6 +987,7 @@ void HashedArrayDictionary<dictionary_key_type, sharded>::loadData()
 
         size_t total_rows = 0;
         size_t total_blocks = 0;
+        String dictionary_name = getFullName();
 
         Block block;
         while (true)
@@ -1007,7 +1007,7 @@ void HashedArrayDictionary<dictionary_key_type, sharded>::loadData()
 
             if (parallel_loader)
             {
-                parallel_loader->addBlock(block);
+                parallel_loader->addBlock(std::move(block));
             }
             else
             {
@@ -1020,10 +1020,12 @@ void HashedArrayDictionary<dictionary_key_type, sharded>::loadData()
         if (parallel_loader)
             parallel_loader->finish();
 
-        LOG_DEBUG(getLogger("HashedArrayDictionary"),
-            "Finished {}reading {} blocks with {} rows from pipeline in {:.2f} sec and inserted into hashtable in {:.2f} sec",
+        LOG_DEBUG(log,
+            "Finished {}reading {} blocks with {} rows to dictionary {} from pipeline in {:.2f} sec and inserted into hashtable in {:.2f} sec",
             configuration.use_async_executor ? "asynchronous " : "",
-            total_blocks, total_rows, pull_time_microseconds / 1000000.0, process_time_microseconds / 1000000.0);
+            total_blocks, total_rows,
+            dictionary_name,
+            pull_time_microseconds / 1000000.0, process_time_microseconds / 1000000.0);
     }
     else
     {
@@ -1167,6 +1169,9 @@ void registerDictionaryArrayHashed(DictionaryFactory & factory)
 
         HashedArrayDictionaryStorageConfiguration configuration{require_nonempty, dict_lifetime, static_cast<size_t>(shards)};
 
+        if (source_ptr->hasUpdateField() && shards > 1)
+            throw Exception(ErrorCodes::BAD_ARGUMENTS, "{}: SHARDS parameter does not supports for updatable source (UPDATE_FIELD)", full_name);
+
         ContextMutablePtr context = copyContextAndApplySettingsFromDictionaryConfig(global_context, config, config_prefix);
         const auto & settings = context->getSettingsRef();
 
@@ -374,9 +374,10 @@ HashedDictionary<dictionary_key_type, sparse, sharded>::~HashedDictionary()
         }
     }
 
-    LOG_TRACE(log, "Destroying {} non empty hash tables (using {} threads)", hash_tables_count, pool.getMaxThreads());
+    String dictionary_name = getFullName();
+    LOG_TRACE(log, "Destroying {} non empty hash tables for dictionary {} (using {} threads) ", hash_tables_count, dictionary_name, pool.getMaxThreads());
     pool.wait();
-    LOG_TRACE(log, "Hash tables destroyed");
+    LOG_TRACE(log, "Hash tables for dictionary {} destroyed", dictionary_name);
 }
 
 template <DictionaryKeyType dictionary_key_type, bool sparse, bool sharded>
@@ -833,6 +834,7 @@ void HashedDictionary<dictionary_key_type, sparse, sharded>::createAttributes()
     attributes.reserve(size);
 
     HashTableGrowerWithPrecalculationAndMaxLoadFactor grower(configuration.max_load_factor);
+    String dictionary_name = getFullName();
 
     for (const auto & dictionary_attribute : dict_struct.attributes)
     {
@@ -862,9 +864,9 @@ void HashedDictionary<dictionary_key_type, sparse, sharded>::createAttributes()
         }
 
         if constexpr (IsBuiltinHashTable<typename CollectionsHolder<ValueType>::value_type>)
-            LOG_TRACE(log, "Using builtin hash table for {} attribute", dictionary_attribute.name);
+            LOG_TRACE(log, "Using builtin hash table for {} attribute of {}", dictionary_attribute.name, dictionary_name);
         else
-            LOG_TRACE(log, "Using sparsehash for {} attribute", dictionary_attribute.name);
+            LOG_TRACE(log, "Using sparsehash for {} attribute of {}", dictionary_attribute.name, dictionary_name);
     };
 
     callOnDictionaryAttributeType(dictionary_attribute.underlying_type, type_call);
@@ -46,12 +46,13 @@ class HashedDictionaryParallelLoader : public boost::noncopyable
 public:
     explicit HashedDictionaryParallelLoader(DictionaryType & dictionary_)
         : dictionary(dictionary_)
+        , dictionary_name(dictionary.getFullName())
         , shards(dictionary.configuration.shards)
         , pool(CurrentMetrics::HashedDictionaryThreads, CurrentMetrics::HashedDictionaryThreadsActive, CurrentMetrics::HashedDictionaryThreadsScheduled, shards)
         , shards_queues(shards)
     {
         UInt64 backlog = dictionary.configuration.shard_load_queue_backlog;
-        LOG_TRACE(dictionary.log, "Will load the dictionary using {} threads (with {} backlog)", shards, backlog);
+        LOG_TRACE(dictionary.log, "Will load the {} dictionary using {} threads (with {} backlog)", dictionary_name, shards, backlog);
 
         shards_slots.resize(shards);
         iota(shards_slots.data(), shards_slots.size(), UInt64(0));
@@ -82,6 +83,7 @@ public:
     {
         IColumn::Selector selector = createShardSelector(block, shards_slots);
         Blocks shards_blocks = splitBlock(selector, block);
+        block.clear();
 
         for (size_t shard = 0; shard < shards; ++shard)
         {
@@ -98,7 +100,7 @@ public:
         Stopwatch watch;
         pool.wait();
         UInt64 elapsed_ms = watch.elapsedMilliseconds();
-        LOG_TRACE(dictionary.log, "Processing the tail took {}ms", elapsed_ms);
+        LOG_TRACE(dictionary.log, "Processing the tail of dictionary {} took {}ms", dictionary_name, elapsed_ms);
     }
 
     ~HashedDictionaryParallelLoader()
@@ -119,6 +121,7 @@ public:
 
 private:
     DictionaryType & dictionary;
+    String dictionary_name;
     const size_t shards;
     ThreadPool pool;
     std::vector<std::optional<ConcurrentBoundedQueue<Block>>> shards_queues;
@@ -80,7 +80,7 @@ void registerDictionaryHashed(DictionaryFactory & factory)
     };
 
     if (source_ptr->hasUpdateField() && shards > 1)
-        throw Exception(ErrorCodes::BAD_ARGUMENTS,"{}: SHARDS parameter does not supports for updatable source (UPDATE_FIELD)", full_name);
+        throw Exception(ErrorCodes::BAD_ARGUMENTS, "{}: SHARDS parameter does not supports for updatable source (UPDATE_FIELD)", full_name);
 
     if (dictionary_key_type == DictionaryKeyType::Simple)
     {
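The parallel loader touched above works by building a per-row shard selector and scattering each incoming block into per-shard batches that worker threads consume (the added `block.clear()` just releases the source block early). A sketch of the scatter step, reduced to plain keys and a hash-modulo selector (the real code splits whole `Block`s and pushes them into bounded queues):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <vector>

// Scatter keys into per-shard batches: shard = hash(key) % shards. This is a
// simplified stand-in for createShardSelector + splitBlock in the diff.
inline std::vector<std::vector<uint64_t>> splitToShards(const std::vector<uint64_t> & keys, std::size_t shards)
{
    std::vector<std::vector<uint64_t>> per_shard(shards);
    for (uint64_t key : keys)
        per_shard[std::hash<uint64_t>{}(key) % shards].push_back(key);
    return per_shard;
}
```

Keying the shard choice on a hash of the key (rather than, say, row order) is what makes the split deterministic, so lookups can later locate the right shard's hash table.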
@@ -150,6 +150,7 @@ FormatSettings getFormatSettings(const ContextPtr & context, const Settings & settings)
     format_settings.pretty.max_rows = settings.output_format_pretty_max_rows;
     format_settings.pretty.max_value_width = settings.output_format_pretty_max_value_width;
     format_settings.pretty.output_format_pretty_row_numbers = settings.output_format_pretty_row_numbers;
+    format_settings.pretty.output_format_pretty_single_large_number_tip_threshold = settings.output_format_pretty_single_large_number_tip_threshold;
     format_settings.protobuf.input_flatten_google_wrappers = settings.input_format_protobuf_flatten_google_wrappers;
     format_settings.protobuf.output_nullables_with_google_wrappers = settings.output_format_protobuf_nullables_with_google_wrappers;
     format_settings.protobuf.skip_fields_with_unsupported_types_in_schema_inference = settings.input_format_protobuf_skip_fields_with_unsupported_types_in_schema_inference;
@@ -277,6 +277,7 @@ struct FormatSettings
         SettingFieldUInt64Auto color{"auto"};
 
         bool output_format_pretty_row_numbers = false;
+        UInt64 output_format_pretty_single_large_number_tip_threshold = 1'000'000;
 
         enum class Charset
         {
@@ -4213,7 +4213,7 @@ arguments, result_type, input_rows_count); \
     };
 }
 
-auto variant_discr_opt = to_variant.tryGetVariantDiscriminator(removeNullableOrLowCardinalityNullable(from_type));
+auto variant_discr_opt = to_variant.tryGetVariantDiscriminator(*removeNullableOrLowCardinalityNullable(from_type));
 if (!variant_discr_opt)
     throw Exception(ErrorCodes::CANNOT_CONVERT_TYPE, "Cannot convert type {} to {}. Conversion to Variant allowed only for types from this Variant", from_type->getName(), to_variant.getName());
 
@@ -91,7 +91,8 @@ static inline void writeProbablyQuotedStringImpl(StringRef s, WriteBuffer & buf,
if (isValidIdentifier(s.toView())
    /// These are valid identifiers but are problematic if present unquoted in SQL query.
    && !(s.size == strlen("distinct") && 0 == strncasecmp(s.data, "distinct", strlen("distinct")))
    && !(s.size == strlen("all") && 0 == strncasecmp(s.data, "all", strlen("all"))))
    && !(s.size == strlen("all") && 0 == strncasecmp(s.data, "all", strlen("all")))
    && !(s.size == strlen("table") && 0 == strncasecmp(s.data, "table", strlen("table"))))
{
    writeString(s, buf);
}

@@ -1322,7 +1322,12 @@ void ActionsMatcher::visit(const ASTFunction & node, const ASTPtr & ast, Data &
void ActionsMatcher::visit(const ASTLiteral & literal, const ASTPtr & /* ast */,
    Data & data)
{
    DataTypePtr type = applyVisitor(FieldToDataType(), literal.value);
    DataTypePtr type;
    if (data.getContext()->getSettingsRef().allow_experimental_variant_type && data.getContext()->getSettingsRef().use_variant_as_common_type)
        type = applyVisitor(FieldToDataType<LeastSupertypeOnError::Variant>(), literal.value);
    else
        type = applyVisitor(FieldToDataType(), literal.value);

    const auto value = convertFieldToType(literal.value, *type);

    // FIXME why do we have a second pass with a clean sample block over the same

@@ -19,7 +19,7 @@ BlockIO InterpreterSetQuery::execute()
getContext()->checkSettingsConstraints(ast.changes, SettingSource::QUERY);
auto session_context = getContext()->getSessionContext();
session_context->applySettingsChanges(ast.changes);
session_context->addQueryParameters(ast.query_parameters);
session_context->addQueryParameters(NameToNameMap{ast.query_parameters.begin(), ast.query_parameters.end()});
session_context->resetSettingsToDefaultValue(ast.default_settings);
return {};
}
@@ -20,7 +20,6 @@
#include <Interpreters/ActionLocksManager.h>
#include <Interpreters/InterpreterCreateQuery.h>
#include <Interpreters/InterpreterRenameQuery.h>
#include <Interpreters/QueryLog.h>
#include <Interpreters/executeDDLQueryOnCluster.h>
#include <Interpreters/QueryThreadLog.h>
#include <Interpreters/QueryViewsLog.h>

@@ -36,7 +35,6 @@
#include <Interpreters/ProcessorsProfileLog.h>
#include <Interpreters/AsynchronousInsertLog.h>
#include <Interpreters/BackupLog.h>
#include <IO/S3/BlobStorageLogWriter.h>
#include <Interpreters/JIT/CompiledExpressionCache.h>
#include <Interpreters/TransactionLog.h>
#include <Interpreters/AsynchronousInsertQueue.h>

@@ -44,7 +42,6 @@
#include <Access/AccessControl.h>
#include <Access/ContextAccess.h>
#include <Access/Common/AllowedClientHosts.h>
#include <Databases/IDatabase.h>
#include <Databases/DatabaseReplicated.h>
#include <Disks/ObjectStorages/IMetadataStorage.h>
#include <Storages/StorageDistributed.h>

@@ -362,18 +359,22 @@ BlockIO InterpreterSystemQuery::execute()
getContext()->checkAccess(AccessType::SYSTEM_DROP_QUERY_CACHE);
getContext()->clearQueryCache();
break;
#if USE_EMBEDDED_COMPILER
case Type::DROP_COMPILED_EXPRESSION_CACHE:
#if USE_EMBEDDED_COMPILER
    getContext()->checkAccess(AccessType::SYSTEM_DROP_COMPILED_EXPRESSION_CACHE);
    if (auto * cache = CompiledExpressionCacheFactory::instance().tryGetCache())
        cache->clear();
    break;
#else
    throw Exception(ErrorCodes::SUPPORT_IS_DISABLED, "The server was compiled without the support for JIT compilation");
#endif
#if USE_AWS_S3
case Type::DROP_S3_CLIENT_CACHE:
#if USE_AWS_S3
    getContext()->checkAccess(AccessType::SYSTEM_DROP_S3_CLIENT_CACHE);
    S3::ClientCacheRegistry::instance().clearCacheForAll();
    break;
#else
    throw Exception(ErrorCodes::SUPPORT_IS_DISABLED, "The server was compiled without the support for AWS S3");
#endif

case Type::DROP_FILESYSTEM_CACHE:
@@ -768,6 +769,12 @@ BlockIO InterpreterSystemQuery::execute()
    flushJemallocProfile("/tmp/jemalloc_clickhouse");
    break;
}
#else
case Type::JEMALLOC_PURGE:
case Type::JEMALLOC_ENABLE_PROFILE:
case Type::JEMALLOC_DISABLE_PROFILE:
case Type::JEMALLOC_FLUSH_PROFILE:
    throw Exception(ErrorCodes::SUPPORT_IS_DISABLED, "The server was compiled without JEMalloc");
#endif
default:
    throw Exception(ErrorCodes::BAD_ARGUMENTS, "Unknown type of SYSTEM query");

@@ -1081,7 +1088,9 @@ void InterpreterSystemQuery::syncReplica(ASTSystemQuery & query)
{
    LOG_TRACE(log, "Synchronizing entries in replica's queue with table's log and waiting for current last entry to be processed");
    auto sync_timeout = getContext()->getSettingsRef().receive_timeout.totalMilliseconds();
    if (!storage_replicated->waitForProcessingQueue(sync_timeout, query.sync_replica_mode, query.src_replicas))

    std::unordered_set<std::string> replicas(query.src_replicas.begin(), query.src_replicas.end());
    if (!storage_replicated->waitForProcessingQueue(sync_timeout, query.sync_replica_mode, replicas))
    {
        LOG_ERROR(log, "SYNC REPLICA {}: Timed out.", table_id.getNameForLogs());
        throw Exception(ErrorCodes::TIMEOUT_EXCEEDED, "SYNC REPLICA {}: command timed out. " \

@@ -1186,9 +1195,7 @@ AccessRightsElements InterpreterSystemQuery::getRequiredAccessForDDLOnCluster()
case Type::DROP_MARK_CACHE:
case Type::DROP_MMAP_CACHE:
case Type::DROP_QUERY_CACHE:
#if USE_EMBEDDED_COMPILER
case Type::DROP_COMPILED_EXPRESSION_CACHE:
#endif
case Type::DROP_UNCOMPRESSED_CACHE:
case Type::DROP_INDEX_MARK_CACHE:
case Type::DROP_INDEX_UNCOMPRESSED_CACHE:

@@ -1196,9 +1203,7 @@ AccessRightsElements InterpreterSystemQuery::getRequiredAccessForDDLOnCluster()
case Type::SYNC_FILESYSTEM_CACHE:
case Type::DROP_SCHEMA_CACHE:
case Type::DROP_FORMAT_SCHEMA_CACHE:
#if USE_AWS_S3
case Type::DROP_S3_CLIENT_CACHE:
#endif
{
    required_access.emplace_back(AccessType::SYSTEM_DROP_CACHE);
    break;

@@ -1414,7 +1419,6 @@ AccessRightsElements InterpreterSystemQuery::getRequiredAccessForDDLOnCluster()
    required_access.emplace_back(AccessType::SYSTEM_LISTEN);
    break;
}
#if USE_JEMALLOC
case Type::JEMALLOC_PURGE:
case Type::JEMALLOC_ENABLE_PROFILE:
case Type::JEMALLOC_DISABLE_PROFILE:

@@ -1423,7 +1427,6 @@ AccessRightsElements InterpreterSystemQuery::getRequiredAccessForDDLOnCluster()
    required_access.emplace_back(AccessType::SYSTEM_JEMALLOC);
    break;
}
#endif
case Type::STOP_THREAD_FUZZER:
case Type::START_THREAD_FUZZER:
case Type::ENABLE_FAILPOINT:
@@ -16,6 +16,7 @@
#include <DataTypes/DataTypeLowCardinality.h>
#include <DataTypes/DataTypeNullable.h>
#include <DataTypes/DataTypeAggregateFunction.h>
#include <DataTypes/DataTypeVariant.h>

#include <Core/AccurateComparison.h>

@@ -487,6 +488,18 @@ Field convertFieldToTypeImpl(const Field & src, const IDataType & type, const ID
        return object;
    }
}
else if (const DataTypeVariant * type_variant = typeid_cast<const DataTypeVariant *>(&type))
{
    /// If we have type hint and Variant contains such type, no need to convert field.
    if (from_type_hint && type_variant->tryGetVariantDiscriminator(*from_type_hint))
        return src;

    /// Create temporary column and check if we can insert this field to the variant.
    /// If we can insert, no need to convert anything.
    auto col = type_variant->createColumn();
    if (col->tryInsert(src))
        return src;
}

/// Conversion from string by parsing.
if (src.getType() == Field::Types::String)

@@ -103,6 +103,7 @@ namespace ErrorCodes
    extern const int NOT_IMPLEMENTED;
    extern const int QUERY_WAS_CANCELLED;
    extern const int INCORRECT_DATA;
    extern const int SYNTAX_ERROR;
}

namespace FailPoints
@@ -726,16 +727,32 @@ static std::tuple<ASTPtr, BlockIO> executeQueryImpl(
/// TODO: parser should fail early when max_query_size limit is reached.
ast = parseQuery(parser, begin, end, "", max_query_size, settings.max_parser_depth);

#if 0
#ifndef NDEBUG
/// Verify that AST formatting is consistent:
/// If you format AST, parse it back, and format it again, you get the same string.

String formatted1 = ast->formatWithPossiblyHidingSensitiveData(0, true, true);

ASTPtr ast2 = parseQuery(parser,
    formatted1.data(),
    formatted1.data() + formatted1.size(),
    "", max_query_size, settings.max_parser_depth);
/// The query can become more verbose after formatting, so:
size_t new_max_query_size = max_query_size > 0 ? (1000 + 2 * max_query_size) : 0;

ASTPtr ast2;
try
{
    ast2 = parseQuery(parser,
        formatted1.data(),
        formatted1.data() + formatted1.size(),
        "", new_max_query_size, settings.max_parser_depth);
}
catch (const Exception & e)
{
    if (e.code() == ErrorCodes::SYNTAX_ERROR)
        throw Exception(ErrorCodes::LOGICAL_ERROR,
            "Inconsistent AST formatting: the query:\n{}\ncannot parse.",
            formatted1);
    else
        throw;
}

chassert(ast2);
@@ -121,7 +121,12 @@ Block getHeaderForProcessingStage(

auto & table_expression_data = query_info.planner_context->getTableExpressionDataOrThrow(left_table_expression);
const auto & query_context = query_info.planner_context->getQueryContext();
auto columns = table_expression_data.getColumns();

NamesAndTypes columns;
const auto & column_name_to_column = table_expression_data.getColumnNameToColumn();
for (const auto & column_name : table_expression_data.getSelectedColumnsNames())
    columns.push_back(column_name_to_column.at(column_name));

auto new_query_node = buildSubqueryToReadColumnsFromTableExpression(columns, left_table_expression, query_context);
query = new_query_node->toAST();
}

@@ -1,21 +1,23 @@
#include <DataTypes/DataTypeFixedString.h>
#include <DataTypes/DataTypeLowCardinality.h>
#include <DataTypes/DataTypeNullable.h>
#include <DataTypes/DataTypeVariant.h>
#include <DataTypes/getLeastSupertype.h>
#include <Interpreters/Context.h>
#include <Interpreters/InterpreterCreateQuery.h>
#include <Interpreters/parseColumnsListForTableFunction.h>
#include <Parsers/ASTExpressionList.h>
#include <Parsers/ParserCreateQuery.h>
#include <Parsers/parseQuery.h>
#include <Interpreters/InterpreterCreateQuery.h>
#include <Interpreters/Context.h>
#include <Interpreters/parseColumnsListForTableFunction.h>
#include <DataTypes/DataTypeLowCardinality.h>
#include <DataTypes/DataTypeFixedString.h>
#include <DataTypes/DataTypeNullable.h>

namespace DB
{

namespace ErrorCodes
{
    extern const int LOGICAL_ERROR;
    extern const int SUSPICIOUS_TYPE_FOR_LOW_CARDINALITY;
    extern const int ILLEGAL_COLUMN;
    extern const int LOGICAL_ERROR;
    extern const int SUSPICIOUS_TYPE_FOR_LOW_CARDINALITY;
    extern const int ILLEGAL_COLUMN;
}
@@ -73,6 +75,34 @@ void validateDataType(const DataTypePtr & type_to_check, const DataTypeValidatio
        data_type.getName());
    }
}

if (!settings.allow_suspicious_variant_types)
{
    if (const auto * variant_type = typeid_cast<const DataTypeVariant *>(&data_type))
    {
        const auto & variants = variant_type->getVariants();
        chassert(!variants.empty());
        for (size_t i = 0; i < variants.size() - 1; ++i)
        {
            for (size_t j = i + 1; j < variants.size(); ++j)
            {
                if (auto supertype = tryGetLeastSupertype(DataTypes{variants[i], variants[j]}))
                {
                    throw Exception(
                        ErrorCodes::ILLEGAL_COLUMN,
                        "Cannot create column with type '{}' because variants '{}' and '{}' have similar types and working with values "
                        "of these types may lead to ambiguity. "
                        "Consider using common single variant '{}' instead of these 2 variants or set setting allow_suspicious_variant_types = 1 "
                        "in order to allow it",
                        data_type.getName(),
                        variants[i]->getName(),
                        variants[j]->getName(),
                        supertype->getName());
                }
            }
        }
    }
}
};

validate_callback(*type_to_check);
@@ -105,7 +135,8 @@ bool tryParseColumnsListFromString(const std::string & structure, ColumnsDescrip

const char * start = structure.data();
const char * end = structure.data() + structure.size();
ASTPtr columns_list_raw = tryParseQuery(parser, start, end, error, false, "columns declaration list", false, settings.max_query_size, settings.max_parser_depth);
ASTPtr columns_list_raw = tryParseQuery(
    parser, start, end, error, false, "columns declaration list", false, settings.max_query_size, settings.max_parser_depth);
if (!columns_list_raw)
    return false;

@@ -19,6 +19,7 @@ struct DataTypeValidationSettings
, allow_experimental_object_type(settings.allow_experimental_object_type)
, allow_suspicious_fixed_string_types(settings.allow_suspicious_fixed_string_types)
, allow_experimental_variant_type(settings.allow_experimental_variant_type)
, allow_suspicious_variant_types(settings.allow_suspicious_variant_types)
, validate_nested_types(settings.validate_experimental_and_suspicious_types_inside_nested_types)
{
}

@@ -27,6 +28,7 @@ struct DataTypeValidationSettings
bool allow_experimental_object_type = true;
bool allow_suspicious_fixed_string_types = true;
bool allow_experimental_variant_type = true;
bool allow_suspicious_variant_types = true;
bool validate_nested_types = true;
};

@@ -456,13 +456,13 @@ void ASTAlterCommand::formatImpl(const FormatSettings & settings, FormatState &
}
else if (type == ASTAlterCommand::MODIFY_QUERY)
{
    settings.ostr << (settings.hilite ? hilite_keyword : "") << "MODIFY QUERY " << settings.nl_or_ws
    settings.ostr << (settings.hilite ? hilite_keyword : "") << "MODIFY QUERY" << settings.nl_or_ws
        << (settings.hilite ? hilite_none : "");
    select->formatImpl(settings, state, frame);
}
else if (type == ASTAlterCommand::MODIFY_REFRESH)
{
    settings.ostr << (settings.hilite ? hilite_keyword : "") << "MODIFY REFRESH " << settings.nl_or_ws
    settings.ostr << (settings.hilite ? hilite_keyword : "") << "MODIFY" << settings.nl_or_ws
        << (settings.hilite ? hilite_none : "");
    refresh->formatImpl(settings, state, frame);
}
@@ -630,16 +630,19 @@ void ASTAlterQuery::formatQueryImpl(const FormatSettings & settings, FormatState

if (table)
{
    settings.ostr << indent_str;
    if (database)
    {
        settings.ostr << indent_str << backQuoteIfNeed(getDatabase());
        settings.ostr << ".";
        database->formatImpl(settings, state, frame);
        settings.ostr << '.';
    }
    settings.ostr << indent_str << backQuoteIfNeed(getTable());

    table->formatImpl(settings, state, frame);
}
else if (alter_object == AlterObjectType::DATABASE && database)
{
    settings.ostr << indent_str << backQuoteIfNeed(getDatabase());
    settings.ostr << indent_str;
    database->formatImpl(settings, state, frame);
}

formatOnCluster(settings);

@@ -49,10 +49,11 @@ protected:
{
    if (database)
    {
        settings.ostr << (settings.hilite ? hilite_keyword : "") << indent_str << backQuoteIfNeed(getDatabase()) << (settings.hilite ? hilite_none : "");
        settings.ostr << ".";
        database->formatImpl(settings, state, frame);
        settings.ostr << '.';
    }
    settings.ostr << (settings.hilite ? hilite_keyword : "") << indent_str << backQuoteIfNeed(getTable()) << (settings.hilite ? hilite_none : "");

    table->formatImpl(settings, state, frame);
}

if (partition)

@@ -48,10 +48,11 @@ void ASTCreateIndexQuery::formatQueryImpl(const FormatSettings & settings, Forma
{
    if (database)
    {
        settings.ostr << indent_str << backQuoteIfNeed(getDatabase());
        settings.ostr << ".";
        database->formatImpl(settings, state, frame);
        settings.ostr << '.';
    }
    settings.ostr << indent_str << backQuoteIfNeed(getTable());

    table->formatImpl(settings, state, frame);
}

formatOnCluster(settings);
@@ -16,7 +16,7 @@ class ASTCreateIndexQuery : public ASTQueryWithTableAndOutput, public ASTQueryWi
public:
    ASTPtr index_name;

    /// Stores the IndexDeclaration here.
    /// Stores the ASTIndexDeclaration here.
    ASTPtr index_decl;

    bool if_not_exists{false};

@@ -272,8 +272,9 @@ void ASTCreateQuery::formatQueryImpl(const FormatSettings & settings, FormatStat
settings.ostr << (settings.hilite ? hilite_keyword : "")
    << (attach ? "ATTACH DATABASE " : "CREATE DATABASE ")
    << (if_not_exists ? "IF NOT EXISTS " : "")
    << (settings.hilite ? hilite_none : "")
    << backQuoteIfNeed(getDatabase());
    << (settings.hilite ? hilite_none : "");

database->formatImpl(settings, state, frame);

if (uuid != UUIDHelpers::Nil)
{

@@ -328,8 +329,15 @@ void ASTCreateQuery::formatQueryImpl(const FormatSettings & settings, FormatStat
settings.ostr << (settings.hilite ? hilite_keyword : "") << (temporary ? "TEMPORARY " : "")
    << what << " "
    << (if_not_exists ? "IF NOT EXISTS " : "")
    << (settings.hilite ? hilite_none : "")
    << (database ? backQuoteIfNeed(getDatabase()) + "." : "") << backQuoteIfNeed(getTable());
    << (settings.hilite ? hilite_none : "");

if (database)
{
    database->formatImpl(settings, state, frame);
    settings.ostr << '.';
}

table->formatImpl(settings, state, frame);

if (uuid != UUIDHelpers::Nil)
    settings.ostr << (settings.hilite ? hilite_keyword : "") << " UUID " << (settings.hilite ? hilite_none : "")

@@ -361,8 +369,16 @@ void ASTCreateQuery::formatQueryImpl(const FormatSettings & settings, FormatStat

/// Always DICTIONARY
settings.ostr << (settings.hilite ? hilite_keyword : "") << action << " DICTIONARY "
    << (if_not_exists ? "IF NOT EXISTS " : "") << (settings.hilite ? hilite_none : "")
    << (database ? backQuoteIfNeed(getDatabase()) + "." : "") << backQuoteIfNeed(getTable());
    << (if_not_exists ? "IF NOT EXISTS " : "") << (settings.hilite ? hilite_none : "");

if (database)
{
    database->formatImpl(settings, state, frame);
    settings.ostr << '.';
}

table->formatImpl(settings, state, frame);

if (uuid != UUIDHelpers::Nil)
    settings.ostr << (settings.hilite ? hilite_keyword : "") << " UUID " << (settings.hilite ? hilite_none : "")
        << quoteString(toString(uuid));

@@ -36,10 +36,11 @@ void ASTDeleteQuery::formatQueryImpl(const FormatSettings & settings, FormatStat

if (database)
{
    settings.ostr << backQuoteIfNeed(getDatabase());
    settings.ostr << ".";
    database->formatImpl(settings, state, frame);
    settings.ostr << '.';
}
settings.ostr << backQuoteIfNeed(getTable());

table->formatImpl(settings, state, frame);

formatOnCluster(settings);
23
src/Parsers/ASTDescribeCacheQuery.cpp
Normal file
@@ -0,0 +1,23 @@
#include <Parsers/ASTDescribeCacheQuery.h>
#include <Common/quoteString.h>


namespace DB
{

String ASTDescribeCacheQuery::getID(char) const { return "DescribeCacheQuery"; }

ASTPtr ASTDescribeCacheQuery::clone() const
{
    auto res = std::make_shared<ASTDescribeCacheQuery>(*this);
    cloneOutputOptions(*res);
    return res;
}

void ASTDescribeCacheQuery::formatQueryImpl(const FormatSettings & settings, FormatState &, FormatStateStacked) const
{
    settings.ostr << (settings.hilite ? hilite_keyword : "") << "DESCRIBE FILESYSTEM CACHE" << (settings.hilite ? hilite_none : "")
        << " " << quoteString(cache_name);
}

}

@@ -1,6 +1,8 @@
#pragma once

#include <Parsers/ASTQueryWithOutput.h>


namespace DB
{

@@ -9,20 +11,11 @@ class ASTDescribeCacheQuery : public ASTQueryWithOutput
public:
    String cache_name;

    String getID(char) const override { return "DescribeCacheQuery"; }

    ASTPtr clone() const override
    {
        auto res = std::make_shared<ASTDescribeCacheQuery>(*this);
        cloneOutputOptions(*res);
        return res;
    }
    String getID(char) const override;
    ASTPtr clone() const override;

protected:
    void formatQueryImpl(const FormatSettings & settings, FormatState &, FormatStateStacked) const override
    {
        settings.ostr << (settings.hilite ? hilite_keyword : "") << "DESCRIBE FILESYSTEM CACHE" << (settings.hilite ? hilite_none : "") << " " << cache_name;
    }
    void formatQueryImpl(const FormatSettings & settings, FormatState &, FormatStateStacked) const override;
};

}
@@ -43,10 +43,11 @@ void ASTDropIndexQuery::formatQueryImpl(const FormatSettings & settings, FormatS
{
    if (database)
    {
        settings.ostr << indent_str << backQuoteIfNeed(getDatabase());
        settings.ostr << ".";
        database->formatImpl(settings, state, frame);
        settings.ostr << '.';
    }
    settings.ostr << indent_str << backQuoteIfNeed(getTable());

    table->formatImpl(settings, state, frame);
}

formatOnCluster(settings);

@@ -32,7 +32,7 @@ ASTPtr ASTDropQuery::clone() const
    return res;
}

void ASTDropQuery::formatQueryImpl(const FormatSettings & settings, FormatState &, FormatStateStacked) const
void ASTDropQuery::formatQueryImpl(const FormatSettings & settings, FormatState & state, FormatStateStacked frame) const
{
    settings.ostr << (settings.hilite ? hilite_keyword : "");
    if (kind == ASTDropQuery::Kind::Drop)

@@ -47,7 +47,6 @@ void ASTDropQuery::formatQueryImpl(const FormatSettings & settings, FormatState
if (temporary)
    settings.ostr << "TEMPORARY ";


if (!table && database)
    settings.ostr << "DATABASE ";
else if (is_dictionary)

@@ -66,9 +65,19 @@ void ASTDropQuery::formatQueryImpl(const FormatSettings & settings, FormatState
settings.ostr << (settings.hilite ? hilite_none : "");

if (!table && database)
    settings.ostr << backQuoteIfNeed(getDatabase());
{
    database->formatImpl(settings, state, frame);
}
else
    settings.ostr << (database ? backQuoteIfNeed(getDatabase()) + "." : "") << backQuoteIfNeed(getTable());
{
    if (database)
    {
        database->formatImpl(settings, state, frame);
        settings.ostr << '.';
    }

    table->formatImpl(settings, state, frame);
}

formatOnCluster(settings);
@@ -17,21 +17,23 @@ void ASTExpressionList::formatImpl(const FormatSettings & settings, FormatState
if (frame.expression_list_prepend_whitespace)
    settings.ostr << ' ';

for (ASTs::const_iterator it = children.begin(); it != children.end(); ++it)
for (size_t i = 0, size = children.size(); i < size; ++i)
{
    if (it != children.begin())
    if (i)
    {
        if (separator)
            settings.ostr << separator;
        settings.ostr << ' ';
    }

    FormatStateStacked frame_nested = frame;
    frame_nested.surround_each_list_element_with_parens = false;
    frame_nested.list_element_index = i;

    if (frame.surround_each_list_element_with_parens)
        settings.ostr << "(";

    FormatStateStacked frame_nested = frame;
    frame_nested.surround_each_list_element_with_parens = false;
    (*it)->formatImpl(settings, state, frame_nested);
    children[i]->formatImpl(settings, state, frame_nested);

    if (frame.surround_each_list_element_with_parens)
        settings.ostr << ")";

@@ -50,25 +52,23 @@ void ASTExpressionList::formatImplMultiline(const FormatSettings & settings, For

++frame.indent;

for (ASTs::const_iterator it = children.begin(); it != children.end(); ++it)
for (size_t i = 0, size = children.size(); i < size; ++i)
{
    if (it != children.begin())
    {
        if (separator)
            settings.ostr << separator;
    }
    if (i && separator)
        settings.ostr << separator;

    if (children.size() > 1 || frame.expression_list_always_start_on_new_line)
    if (size > 1 || frame.expression_list_always_start_on_new_line)
        settings.ostr << indent_str;

    FormatStateStacked frame_nested = frame;
    frame_nested.expression_list_always_start_on_new_line = false;
    frame_nested.surround_each_list_element_with_parens = false;
    frame_nested.list_element_index = i;

    if (frame.surround_each_list_element_with_parens)
        settings.ostr << "(";

    (*it)->formatImpl(settings, state, frame_nested);
    children[i]->formatImpl(settings, state, frame_nested);

    if (frame.surround_each_list_element_with_parens)
        settings.ostr << ")";
@@ -813,8 +813,7 @@ void ASTFunction::formatImplWithoutAlias(const FormatSettings & settings, Format

/// Should this function be written as an operator?
bool written = false;

if (arguments && !parameters)
if (arguments && !parameters && nulls_action == NullsAction::EMPTY)
{
    /// Unary prefix operators.
    if (arguments->children.size() == 1)

@@ -1049,8 +1048,10 @@ void ASTFunction::formatImplWithoutAlias(const FormatSettings & settings, Format
{
    /// Special case: zero elements tuple in lhs of lambda is printed as ().
    /// Special case: one-element tuple in lhs of lambda is printed as its element.
    /// If lambda function is not the first element in the list, it has to be put in parentheses.
    /// Example: f(x, (y -> z)) should not be printed as f((x, y) -> z).

    if (frame.need_parens)
    if (frame.need_parens || frame.list_element_index > 0)
        settings.ostr << '(';

    if (first_argument_is_tuple

@@ -1067,7 +1068,7 @@ void ASTFunction::formatImplWithoutAlias(const FormatSettings & settings, Format

    settings.ostr << (settings.hilite ? hilite_operator : "") << " -> " << (settings.hilite ? hilite_none : "");
    arguments->children[1]->formatImpl(settings, state, nested_need_parens);
    if (frame.need_parens)
    if (frame.need_parens || frame.list_element_index > 0)
        settings.ostr << ')';
    written = true;
}

@@ -1244,6 +1245,7 @@ void ASTFunction::formatImplWithoutAlias(const FormatSettings & settings, Format
        continue;
    }

    nested_dont_need_parens.list_element_index = i;
    argument->formatImpl(settings, state, nested_dont_need_parens);
}
}
@ -36,7 +36,7 @@ void ASTIndexDeclaration::formatImpl(const FormatSettings & s, FormatState & sta
|
||||
s.ostr << ")";
|
||||
}
|
||||
else
|
||||
expr->formatImpl(s, state, frame);
|
||||
expr->formatImpl(s, state, frame);
|
||||
}
|
||||
else
|
||||
{
|
||||
@ -59,4 +59,3 @@ void ASTIndexDeclaration::formatImpl(const FormatSettings & s, FormatState & sta
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
|
@ -68,8 +68,13 @@ void ASTInsertQuery::formatImpl(const FormatSettings & settings, FormatState & s
|
||||
}
|
||||
else
|
||||
{
|
||||
settings.ostr << (settings.hilite ? hilite_none : "")
|
||||
<< (database ? backQuoteIfNeed(getDatabase()) + "." : "") << backQuoteIfNeed(getTable());
|
||||
if (database)
|
||||
{
|
||||
database->formatImpl(settings, state, frame);
|
||||
settings.ostr << '.';
|
||||
}
|
||||
|
||||
table->formatImpl(settings, state, frame);
|
||||
}
|
||||
|
||||
if (columns)
|
||||
|
@ -7,8 +7,15 @@ namespace DB
|
||||
|
||||
void ASTOptimizeQuery::formatQueryImpl(const FormatSettings & settings, FormatState & state, FormatStateStacked frame) const
|
||||
{
|
||||
settings.ostr << (settings.hilite ? hilite_keyword : "") << "OPTIMIZE TABLE " << (settings.hilite ? hilite_none : "")
|
||||
<< (database ? backQuoteIfNeed(getDatabase()) + "." : "") << backQuoteIfNeed(getTable());
|
||||
settings.ostr << (settings.hilite ? hilite_keyword : "") << "OPTIMIZE TABLE " << (settings.hilite ? hilite_none : "");
|
||||
|
||||
if (database)
|
||||
{
|
||||
database->formatImpl(settings, state, frame);
|
||||
settings.ostr << '.';
|
||||
}
|
||||
|
||||
table->formatImpl(settings, state, frame);
|
||||
|
||||
formatOnCluster(settings);
|
||||
|
||||
|
@ -64,11 +64,5 @@ void ASTQueryWithTableAndOutput::cloneTableOptions(ASTQueryWithTableAndOutput &
|
||||
cloned.children.push_back(cloned.table);
|
||||
}
|
||||
}
|
||||
void ASTQueryWithTableAndOutput::formatHelper(const FormatSettings & settings, const char * name) const
|
||||
{
|
||||
settings.ostr << (settings.hilite ? hilite_keyword : "") << name << " " << (settings.hilite ? hilite_none : "");
|
||||
settings.ostr << (database ? backQuoteIfNeed(getDatabase()) + "." : "") << backQuoteIfNeed(getTable());
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
|
@ -28,9 +28,6 @@ public:
|
||||
void setTable(const String & name);
|
||||
|
||||
void cloneTableOptions(ASTQueryWithTableAndOutput & cloned) const;
|
||||
|
||||
protected:
|
||||
void formatHelper(const FormatSettings & settings, const char * name) const;
|
||||
};
|
||||
|
||||
|
||||
@ -52,9 +49,19 @@ public:
|
||||
QueryKind getQueryKind() const override { return QueryKind::Show; }
|
||||
|
||||
protected:
|
||||
void formatQueryImpl(const FormatSettings & settings, FormatState &, FormatStateStacked) const override
|
||||
void formatQueryImpl(const FormatSettings & settings, FormatState & state, FormatStateStacked frame) const override
|
||||
{
|
||||
formatHelper(settings, temporary ? AstIDAndQueryNames::QueryTemporary : AstIDAndQueryNames::Query);
|
||||
settings.ostr << (settings.hilite ? hilite_keyword : "")
|
||||
<< (temporary ? AstIDAndQueryNames::QueryTemporary : AstIDAndQueryNames::Query)
|
||||
<< " " << (settings.hilite ? hilite_none : "");
|
||||
|
||||
if (database)
|
||||
{
|
||||
database->formatImpl(settings, state, frame);
|
||||
settings.ostr << '.';
|
||||
}
|
||||
|
||||
table->formatImpl(settings, state, frame);
|
||||
}
|
||||
};
|
||||
|
||||
|
@@ -84,7 +84,7 @@ public:
QueryKind getQueryKind() const override { return QueryKind::Rename; }

protected:
void formatQueryImpl(const FormatSettings & settings, FormatState &, FormatStateStacked) const override
void formatQueryImpl(const FormatSettings & settings, FormatState & state, FormatStateStacked frame) const override
{
if (database)
{
@@ -93,9 +93,9 @@ protected:
if (elements.at(0).if_exists)
settings.ostr << (settings.hilite ? hilite_keyword : "") << "IF EXISTS " << (settings.hilite ? hilite_none : "");

settings.ostr << backQuoteIfNeed(elements.at(0).from.getDatabase());
elements.at(0).from.database->formatImpl(settings, state, frame);
settings.ostr << (settings.hilite ? hilite_keyword : "") << " TO " << (settings.hilite ? hilite_none : "");
settings.ostr << backQuoteIfNeed(elements.at(0).to.getDatabase());
elements.at(0).to.database->formatImpl(settings, state, frame);
formatOnCluster(settings);
return;
}
@@ -119,9 +119,26 @@ protected:

if (it->if_exists)
settings.ostr << (settings.hilite ? hilite_keyword : "") << "IF EXISTS " << (settings.hilite ? hilite_none : "");
settings.ostr << (it->from.database ? backQuoteIfNeed(it->from.getDatabase()) + "." : "") << backQuoteIfNeed(it->from.getTable())
<< (settings.hilite ? hilite_keyword : "") << (exchange ? " AND " : " TO ") << (settings.hilite ? hilite_none : "")
<< (it->to.database ? backQuoteIfNeed(it->to.getDatabase()) + "." : "") << backQuoteIfNeed(it->to.getTable());

if (it->from.database)
{
it->from.database->formatImpl(settings, state, frame);
settings.ostr << '.';
}

it->from.table->formatImpl(settings, state, frame);

settings.ostr << (settings.hilite ? hilite_keyword : "") << (exchange ? " AND " : " TO ") << (settings.hilite ? hilite_none : "");

if (it->to.database)
{
it->to.database->formatImpl(settings, state, frame);
settings.ostr << '.';
}

it->to.table->formatImpl(settings, state, frame);

}

formatOnCluster(settings);

@@ -108,12 +108,6 @@ void ASTSelectQuery::formatImpl(const FormatSettings & s, FormatState & state, F
if (group_by_all)
s.ostr << (s.hilite ? hilite_keyword : "") << s.nl_or_ws << indent_str << "GROUP BY ALL" << (s.hilite ? hilite_none : "");

if (group_by_with_rollup)
s.ostr << (s.hilite ? hilite_keyword : "") << s.nl_or_ws << indent_str << (s.one_line ? "" : " ") << "WITH ROLLUP" << (s.hilite ? hilite_none : "");

if (group_by_with_cube)
s.ostr << (s.hilite ? hilite_keyword : "") << s.nl_or_ws << indent_str << (s.one_line ? "" : " ") << "WITH CUBE" << (s.hilite ? hilite_none : "");

if (group_by_with_grouping_sets && groupBy())
{
auto nested_frame = frame;
@@ -128,6 +122,12 @@ void ASTSelectQuery::formatImpl(const FormatSettings & s, FormatState & state, F
s.ostr << ")";
}

if (group_by_with_rollup)
s.ostr << (s.hilite ? hilite_keyword : "") << s.nl_or_ws << indent_str << (s.one_line ? "" : " ") << "WITH ROLLUP" << (s.hilite ? hilite_none : "");

if (group_by_with_cube)
s.ostr << (s.hilite ? hilite_keyword : "") << s.nl_or_ws << indent_str << (s.one_line ? "" : " ") << "WITH CUBE" << (s.hilite ? hilite_none : "");

if (group_by_with_totals)
s.ostr << (s.hilite ? hilite_keyword : "") << s.nl_or_ws << indent_str << (s.one_line ? "" : " ") << "WITH TOTALS" << (s.hilite ? hilite_none : "");

@@ -63,17 +63,12 @@ void ASTSelectWithUnionQuery::formatQueryImpl(const FormatSettings & settings, F

if (auto * node = (*it)->as<ASTSelectWithUnionQuery>())
{
settings.ostr << settings.nl_or_ws << indent_str;
if (it != list_of_selects->children.begin())
settings.ostr << settings.nl_or_ws;

if (node->list_of_selects->children.size() == 1)
{
(node->list_of_selects->children.at(0))->formatImpl(settings, state, frame);
}
else
{
auto sub_query = std::make_shared<ASTSubquery>(*it);
sub_query->formatImpl(settings, state, frame);
}
settings.ostr << indent_str;
auto sub_query = std::make_shared<ASTSubquery>(*it);
sub_query->formatImpl(settings, state, frame);
}
else
{

@@ -3,6 +3,7 @@
#include <Common/SipHash.h>
#include <Common/FieldVisitorHash.h>
#include <Common/FieldVisitorToString.h>
#include <Common/quoteString.h>
#include <IO/Operators.h>
#include <IO/WriteBufferFromString.h>

@@ -106,7 +107,7 @@ void ASTSetQuery::formatImpl(const FormatSettings & format, FormatState &, Forma
first = false;

formatSettingName(QUERY_PARAMETER_NAME_PREFIX + name, format.ostr);
format.ostr << " = " << value;
format.ostr << " = " << quoteString(value);
}
}

@@ -25,7 +25,7 @@ public:
SettingsChanges changes;
/// settings that will be reset to default value
std::vector<String> default_settings;
NameToNameMap query_parameters;
NameToNameVector query_parameters;

/** Get the text that identifies this element. */
String getID(char) const override { return "Set"; }

@@ -7,9 +7,15 @@

#include <magic_enum.hpp>

namespace DB
{

namespace ErrorCodes
{
extern const int LOGICAL_ERROR;
}

namespace
{
std::vector<std::string> getTypeIndexToTypeName()
@@ -85,7 +91,7 @@ void ASTSystemQuery::setTable(const String & name)
}
}

void ASTSystemQuery::formatImpl(const FormatSettings & settings, FormatState &, FormatStateStacked) const
void ASTSystemQuery::formatImpl(const FormatSettings & settings, FormatState & state, FormatStateStacked frame) const
{
auto print_identifier = [&](const String & identifier) -> WriteBuffer &
{
@@ -104,9 +110,11 @@ void ASTSystemQuery::formatImpl(const FormatSettings & settings, FormatState &,
{
if (database)
{
print_identifier(getDatabase()) << ".";
database->formatImpl(settings, state, frame);
settings.ostr << '.';
}
print_identifier(getTable());

table->formatImpl(settings, state, frame);
return settings.ostr;
};

@@ -144,185 +152,273 @@ void ASTSystemQuery::formatImpl(const FormatSettings & settings, FormatState &,
if (!cluster.empty())
formatOnCluster(settings);

if ( type == Type::STOP_MERGES
|| type == Type::START_MERGES
|| type == Type::STOP_TTL_MERGES
|| type == Type::START_TTL_MERGES
|| type == Type::STOP_MOVES
|| type == Type::START_MOVES
|| type == Type::STOP_FETCHES
|| type == Type::START_FETCHES
|| type == Type::STOP_REPLICATED_SENDS
|| type == Type::START_REPLICATED_SENDS
|| type == Type::STOP_REPLICATION_QUEUES
|| type == Type::START_REPLICATION_QUEUES
|| type == Type::STOP_DISTRIBUTED_SENDS
|| type == Type::START_DISTRIBUTED_SENDS
|| type == Type::STOP_PULLING_REPLICATION_LOG
|| type == Type::START_PULLING_REPLICATION_LOG
|| type == Type::STOP_CLEANUP
|| type == Type::START_CLEANUP)
switch (type)
{
if (table)
case Type::STOP_MERGES:
case Type::START_MERGES:
case Type::STOP_TTL_MERGES:
case Type::START_TTL_MERGES:
case Type::STOP_MOVES:
case Type::START_MOVES:
case Type::STOP_FETCHES:
case Type::START_FETCHES:
case Type::STOP_REPLICATED_SENDS:
case Type::START_REPLICATED_SENDS:
case Type::STOP_REPLICATION_QUEUES:
case Type::START_REPLICATION_QUEUES:
case Type::STOP_DISTRIBUTED_SENDS:
case Type::START_DISTRIBUTED_SENDS:
case Type::STOP_PULLING_REPLICATION_LOG:
case Type::START_PULLING_REPLICATION_LOG:
case Type::STOP_CLEANUP:
case Type::START_CLEANUP:
{
settings.ostr << ' ';
print_database_table();
}
else if (!volume.empty())
print_on_volume();
}
else if ( type == Type::RESTART_REPLICA
|| type == Type::RESTORE_REPLICA
|| type == Type::SYNC_REPLICA
|| type == Type::WAIT_LOADING_PARTS
|| type == Type::FLUSH_DISTRIBUTED
|| type == Type::RELOAD_DICTIONARY
|| type == Type::RELOAD_MODEL
|| type == Type::RELOAD_FUNCTION
|| type == Type::RESTART_DISK
|| type == Type::DROP_DISK_METADATA_CACHE)
{
if (table)
{
settings.ostr << ' ';
print_database_table();
}
else if (!target_model.empty())
{
settings.ostr << ' ';
print_identifier(target_model);
}
else if (!target_function.empty())
{
settings.ostr << ' ';
print_identifier(target_function);
}
else if (!disk.empty())
{
settings.ostr << ' ';
print_identifier(disk);
}

if (sync_replica_mode != SyncReplicaMode::DEFAULT)
{
settings.ostr << ' ';
print_keyword(magic_enum::enum_name(sync_replica_mode));

// If the mode is LIGHTWEIGHT and specific source replicas are specified
if (sync_replica_mode == SyncReplicaMode::LIGHTWEIGHT && !src_replicas.empty())
if (table)
{
settings.ostr << ' ';
print_keyword("FROM");
print_database_table();
}
else if (!volume.empty())
{
print_on_volume();
}
break;
}
case Type::RESTART_REPLICA:
case Type::RESTORE_REPLICA:
case Type::SYNC_REPLICA:
case Type::WAIT_LOADING_PARTS:
case Type::FLUSH_DISTRIBUTED:
case Type::RELOAD_DICTIONARY:
case Type::RELOAD_MODEL:
case Type::RELOAD_FUNCTION:
case Type::RESTART_DISK:
case Type::DROP_DISK_METADATA_CACHE:
{
if (table)
{
settings.ostr << ' ';
print_database_table();
}
else if (!target_model.empty())
{
settings.ostr << ' ';
print_identifier(target_model);
}
else if (!target_function.empty())
{
settings.ostr << ' ';
print_identifier(target_function);
}
else if (!disk.empty())
{
settings.ostr << ' ';
print_identifier(disk);
}

for (auto it = src_replicas.begin(); it != src_replicas.end(); ++it)
if (sync_replica_mode != SyncReplicaMode::DEFAULT)
{
settings.ostr << ' ';
print_keyword(magic_enum::enum_name(sync_replica_mode));

// If the mode is LIGHTWEIGHT and specific source replicas are specified
if (sync_replica_mode == SyncReplicaMode::LIGHTWEIGHT && !src_replicas.empty())
{
print_identifier(*it);
settings.ostr << ' ';
print_keyword("FROM");
settings.ostr << ' ';

// Add a comma and space after each identifier, except the last one
if (std::next(it) != src_replicas.end())
settings.ostr << ", ";
bool first = true;
for (const auto & src : src_replicas)
{
if (!first)
settings.ostr << ", ";
first = false;
settings.ostr << quoteString(src);
}
}
}
break;
}
}
else if (type == Type::SYNC_DATABASE_REPLICA)
{
settings.ostr << ' ';
print_identifier(database->as<ASTIdentifier>()->name());
}
else if (type == Type::DROP_REPLICA || type == Type::DROP_DATABASE_REPLICA)
{
print_drop_replica();
}
else if (type == Type::SUSPEND)
{
print_keyword(" FOR ") << seconds;
print_keyword(" SECOND");
}
else if (type == Type::DROP_FORMAT_SCHEMA_CACHE)
{
if (!schema_cache_format.empty())
{
print_keyword(" FOR ");
print_identifier(schema_cache_format);
}
}
else if (type == Type::DROP_FILESYSTEM_CACHE)
{
if (!filesystem_cache_name.empty())
case Type::SYNC_DATABASE_REPLICA:
{
settings.ostr << ' ';
print_identifier(filesystem_cache_name);
if (!key_to_drop.empty())
print_identifier(database->as<ASTIdentifier>()->name());
break;
}
case Type::DROP_REPLICA:
case Type::DROP_DATABASE_REPLICA:
{
print_drop_replica();
break;
}
case Type::SUSPEND:
{
print_keyword(" FOR ") << seconds;
print_keyword(" SECOND");
break;
}
case Type::DROP_FORMAT_SCHEMA_CACHE:
{
if (!schema_cache_format.empty())
{
print_keyword(" KEY ");
print_identifier(key_to_drop);
if (offset_to_drop.has_value())
print_keyword(" FOR ");
print_identifier(schema_cache_format);
}
break;
}
case Type::DROP_FILESYSTEM_CACHE:
{
if (!filesystem_cache_name.empty())
{
settings.ostr << ' ' << quoteString(filesystem_cache_name);
if (!key_to_drop.empty())
{
print_keyword(" OFFSET ");
settings.ostr << offset_to_drop.value();
print_keyword(" KEY ");
print_identifier(key_to_drop);
if (offset_to_drop.has_value())
{
print_keyword(" OFFSET ");
settings.ostr << offset_to_drop.value();
}
}
}
break;
}
}
else if (type == Type::DROP_SCHEMA_CACHE)
{
if (!schema_cache_storage.empty())
case Type::DROP_SCHEMA_CACHE:
{
print_keyword(" FOR ");
print_identifier(schema_cache_storage);
}
}
else if (type == Type::UNFREEZE)
{
print_keyword(" WITH NAME ");
settings.ostr << quoteString(backup_name);
}
else if (type == Type::START_LISTEN || type == Type::STOP_LISTEN)
{
settings.ostr << ' ';
print_keyword(ServerType::serverTypeToString(server_type.type));

if (server_type.type == ServerType::Type::CUSTOM)
settings.ostr << ' ' << quoteString(server_type.custom_name);

bool comma = false;

if (!server_type.exclude_types.empty())
{
print_keyword(" EXCEPT");

for (auto cur_type : server_type.exclude_types)
if (!schema_cache_storage.empty())
{
if (cur_type == ServerType::Type::CUSTOM)
continue;

if (comma)
settings.ostr << ',';
else
comma = true;

settings.ostr << ' ';
print_keyword(ServerType::serverTypeToString(cur_type));
print_keyword(" FOR ");
print_identifier(schema_cache_storage);
}
break;
}
case Type::UNFREEZE:
{
print_keyword(" WITH NAME ");
settings.ostr << quoteString(backup_name);
break;
}
case Type::START_LISTEN:
case Type::STOP_LISTEN:
{
settings.ostr << ' ';
print_keyword(ServerType::serverTypeToString(server_type.type));

if (server_type.exclude_types.contains(ServerType::Type::CUSTOM))
if (server_type.type == ServerType::Type::CUSTOM)
settings.ostr << ' ' << quoteString(server_type.custom_name);

bool comma = false;

if (!server_type.exclude_types.empty())
{
for (const auto & cur_name : server_type.exclude_custom_names)
print_keyword(" EXCEPT");

for (auto cur_type : server_type.exclude_types)
{
if (cur_type == ServerType::Type::CUSTOM)
continue;

if (comma)
settings.ostr << ',';
else
comma = true;

settings.ostr << ' ';
print_keyword(ServerType::serverTypeToString(ServerType::Type::CUSTOM));
settings.ostr << " " << quoteString(cur_name);
print_keyword(ServerType::serverTypeToString(cur_type));
}

if (server_type.exclude_types.contains(ServerType::Type::CUSTOM))
{
for (const auto & cur_name : server_type.exclude_custom_names)
{
if (comma)
settings.ostr << ',';
else
comma = true;

settings.ostr << ' ';
print_keyword(ServerType::serverTypeToString(ServerType::Type::CUSTOM));
settings.ostr << " " << quoteString(cur_name);
}
}
}
break;
}
case Type::ENABLE_FAILPOINT:
case Type::DISABLE_FAILPOINT:
{
settings.ostr << ' ';
print_identifier(fail_point_name);
break;
}
case Type::REFRESH_VIEW:
case Type::START_VIEW:
case Type::STOP_VIEW:
case Type::CANCEL_VIEW:
{
settings.ostr << ' ';
print_database_table();
break;
}
case Type::TEST_VIEW:
{
settings.ostr << ' ';
print_database_table();

if (!fake_time_for_view)
{
settings.ostr << ' ';
print_keyword("UNSET FAKE TIME");
}
else
{
settings.ostr << ' ';
print_keyword("SET FAKE TIME");
settings.ostr << " '" << LocalDateTime(*fake_time_for_view) << "'";
}
break;
}
case Type::KILL:
case Type::SHUTDOWN:
case Type::DROP_DNS_CACHE:
case Type::DROP_MMAP_CACHE:
case Type::DROP_QUERY_CACHE:
case Type::DROP_MARK_CACHE:
case Type::DROP_INDEX_MARK_CACHE:
case Type::DROP_UNCOMPRESSED_CACHE:
case Type::DROP_INDEX_UNCOMPRESSED_CACHE:
case Type::DROP_COMPILED_EXPRESSION_CACHE:
case Type::DROP_S3_CLIENT_CACHE:
case Type::RESET_COVERAGE:
case Type::RESTART_REPLICAS:
case Type::JEMALLOC_PURGE:
case Type::JEMALLOC_ENABLE_PROFILE:
case Type::JEMALLOC_DISABLE_PROFILE:
case Type::JEMALLOC_FLUSH_PROFILE:
case Type::SYNC_TRANSACTION_LOG:
case Type::SYNC_FILE_CACHE:
case Type::SYNC_FILESYSTEM_CACHE:
case Type::REPLICA_READY: /// Obsolete
case Type::REPLICA_UNREADY: /// Obsolete
case Type::RELOAD_DICTIONARIES:
case Type::RELOAD_EMBEDDED_DICTIONARIES:
case Type::RELOAD_MODELS:
case Type::RELOAD_FUNCTIONS:
case Type::RELOAD_CONFIG:
case Type::RELOAD_USERS:
case Type::RELOAD_ASYNCHRONOUS_METRICS:
case Type::FLUSH_LOGS:
case Type::FLUSH_ASYNC_INSERT_QUEUE:
case Type::START_THREAD_FUZZER:
case Type::STOP_THREAD_FUZZER:
case Type::START_VIEWS:
case Type::STOP_VIEWS:
break;
case Type::UNKNOWN:
case Type::END:
throw Exception(ErrorCodes::LOGICAL_ERROR, "Unknown SYSTEM command");
}
}

@@ -28,16 +28,12 @@ public:
DROP_INDEX_UNCOMPRESSED_CACHE,
DROP_MMAP_CACHE,
DROP_QUERY_CACHE,
#if USE_EMBEDDED_COMPILER
DROP_COMPILED_EXPRESSION_CACHE,
#endif
DROP_FILESYSTEM_CACHE,
DROP_DISK_METADATA_CACHE,
DROP_SCHEMA_CACHE,
DROP_FORMAT_SCHEMA_CACHE,
#if USE_AWS_S3
DROP_S3_CLIENT_CACHE,
#endif
STOP_LISTEN,
START_LISTEN,
RESTART_REPLICAS,
@@ -46,12 +42,10 @@ public:
WAIT_LOADING_PARTS,
DROP_REPLICA,
DROP_DATABASE_REPLICA,
#if USE_JEMALLOC
JEMALLOC_PURGE,
JEMALLOC_ENABLE_PROFILE,
JEMALLOC_DISABLE_PROFILE,
JEMALLOC_FLUSH_PROFILE,
#endif
SYNC_REPLICA,
SYNC_DATABASE_REPLICA,
SYNC_TRANSACTION_LOG,
@@ -145,7 +139,7 @@ public:

SyncReplicaMode sync_replica_mode = SyncReplicaMode::DEFAULT;

std::unordered_set<String> src_replicas;
std::vector<String> src_replicas;

ServerType server_type;

@@ -19,18 +19,25 @@ ASTPtr ASTUndropQuery::clone() const
return res;
}

void ASTUndropQuery::formatQueryImpl(const FormatSettings & settings, FormatState &, FormatStateStacked) const
void ASTUndropQuery::formatQueryImpl(const FormatSettings & settings, FormatState & state, FormatStateStacked frame) const
{
settings.ostr << (settings.hilite ? hilite_keyword : "");
settings.ostr << "UNDROP ";
settings.ostr << "TABLE ";
settings.ostr << (settings.hilite ? hilite_none : "");
settings.ostr << (settings.hilite ? hilite_keyword : "")
<< "UNDROP TABLE"
<< (settings.hilite ? hilite_none : "")
<< " ";

assert (table);
if (!database)
settings.ostr << backQuoteIfNeed(getTable());
else
settings.ostr << backQuoteIfNeed(getDatabase()) + "." << backQuoteIfNeed(getTable());
chassert(table);

if (table)
{
if (database)
{
database->formatImpl(settings, state, frame);
settings.ostr << '.';
}

table->formatImpl(settings, state, frame);
}

if (uuid != UUIDHelpers::Nil)
settings.ostr << (settings.hilite ? hilite_keyword : "") << " UUID " << (settings.hilite ? hilite_none : "")

@@ -40,22 +40,29 @@ public:
QueryKind getQueryKind() const override { return QueryKind::Create; }

protected:
void formatQueryImpl(const FormatSettings & s, FormatState & state, FormatStateStacked frame) const override
void formatQueryImpl(const FormatSettings & settings, FormatState & state, FormatStateStacked frame) const override
{
std::string indent_str = s.one_line ? "" : std::string(4 * frame.indent, ' ');
std::string indent_str = settings.one_line ? "" : std::string(4 * frame.indent, ' ');

s.ostr << (s.hilite ? hilite_keyword : "") << "WATCH " << (s.hilite ? hilite_none : "")
<< (database ? backQuoteIfNeed(getDatabase()) + "." : "") << backQuoteIfNeed(getTable());
settings.ostr << (settings.hilite ? hilite_keyword : "") << "WATCH " << (settings.hilite ? hilite_none : "");

if (database)
{
database->formatImpl(settings, state, frame);
settings.ostr << '.';
}

table->formatImpl(settings, state, frame);

if (is_watch_events)
{
s.ostr << " " << (s.hilite ? hilite_keyword : "") << "EVENTS" << (s.hilite ? hilite_none : "");
settings.ostr << " " << (settings.hilite ? hilite_keyword : "") << "EVENTS" << (settings.hilite ? hilite_none : "");
}

if (limit_length)
{
s.ostr << (s.hilite ? hilite_keyword : "") << s.nl_or_ws << indent_str << "LIMIT " << (s.hilite ? hilite_none : "");
limit_length->formatImpl(s, state, frame);
settings.ostr << (settings.hilite ? hilite_keyword : "") << settings.nl_or_ws << indent_str << "LIMIT " << (settings.hilite ? hilite_none : "");
limit_length->formatImpl(settings, state, frame);
}
}
};

@@ -94,9 +94,9 @@ void ASTWindowDefinition::formatImpl(const FormatSettings & settings,
if (!frame_is_default)
{
if (need_space)
{
settings.ostr << " ";
}

format_frame.need_parens = true;

settings.ostr << frame_type << " BETWEEN ";
if (frame_begin_type == WindowFrame::BoundaryType::Current)

@@ -41,7 +41,6 @@ struct ASTWindowListElement : public IAST
// ASTWindowDefinition
ASTPtr definition;


ASTPtr clone() const override;

String getID(char delimiter) const override;

@@ -17,7 +17,7 @@ namespace
{
if (std::exchange(need_comma, true))
settings.ostr << ", ";
settings.ostr << backQuoteIfNeed(name);
settings.ostr << backQuote(name);
}
}

@@ -18,7 +18,6 @@ namespace
<< quoteString(new_name);
}


void formatAuthenticationData(const ASTAuthenticationData & auth_data, const IAST::FormatSettings & settings)
{
auth_data.format(settings);

@@ -93,6 +93,29 @@ namespace
if (no_output)
settings.ostr << (settings.hilite ? IAST::hilite_keyword : "") << "USAGE ON " << (settings.hilite ? IAST::hilite_none : "") << "*.*";
}


void formatCurrentGrantsElements(const AccessRightsElements & elements, const IAST::FormatSettings & settings)
{
for (size_t i = 0; i != elements.size(); ++i)
{
const auto & element = elements[i];

bool next_element_on_same_db_and_table = false;
if (i != elements.size() - 1)
{
const auto & next_element = elements[i + 1];
if (element.sameDatabaseAndTableAndParameter(next_element))
next_element_on_same_db_and_table = true;
}

if (!next_element_on_same_db_and_table)
{
settings.ostr << " ";
formatONClause(element, settings);
}
}
}
}


@@ -148,9 +171,14 @@ void ASTGrantQuery::formatImpl(const FormatSettings & settings, FormatState &, F
"to grant or revoke, not both of them");
}
else if (current_grants)
settings.ostr << (settings.hilite ? hilite_keyword : "") << " CURRENT GRANTS" << (settings.hilite ? hilite_none : "");
{
settings.ostr << (settings.hilite ? hilite_keyword : "") << "CURRENT GRANTS" << (settings.hilite ? hilite_none : "");
formatCurrentGrantsElements(access_rights_elements, settings);
}
else
{
formatElementsWithoutOptions(access_rights_elements, settings);
}

settings.ostr << (settings.hilite ? IAST::hilite_keyword : "") << (is_revoke ? " FROM " : " TO ")
<< (settings.hilite ? IAST::hilite_none : "");

Some files were not shown because too many files have changed in this diff.