mirror of https://github.com/ClickHouse/ClickHouse.git (synced 2024-11-23 08:02:02 +00:00)

Merge branch 'master' into fix-error-in-stress-test

This commit is contained in: commit 62fc6cf007
253 CHANGELOG.md
@@ -1,4 +1,5 @@
### Table of Contents
**[ClickHouse release v23.5, 2023-06-08](#235)**<br/>
**[ClickHouse release v23.4, 2023-04-26](#234)**<br/>
**[ClickHouse release v23.3 LTS, 2023-03-30](#233)**<br/>
**[ClickHouse release v23.2, 2023-02-23](#232)**<br/>
@@ -7,6 +8,258 @@
# 2023 Changelog
### <a id="235"></a> ClickHouse release 23.5, 2023-06-08
#### Upgrade Notes
* Compress marks and primary key by default. It significantly reduces the cold query time. Upgrade notes: the support for compressed marks and primary key has been added in version 22.9. If you turned on compressed marks or primary key or installed version 23.5 or newer, which has compressed marks or primary key on by default, you will not be able to downgrade to version 22.8 or earlier. You can also explicitly disable compressed marks or primary keys by specifying the `compress_marks` and `compress_primary_key` settings in the `<merge_tree>` section of the server configuration file. **Upgrade notes:** If you upgrade from versions prior to 22.9, you should either upgrade all replicas at once or disable the compression before upgrade, or upgrade through an intermediate version, where the compressed marks are supported but not enabled by default, such as 23.3. [#42587](https://github.com/ClickHouse/ClickHouse/pull/42587) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Make local object storage work consistently with S3 object storage, fix a problem with append (closes [#48465](https://github.com/ClickHouse/ClickHouse/issues/48465)), and make it configurable as an independent storage. The change is backward incompatible because the cache on top of local object storage is not compatible with previous versions. [#48791](https://github.com/ClickHouse/ClickHouse/pull/48791) ([Kseniia Sumarokova](https://github.com/kssenii)).
* The experimental feature "in-memory data parts" is removed. The data format is still supported, but the settings are no-op, and compact or wide parts will be used instead. This closes [#45409](https://github.com/ClickHouse/ClickHouse/issues/45409). [#49429](https://github.com/ClickHouse/ClickHouse/pull/49429) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Changed default values of settings `parallelize_output_from_storages` and `input_format_parquet_preserve_order`. This allows ClickHouse to reorder rows when reading from files (e.g. CSV or Parquet), greatly improving performance in many cases. To restore the old behavior of preserving order, use `parallelize_output_from_storages = 0`, `input_format_parquet_preserve_order = 1` (see the example after this list). [#49479](https://github.com/ClickHouse/ClickHouse/pull/49479) ([Michael Kolupaev](https://github.com/al13n321)).
* Make projections production-ready. Add the `optimize_use_projections` setting to control whether the projections will be selected for SELECT queries. The setting `allow_experimental_projection_optimization` is obsolete and does nothing. [#49719](https://github.com/ClickHouse/ClickHouse/pull/49719) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Mark `joinGet` as non-deterministic (as `dictGet` already is). It allows using them in mutations without an extra setting. [#49843](https://github.com/ClickHouse/ClickHouse/pull/49843) ([Azat Khuzhin](https://github.com/azat)).
* Revert the "`groupArray` returns cannot be nullable" change (due to binary compatibility breakage for `groupArray`/`groupArrayLast`/`groupArraySample` over `Nullable` types, which likely will lead to `TOO_LARGE_ARRAY_SIZE` or `CANNOT_READ_ALL_DATA`). [#49971](https://github.com/ClickHouse/ClickHouse/pull/49971) ([Azat Khuzhin](https://github.com/azat)).
* Setting `enable_memory_bound_merging_of_aggregation_results` is enabled by default. If you upgrade from a version prior to 22.12, we recommend setting this flag to `false` until the upgrade is finished. [#50319](https://github.com/ClickHouse/ClickHouse/pull/50319) ([Nikita Taranov](https://github.com/nickitat)).
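The note above about `parallelize_output_from_storages` and `input_format_parquet_preserve_order` can be summarized in a short, hedged sketch. The setting names and values come from the entry itself; the file name is hypothetical and the `SET` statements only change session-level settings:

```sql
-- Restore the pre-23.5 behavior of preserving row order when reading from files.
SET parallelize_output_from_storages = 0;
SET input_format_parquet_preserve_order = 1;

-- A query whose output order should now follow the file order
-- ('data.parquet' is a hypothetical file name).
SELECT * FROM file('data.parquet', 'Parquet');
```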
#### New Feature
* Added a native ClickHouse Keeper CLI client; it is available as `clickhouse keeper-client`. [#47414](https://github.com/ClickHouse/ClickHouse/pull/47414) ([pufit](https://github.com/pufit)).
* Add `urlCluster` table function. Refactor all *Cluster table functions to reduce code duplication. Make schema inference work for all possible *Cluster function signatures and for named collections. Closes [#38499](https://github.com/ClickHouse/ClickHouse/issues/38499). [#45427](https://github.com/ClickHouse/ClickHouse/pull/45427) ([attack204](https://github.com/attack204)), Pavel Kruglov.
* The query cache can now be used for production workloads. [#47977](https://github.com/ClickHouse/ClickHouse/pull/47977) ([Robert Schulze](https://github.com/rschu1ze)). The query cache can now support queries with totals and extremes modifier. [#48853](https://github.com/ClickHouse/ClickHouse/pull/48853) ([Robert Schulze](https://github.com/rschu1ze)). Make `allow_experimental_query_cache` setting as obsolete for backward-compatibility. It was removed in https://github.com/ClickHouse/ClickHouse/pull/47977. [#49934](https://github.com/ClickHouse/ClickHouse/pull/49934) ([Timur Solodovnikov](https://github.com/tsolodov)).
* Geographical data types (`Point`, `Ring`, `Polygon`, and `MultiPolygon`) are production-ready. [#50022](https://github.com/ClickHouse/ClickHouse/pull/50022) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Add schema inference to PostgreSQL, MySQL, MeiliSearch, and SQLite table engines. Closes [#49972](https://github.com/ClickHouse/ClickHouse/issues/49972). [#50000](https://github.com/ClickHouse/ClickHouse/pull/50000) ([Nikolay Degterinsky](https://github.com/evillique)).
* Password type in queries like `CREATE USER u IDENTIFIED BY 'p'` will be automatically set according to the setting `default_password_type` in the `config.xml` on the server. Closes [#42915](https://github.com/ClickHouse/ClickHouse/issues/42915). [#44674](https://github.com/ClickHouse/ClickHouse/pull/44674) ([Nikolay Degterinsky](https://github.com/evillique)).
* Add bcrypt password authentication type. Closes [#34599](https://github.com/ClickHouse/ClickHouse/issues/34599). [#44905](https://github.com/ClickHouse/ClickHouse/pull/44905) ([Nikolay Degterinsky](https://github.com/evillique)).
* Introduced the new keyword `APPEND` for `INTO OUTFILE 'file.txt'` (see the example after this list). [#48880](https://github.com/ClickHouse/ClickHouse/pull/48880) ([alekar](https://github.com/alekar)).
* Added `system.zookeeper_connection` table that shows information about Keeper connections. [#45245](https://github.com/ClickHouse/ClickHouse/pull/45245) ([mateng915](https://github.com/mateng0915)).
* Add new function `generateRandomStructure` that generates random table structure. It can be used in combination with table function `generateRandom`. [#47409](https://github.com/ClickHouse/ClickHouse/pull/47409) ([Kruglov Pavel](https://github.com/Avogar)).
* Allow the use of `CASE` without an `ELSE` branch and extended `transform` to deal with more types. Also fix some issues that made transform() return incorrect results when decimal types were mixed with other numeric types. [#48300](https://github.com/ClickHouse/ClickHouse/pull/48300) ([Salvatore Mesoraca](https://github.com/aiven-sal)).
* Added [server-side encryption using KMS keys](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingKMSEncryption.html) with S3 tables, and the `header` setting with S3 disks. Closes [#48723](https://github.com/ClickHouse/ClickHouse/issues/48723). [#48724](https://github.com/ClickHouse/ClickHouse/pull/48724) ([Johann Gan](https://github.com/johanngan)).
* Add MemoryTracker for the background tasks (merges and mutation). Introduces `merges_mutations_memory_usage_soft_limit` and `merges_mutations_memory_usage_to_ram_ratio` settings that represent the soft memory limit for merges and mutations. If this limit is reached ClickHouse won't schedule new merge or mutation tasks. Also `MergesMutationsMemoryTracking` metric is introduced to allow observing current memory usage of background tasks. Resubmit [#46089](https://github.com/ClickHouse/ClickHouse/issues/46089). Closes [#48774](https://github.com/ClickHouse/ClickHouse/issues/48774). [#48787](https://github.com/ClickHouse/ClickHouse/pull/48787) ([Dmitry Novik](https://github.com/novikd)).
* Function `dotProduct` now works for arrays. [#49050](https://github.com/ClickHouse/ClickHouse/pull/49050) ([FFFFFFFHHHHHHH](https://github.com/FFFFFFFHHHHHHH)).
* Support statement `SHOW INDEX` to improve compatibility with MySQL. [#49158](https://github.com/ClickHouse/ClickHouse/pull/49158) ([Robert Schulze](https://github.com/rschu1ze)).
* Add virtual columns `_file` and `_path` to the table function `url`, and improve the error message for table function `url`. Resolves [#49231](https://github.com/ClickHouse/ClickHouse/issues/49231) and [#49232](https://github.com/ClickHouse/ClickHouse/issues/49232). [#49356](https://github.com/ClickHouse/ClickHouse/pull/49356) ([Ziyi Tan](https://github.com/Ziy1-Tan)).
* Added the `grants` field in the users.xml file, which allows specifying grants for users. [#49381](https://github.com/ClickHouse/ClickHouse/pull/49381) ([pufit](https://github.com/pufit)).
* Support full/right join by using grace hash join algorithm. [#49483](https://github.com/ClickHouse/ClickHouse/pull/49483) ([lgbo](https://github.com/lgbo-ustc)).
* The `WITH FILL` modifier groups filling by sorting prefix. Controlled by the `use_with_fill_by_sorting_prefix` setting (enabled by default). Related to [#33203 (comment)](https://github.com/ClickHouse/ClickHouse/issues/33203#issuecomment-1418736794). [#49503](https://github.com/ClickHouse/ClickHouse/pull/49503) ([Igor Nikonov](https://github.com/devcrafter)).
* clickhouse-client now accepts queries after `--multiquery` when `--query` (or `-q`) is absent. Example: `clickhouse-client --multiquery "select 1; select 2;"`. [#49870](https://github.com/ClickHouse/ClickHouse/pull/49870) ([Alexey Gerasimchuk](https://github.com/Demilivor)).
* Add separate `handshake_timeout` for receiving Hello packet from replica. Closes [#48854](https://github.com/ClickHouse/ClickHouse/issues/48854). [#49948](https://github.com/ClickHouse/ClickHouse/pull/49948) ([Kruglov Pavel](https://github.com/Avogar)).
* Added a function `space` which repeats a space as many times as specified (see the example after this list). [#50103](https://github.com/ClickHouse/ClickHouse/pull/50103) ([Robert Schulze](https://github.com/rschu1ze)).
* Added the `--input_format_csv_trim_whitespaces` option. [#50215](https://github.com/ClickHouse/ClickHouse/pull/50215) ([Alexey Gerasimchuk](https://github.com/Demilivor)).
* Allow the `dictGetAll` function for regexp tree dictionaries to return values from multiple matches as arrays. Closes [#50254](https://github.com/ClickHouse/ClickHouse/issues/50254). [#50255](https://github.com/ClickHouse/ClickHouse/pull/50255) ([Johann Gan](https://github.com/johanngan)).
* Added the `toLastDayOfWeek` function to round a date or a date with time up to the nearest Saturday or Sunday (see the example after this list). [#50315](https://github.com/ClickHouse/ClickHouse/pull/50315) ([Victor Krasnov](https://github.com/sirvickr)).
* Ability to ignore a skip index by specifying `ignore_data_skipping_indices`. [#50329](https://github.com/ClickHouse/ClickHouse/pull/50329) ([Boris Kuschel](https://github.com/bkuschel)).
* Add `system.user_processes` table and `SHOW USER PROCESSES` query to show memory info and ProfileEvents on user level. [#50492](https://github.com/ClickHouse/ClickHouse/pull/50492) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Add server and format settings `display_secrets_in_show_and_select` for displaying secrets of tables, databases, table functions, and dictionaries. Add privilege `displaySecretsInShowAndSelect` controlling which users can view secrets. [#46528](https://github.com/ClickHouse/ClickHouse/pull/46528) ([Mike Kot](https://github.com/myrrc)).
* Allow setting up a ROW POLICY for all tables that belong to a DATABASE. [#47640](https://github.com/ClickHouse/ClickHouse/pull/47640) ([Ilya Golshtein](https://github.com/ilejn)).
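A few of the new SQL-level features above can be tried directly from a client. This is a minimal sketch assuming a 23.5 server; `numbers.tsv` is a hypothetical output file name:

```sql
-- New string and date helpers.
SELECT space(3) AS three_spaces;                      -- a string consisting of three spaces
SELECT toLastDayOfWeek(toDate('2023-06-08')) AS eow;  -- rounds up to the nearest Saturday or Sunday

-- The new APPEND modifier for INTO OUTFILE.
SELECT number FROM numbers(5)
INTO OUTFILE 'numbers.tsv' APPEND
FORMAT TSV;
```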
#### Performance Improvement
* Compress marks and primary key by default. It significantly reduces the cold query time. Upgrade notes: the support for compressed marks and primary key has been added in version 22.9. If you turned on compressed marks or primary key or installed version 23.5 or newer, which has compressed marks or primary key on by default, you will not be able to downgrade to version 22.8 or earlier. You can also explicitly disable compressed marks or primary keys by specifying the `compress_marks` and `compress_primary_key` settings in the `<merge_tree>` section of the server configuration file. [#42587](https://github.com/ClickHouse/ClickHouse/pull/42587) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* The new setting `s3_max_inflight_parts_for_one_file` limits the number of parts uploaded concurrently in a multipart upload for one file. [#49961](https://github.com/ClickHouse/ClickHouse/pull/49961) ([Sema Checherinda](https://github.com/CheSema)).
* When reading from multiple files, reduce the number of parallel parsing threads per file. Resolves [#42192](https://github.com/ClickHouse/ClickHouse/issues/42192). [#46661](https://github.com/ClickHouse/ClickHouse/pull/46661) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
* Use an aggregate projection only if it reads fewer granules than normal reading. It should help when a query hits the PK of the table, but not the projection. Fixes [#49150](https://github.com/ClickHouse/ClickHouse/issues/49150). [#49417](https://github.com/ClickHouse/ClickHouse/pull/49417) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Do not store blocks in `ANY` hash join if nothing is inserted. [#48633](https://github.com/ClickHouse/ClickHouse/pull/48633) ([vdimir](https://github.com/vdimir)).
* Fix the aggregate combinator `-If` when JIT-compiled, and enable JIT compilation for aggregate functions. Closes [#48120](https://github.com/ClickHouse/ClickHouse/issues/48120). [#49083](https://github.com/ClickHouse/ClickHouse/pull/49083) ([Igor Nikonov](https://github.com/devcrafter)).
* For reading from remote tables, use smaller tasks (instead of reading the whole part) to make task stealing work: the task size is determined by the size of the columns to read; 1 MB buffers are always used for reading from S3; and the boundaries of cache segments are aligned to 1 MB so they have a decent size even with small tasks, which should also prevent fragmentation. [#49287](https://github.com/ClickHouse/ClickHouse/pull/49287) ([Nikita Taranov](https://github.com/nickitat)).
* Introduced settings `merge_max_block_size_bytes` (to limit the amount of memory used for background operations) and `vertical_merge_algorithm_min_bytes_to_activate` (to add another condition to activate vertical merges). [#49313](https://github.com/ClickHouse/ClickHouse/pull/49313) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Default size of a read buffer for reading from local filesystem changed to a slightly better value. Also two new settings are introduced: `max_read_buffer_size_local_fs` and `max_read_buffer_size_remote_fs`. [#49321](https://github.com/ClickHouse/ClickHouse/pull/49321) ([Nikita Taranov](https://github.com/nickitat)).
* Improve memory usage and speed of `SPARSE_HASHED`/`HASHED` dictionaries (e.g. `SPARSE_HASHED` now eats 2.6x less memory, and is ~2x faster). [#49380](https://github.com/ClickHouse/ClickHouse/pull/49380) ([Azat Khuzhin](https://github.com/azat)).
* Optimize the `system.query_log` and `system.query_thread_log` tables by applying `LowCardinality` when appropriate. The queries over these tables will be faster. [#49530](https://github.com/ClickHouse/ClickHouse/pull/49530) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Better performance when reading local `Parquet` files (through parallel reading). [#49539](https://github.com/ClickHouse/ClickHouse/pull/49539) ([Michael Kolupaev](https://github.com/al13n321)).
* Improve the performance of `RIGHT/FULL JOIN` by up to 2 times in certain scenarios, especially when joining a small left table with a large right table. [#49585](https://github.com/ClickHouse/ClickHouse/pull/49585) ([lgbo](https://github.com/lgbo-ustc)).
* Improve performance of BLAKE3 by 11% by enabling LTO for Rust. [#49600](https://github.com/ClickHouse/ClickHouse/pull/49600) ([Azat Khuzhin](https://github.com/azat)). Now it is on par with C++.
* Optimize the structure of the `system.opentelemetry_span_log`. Use `LowCardinality` where appropriate. Although this table is generally stupid (it is using the Map data type even for common attributes), it will be slightly better. [#49647](https://github.com/ClickHouse/ClickHouse/pull/49647) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Try to reserve hash table's size in `grace_hash` join. [#49816](https://github.com/ClickHouse/ClickHouse/pull/49816) ([lgbo](https://github.com/lgbo-ustc)).
* As addressed in issue [#49748](https://github.com/ClickHouse/ClickHouse/issues/49748), predicates with date converters such as `toYear` and `toYYYYMM` can be rewritten into equivalent date (YYYY-MM-DD) comparisons at the AST level (see the example after this list). This transformation improves performance because it avoids the expensive date converter, and the comparison between dates (or integers in the low-level representation) is quite cheap. The [prototype](https://github.com/ZhiguoZh/ClickHouse/commit/c7f1753f0c9363a19d95fa46f1cfed1d9f505ee0) shows that, with all identified date converters optimized, the overall QPS of the 13 queries is enhanced by **~11%** on the ICX server (Intel Xeon Platinum 8380 CPU, 80 cores, 160 threads). [#50062](https://github.com/ClickHouse/ClickHouse/pull/50062) [#50307](https://github.com/ClickHouse/ClickHouse/pull/50307) ([Zhiguo Zhou](https://github.com/ZhiguoZh)).
* Parallel merge of `uniqExactIf` states. Closes [#49885](https://github.com/ClickHouse/ClickHouse/issues/49885). [#50285](https://github.com/ClickHouse/ClickHouse/pull/50285) ([flynn](https://github.com/ucasfl)).
* Keeper improvement: add `CheckNotExists` request to Keeper, which allows to improve the performance of Replicated tables. [#48897](https://github.com/ClickHouse/ClickHouse/pull/48897) ([Antonio Andelic](https://github.com/antonio2368)).
* Keeper performance improvements: avoid serializing same request twice while processing. Cache deserialization results of large requests. Controlled by new coordination setting `min_request_size_for_cache`. [#49004](https://github.com/ClickHouse/ClickHouse/pull/49004) ([Antonio Andelic](https://github.com/antonio2368)).
* Reduced number of `List` ZooKeeper requests when selecting parts to merge and a lot of partitions do not have anything to merge. [#49637](https://github.com/ClickHouse/ClickHouse/pull/49637) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Rework locking in the FS cache [#44985](https://github.com/ClickHouse/ClickHouse/pull/44985) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Disable pure parallel replicas if trivial count optimization is possible. [#50594](https://github.com/ClickHouse/ClickHouse/pull/50594) ([Raúl Marín](https://github.com/Algunenano)).
* Don't send a HEAD request for all keys in Iceberg schema inference, only for keys that are used for reading data. [#50203](https://github.com/ClickHouse/ClickHouse/pull/50203) ([Kruglov Pavel](https://github.com/Avogar)).
* Setting `enable_memory_bound_merging_of_aggregation_results` is enabled by default. [#50319](https://github.com/ClickHouse/ClickHouse/pull/50319) ([Nikita Taranov](https://github.com/nickitat)).
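The date-converter optimization mentioned above (the entry on `toYear`/`toYYYYMM` predicates) is applied automatically at the AST level; the sketch below only illustrates the equivalence it exploits. Table and column names are hypothetical:

```sql
-- A predicate wrapped in a date converter ...
SELECT count() FROM hits WHERE toYYYYMM(EventDate) = 202306;

-- ... is roughly equivalent to a cheap date-range comparison,
-- which the optimizer can use instead of evaluating the converter per row.
SELECT count() FROM hits WHERE EventDate BETWEEN '2023-06-01' AND '2023-06-30';
```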
#### Experimental Feature
* The `DEFLATE_QPL` codec lowers the minimum SIMD requirement to SSE 4.2 ([doc change in QPL](https://github.com/intel/qpl/commit/3f8f5cea27739f5261e8fd577dc233ffe88bf679)). Intel® QPL relies on a run-time kernel dispatcher and a CPUID check to choose the best available implementation (SSE/AVX2/AVX-512). The CMake file for the QPL build in ClickHouse was restructured to align with the latest upstream QPL. [#49811](https://github.com/ClickHouse/ClickHouse/pull/49811) ([jasperzhu](https://github.com/jinjunzh)).
* Add initial support to do JOINs with pure parallel replicas. [#49544](https://github.com/ClickHouse/ClickHouse/pull/49544) ([Raúl Marín](https://github.com/Algunenano)).
* More parallelism on `Outdated` parts removal with "zero-copy replication". [#49630](https://github.com/ClickHouse/ClickHouse/pull/49630) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Parallel Replicas: 1) Fixed an error `NOT_FOUND_COLUMN_IN_BLOCK` in case of using parallel replicas with non-replicated storage when the setting `parallel_replicas_for_non_replicated_merge_tree` is disabled. 2) Now `allow_experimental_parallel_reading_from_replicas` has 3 possible values: 0 (disabled), 1 (enabled, silently disabled in case of failure, e.g. with FINAL or JOIN), and 2 (enabled, an exception is thrown in case of failure). 3) If the FINAL modifier is used in a SELECT query and parallel replicas are enabled, ClickHouse will try to disable them if `allow_experimental_parallel_reading_from_replicas` is set to 1 and throw an exception otherwise (see the example after this list). [#50195](https://github.com/ClickHouse/ClickHouse/pull/50195) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* When parallel replicas are enabled, they will always skip unavailable servers (the behavior is controlled by the setting `skip_unavailable_shards`, which is enabled by default and can only be disabled). This closes: [#48565](https://github.com/ClickHouse/ClickHouse/issues/48565). [#50293](https://github.com/ClickHouse/ClickHouse/pull/50293) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
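As a hedged illustration of the parallel-replicas modes described above (the setting names come from the entries; the values are for illustration only):

```sql
-- 0 = disabled; 1 = enabled, silently disabled on failure (e.g. FINAL or JOIN);
-- 2 = enabled, an exception is thrown on failure.
SET allow_experimental_parallel_reading_from_replicas = 1;

-- Needed in addition when reading from non-replicated MergeTree tables.
SET parallel_replicas_for_non_replicated_merge_tree = 1;
```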
#### Improvement
* The `BACKUP` command will not decrypt data from encrypted disks while making a backup. Instead the data will be stored in a backup in encrypted form. Such backups can be restored only to an encrypted disk with the same (or extended) list of encryption keys. [#48896](https://github.com/ClickHouse/ClickHouse/pull/48896) ([Vitaly Baranov](https://github.com/vitlibar)).
* Added possibility to use temporary tables in FROM part of ATTACH PARTITION FROM and REPLACE PARTITION FROM. [#49436](https://github.com/ClickHouse/ClickHouse/pull/49436) ([Roman Vasin](https://github.com/rvasin)).
* Added setting `async_insert` for `MergeTree` tables. It has the same meaning as query-level setting `async_insert` and enables asynchronous inserts for specific table. Note: it doesn't take effect for insert queries from `clickhouse-client`, use query-level setting in that case. [#49122](https://github.com/ClickHouse/ClickHouse/pull/49122) ([Anton Popov](https://github.com/CurtizJ)).
* Add support for size suffixes in quota creation statement parameters. [#49087](https://github.com/ClickHouse/ClickHouse/pull/49087) ([Eridanus](https://github.com/Eridanus117)).
* Extend `first_value` and `last_value` to accept NULL. [#46467](https://github.com/ClickHouse/ClickHouse/pull/46467) ([lgbo](https://github.com/lgbo-ustc)).
* Add alias `str_to_map` and `mapFromString` for `extractKeyValuePairs`. closes https://github.com/clickhouse/clickhouse/issues/47185. [#49466](https://github.com/ClickHouse/ClickHouse/pull/49466) ([flynn](https://github.com/ucasfl)).
* Add support for CGroup version 2 for asynchronous metrics about the memory usage and availability. This closes [#37983](https://github.com/ClickHouse/ClickHouse/issues/37983). [#45999](https://github.com/ClickHouse/ClickHouse/pull/45999) ([sichenzhao](https://github.com/sichenzhao)).
* Cluster table functions should always skip unavailable shards. close [#46314](https://github.com/ClickHouse/ClickHouse/issues/46314). [#46765](https://github.com/ClickHouse/ClickHouse/pull/46765) ([zk_kiger](https://github.com/zk-kiger)).
* Allow CSV file to contain empty columns in its header. [#47496](https://github.com/ClickHouse/ClickHouse/pull/47496) ([你不要过来啊](https://github.com/iiiuwioajdks)).
* Add Google Cloud Storage S3 compatible table function `gcs`. Like the `oss` and `cosn` functions, it is just an alias over the `s3` table function, and it does not bring any new features. [#47815](https://github.com/ClickHouse/ClickHouse/pull/47815) ([Kuba Kaflik](https://github.com/jkaflik)).
* Add ability to use strict parts size for S3 (compatibility with CloudFlare R2 S3 Storage). [#48492](https://github.com/ClickHouse/ClickHouse/pull/48492) ([Azat Khuzhin](https://github.com/azat)).
* Added new columns with info about `Replicated` database replicas to `system.clusters`: `database_shard_name`, `database_replica_name`, `is_active`. Added an optional `FROM SHARD` clause to `SYSTEM DROP DATABASE REPLICA` query. [#48548](https://github.com/ClickHouse/ClickHouse/pull/48548) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Add a new column `zookeeper_name` in system.replicas, to indicate on which (auxiliary) zookeeper cluster the replicated table's metadata is stored. [#48549](https://github.com/ClickHouse/ClickHouse/pull/48549) ([cangyin](https://github.com/cangyin)).
* The `IN` operator now supports the comparison of `Date` and `Date32`. Closes [#48736](https://github.com/ClickHouse/ClickHouse/issues/48736). [#48806](https://github.com/ClickHouse/ClickHouse/pull/48806) ([flynn](https://github.com/ucasfl)).
* Support for erasure codes in `HDFS`, author: @M1eyu2018, @tomscut. [#48833](https://github.com/ClickHouse/ClickHouse/pull/48833) ([M1eyu](https://github.com/M1eyu2018)).
* Implement SYSTEM DROP REPLICA from auxiliary ZooKeeper clusters; may close [#48931](https://github.com/ClickHouse/ClickHouse/issues/48931). [#48932](https://github.com/ClickHouse/ClickHouse/pull/48932) ([wangxiaobo](https://github.com/wzb5212)).
* Add Array data type to MongoDB. Closes [#48598](https://github.com/ClickHouse/ClickHouse/issues/48598). [#48983](https://github.com/ClickHouse/ClickHouse/pull/48983) ([Nikolay Degterinsky](https://github.com/evillique)).
* Support storing `Interval` data types in tables. [#49085](https://github.com/ClickHouse/ClickHouse/pull/49085) ([larryluogit](https://github.com/larryluogit)).
* Allow using the `ntile` window function without an explicit window frame definition: `ntile(3) OVER (ORDER BY a)` (see the example after this list), close [#46763](https://github.com/ClickHouse/ClickHouse/issues/46763). [#49093](https://github.com/ClickHouse/ClickHouse/pull/49093) ([vdimir](https://github.com/vdimir)).
* Added settings (`number_of_mutations_to_delay`, `number_of_mutations_to_throw`) to delay or throw `ALTER` queries that create mutations (`ALTER UPDATE`, `ALTER DELETE`, `ALTER MODIFY COLUMN`, ...) in case when table already has a lot of unfinished mutations. [#49117](https://github.com/ClickHouse/ClickHouse/pull/49117) ([Anton Popov](https://github.com/CurtizJ)).
* Catch exception from `create_directories` in filesystem cache. [#49203](https://github.com/ClickHouse/ClickHouse/pull/49203) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Copies embedded examples to a new field `example` in `system.functions` to supplement the field `description`. [#49222](https://github.com/ClickHouse/ClickHouse/pull/49222) ([Dan Roscigno](https://github.com/DanRoscigno)).
* Enable connection options for the MongoDB dictionary. Example: `<source><mongodb><host>localhost</host><port>27017</port><user></user><password></password><db>test</db><collection>dictionary_source</collection><options>ssl=true</options></mongodb></source>`. [#49225](https://github.com/ClickHouse/ClickHouse/pull/49225) ([MikhailBurdukov](https://github.com/MikhailBurdukov)).
* Added an alias `asymptotic` for `asymp` computational method for `kolmogorovSmirnovTest`. Improved documentation. [#49286](https://github.com/ClickHouse/ClickHouse/pull/49286) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Aggregation function groupBitAnd/Or/Xor now work on signed integer data. This makes them consistent with the behavior of scalar functions bitAnd/Or/Xor. [#49292](https://github.com/ClickHouse/ClickHouse/pull/49292) ([exmy](https://github.com/exmy)).
* Split function-documentation into more fine-granular fields. [#49300](https://github.com/ClickHouse/ClickHouse/pull/49300) ([Robert Schulze](https://github.com/rschu1ze)).
* Use multiple threads shared between all tables within a server to load outdated data parts. The size of the pool and its queue are controlled by the `max_outdated_parts_loading_thread_pool_size` and `outdated_part_loading_thread_pool_queue_size` settings. [#49317](https://github.com/ClickHouse/ClickHouse/pull/49317) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Don't overestimate the size of processed data for `LowCardinality` columns when they share dictionaries between blocks. This closes [#49322](https://github.com/ClickHouse/ClickHouse/issues/49322). See also [#48745](https://github.com/ClickHouse/ClickHouse/issues/48745). [#49323](https://github.com/ClickHouse/ClickHouse/pull/49323) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Parquet writer now uses reasonable row group size when invoked through `OUTFILE`. [#49325](https://github.com/ClickHouse/ClickHouse/pull/49325) ([Michael Kolupaev](https://github.com/al13n321)).
* Allow restricted keywords like `ARRAY` as an alias if the alias is quoted (see the example after this list). Closes [#49324](https://github.com/ClickHouse/ClickHouse/issues/49324). [#49360](https://github.com/ClickHouse/ClickHouse/pull/49360) ([Nikolay Degterinsky](https://github.com/evillique)).
* Data parts loading and deletion jobs were moved to shared server-wide pools instead of per-table pools. Pools sizes are controlled via settings `max_active_parts_loading_thread_pool_size`, `max_outdated_parts_loading_thread_pool_size` and `max_parts_cleaning_thread_pool_size` in top-level config. Table-level settings `max_part_loading_threads` and `max_part_removal_threads` became obsolete. [#49474](https://github.com/ClickHouse/ClickHouse/pull/49474) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Allow `?password=pass` in URL of the Play UI. Password is replaced in browser history. [#49505](https://github.com/ClickHouse/ClickHouse/pull/49505) ([Mike Kot](https://github.com/myrrc)).
* Allow reading zero-size objects from remote filesystems (because empty files are not backed up, we might end up with zero blobs in the metadata file). Closes [#49480](https://github.com/ClickHouse/ClickHouse/issues/49480). [#49519](https://github.com/ClickHouse/ClickHouse/pull/49519) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Attach thread MemoryTracker to `total_memory_tracker` after `ThreadGroup` detached. [#49527](https://github.com/ClickHouse/ClickHouse/pull/49527) ([Dmitry Novik](https://github.com/novikd)).
* Fix parameterized views when a query parameter is used multiple times in the query. [#49556](https://github.com/ClickHouse/ClickHouse/pull/49556) ([Azat Khuzhin](https://github.com/azat)).
* Release memory allocated for the last sent ProfileEvents snapshot in the context of a query. Followup [#47564](https://github.com/ClickHouse/ClickHouse/issues/47564). [#49561](https://github.com/ClickHouse/ClickHouse/pull/49561) ([Dmitry Novik](https://github.com/novikd)).
* Function "makeDate" now provides a MySQL-compatible overload (year & day of the year argument). [#49603](https://github.com/ClickHouse/ClickHouse/pull/49603) ([Robert Schulze](https://github.com/rschu1ze)).
* Support `dictionary` table function for `RegExpTreeDictionary`. [#49666](https://github.com/ClickHouse/ClickHouse/pull/49666) ([Han Fei](https://github.com/hanfei1991)).
* Added weighted fair IO scheduling policy. Added dynamic resource manager, which allows IO scheduling hierarchy to be updated in runtime w/o server restarts. [#49671](https://github.com/ClickHouse/ClickHouse/pull/49671) ([Sergei Trifonov](https://github.com/serxa)).
* Add compose request after multipart upload to GCS. This enables the usage of copy operation on objects uploaded with the multipart upload. It's recommended to set `s3_strict_upload_part_size` to some value because compose request can fail on objects created with parts of different sizes. [#49693](https://github.com/ClickHouse/ClickHouse/pull/49693) ([Antonio Andelic](https://github.com/antonio2368)).
* For the `extractKeyValuePairs` function: improve the "best-effort" parsing logic to accept `key_value_delimiter` as a valid part of the value. This also simplifies branching and might even speed up things a bit. [#49760](https://github.com/ClickHouse/ClickHouse/pull/49760) ([Arthur Passos](https://github.com/arthurpassos)).
* Add `initial_query_id` field for system.processors_profile_log [#49777](https://github.com/ClickHouse/ClickHouse/pull/49777) ([helifu](https://github.com/helifu)).
* System log tables can now have custom sorting keys. [#49778](https://github.com/ClickHouse/ClickHouse/pull/49778) ([helifu](https://github.com/helifu)).
* A new field `partitions` to `system.query_log` is used to indicate which partitions are participating in the calculation. [#49779](https://github.com/ClickHouse/ClickHouse/pull/49779) ([helifu](https://github.com/helifu)).
* Added `enable_the_endpoint_id_with_zookeeper_name_prefix` setting for `ReplicatedMergeTree` (disabled by default). When enabled, it adds ZooKeeper cluster name to table's interserver communication endpoint. It avoids `Duplicate interserver IO endpoint` errors when having replicated tables with the same path, but different auxiliary ZooKeepers. [#49780](https://github.com/ClickHouse/ClickHouse/pull/49780) ([helifu](https://github.com/helifu)).
* Add query parameters to `clickhouse-local`. Closes [#46561](https://github.com/ClickHouse/ClickHouse/issues/46561). [#49785](https://github.com/ClickHouse/ClickHouse/pull/49785) ([Nikolay Degterinsky](https://github.com/evillique)).
* Allow loading dictionaries and functions from YAML by default. In previous versions, it required editing the `dictionaries_config` or `user_defined_executable_functions_config` in the configuration file, as they expected `*.xml` files. [#49812](https://github.com/ClickHouse/ClickHouse/pull/49812) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* The Kafka table engine now allows to use alias columns. [#49824](https://github.com/ClickHouse/ClickHouse/pull/49824) ([Aleksandr Musorin](https://github.com/AVMusorin)).
* Add setting to limit the max number of pairs produced by `extractKeyValuePairs`, a safeguard to avoid using way too much memory. [#49836](https://github.com/ClickHouse/ClickHouse/pull/49836) ([Arthur Passos](https://github.com/arthurpassos)).
* Add support for (an unusual) case where the arguments in the `IN` operator are single-element tuples. [#49844](https://github.com/ClickHouse/ClickHouse/pull/49844) ([MikhailBurdukov](https://github.com/MikhailBurdukov)).
* The `bitHammingDistance` function now supports the `String` and `FixedString` data types. Closes [#48827](https://github.com/ClickHouse/ClickHouse/issues/48827). [#49858](https://github.com/ClickHouse/ClickHouse/pull/49858) ([flynn](https://github.com/ucasfl)).
* Fix timeout resetting errors in the client on OS X. [#49863](https://github.com/ClickHouse/ClickHouse/pull/49863) ([alekar](https://github.com/alekar)).
* Add support for big integers, such as UInt128, Int128, UInt256, and Int256 in the function `bitCount`. This enables Hamming distance over large bit masks for AI applications. [#49867](https://github.com/ClickHouse/ClickHouse/pull/49867) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fingerprints are now used instead of key IDs in encrypted disks. This simplifies the configuration of encrypted disks. [#49882](https://github.com/ClickHouse/ClickHouse/pull/49882) ([Vitaly Baranov](https://github.com/vitlibar)).
* Add UUID data type to PostgreSQL. Closes [#49739](https://github.com/ClickHouse/ClickHouse/issues/49739). [#49894](https://github.com/ClickHouse/ClickHouse/pull/49894) ([Nikolay Degterinsky](https://github.com/evillique)).
* Function `toUnixTimestamp` now accepts `Date` and `Date32` arguments. [#49989](https://github.com/ClickHouse/ClickHouse/pull/49989) ([Victor Krasnov](https://github.com/sirvickr)).
* Charge only server memory for dictionaries. [#49995](https://github.com/ClickHouse/ClickHouse/pull/49995) ([Azat Khuzhin](https://github.com/azat)).
* The server will allow using the `SQL_*` settings such as `SQL_AUTO_IS_NULL` as no-ops for MySQL compatibility. This closes [#49927](https://github.com/ClickHouse/ClickHouse/issues/49927). [#50013](https://github.com/ClickHouse/ClickHouse/pull/50013) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Preserve initial_query_id for ON CLUSTER queries, which is useful for introspection (under `distributed_ddl_entry_format_version=5`). [#50015](https://github.com/ClickHouse/ClickHouse/pull/50015) ([Azat Khuzhin](https://github.com/azat)).
* Preserve backward compatibility for renamed settings by using aliases (`allow_experimental_projection_optimization` for `optimize_use_projections`, `allow_experimental_lightweight_delete` for `enable_lightweight_delete`). [#50044](https://github.com/ClickHouse/ClickHouse/pull/50044) ([Azat Khuzhin](https://github.com/azat)).
* Support passing an FQDN through the setting `my_hostname` to register a cluster node in Keeper. Add an `invisible` setting to support multiple compute groups: a compute group, as a cluster, is invisible to other compute groups. [#50186](https://github.com/ClickHouse/ClickHouse/pull/50186) ([Yangkuan Liu](https://github.com/LiuYangkuan)).
* Fix PostgreSQL reading all the data even though `LIMIT n` could be specified. [#50187](https://github.com/ClickHouse/ClickHouse/pull/50187) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Add new profile events for queries with subqueries (`QueriesWithSubqueries`/`SelectQueriesWithSubqueries`/`InsertQueriesWithSubqueries`). [#50204](https://github.com/ClickHouse/ClickHouse/pull/50204) ([Azat Khuzhin](https://github.com/azat)).
* Added the `roles` field in the users.xml file, which allows specifying roles with grants via a config file. [#50278](https://github.com/ClickHouse/ClickHouse/pull/50278) ([pufit](https://github.com/pufit)).
* Report `CGroupCpuCfsPeriod` and `CGroupCpuCfsQuota` in AsynchronousMetrics. - Respect cgroup v2 memory limits during server startup. [#50379](https://github.com/ClickHouse/ClickHouse/pull/50379) ([alekar](https://github.com/alekar)).
* Add a signal handler for SIGQUIT to work the same way as SIGINT. Closes [#50298](https://github.com/ClickHouse/ClickHouse/issues/50298). [#50435](https://github.com/ClickHouse/ClickHouse/pull/50435) ([Nikolay Degterinsky](https://github.com/evillique)).
* If JSON parsing fails due to the large size of the object, output the last position to allow debugging. [#50474](https://github.com/ClickHouse/ClickHouse/pull/50474) ([Valentin Alexeev](https://github.com/valentinalexeev)).
* Support decimals with a non-fixed size. Closes [#49130](https://github.com/ClickHouse/ClickHouse/issues/49130). [#50586](https://github.com/ClickHouse/ClickHouse/pull/50586) ([Kruglov Pavel](https://github.com/Avogar)).
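Two of the improvements above are easy to demonstrate with self-contained queries; this is a sketch based directly on the expressions given in the entries:

```sql
-- ntile without an explicit window frame definition.
SELECT number, ntile(3) OVER (ORDER BY number) AS bucket
FROM numbers(6);

-- A restricted keyword used as an alias, allowed because it is quoted.
SELECT 1 AS `ARRAY`;
```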
#### Build/Testing/Packaging Improvement
* New and improved `keeper-bench`. Everything can be customized from a YAML/XML file: each type of request generator can have a specific set of fields; multi requests can be generated just by doing the same under a `multi` key; for each request or subrequest in a multi request, a `weight` field can be defined to control the distribution; trees that need to be set up for a test run can be defined; hosts can be defined with all timeouts customizable, and it is possible to control how many sessions to generate for each host; integers defined with `min_value` and `max_value` fields are random number generators. [#48547](https://github.com/ClickHouse/ClickHouse/pull/48547) ([Antonio Andelic](https://github.com/antonio2368)).
* io_uring is not supported on macOS; don't choose it when running tests locally to avoid occasional failures. [#49250](https://github.com/ClickHouse/ClickHouse/pull/49250) ([Frank Chen](https://github.com/FrankChen021)).
* Support named fault injection for testing. [#49361](https://github.com/ClickHouse/ClickHouse/pull/49361) ([Han Fei](https://github.com/hanfei1991)).
* Allow running ClickHouse in the OS where the `prctl` (process control) syscall is not available, such as AWS Lambda. [#49538](https://github.com/ClickHouse/ClickHouse/pull/49538) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fixed the issue of a build conflict between contrib/isa-l and isa-l in QPL [#49296](https://github.com/ClickHouse/ClickHouse/issues/49296). [#49584](https://github.com/ClickHouse/ClickHouse/pull/49584) ([jasperzhu](https://github.com/jinjunzh)).
* Utilities are now only built if explicitly requested (`-DENABLE_UTILS=1`) instead of by default; this reduces link times in typical development builds. [#49620](https://github.com/ClickHouse/ClickHouse/pull/49620) ([Robert Schulze](https://github.com/rschu1ze)).
* Pull build description of idxd-config into a separate CMake file to avoid accidental removal in future. [#49651](https://github.com/ClickHouse/ClickHouse/pull/49651) ([jasperzhu](https://github.com/jinjunzh)).
* Add CI check with an enabled analyzer in the master. Follow-up [#49562](https://github.com/ClickHouse/ClickHouse/issues/49562). [#49668](https://github.com/ClickHouse/ClickHouse/pull/49668) ([Dmitry Novik](https://github.com/novikd)).
* Switch to LLVM/clang 16. [#49678](https://github.com/ClickHouse/ClickHouse/pull/49678) ([Azat Khuzhin](https://github.com/azat)).
* Allow building ClickHouse with clang-17. [#49851](https://github.com/ClickHouse/ClickHouse/pull/49851) ([Alexey Milovidov](https://github.com/alexey-milovidov)). [#50410](https://github.com/ClickHouse/ClickHouse/pull/50410) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* ClickHouse is now easier to be integrated into other cmake projects. [#49991](https://github.com/ClickHouse/ClickHouse/pull/49991) ([Amos Bird](https://github.com/amosbird)). (Which is strongly discouraged - Alexey Milovidov).
* Fix strange additional QEMU logging after [#47151](https://github.com/ClickHouse/ClickHouse/issues/47151), see https://s3.amazonaws.com/clickhouse-test-reports/50078/a4743996ee4f3583884d07bcd6501df0cfdaa346/stateless_tests__release__databasereplicated__[3_4].html. [#50442](https://github.com/ClickHouse/ClickHouse/pull/50442) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* ClickHouse can work on Linux RISC-V 6.1.22. This closes [#50456](https://github.com/ClickHouse/ClickHouse/issues/50456). [#50457](https://github.com/ClickHouse/ClickHouse/pull/50457) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Bump internal protobuf to v3.18 (fixes bogus CVE-2022-1941). [#50400](https://github.com/ClickHouse/ClickHouse/pull/50400) ([Robert Schulze](https://github.com/rschu1ze)).
* Bump internal libxml2 to v2.10.4 (fixes bogus CVE-2023-28484 and bogus CVE-2023-29469). [#50402](https://github.com/ClickHouse/ClickHouse/pull/50402) ([Robert Schulze](https://github.com/rschu1ze)).
* Bump c-ares to v1.19.1 (bogus CVE-2023-32067, bogus CVE-2023-31130, bogus CVE-2023-31147). [#50403](https://github.com/ClickHouse/ClickHouse/pull/50403) ([Robert Schulze](https://github.com/rschu1ze)).
* Fix bogus CVE-2022-2469 in libgsasl. [#50404](https://github.com/ClickHouse/ClickHouse/pull/50404) ([Robert Schulze](https://github.com/rschu1ze)).
#### Bug Fix (user-visible misbehavior in an official stable release)
* ActionsDAG: fix wrong optimization [#47584](https://github.com/ClickHouse/ClickHouse/pull/47584) ([Salvatore Mesoraca](https://github.com/aiven-sal)).
* Correctly handle concurrent snapshots in Keeper [#48466](https://github.com/ClickHouse/ClickHouse/pull/48466) ([Antonio Andelic](https://github.com/antonio2368)).
* MergeTreeMarksLoader holds DataPart instead of DataPartStorage [#48515](https://github.com/ClickHouse/ClickHouse/pull/48515) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
* Sequence state fix [#48603](https://github.com/ClickHouse/ClickHouse/pull/48603) ([Ilya Golshtein](https://github.com/ilejn)).
* Back/Restore concurrency check on previous fails [#48726](https://github.com/ClickHouse/ClickHouse/pull/48726) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
* Fix Attaching a table with non-existent ZK path does not increase the ReadonlyReplica metric [#48954](https://github.com/ClickHouse/ClickHouse/pull/48954) ([wangxiaobo](https://github.com/wzb5212)).
* Fix possible terminate called for uncaught exception in some places [#49112](https://github.com/ClickHouse/ClickHouse/pull/49112) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix key not found error for queries with multiple StorageJoin [#49137](https://github.com/ClickHouse/ClickHouse/pull/49137) ([vdimir](https://github.com/vdimir)).
* Fix wrong query result when using nullable primary key [#49172](https://github.com/ClickHouse/ClickHouse/pull/49172) ([Duc Canh Le](https://github.com/canhld94)).
* Fix reinterpretAs*() on big endian machines [#49198](https://github.com/ClickHouse/ClickHouse/pull/49198) ([Suzy Wang](https://github.com/SuzyWangIBMer)).
* (Experimental zero-copy replication) Lock zero copy parts more atomically [#49211](https://github.com/ClickHouse/ClickHouse/pull/49211) ([alesapin](https://github.com/alesapin)).
* Fix race on Outdated parts loading [#49223](https://github.com/ClickHouse/ClickHouse/pull/49223) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Fix wrong answer when all key values are NULL and GROUP BY uses ROLLUP [#49282](https://github.com/ClickHouse/ClickHouse/pull/49282) ([Shuai li](https://github.com/loneylee)).
* Fix calculating load_factor for HASHED dictionaries with SHARDS [#49319](https://github.com/ClickHouse/ClickHouse/pull/49319) ([Azat Khuzhin](https://github.com/azat)).
* Disallow configuring compression CODECs for alias columns [#49363](https://github.com/ClickHouse/ClickHouse/pull/49363) ([Timur Solodovnikov](https://github.com/tsolodov)).
* Fix bug in removal of existing part directory [#49365](https://github.com/ClickHouse/ClickHouse/pull/49365) ([alesapin](https://github.com/alesapin)).
* Properly fix GCS when HMAC is used [#49390](https://github.com/ClickHouse/ClickHouse/pull/49390) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix fuzz bug when subquery set is not built when reading from remote() [#49425](https://github.com/ClickHouse/ClickHouse/pull/49425) ([Alexander Gololobov](https://github.com/davenger)).
* Invert `shutdown_wait_unfinished_queries` [#49427](https://github.com/ClickHouse/ClickHouse/pull/49427) ([Konstantin Bogdanov](https://github.com/thevar1able)).
* (Experimental zero-copy replication) Fix another zero copy bug [#49473](https://github.com/ClickHouse/ClickHouse/pull/49473) ([alesapin](https://github.com/alesapin)).
* Fix postgres database setting [#49481](https://github.com/ClickHouse/ClickHouse/pull/49481) ([Mal Curtis](https://github.com/snikch)).
* Correctly handle `s3Cluster` arguments [#49490](https://github.com/ClickHouse/ClickHouse/pull/49490) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix bug in TraceCollector destructor. [#49508](https://github.com/ClickHouse/ClickHouse/pull/49508) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
* Fix AsynchronousReadIndirectBufferFromRemoteFS breaking on short seeks [#49525](https://github.com/ClickHouse/ClickHouse/pull/49525) ([Michael Kolupaev](https://github.com/al13n321)).
* Fix dictionaries loading order [#49560](https://github.com/ClickHouse/ClickHouse/pull/49560) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Forbid the change of data type of Object('json') column [#49563](https://github.com/ClickHouse/ClickHouse/pull/49563) ([Nikolay Degterinsky](https://github.com/evillique)).
* Fix stress test (Logical error: Expected 7134 >= 11030) [#49623](https://github.com/ClickHouse/ClickHouse/pull/49623) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix bug in DISTINCT [#49628](https://github.com/ClickHouse/ClickHouse/pull/49628) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix: DISTINCT in order with zero values in non-sorted columns [#49636](https://github.com/ClickHouse/ClickHouse/pull/49636) ([Igor Nikonov](https://github.com/devcrafter)).
* Fix one-off error in big integers found by UBSan with fuzzer [#49645](https://github.com/ClickHouse/ClickHouse/pull/49645) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix reading from sparse columns after restart [#49660](https://github.com/ClickHouse/ClickHouse/pull/49660) ([Anton Popov](https://github.com/CurtizJ)).
* Fix assert in SpanHolder::finish() with fibers [#49673](https://github.com/ClickHouse/ClickHouse/pull/49673) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix short circuit functions and mutations with sparse arguments [#49716](https://github.com/ClickHouse/ClickHouse/pull/49716) ([Anton Popov](https://github.com/CurtizJ)).
* Fix writing appended files to incremental backups [#49725](https://github.com/ClickHouse/ClickHouse/pull/49725) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix "There is no physical column _row_exists in table" error occurring during lightweight delete mutation on a table with Object column. [#49737](https://github.com/ClickHouse/ClickHouse/pull/49737) ([Alexander Gololobov](https://github.com/davenger)).
* Fix msan issue in randomStringUTF8(uneven number) [#49750](https://github.com/ClickHouse/ClickHouse/pull/49750) ([Robert Schulze](https://github.com/rschu1ze)).
* Fix aggregate function kolmogorovSmirnovTest [#49768](https://github.com/ClickHouse/ClickHouse/pull/49768) ([FFFFFFFHHHHHHH](https://github.com/FFFFFFFHHHHHHH)).
* Fix settings aliases in native protocol [#49776](https://github.com/ClickHouse/ClickHouse/pull/49776) ([Azat Khuzhin](https://github.com/azat)).
* Fix `arrayMap` with array of tuples with single argument [#49789](https://github.com/ClickHouse/ClickHouse/pull/49789) ([Anton Popov](https://github.com/CurtizJ)).
* Fix per-query IO/BACKUPs throttling settings [#49797](https://github.com/ClickHouse/ClickHouse/pull/49797) ([Azat Khuzhin](https://github.com/azat)).
* Fix setting NULL in profile definition [#49831](https://github.com/ClickHouse/ClickHouse/pull/49831) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix a bug with projections and the aggregate_functions_null_for_empty setting (for query_plan_optimize_projection) [#49873](https://github.com/ClickHouse/ClickHouse/pull/49873) ([Amos Bird](https://github.com/amosbird)).
* Fix processing pending batch for Distributed async INSERT after restart [#49884](https://github.com/ClickHouse/ClickHouse/pull/49884) ([Azat Khuzhin](https://github.com/azat)).
* Fix assertion in CacheMetadata::doCleanup [#49914](https://github.com/ClickHouse/ClickHouse/pull/49914) ([Kseniia Sumarokova](https://github.com/kssenii)).
* fix `is_prefix` in OptimizeRegularExpression [#49919](https://github.com/ClickHouse/ClickHouse/pull/49919) ([Han Fei](https://github.com/hanfei1991)).
* Fix metrics `WriteBufferFromS3Bytes`, `WriteBufferFromS3Microseconds` and `WriteBufferFromS3RequestsErrors` [#49930](https://github.com/ClickHouse/ClickHouse/pull/49930) ([Aleksandr Musorin](https://github.com/AVMusorin)).
* Fix IPv6 encoding in protobuf [#49933](https://github.com/ClickHouse/ClickHouse/pull/49933) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
* Fix possible Logical error on bad Nullable parsing for text formats [#49960](https://github.com/ClickHouse/ClickHouse/pull/49960) ([Kruglov Pavel](https://github.com/Avogar)).
* Add setting output_format_parquet_compliant_nested_types to produce more compatible Parquet files [#50001](https://github.com/ClickHouse/ClickHouse/pull/50001) ([Michael Kolupaev](https://github.com/al13n321)).
* Fix logical error in stress test "Not enough space to add ..." [#50021](https://github.com/ClickHouse/ClickHouse/pull/50021) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Avoid deadlock when starting table in attach thread of `ReplicatedMergeTree` [#50026](https://github.com/ClickHouse/ClickHouse/pull/50026) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix assert in SpanHolder::finish() with fibers attempt 2 [#50034](https://github.com/ClickHouse/ClickHouse/pull/50034) ([Kruglov Pavel](https://github.com/Avogar)).
* Add proper escaping for DDL OpenTelemetry context serialization [#50045](https://github.com/ClickHouse/ClickHouse/pull/50045) ([Azat Khuzhin](https://github.com/azat)).
* Fix reporting broken projection parts [#50052](https://github.com/ClickHouse/ClickHouse/pull/50052) ([Amos Bird](https://github.com/amosbird)).
* JIT compilation not equals NaN fix [#50056](https://github.com/ClickHouse/ClickHouse/pull/50056) ([Maksim Kita](https://github.com/kitaisreal)).
* Fix crashing in case of Replicated database without arguments [#50058](https://github.com/ClickHouse/ClickHouse/pull/50058) ([Azat Khuzhin](https://github.com/azat)).
* Fix crash with `multiIf` and constant condition and nullable arguments [#50123](https://github.com/ClickHouse/ClickHouse/pull/50123) ([Anton Popov](https://github.com/CurtizJ)).
* Fix invalid index analysis for date related keys [#50153](https://github.com/ClickHouse/ClickHouse/pull/50153) ([Amos Bird](https://github.com/amosbird)).
* Do not allow modifying ORDER BY when there are no ORDER BY columns [#50154](https://github.com/ClickHouse/ClickHouse/pull/50154) ([Han Fei](https://github.com/hanfei1991)).
* Fix broken index analysis when binary operator contains a null constant argument [#50177](https://github.com/ClickHouse/ClickHouse/pull/50177) ([Amos Bird](https://github.com/amosbird)).
* clickhouse-client: disallow usage of `--query` and `--queries-file` at the same time [#50210](https://github.com/ClickHouse/ClickHouse/pull/50210) ([Alexey Gerasimchuk](https://github.com/Demilivor)).
* Fix UB for INTO OUTFILE extensions (APPEND / AND STDOUT) and WATCH EVENTS [#50216](https://github.com/ClickHouse/ClickHouse/pull/50216) ([Azat Khuzhin](https://github.com/azat)).
* Fix skipping spaces at end of row in CustomSeparatedIgnoreSpaces format [#50224](https://github.com/ClickHouse/ClickHouse/pull/50224) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix iceberg metadata parsing [#50232](https://github.com/ClickHouse/ClickHouse/pull/50232) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix nested distributed SELECT in WITH clause [#50234](https://github.com/ClickHouse/ClickHouse/pull/50234) ([Azat Khuzhin](https://github.com/azat)).
* Fix msan issue in keyed siphash [#50245](https://github.com/ClickHouse/ClickHouse/pull/50245) ([Robert Schulze](https://github.com/rschu1ze)).
* Fix bugs in Poco sockets in non-blocking mode, use true non-blocking sockets [#50252](https://github.com/ClickHouse/ClickHouse/pull/50252) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix checksum calculation for backup entries [#50264](https://github.com/ClickHouse/ClickHouse/pull/50264) ([Vitaly Baranov](https://github.com/vitlibar)).
|
||||
* Comparison functions NaN fix [#50287](https://github.com/ClickHouse/ClickHouse/pull/50287) ([Maksim Kita](https://github.com/kitaisreal)).
|
||||
* JIT aggregation nullable key fix [#50291](https://github.com/ClickHouse/ClickHouse/pull/50291) ([Maksim Kita](https://github.com/kitaisreal)).
|
||||
* Fix clickhouse-local crashing when writing empty Arrow or Parquet output [#50328](https://github.com/ClickHouse/ClickHouse/pull/50328) ([Michael Kolupaev](https://github.com/al13n321)).
|
||||
* Fix crash when Pool::Entry::disconnect() is called [#50334](https://github.com/ClickHouse/ClickHouse/pull/50334) ([Val Doroshchuk](https://github.com/valbok)).
|
||||
* Improved fetch part by holding directory lock longer [#50339](https://github.com/ClickHouse/ClickHouse/pull/50339) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
|
||||
* Fix bitShift* functions with both constant arguments [#50343](https://github.com/ClickHouse/ClickHouse/pull/50343) ([Kruglov Pavel](https://github.com/Avogar)).
|
||||
* Fix Keeper deadlock on exception when preprocessing requests. [#50387](https://github.com/ClickHouse/ClickHouse/pull/50387) ([frinkr](https://github.com/frinkr)).
|
||||
* Fix hashing of const integer values [#50421](https://github.com/ClickHouse/ClickHouse/pull/50421) ([Robert Schulze](https://github.com/rschu1ze)).
|
||||
* Fix merge_tree_min_rows_for_seek/merge_tree_min_bytes_for_seek for data skipping indexes [#50432](https://github.com/ClickHouse/ClickHouse/pull/50432) ([Azat Khuzhin](https://github.com/azat)).
|
||||
* Limit the number of in-flight tasks for loading outdated parts [#50450](https://github.com/ClickHouse/ClickHouse/pull/50450) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
|
||||
* Keeper fix: apply uncommitted state after snapshot install [#50483](https://github.com/ClickHouse/ClickHouse/pull/50483) ([Antonio Andelic](https://github.com/antonio2368)).
|
||||
* Fix incorrect constant folding [#50536](https://github.com/ClickHouse/ClickHouse/pull/50536) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||
* Fix logical error in stress test (Not enough space to add ...) [#50583](https://github.com/ClickHouse/ClickHouse/pull/50583) ([Kseniia Sumarokova](https://github.com/kssenii)).
|
||||
* Fix converting Null to LowCardinality(Nullable) in values table function [#50637](https://github.com/ClickHouse/ClickHouse/pull/50637) ([Kruglov Pavel](https://github.com/Avogar)).
|
||||
* Revert invalid RegExpTreeDictionary optimization [#50642](https://github.com/ClickHouse/ClickHouse/pull/50642) ([Johann Gan](https://github.com/johanngan)).
### <a id="234"></a> ClickHouse release 23.4, 2023-04-26
#### Backward Incompatible Change
@ -127,6 +127,9 @@ namespace Net
|
||||
|
||||
void setResolvedHost(std::string resolved_host) { _resolved_host.swap(resolved_host); }
|
||||
|
||||
std::string getResolvedHost() const { return _resolved_host; }
|
||||
/// Returns the resolved IP address of the target HTTP server.
|
||||
|
||||
Poco::UInt16 getPort() const;
|
||||
/// Returns the port number of the target HTTP server.
|
||||
|
||||
|
@ -12,6 +12,7 @@ add_library (_lz4 ${SRCS})
|
||||
add_library (ch_contrib::lz4 ALIAS _lz4)
|
||||
|
||||
target_compile_definitions (_lz4 PUBLIC LZ4_DISABLE_DEPRECATE_WARNINGS=1)
|
||||
target_compile_definitions (_lz4 PUBLIC LZ4_FAST_DEC_LOOP=1)
|
||||
if (SANITIZE STREQUAL "undefined")
|
||||
target_compile_options (_lz4 PRIVATE -fno-sanitize=undefined)
|
||||
endif ()
|
||||
|
@ -109,7 +109,7 @@ INSERT INTO test.visits (StartDate, CounterID, Sign, UserID)
|
||||
VALUES (1667446031, 1, 6, 3)
|
||||
```
|
||||
|
||||
The data are inserted in both the table and the materialized view `test.mv_visits`.
|
||||
The data is inserted in both the table and the materialized view `test.mv_visits`.
|
||||
|
||||
To get the aggregated data, we need to execute a query such as `SELECT ... GROUP BY ...` from the materialized view `test.mv_visits`:
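A hedged sketch of such a query follows; the state column names `Visits` and `Users` are only illustrative assumptions about how the materialized view is defined and are not taken from this document:

```sql
-- Aggregate states stored by the materialized view are finalized with the -Merge combinators.
SELECT
    StartDate,
    sumMerge(Visits) AS Visits,   -- assumed state column holding sumState(Sign)
    uniqMerge(Users) AS Users     -- assumed state column holding uniqState(UserID)
FROM test.mv_visits
GROUP BY StartDate
ORDER BY StartDate;
```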
@ -1,147 +1,156 @@
|
||||
# Approximate Nearest Neighbor Search Indexes [experimental] {#table_engines-ANNIndex}
|
||||
|
||||
The main task that indexes achieve is to quickly find nearest neighbors for multidimensional data. An example of such a problem can be finding similar pictures (texts) for a given picture (text). That problem can be reduced to finding the nearest [embeddings](https://cloud.google.com/architecture/overview-extracting-and-serving-feature-embeddings-for-machine-learning). They can be created from data using [UDF](/docs/en/sql-reference/functions/index.md/#executable-user-defined-functions).
|
||||
Nearest neighbor search refers to the problem of finding the point(s) with the smallest distance to a given point in an n-dimensional
space. Since exact search is usually too slow in practice, the task is often solved with approximate algorithms. A popular use
case of neighbor search is finding similar pictures (texts) for a given picture (text). Pictures (texts) can be decomposed into
|
||||
[embeddings](https://cloud.google.com/architecture/overview-extracting-and-serving-feature-embeddings-for-machine-learning), and instead of
|
||||
comparing pictures (texts) pixel-by-pixel (character-by-character), only the embeddings are compared.
|
||||
|
||||
The next queries find the closest neighbors in N-dimensional space using the L2 (Euclidean) distance:
|
||||
``` sql
|
||||
SELECT *
|
||||
FROM table_name
|
||||
WHERE L2Distance(Column, Point) < MaxDistance
|
||||
In terms of SQL, the problem can be expressed as follows:
|
||||
|
||||
``` sql
|
||||
SELECT *
|
||||
FROM table
|
||||
WHERE L2Distance(column, Point) < MaxDistance
|
||||
LIMIT N
|
||||
```
|
||||
|
||||
``` sql
|
||||
SELECT *
|
||||
FROM table_name
|
||||
ORDER BY L2Distance(Column, Point)
|
||||
``` sql
|
||||
SELECT *
|
||||
FROM table
|
||||
ORDER BY L2Distance(column, Point)
|
||||
LIMIT N
|
||||
```
|
||||
But it will take some time for execution because of the long calculation of the distance between `TargetEmbedding` and all other vectors. This is where ANN indexes can help. They store a compact approximation of the search space (e.g. using clustering, search trees, etc.) and are able to compute approximate neighbors quickly.
|
||||
|
||||
## Indexes Structure
|
||||
The queries are expensive because the L2 (Euclidean) distance between `Point` and all points in `column` must be computed. To speed this process up, Approximate Nearest Neighbor Search Indexes (ANN indexes) store a compact representation of the search space (using clustering, search trees, etc.) which allows computing an approximate answer quickly.
|
||||
|
||||
Approximate Nearest Neighbor Search Indexes (`ANNIndexes`) are similar to skip indexes. They are constructed by some granules and determine which of them should be skipped. Compared to skip indices, ANN indices use their results not only to skip some group of granules, but also to select particular granules from a set of granules.
|
||||
# Creating ANN Indexes
|
||||
|
||||
`ANNIndexes` are designed to speed up two types of queries:
|
||||
While ANN indexes are experimental, you first need to `SET allow_experimental_annoy_index = 1`.
|
||||
|
||||
- ###### Type 1: Where
|
||||
``` sql
|
||||
SELECT *
|
||||
FROM table_name
|
||||
WHERE DistanceFunction(Column, Point) < MaxDistance
|
||||
Syntax to create an ANN index over an `Array` column:
|
||||
|
||||
```sql
|
||||
CREATE TABLE table
|
||||
(
|
||||
`id` Int64,
|
||||
`embedding` Array(Float32),
|
||||
INDEX <ann_index_name> embedding TYPE <ann_index_type>(<ann_index_parameters>) GRANULARITY <N>
|
||||
)
|
||||
ENGINE = MergeTree
|
||||
ORDER BY id;
|
||||
```
|
||||
|
||||
Syntax to create an ANN index over a `Tuple` column:
|
||||
|
||||
```sql
|
||||
CREATE TABLE table
|
||||
(
|
||||
`id` Int64,
|
||||
`embedding` Tuple(Float32[, Float32[, ...]]),
|
||||
INDEX <ann_index_name> embedding TYPE <ann_index_type>(<ann_index_parameters>) GRANULARITY <N>
|
||||
)
|
||||
ENGINE = MergeTree
|
||||
ORDER BY id;
|
||||
```
|
||||
|
||||
ANN indexes are built during column insertion and merge, so `INSERT` and `OPTIMIZE` statements will be slower than for ordinary tables. ANN indexes are therefore ideally used only with immutable or rarely changed data, i.e. when there are many more read requests than write requests.
|
||||
|
||||
Similar to regular skip indexes, ANN indexes are constructed over granules and each indexed block consists of `GRANULARITY = <N>`-many
|
||||
granules. For example, if the primary index granularity of the table is 8192 (setting `index_granularity = 8192`) and `GRANULARITY = 2`,
|
||||
then each indexed block will consist of 16384 rows. However, unlike skip indexes, ANN indexes are not only able to skip the entire indexed
|
||||
block, they are able to skip individual granules in indexed blocks. As a result, the `GRANULARITY` parameter has a different meaning in ANN
|
||||
indexes than in normal skip indexes. Basically, the bigger `GRANULARITY` is chosen, the more data is provided to a single ANN index, and the
|
||||
higher the chance that with the right hyper parameters, the index will remember the data structure better.
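As a rough sketch of how the two settings interact (the table and index names are hypothetical, and `allow_experimental_annoy_index = 1` is assumed):

```sql
-- index_granularity = 8192 rows per granule and GRANULARITY 4 on the ANN index
-- means each indexed block covers 4 * 8192 = 32768 rows.
CREATE TABLE granularity_example
(
    id Int64,
    embedding Array(Float32),
    INDEX ann_idx embedding TYPE annoy('L2Distance') GRANULARITY 4
)
ENGINE = MergeTree
ORDER BY id
SETTINGS index_granularity = 8192;
```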
# Using ANN Indexes
|
||||
|
||||
ANN indexes support two types of queries:
|
||||
|
||||
- WHERE queries:
|
||||
|
||||
``` sql
|
||||
SELECT *
|
||||
FROM table
|
||||
WHERE DistanceFunction(column, Point) < MaxDistance
|
||||
LIMIT N
|
||||
```
|
||||
- ###### Type 2: Order by
|
||||
|
||||
- ORDER BY queries:
|
||||
|
||||
``` sql
|
||||
SELECT *
|
||||
FROM table_name [WHERE ...]
|
||||
ORDER BY DistanceFunction(Column, Point)
|
||||
SELECT *
|
||||
FROM table
|
||||
[WHERE ...]
|
||||
ORDER BY DistanceFunction(column, Point)
|
||||
LIMIT N
|
||||
```
|
||||
|
||||
In these queries, `DistanceFunction` is selected from [distance functions](/docs/en/sql-reference/functions/distance-functions.md). `Point` is a known vector (something like `(0.1, 0.1, ... )`). To avoid writing large vectors, use [client parameters](/docs/en//interfaces/cli.md#queries-with-parameters-cli-queries-with-parameters). `Value` - a float value that will bound the neighbourhood.
|
||||
`DistanceFunction` is a [distance function](/docs/en/sql-reference/functions/distance-functions.md), `Point` is a reference vector (e.g. `(0.17, 0.33, ...)`) and `MaxDistance` is a floating point value which restricts the size of the neighbourhood.
|
||||
|
||||
:::note
|
||||
ANN index can't speed up query that satisfies both types (`where + order by`, only one of them). All queries must have the limit, as algorithms are used to find nearest neighbors and need a specific number of them.
|
||||
:::tip
|
||||
To avoid writing out large vectors, you can use [query parameters](/docs/en//interfaces/cli.md#queries-with-parameters-cli-queries-with-parameters), e.g.
|
||||
|
||||
```bash
|
||||
clickhouse-client --param_vec='hello' --query="SELECT * FROM table WHERE L2Distance(embedding, {vec: Array(Float32)}) < 1.0"
|
||||
```
|
||||
:::
|
||||
|
||||
:::note
|
||||
Indexes are applied only to queries with a limit less than the `max_limit_for_ann_queries` setting. This helps to avoid memory overflows in queries with a large limit. `max_limit_for_ann_queries` setting can be changed if you know you can provide enough memory. The default value is `1000000`.
|
||||
:::
|
||||
ANN indexes cannot speed up queries that contain both a `WHERE DistanceFunction(column, Point) < MaxDistance` and an `ORDER BY DistanceFunction(column, Point)` clause. Also, the approximate algorithms used to determine the nearest neighbors require a limit, hence queries that use an ANN index must have a `LIMIT` clause.
|
||||
|
||||
Both types of queries are handled the same way. The indexes get `n` neighbors (where `n` is taken from the `LIMIT` clause) and work with them. In `ORDER BY` query they remember the numbers of all parts of the granule that have at least one of neighbor. In `WHERE` query they remember only those parts that satisfy the requirements.
|
||||
An ANN index is only used if the query has a `LIMIT` value smaller than setting `max_limit_for_ann_queries` (default: 1 million rows). This is a safety measure which helps to avoid large memory consumption by external libraries for approximate neighbor search.
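For illustration, under the default setting only the first of the following two queries is eligible for the ANN index (the table and column names are hypothetical):

```sql
-- LIMIT 10 is below max_limit_for_ann_queries (default 1000000), so the ANN index may be used.
SELECT id FROM table ORDER BY L2Distance(embedding, [0.17, 0.33, 0.12]) LIMIT 10;

-- LIMIT 2000000 exceeds the default threshold, so ClickHouse falls back to an exact scan.
SELECT id FROM table ORDER BY L2Distance(embedding, [0.17, 0.33, 0.12]) LIMIT 2000000;
```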
# Available ANN Indexes
|
||||
|
||||
## Create table with ANNIndex
|
||||
|
||||
This feature is disabled by default. To enable it, set `allow_experimental_annoy_index` to 1. Also, this feature is disabled on ARM, due to likely problems with the algorithm.
|
||||
|
||||
```sql
|
||||
CREATE TABLE t
|
||||
(
|
||||
`id` Int64,
|
||||
`data` Tuple(Float32, Float32, Float32),
|
||||
INDEX ann_index_name data TYPE ann_index_type(ann_index_parameters) GRANULARITY N
|
||||
)
|
||||
ENGINE = MergeTree
|
||||
ORDER BY id;
|
||||
```
|
||||
|
||||
```sql
|
||||
CREATE TABLE t
|
||||
(
|
||||
`id` Int64,
|
||||
`data` Array(Float32),
|
||||
INDEX ann_index_name data TYPE ann_index_type(ann_index_parameters) GRANULARITY N
|
||||
)
|
||||
ENGINE = MergeTree
|
||||
ORDER BY id;
|
||||
```
|
||||
|
||||
With greater `GRANULARITY` indexes remember the data structure better. The `GRANULARITY` indicates how many granules will be used to construct the index. The more data is provided for the index, the more of it can be handled by one index and the more chances that with the right hyper parameters the index will remember the data structure better. But some indexes can't be built if they don't have enough data, so this granule will always participate in the query. For more information, see the description of indexes.
|
||||
|
||||
As the indexes are built only during insertions into table, `INSERT` and `OPTIMIZE` queries are slower than for ordinary table. At this stage indexes remember all the information about the given data. ANNIndexes should be used if you have immutable or rarely changed data and many read requests.
|
||||
|
||||
You can create your table with index which uses certain algorithm. Now only indices based on the following algorithms are supported:
|
||||
|
||||
# Index list
|
||||
- [Annoy](/docs/en/engines/table-engines/mergetree-family/annindexes.md#annoy-annoy)
|
||||
|
||||
# Annoy {#annoy}
|
||||
Implementation of the algorithm was taken from [this repository](https://github.com/spotify/annoy).
|
||||
## Annoy {#annoy}
|
||||
|
||||
Short description of the algorithm:
|
||||
The algorithm recursively divides in half all space by random linear surfaces (lines in 2D, planes in 3D etc.). Thus it makes tree of polyhedrons and points that they contains. Repeating the operation several times for greater accuracy it creates a forest.
|
||||
To find K Nearest Neighbours it goes down through the trees and fills the buffer of closest points using the priority queue of polyhedrons. Next, it sorts buffer and return the nearest K points.
|
||||
(currently disabled on ARM due to memory safety problems with the algorithm)
|
||||
|
||||
This type of ANN index implements [the Annoy algorithm](https://github.com/spotify/annoy) which uses a recursive division of the space in random linear surfaces (lines in 2D, planes in 3D etc.).
|
||||
|
||||
Syntax to create an Annoy index over an `Array` column:
|
||||
|
||||
__Examples__:
|
||||
```sql
|
||||
CREATE TABLE t
|
||||
CREATE TABLE table
|
||||
(
|
||||
id Int64,
|
||||
data Tuple(Float32, Float32, Float32),
|
||||
INDEX ann_index_name data TYPE annoy(NumTrees, DistanceName) GRANULARITY N
|
||||
embedding Array(Float32),
|
||||
INDEX <ann_index_name> embedding TYPE annoy([DistanceName[, NumTrees]]) GRANULARITY N
|
||||
)
|
||||
ENGINE = MergeTree
|
||||
ORDER BY id;
|
||||
```
|
||||
|
||||
Syntax to create an Annoy index over a `Tuple` column:
|
||||
|
||||
```sql
|
||||
CREATE TABLE t
|
||||
CREATE TABLE table
|
||||
(
|
||||
id Int64,
|
||||
data Array(Float32),
|
||||
INDEX ann_index_name data TYPE annoy(NumTrees, DistanceName) GRANULARITY N
|
||||
embedding Tuple(Float32[, Float32[, ...]]),
|
||||
INDEX <ann_index_name> embedding TYPE annoy([DistanceName[, NumTrees]]) GRANULARITY N
|
||||
)
|
||||
ENGINE = MergeTree
|
||||
ORDER BY id;
|
||||
```
|
||||
|
||||
Parameter `DistanceName` is the name of a distance function (default `L2Distance`). Annoy currently supports `L2Distance` and `cosineDistance` as distance functions. Parameter `NumTrees` (default: 100) is the number of trees which the algorithm will create. Higher values of `NumTrees` mean slower `CREATE` and `SELECT` statements (approximately linearly), but increase the accuracy of search results.
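A hedged sketch of an index definition that sets both parameters explicitly (the table name and parameter values are chosen only for illustration):

```sql
CREATE TABLE table_with_tuned_annoy
(
    id Int64,
    embedding Array(Float32),
    -- 'L2Distance' is the default distance function; 200 trees trade slower CREATE/SELECT for higher accuracy.
    INDEX annoy_idx embedding TYPE annoy('L2Distance', 200) GRANULARITY 1
)
ENGINE = MergeTree
ORDER BY id;
```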
:::note
|
||||
Table with array field will work faster, but all arrays **must** have same length. Use [CONSTRAINT](/docs/en/sql-reference/statements/create/table.md#constraints) to avoid errors. For example, `CONSTRAINT constraint_name_1 CHECK length(data) = 256`.
|
||||
Indexes over columns of type `Array` will generally work faster than indexes on `Tuple` columns. All arrays **must** have the same length. Use [CONSTRAINT](/docs/en/sql-reference/statements/create/table.md#constraints) to avoid errors. For example, `CONSTRAINT constraint_name_1 CHECK length(embedding) = 256`.
|
||||
:::
|
||||
|
||||
Parameter `NumTrees` is the number of trees which the algorithm will create. The bigger it is, the slower (approximately linear) it works (in both `CREATE` and `SELECT` requests), but the better accuracy you get (adjusted for randomness). By default it is set to `100`. Parameter `DistanceName` is name of distance function. By default it is set to `L2Distance`. It can be set without changing first parameter, for example
|
||||
```sql
|
||||
CREATE TABLE t
|
||||
(
|
||||
id Int64,
|
||||
data Array(Float32),
|
||||
INDEX ann_index_name data TYPE annoy('cosineDistance') GRANULARITY N
|
||||
)
|
||||
ENGINE = MergeTree
|
||||
ORDER BY id;
|
||||
```
|
||||
Setting `annoy_index_search_k_nodes` (default: `NumTrees * LIMIT`) determines how many tree nodes are inspected during SELECTs. It can be used to balance runtime and accuracy.
|
||||
|
||||
Annoy supports `L2Distance` and `cosineDistance`.
|
||||
Example:
|
||||
|
||||
In the `SELECT` in the settings (`ann_index_select_query_params`) you can specify the size of the internal buffer (more details in the description above or in the [original repository](https://github.com/spotify/annoy)). During the query it will inspect up to `search_k` nodes which defaults to `n_trees * n` if not provided. `search_k` gives you a run-time trade-off between better accuracy and speed.
|
||||
|
||||
__Example__:
|
||||
``` sql
|
||||
SELECT *
|
||||
FROM table_name [WHERE ...]
|
||||
ORDER BY L2Distance(Column, Point)
|
||||
SELECT *
|
||||
FROM table_name [WHERE ...]
|
||||
ORDER BY L2Distance(column, Point)
|
||||
LIMIT N
|
||||
SETTING ann_index_select_query_params=`k_search=100`
|
||||
SETTINGS annoy_index_search_k_nodes=100
|
||||
```
|
||||
|
@ -75,7 +75,7 @@ SELECT
|
||||
payment_type,
|
||||
pickup_ntaname,
|
||||
dropoff_ntaname
|
||||
FROM s3(
|
||||
FROM gcs(
|
||||
'https://storage.googleapis.com/clickhouse-public-datasets/nyc-taxi/trips_{0..2}.gz',
|
||||
'TabSeparatedWithNames'
|
||||
);
|
||||
|
@ -227,6 +227,89 @@ SELECT * FROM data_01515 WHERE d1 = 0 SETTINGS force_data_skipping_indices='`d1_
|
||||
SELECT * FROM data_01515 WHERE d1 = 0 AND assumeNotNull(d1_null) = 0 SETTINGS force_data_skipping_indices='`d1_idx`, d1_null_idx'; -- Ok.
|
||||
```
|
||||
|
||||
## ignore_data_skipping_indices {#settings-ignore_data_skipping_indices}
|
||||
|
||||
Ignores the specified data skipping indexes during query execution, even if the query could otherwise use them.
|
||||
|
||||
Consider the following example:
|
||||
|
||||
```sql
|
||||
CREATE TABLE data
|
||||
(
|
||||
key Int,
|
||||
x Int,
|
||||
y Int,
|
||||
INDEX x_idx x TYPE minmax GRANULARITY 1,
|
||||
INDEX y_idx y TYPE minmax GRANULARITY 1,
|
||||
INDEX xy_idx (x,y) TYPE minmax GRANULARITY 1
|
||||
)
|
||||
Engine=MergeTree()
|
||||
ORDER BY key;
|
||||
|
||||
INSERT INTO data VALUES (1, 2, 3);
|
||||
|
||||
SELECT * FROM data;
|
||||
SELECT * FROM data SETTINGS ignore_data_skipping_indices=''; -- query will produce CANNOT_PARSE_TEXT error.
|
||||
SELECT * FROM data SETTINGS ignore_data_skipping_indices='x_idx'; -- Ok.
|
||||
SELECT * FROM data SETTINGS ignore_data_skipping_indices='na_idx'; -- Ok.
|
||||
|
||||
SELECT * FROM data WHERE x = 1 AND y = 1 SETTINGS ignore_data_skipping_indices='xy_idx',force_data_skipping_indices='xy_idx'; -- query will produce INDEX_NOT_USED error, since xy_idx is explicitly ignored.
|
||||
SELECT * FROM data WHERE x = 1 AND y = 2 SETTINGS ignore_data_skipping_indices='xy_idx';
|
||||
```
|
||||
|
||||
The query without ignoring any indexes:
|
||||
```sql
|
||||
EXPLAIN indexes = 1 SELECT * FROM data WHERE x = 1 AND y = 2;
|
||||
|
||||
Expression ((Projection + Before ORDER BY))
|
||||
Filter (WHERE)
|
||||
ReadFromMergeTree (default.data)
|
||||
Indexes:
|
||||
PrimaryKey
|
||||
Condition: true
|
||||
Parts: 1/1
|
||||
Granules: 1/1
|
||||
Skip
|
||||
Name: x_idx
|
||||
Description: minmax GRANULARITY 1
|
||||
Parts: 0/1
|
||||
Granules: 0/1
|
||||
Skip
|
||||
Name: y_idx
|
||||
Description: minmax GRANULARITY 1
|
||||
Parts: 0/0
|
||||
Granules: 0/0
|
||||
Skip
|
||||
Name: xy_idx
|
||||
Description: minmax GRANULARITY 1
|
||||
Parts: 0/0
|
||||
Granules: 0/0
|
||||
```
|
||||
|
||||
Ignoring the `xy_idx` index:
|
||||
```sql
|
||||
EXPLAIN indexes = 1 SELECT * FROM data WHERE x = 1 AND y = 2 SETTINGS ignore_data_skipping_indices='xy_idx';
|
||||
|
||||
Expression ((Projection + Before ORDER BY))
|
||||
Filter (WHERE)
|
||||
ReadFromMergeTree (default.data)
|
||||
Indexes:
|
||||
PrimaryKey
|
||||
Condition: true
|
||||
Parts: 1/1
|
||||
Granules: 1/1
|
||||
Skip
|
||||
Name: x_idx
|
||||
Description: minmax GRANULARITY 1
|
||||
Parts: 0/1
|
||||
Granules: 0/1
|
||||
Skip
|
||||
Name: y_idx
|
||||
Description: minmax GRANULARITY 1
|
||||
Parts: 0/0
|
||||
Granules: 0/0
|
||||
```
|
||||
|
||||
Works with tables in the MergeTree family.
|
||||
|
||||
## convert_query_to_cnf {#convert_query_to_cnf}
|
||||
@ -3155,7 +3238,7 @@ Possible values:
|
||||
- Positive integer.
|
||||
- 0 or 1 — Disabled. `SELECT` queries are executed in a single thread.
|
||||
|
||||
Default value: `16`.
|
||||
Default value: `max_threads`.
|
||||
|
||||
## opentelemetry_start_trace_probability {#opentelemetry-start-trace-probability}
|
||||
|
||||
|
@ -4,7 +4,7 @@ sidebar_label: Aggregate Functions
|
||||
sidebar_position: 33
|
||||
---
|
||||
|
||||
# Aggregate Functions
|
||||
# Aggregate Functions
|
||||
|
||||
Aggregate functions work in the [normal](http://www.sql-tutorial.com/sql-aggregate-functions-sql-tutorial) way as expected by database experts.
|
||||
|
||||
@ -72,3 +72,16 @@ FROM t_null_big
|
||||
│ 2.3333333333333335 │ 1.4 │
|
||||
└────────────────────┴─────────────────────┘
|
||||
```
|
||||
|
||||
Also you can use [Tuple](/docs/en/sql-reference/data-types/tuple.md) to work around the NULL-skipping behavior. A `Tuple` that contains only a `NULL` value is not `NULL`, so the aggregate functions won't skip that row because of that `NULL` value.
|
||||
|
||||
```sql
|
||||
SELECT
|
||||
groupArray(y),
|
||||
groupArray(tuple(y)).1
|
||||
FROM t_null_big;
|
||||
|
||||
┌─groupArray(y)─┬─tupleElement(groupArray(tuple(y)), 1)─┐
|
||||
│ [2,2,3] │ [2,NULL,2,3,NULL] │
|
||||
└───────────────┴───────────────────────────────────────┘
|
||||
```
|
||||
|
@ -6,6 +6,7 @@ sidebar_position: 106
|
||||
# argMax
|
||||
|
||||
Calculates the `arg` value for a maximum `val` value. If there are several different values of `arg` for maximum values of `val`, returns the first of these values encountered.
|
||||
Both parts, the `arg` and the `max`, behave as [aggregate functions](/docs/en/sql-reference/aggregate-functions/index.md); they both [skip `Null`](/docs/en/sql-reference/aggregate-functions/index.md#null-processing) during processing and return non-`Null` values if non-`Null` values are available.
|
||||
|
||||
**Syntax**
|
||||
|
||||
@ -49,3 +50,60 @@ Result:
|
||||
│ director │
|
||||
└──────────────────────┘
|
||||
```
|
||||
|
||||
**Extended example**
|
||||
|
||||
```sql
|
||||
CREATE TABLE test
|
||||
(
|
||||
a Nullable(String),
|
||||
b Nullable(Int64)
|
||||
)
|
||||
ENGINE = Memory AS
|
||||
SELECT *
|
||||
FROM VALUES(('a', 1), ('b', 2), ('c', 2), (NULL, 3), (NULL, NULL), ('d', NULL));
|
||||
|
||||
select * from test;
|
||||
┌─a────┬────b─┐
|
||||
│ a │ 1 │
|
||||
│ b │ 2 │
|
||||
│ c │ 2 │
|
||||
│ ᴺᵁᴸᴸ │ 3 │
|
||||
│ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │
|
||||
│ d │ ᴺᵁᴸᴸ │
|
||||
└──────┴──────┘
|
||||
|
||||
SELECT argMax(a, b), max(b) FROM test;
|
||||
┌─argMax(a, b)─┬─max(b)─┐
|
||||
│ b            │      3 │ -- argMax = 'b' because it is the first non-Null value, max(b) is from another row!
|
||||
└──────────────┴────────┘
|
||||
|
||||
SELECT argMax(tuple(a), b) FROM test;
|
||||
┌─argMax(tuple(a), b)─┐
|
||||
│ (NULL)              │ -- A `Tuple` that contains only a `NULL` value is not `NULL`, so the aggregate functions won't skip that row because of that `NULL` value
|
||||
└─────────────────────┘
|
||||
|
||||
SELECT (argMax((a, b), b) as t).1 argMaxA, t.2 argMaxB FROM test;
|
||||
┌─argMaxA─┬─argMaxB─┐
|
||||
│ ᴺᵁᴸᴸ    │       3 │ -- you can use Tuple and get both (all - tuple(*)) columns for the corresponding max(b)
|
||||
└─────────┴─────────┘
|
||||
|
||||
SELECT argMax(a, b), max(b) FROM test WHERE a IS NULL AND b IS NULL;
|
||||
┌─argMax(a, b)─┬─max(b)─┐
|
||||
│ ᴺᵁᴸᴸ         │   ᴺᵁᴸᴸ │ -- All aggregated rows contain at least one `NULL` value because of the filter, so all rows are skipped, therefore the result will be `NULL`
|
||||
└──────────────┴────────┘
|
||||
|
||||
SELECT argMax(a, (b,a)) FROM test;
|
||||
┌─argMax(a, tuple(b, a))─┐
|
||||
│ c                      │ -- There are two rows with b=2; using a `Tuple` in `max` allows getting an `arg` other than the first one
|
||||
└────────────────────────┘
|
||||
|
||||
SELECT argMax(a, tuple(b)) FROM test;
|
||||
┌─argMax(a, tuple(b))─┐
|
||||
│ b                   │ -- a `Tuple` can be used in `max` so that `NULL` values of `b` are not skipped
|
||||
└─────────────────────┘
|
||||
```
|
||||
|
||||
**See also**
|
||||
|
||||
- [Tuple](/docs/en/sql-reference/data-types/tuple.md)
|
||||
|
@ -6,6 +6,7 @@ sidebar_position: 105
|
||||
# argMin
|
||||
|
||||
Calculates the `arg` value for a minimum `val` value. If there are several different values of `arg` for minimum values of `val`, returns the first of these values encountered.
|
||||
Both parts, the `arg` and the `min`, behave as [aggregate functions](/docs/en/sql-reference/aggregate-functions/index.md); they both [skip `Null`](/docs/en/sql-reference/aggregate-functions/index.md#null-processing) during processing and return non-`Null` values if non-`Null` values are available.
|
||||
|
||||
**Syntax**
|
||||
|
||||
@ -49,3 +50,65 @@ Result:
|
||||
│ worker │
|
||||
└──────────────────────┘
|
||||
```
|
||||
|
||||
**Extended example**
|
||||
|
||||
```sql
|
||||
CREATE TABLE test
|
||||
(
|
||||
a Nullable(String),
|
||||
b Nullable(Int64)
|
||||
)
|
||||
ENGINE = Memory AS
|
||||
SELECT *
|
||||
FROM VALUES((NULL, 0), ('a', 1), ('b', 2), ('c', 2), (NULL, NULL), ('d', NULL));
|
||||
|
||||
select * from test;
|
||||
┌─a────┬────b─┐
|
||||
│ ᴺᵁᴸᴸ │ 0 │
|
||||
│ a │ 1 │
|
||||
│ b │ 2 │
|
||||
│ c │ 2 │
|
||||
│ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │
|
||||
│ d │ ᴺᵁᴸᴸ │
|
||||
└──────┴──────┘
|
||||
|
||||
SELECT argMin(a, b), min(b) FROM test;
|
||||
┌─argMin(a, b)─┬─min(b)─┐
|
||||
│ a            │      0 │ -- argMin = a because it is the first non-`NULL` value, min(b) is from another row!
|
||||
└──────────────┴────────┘
|
||||
|
||||
SELECT argMin(tuple(a), b) FROM test;
|
||||
┌─argMin(tuple(a), b)─┐
|
||||
│ (NULL)              │ -- A `Tuple` that contains only a `NULL` value is not `NULL`, so the aggregate functions won't skip that row because of that `NULL` value
|
||||
└─────────────────────┘
|
||||
|
||||
SELECT (argMin((a, b), b) as t).1 argMinA, t.2 argMinB from test;
|
||||
┌─argMinA─┬─argMinB─┐
|
||||
│ ᴺᵁᴸᴸ    │       0 │ -- you can use `Tuple` and get both (all - tuple(*)) columns for the corresponding min(b)
|
||||
└─────────┴─────────┘
|
||||
|
||||
SELECT argMin(a, b), min(b) FROM test WHERE a IS NULL and b IS NULL;
|
||||
┌─argMin(a, b)─┬─min(b)─┐
|
||||
│ ᴺᵁᴸᴸ         │   ᴺᵁᴸᴸ │ -- All aggregated rows contain at least one `NULL` value because of the filter, so all rows are skipped, therefore the result will be `NULL`
|
||||
└──────────────┴────────┘
|
||||
|
||||
SELECT argMin(a, (b, a)), min(tuple(b, a)) FROM test;
|
||||
┌─argMin(a, tuple(b, a))─┬─min(tuple(b, a))─┐
|
||||
│ d                      │ (NULL,NULL)      │ -- 'd' is the first non-`NULL` value for the min
|
||||
└────────────────────────┴──────────────────┘
|
||||
|
||||
SELECT argMin((a, b), (b, a)), min(tuple(b, a)) FROM test;
|
||||
┌─argMin(tuple(a, b), tuple(b, a))─┬─min(tuple(b, a))─┐
|
||||
│ (NULL,NULL)                      │ (NULL,NULL)      │ -- argMin returns (NULL,NULL) here because using a `Tuple` prevents skipping `NULL` values, and min(tuple(b, a)) is in this case the minimal value for this dataset
|
||||
└──────────────────────────────────┴──────────────────┘
|
||||
|
||||
SELECT argMin(a, tuple(b)) FROM test;
|
||||
┌─argMin(a, tuple(b))─┐
│ d                   │ -- a `Tuple` can be used in `min` so that rows where `b` is `NULL` are not skipped.
|
||||
└─────────────────────┘
|
||||
```
|
||||
|
||||
**See also**
|
||||
|
||||
- [Tuple](/docs/en/sql-reference/data-types/tuple.md)
|
||||
|
@ -10,7 +10,9 @@ There are at least\* two types of functions - regular functions (they are just c
|
||||
|
||||
In this section we discuss regular functions. For aggregate functions, see the section “Aggregate functions”.
|
||||
|
||||
\* - There is a third type of function that the ‘arrayJoin’ function belongs to; table functions can also be mentioned separately.\*
|
||||
:::note
|
||||
There is a third type of function that the [‘arrayJoin’ function](/docs/en/sql-reference/functions/array-join.md) belongs to. And [table functions](/docs/en/sql-reference/table-functions/index.md) can also be mentioned separately.
|
||||
:::
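For instance, `arrayJoin` differs from regular functions in that it produces a set of rows rather than a single value per row; a minimal illustration:

```sql
-- Each array element becomes its own row, so this query returns three rows: 1, 2 and 3.
SELECT arrayJoin([1, 2, 3]) AS value;
```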
## Strong Typing
|
||||
|
||||
|
@ -10,7 +10,7 @@ sidebar_label: INDEX
|
||||
|
||||
The following operations are available:
|
||||
|
||||
- `ALTER TABLE [db].table_name [ON CLUSTER cluster] ADD INDEX name expression TYPE type GRANULARITY value [FIRST|AFTER name]` - Adds index description to tables metadata.
|
||||
- `ALTER TABLE [db].table_name [ON CLUSTER cluster] ADD INDEX name expression TYPE type [GRANULARITY value] [FIRST|AFTER name]` - Adds index description to tables metadata.
|
||||
|
||||
- `ALTER TABLE [db].table_name [ON CLUSTER cluster] DROP INDEX name` - Removes index description from tables metadata and deletes index files from disk. Implemented as a [mutation](/docs/en/sql-reference/statements/alter/index.md#mutations).
|
||||
|
||||
|
@ -273,7 +273,7 @@ SHOW DICTIONARIES FROM db LIKE '%reg%' LIMIT 2
|
||||
Displays a list of primary and data skipping indexes of a table.
|
||||
|
||||
```sql
|
||||
SHOW [EXTENDED] {INDEX | INDEXES | KEYS } {FROM | IN} <table> [{FROM | IN} <db>] [WHERE <expr>] [INTO OUTFILE <filename>] [FORMAT <format>]
|
||||
SHOW [EXTENDED] {INDEX | INDEXES | INDICES | KEYS } {FROM | IN} <table> [{FROM | IN} <db>] [WHERE <expr>] [INTO OUTFILE <filename>] [FORMAT <format>]
|
||||
```
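A hedged usage sketch (the database and table names are hypothetical):

```sql
-- List the primary key and data skipping indexes of table `tab` in database `db`.
SHOW INDEXES FROM tab FROM db;
```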
The database and table name can be specified in abbreviated form as `<db>.<table>`, i.e. `FROM tab FROM db` and `FROM db.tab` are
|
||||
|
@ -141,6 +141,13 @@ public:
|
||||
nested_func->merge(place, rhs, arena);
|
||||
}
|
||||
|
||||
bool isAbleToParallelizeMerge() const override { return nested_func->isAbleToParallelizeMerge(); }
|
||||
|
||||
void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, ThreadPool & thread_pool, Arena * arena) const override
|
||||
{
|
||||
nested_func->merge(place, rhs, thread_pool, arena);
|
||||
}
|
||||
|
||||
void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf, std::optional<size_t> version) const override
|
||||
{
|
||||
nested_func->serialize(place, buf, version);
|
||||
|
@ -110,6 +110,13 @@ public:
|
||||
nested_func->merge(place, rhs, arena);
|
||||
}
|
||||
|
||||
bool isAbleToParallelizeMerge() const override { return nested_func->isAbleToParallelizeMerge(); }
|
||||
|
||||
void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, ThreadPool & thread_pool, Arena * arena) const override
|
||||
{
|
||||
nested_func->merge(place, rhs, thread_pool, arena);
|
||||
}
|
||||
|
||||
void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf, std::optional<size_t> version) const override
|
||||
{
|
||||
nested_func->serialize(place, buf, version);
|
||||
|
@ -148,6 +148,13 @@ public:
|
||||
nested_function->merge(nestedPlace(place), nestedPlace(rhs), arena);
|
||||
}
|
||||
|
||||
bool isAbleToParallelizeMerge() const override { return nested_function->isAbleToParallelizeMerge(); }
|
||||
|
||||
void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, ThreadPool & thread_pool, Arena * arena) const override
|
||||
{
|
||||
nested_function->merge(nestedPlace(place), nestedPlace(rhs), thread_pool, arena);
|
||||
}
|
||||
|
||||
void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf, std::optional<size_t> version) const override
|
||||
{
|
||||
bool flag = getFlag(place);
|
||||
|
@ -91,6 +91,13 @@ public:
|
||||
nested_func->merge(place, rhs, arena);
|
||||
}
|
||||
|
||||
bool isAbleToParallelizeMerge() const override { return nested_func->isAbleToParallelizeMerge(); }
|
||||
|
||||
void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, ThreadPool & thread_pool, Arena * arena) const override
|
||||
{
|
||||
nested_func->merge(place, rhs, thread_pool, arena);
|
||||
}
|
||||
|
||||
void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf, std::optional<size_t> version) const override
|
||||
{
|
||||
nested_func->serialize(place, buf, version);
|
||||
|
@ -117,7 +117,10 @@ ASTPtr ColumnNode::toASTImpl(const ConvertToASTOptions & options) const
|
||||
else
|
||||
{
|
||||
const auto & table_storage_id = table_node->getStorageID();
|
||||
column_identifier_parts = { table_storage_id.getDatabaseName(), table_storage_id.getTableName() };
|
||||
if (table_storage_id.hasDatabase() && options.qualify_indentifiers_with_database)
|
||||
column_identifier_parts = { table_storage_id.getDatabaseName(), table_storage_id.getTableName() };
|
||||
else
|
||||
column_identifier_parts = { table_storage_id.getTableName() };
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@ -187,10 +187,13 @@ public:
|
||||
|
||||
/// Identifiers are fully qualified (`database.table.column`), otherwise names are just column names (`column`)
|
||||
bool fully_qualified_identifiers = true;
|
||||
|
||||
/// Identifiers are qualified but database name is not added (`table.column`) if set to false.
|
||||
bool qualify_indentifiers_with_database = true;
|
||||
};
|
||||
|
||||
/// Convert query tree to AST
|
||||
ASTPtr toAST(const ConvertToASTOptions & options = { .add_cast_for_constants = true, .fully_qualified_identifiers = true }) const;
|
||||
ASTPtr toAST(const ConvertToASTOptions & options = { .add_cast_for_constants = true, .fully_qualified_identifiers = true, .qualify_indentifiers_with_database = true }) const;
|
||||
|
||||
/// Convert query tree to AST and then format it for error message.
|
||||
String formatConvertedASTForErrorMessage() const;
|
||||
|
@ -10,9 +10,10 @@
|
||||
namespace DB
|
||||
{
|
||||
|
||||
LambdaNode::LambdaNode(Names argument_names_, QueryTreeNodePtr expression_)
|
||||
LambdaNode::LambdaNode(Names argument_names_, QueryTreeNodePtr expression_, DataTypePtr result_type_)
|
||||
: IQueryTreeNode(children_size)
|
||||
, argument_names(std::move(argument_names_))
|
||||
, result_type(std::move(result_type_))
|
||||
{
|
||||
auto arguments_list_node = std::make_shared<ListNode>();
|
||||
auto & nodes = arguments_list_node->getNodes();
|
||||
@ -63,7 +64,7 @@ void LambdaNode::updateTreeHashImpl(HashState & state) const
|
||||
|
||||
QueryTreeNodePtr LambdaNode::cloneImpl() const
|
||||
{
|
||||
return std::make_shared<LambdaNode>(argument_names, getExpression());
|
||||
return std::make_shared<LambdaNode>(argument_names, getExpression(), result_type);
|
||||
}
|
||||
|
||||
ASTPtr LambdaNode::toASTImpl(const ConvertToASTOptions & options) const
|
||||
|
@ -35,7 +35,7 @@ class LambdaNode final : public IQueryTreeNode
|
||||
{
|
||||
public:
|
||||
/// Initialize lambda with argument names and lambda body expression
|
||||
explicit LambdaNode(Names argument_names_, QueryTreeNodePtr expression_);
|
||||
explicit LambdaNode(Names argument_names_, QueryTreeNodePtr expression_, DataTypePtr result_type_ = {});
|
||||
|
||||
/// Get argument names
|
||||
const Names & getArgumentNames() const
|
||||
|
@ -4767,13 +4767,14 @@ ProjectionNames QueryAnalyzer::resolveFunction(QueryTreeNodePtr & node, Identifi
|
||||
auto * table_node = in_second_argument->as<TableNode>();
|
||||
auto * table_function_node = in_second_argument->as<TableFunctionNode>();
|
||||
|
||||
if (table_node && dynamic_cast<StorageSet *>(table_node->getStorage().get()) != nullptr)
|
||||
if (table_node)
|
||||
{
|
||||
/// If table is already prepared set, we do not replace it with subquery
|
||||
/// If table is already prepared set, we do not replace it with subquery.
|
||||
/// If table is not a StorageSet, we'll create plan to build set in the Planner.
|
||||
}
|
||||
else if (table_node || table_function_node)
|
||||
else if (table_function_node)
|
||||
{
|
||||
const auto & storage_snapshot = table_node ? table_node->getStorageSnapshot() : table_function_node->getStorageSnapshot();
|
||||
const auto & storage_snapshot = table_function_node->getStorageSnapshot();
|
||||
auto columns_to_select = storage_snapshot->getColumns(GetColumnsOptions(GetColumnsOptions::Ordinary));
|
||||
|
||||
size_t columns_to_select_size = columns_to_select.size();
|
||||
|
@ -91,6 +91,11 @@ ASTPtr TableNode::toASTImpl(const ConvertToASTOptions & /* options */) const
|
||||
if (!temporary_table_name.empty())
|
||||
return std::make_shared<ASTTableIdentifier>(temporary_table_name);
|
||||
|
||||
// In case of cross-replication we don't know what database is used for the table.
|
||||
// `storage_id.hasDatabase()` can return false only on the initiator node.
|
||||
// Each shard will use the default database (in the case of cross-replication shards may have different defaults).
|
||||
if (!storage_id.hasDatabase())
|
||||
return std::make_shared<ASTTableIdentifier>(storage_id.getTableName());
|
||||
return std::make_shared<ASTTableIdentifier>(storage_id.getDatabaseName(), storage_id.getTableName());
|
||||
}
|
||||
|
||||
|
@ -313,6 +313,11 @@ MutableColumnPtr ColumnLowCardinality::cloneResized(size_t size) const
|
||||
MutableColumnPtr ColumnLowCardinality::cloneNullable() const
|
||||
{
|
||||
auto res = cloneFinalized();
|
||||
/* Compaction is required so that the dictionary is not shared.
 * If the `shared` flag is not set, `cloneFinalized` will return a shallow copy
 * and `nestedToNullable` will mutate the source column.
|
||||
*/
|
||||
assert_cast<ColumnLowCardinality &>(*res).compactInplace();
|
||||
assert_cast<ColumnLowCardinality &>(*res).nestedToNullable();
|
||||
return res;
|
||||
}
|
||||
|
@ -48,3 +48,16 @@ TEST(ColumnLowCardinality, Insert)
|
||||
testLowCardinalityNumberInsert<Float32>(std::make_shared<DataTypeFloat32>());
|
||||
testLowCardinalityNumberInsert<Float64>(std::make_shared<DataTypeFloat64>());
|
||||
}
|
||||
|
||||
TEST(ColumnLowCardinality, Clone)
|
||||
{
|
||||
auto data_type = std::make_shared<DataTypeInt32>();
|
||||
auto low_cardinality_type = std::make_shared<DataTypeLowCardinality>(data_type);
|
||||
auto column = low_cardinality_type->createColumn();
|
||||
ASSERT_FALSE(assert_cast<const ColumnLowCardinality &>(*column).nestedIsNullable());
|
||||
|
||||
auto nullable_column = assert_cast<const ColumnLowCardinality &>(*column).cloneNullable();
|
||||
|
||||
ASSERT_TRUE(assert_cast<const ColumnLowCardinality &>(*nullable_column).nestedIsNullable());
|
||||
ASSERT_FALSE(assert_cast<const ColumnLowCardinality &>(*column).nestedIsNullable());
|
||||
}
|
||||
|
@ -167,7 +167,7 @@ void ExternalTablesHandler::handlePart(const Poco::Net::MessageHeader & header,
|
||||
auto temporary_table = TemporaryTableHolder(getContext(), ColumnsDescription{columns}, {});
|
||||
auto storage = temporary_table.getTable();
|
||||
getContext()->addExternalTable(data->table_name, std::move(temporary_table));
|
||||
auto sink = storage->write(ASTPtr(), storage->getInMemoryMetadataPtr(), getContext());
|
||||
auto sink = storage->write(ASTPtr(), storage->getInMemoryMetadataPtr(), getContext(), /*async_insert=*/false);
|
||||
|
||||
/// Write data
|
||||
auto pipeline = QueryPipelineBuilder::getPipeline(std::move(*data->pipe));
|
||||
|
@ -160,6 +160,7 @@ class IColumn;
|
||||
M(UInt64, allow_experimental_parallel_reading_from_replicas, 0, "Use all the replicas from a shard for SELECT query execution. Reading is parallelized and coordinated dynamically. 0 - disabled, 1 - enabled, silently disable them in case of failure, 2 - enabled, throw an exception in case of failure", 0) \
|
||||
M(Float, parallel_replicas_single_task_marks_count_multiplier, 2, "A multiplier which will be added during calculation for minimal number of marks to retrieve from coordinator. This will be applied only for remote replicas.", 0) \
|
||||
M(Bool, parallel_replicas_for_non_replicated_merge_tree, false, "If true, ClickHouse will use parallel replicas algorithm also for non-replicated MergeTree tables", 0) \
|
||||
M(UInt64, parallel_replicas_min_number_of_granules_to_enable, 0, "If the number of marks to read is less than the value of this setting - parallel replicas will be disabled", 0) \
|
||||
\
|
||||
M(Bool, skip_unavailable_shards, false, "If true, ClickHouse silently skips unavailable shards and nodes unresolvable through DNS. Shard is marked as unavailable when none of the replicas can be reached.", 0) \
|
||||
\
|
||||
@ -201,6 +202,8 @@ class IColumn;
|
||||
M(Bool, force_primary_key, false, "Throw an exception if there is primary key in a table, and it is not used.", 0) \
|
||||
M(Bool, use_skip_indexes, true, "Use data skipping indexes during query execution.", 0) \
|
||||
M(Bool, use_skip_indexes_if_final, false, "If query has FINAL, then skipping data based on indexes may produce incorrect result, hence disabled by default.", 0) \
|
||||
M(String, ignore_data_skipping_indices, "", "Comma separated list of strings or literals with the name of the data skipping indices that should be excluded during query execution.", 0) \
|
||||
\
|
||||
M(String, force_data_skipping_indices, "", "Comma separated list of strings or literals with the name of the data skipping indices that should be used during query execution, otherwise an exception will be thrown.", 0) \
|
||||
\
|
||||
M(Float, max_streams_to_max_threads_ratio, 1, "Allows you to use more sources than the number of threads - to more evenly distribute work across threads. It is assumed that this is a temporary solution, since it will be possible in the future to make the number of sources equal to the number of threads, but for each source to dynamically select available work for itself.", 0) \
|
||||
@ -719,7 +722,6 @@ class IColumn;
|
||||
\
|
||||
M(Bool, parallelize_output_from_storages, true, "Parallelize output for reading step from storage. It allows parallelizing query processing right after reading from storage if possible", 0) \
|
||||
M(String, insert_deduplication_token, "", "If not empty, used for duplicate detection instead of data digest", 0) \
|
||||
M(String, ann_index_select_query_params, "", "Parameters passed to ANN indexes in SELECT queries, the format is 'param1=x, param2=y, ...'", 0) \
|
||||
M(Bool, count_distinct_optimization, false, "Rewrite count distinct to subquery of group by", 0) \
|
||||
M(Bool, throw_if_no_data_to_insert, true, "Enables or disables empty INSERTs, enabled by default", 0) \
|
||||
M(Bool, compatibility_ignore_auto_increment_in_create_table, false, "Ignore AUTO_INCREMENT keyword in column declaration if true, otherwise return error. It simplifies migration from MySQL", 0) \
|
||||
@ -742,7 +744,8 @@ class IColumn;
|
||||
M(Bool, allow_experimental_hash_functions, false, "Enable experimental hash functions (hashid, etc)", 0) \
|
||||
M(Bool, allow_experimental_object_type, false, "Allow Object and JSON data types", 0) \
|
||||
M(Bool, allow_experimental_annoy_index, false, "Allows to use Annoy index. Disabled by default because this feature is experimental", 0) \
|
||||
M(UInt64, max_limit_for_ann_queries, 1000000, "Maximum limit value for using ANN indexes is used to prevent memory overflow in search queries for indexes", 0) \
|
||||
M(UInt64, max_limit_for_ann_queries, 1'000'000, "SELECT queries with LIMIT bigger than this setting cannot use ANN indexes. Helps to prevent memory overflows in ANN search indexes.", 0) \
|
||||
M(Int64, annoy_index_search_k_nodes, -1, "SELECT queries search up to this many nodes in Annoy indexes.", 0) \
|
||||
M(Bool, throw_on_unsupported_query_inside_transaction, true, "Throw exception if unsupported query is used inside transaction", 0) \
|
||||
M(TransactionsWaitCSNMode, wait_changes_become_visible_after_commit_mode, TransactionsWaitCSNMode::WAIT_UNKNOWN, "Wait for committed changes to become actually visible in the latest snapshot", 0) \
|
||||
M(Bool, implicit_transaction, false, "If enabled and not already inside a transaction, wraps the query inside a full transaction (begin + commit or rollback)", 0) \
|
||||
|
@ -129,17 +129,6 @@ struct RegExpTreeDictionary::RegexTreeNode
|
||||
return searcher.Match(haystack, 0, size, re2_st::RE2::Anchor::UNANCHORED, nullptr, 0);
|
||||
}
|
||||
|
||||
/// check if this node can cover all the attributes from the query.
|
||||
bool containsAll(const std::unordered_map<String, const DictionaryAttribute &> & matching_attributes) const
|
||||
{
|
||||
for (const auto & [key, value] : matching_attributes)
|
||||
{
|
||||
if (!attributes.contains(key))
|
||||
return false;
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
struct AttributeValue
|
||||
{
|
||||
Field field;
|
||||
@ -691,9 +680,6 @@ std::unordered_map<String, ColumnPtr> RegExpTreeDictionary::match(
|
||||
if (node_ptr->match(reinterpret_cast<const char *>(keys_data.data()) + offset, length))
|
||||
{
|
||||
match_result.insertNodeID(node_ptr->id);
|
||||
/// When this node is leaf and contains all the required attributes, it means a match.
|
||||
if (node_ptr->containsAll(attributes) && node_ptr->children.empty())
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -945,18 +945,23 @@ bool CachedOnDiskReadBufferFromFile::nextImplStep()
|
||||
ProfileEvents::increment(ProfileEvents::CachedReadBufferReadFromCacheBytes, size);
|
||||
ProfileEvents::increment(ProfileEvents::CachedReadBufferReadFromCacheMicroseconds, elapsed);
|
||||
|
||||
#ifdef ABORT_ON_LOGICAL_ERROR
|
||||
const size_t new_file_offset = file_offset_of_buffer_end + size;
|
||||
chassert(new_file_offset - 1 <= file_segment.range().right);
|
||||
const size_t file_segment_write_offset = file_segment.getCurrentWriteOffset(true);
|
||||
if (new_file_offset > file_segment.range().right + 1)
|
||||
{
|
||||
auto file_segment_path = file_segment.getPathInLocalCache();
|
||||
throw Exception(
|
||||
ErrorCodes::LOGICAL_ERROR,
|
||||
"Read unexpected size. File size: {}, file path: {}, file segment info: {}",
|
||||
fs::file_size(file_segment_path), file_segment_path, file_segment.getInfoForLog());
|
||||
}
|
||||
if (new_file_offset > file_segment_write_offset)
|
||||
{
|
||||
LOG_TRACE(
|
||||
log, "Read {} bytes, file offset: {}, segment: {}, segment write offset: {}",
|
||||
throw Exception(
|
||||
ErrorCodes::LOGICAL_ERROR,
|
||||
"Read unexpected size. Read {} bytes, file offset: {}, segment: {}, segment write offset: {}",
|
||||
size, file_offset_of_buffer_end, file_segment.range().toString(), file_segment_write_offset);
|
||||
chassert(false);
|
||||
}
|
||||
#endif
|
||||
}
|
||||
else
|
||||
{
|
||||
|
@ -52,18 +52,20 @@ bool FileSegmentRangeWriter::write(const char * data, size_t size, size_t offset
|
||||
|
||||
FileSegment * file_segment;
|
||||
|
||||
if (file_segments.empty() || file_segments.back().isDownloaded())
|
||||
if (!file_segments || file_segments->empty() || file_segments->front().isDownloaded())
|
||||
{
|
||||
file_segment = &allocateFileSegment(expected_write_offset, segment_kind);
|
||||
}
|
||||
else
|
||||
{
|
||||
file_segment = &file_segments.back();
|
||||
file_segment = &file_segments->front();
|
||||
}
|
||||
|
||||
SCOPE_EXIT({
|
||||
if (file_segments.back().isDownloader())
|
||||
file_segments.back().completePartAndResetDownloader();
|
||||
if (!file_segments || file_segments->empty())
|
||||
return;
|
||||
if (file_segments->front().isDownloader())
|
||||
file_segments->front().completePartAndResetDownloader();
|
||||
});
|
||||
|
||||
while (size > 0)
|
||||
@ -71,7 +73,7 @@ bool FileSegmentRangeWriter::write(const char * data, size_t size, size_t offset
|
||||
size_t available_size = file_segment->range().size() - file_segment->getDownloadedSize(false);
|
||||
if (available_size == 0)
|
||||
{
|
||||
completeFileSegment(*file_segment);
|
||||
completeFileSegment();
|
||||
file_segment = &allocateFileSegment(expected_write_offset, segment_kind);
|
||||
continue;
|
||||
}
|
||||
@ -114,10 +116,7 @@ void FileSegmentRangeWriter::finalize()
|
||||
if (finalized)
|
||||
return;
|
||||
|
||||
if (file_segments.empty())
|
||||
return;
|
||||
|
||||
completeFileSegment(file_segments.back());
|
||||
completeFileSegment();
|
||||
finalized = true;
|
||||
}
|
||||
|
||||
@ -145,10 +144,9 @@ FileSegment & FileSegmentRangeWriter::allocateFileSegment(size_t offset, FileSeg
|
||||
|
||||
/// We set max_file_segment_size to be downloaded,
|
||||
/// if we have less size to write, file segment will be resized in complete() method.
|
||||
auto holder = cache->set(key, offset, cache->getMaxFileSegmentSize(), create_settings);
|
||||
chassert(holder->size() == 1);
|
||||
holder->moveTo(file_segments);
|
||||
return file_segments.back();
|
||||
file_segments = cache->set(key, offset, cache->getMaxFileSegmentSize(), create_settings);
|
||||
chassert(file_segments->size() == 1);
|
||||
return file_segments->front();
|
||||
}
|
||||
|
||||
void FileSegmentRangeWriter::appendFilesystemCacheLog(const FileSegment & file_segment)
|
||||
@ -176,8 +174,12 @@ void FileSegmentRangeWriter::appendFilesystemCacheLog(const FileSegment & file_s
|
||||
cache_log->add(elem);
|
||||
}
|
||||
|
||||
void FileSegmentRangeWriter::completeFileSegment(FileSegment & file_segment)
|
||||
void FileSegmentRangeWriter::completeFileSegment()
|
||||
{
|
||||
if (!file_segments || file_segments->empty())
|
||||
return;
|
||||
|
||||
auto & file_segment = file_segments->front();
|
||||
/// File segment can be detached if space reservation failed.
|
||||
if (file_segment.isDetached() || file_segment.isCompleted())
|
||||
return;
|
||||
|
@ -43,7 +43,7 @@ private:
|
||||
|
||||
void appendFilesystemCacheLog(const FileSegment & file_segment);
|
||||
|
||||
void completeFileSegment(FileSegment & file_segment);
|
||||
void completeFileSegment();
|
||||
|
||||
FileCache * cache;
|
||||
FileSegment::Key key;
|
||||
@ -53,7 +53,7 @@ private:
|
||||
String query_id;
|
||||
String source_path;
|
||||
|
||||
FileSegmentsHolder file_segments{};
|
||||
FileSegmentsHolderPtr file_segments;
|
||||
|
||||
size_t expected_write_offset = 0;
|
||||
|
||||
|
@ -1,6 +1,6 @@
|
||||
#include <Columns/ColumnConst.h>
|
||||
#include <Columns/ColumnString.h>
|
||||
#include <Columns/ColumnsNumber.h>
|
||||
#include <Columns/ColumnString.h>
|
||||
#include <DataTypes/DataTypeDate.h>
|
||||
#include <DataTypes/DataTypeDateTime.h>
|
||||
#include <DataTypes/DataTypeInterval.h>
|
||||
@ -25,7 +25,7 @@ class FunctionDateTrunc : public IFunction
|
||||
public:
|
||||
static constexpr auto name = "dateTrunc";
|
||||
|
||||
explicit FunctionDateTrunc(ContextPtr context_) : context(context_) { }
|
||||
explicit FunctionDateTrunc(ContextPtr context_) : context(context_) {}
|
||||
|
||||
static FunctionPtr create(ContextPtr context) { return std::make_shared<FunctionDateTrunc>(context); }
|
||||
|
||||
@ -39,58 +39,51 @@ public:
|
||||
{
|
||||
/// The first argument is a constant string with the name of datepart.
|
||||
|
||||
intermediate_type_is_date = false;
|
||||
auto result_type_is_date = false;
|
||||
String datepart_param;
|
||||
auto check_first_argument = [&]
|
||||
{
|
||||
auto check_first_argument = [&] {
|
||||
const ColumnConst * datepart_column = checkAndGetColumnConst<ColumnString>(arguments[0].column.get());
|
||||
if (!datepart_column)
|
||||
throw Exception(
|
||||
ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT,
|
||||
"First argument for function {} must be constant string: "
|
||||
"name of datepart",
|
||||
getName());
|
||||
throw Exception(ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT, "First argument for function {} must be constant string: "
|
||||
"name of datepart", getName());
|
||||
|
||||
datepart_param = datepart_column->getValue<String>();
|
||||
if (datepart_param.empty())
|
||||
throw Exception(
|
||||
ErrorCodes::BAD_ARGUMENTS, "First argument (name of datepart) for function {} cannot be empty", getName());
|
||||
throw Exception(ErrorCodes::BAD_ARGUMENTS, "First argument (name of datepart) for function {} cannot be empty",
|
||||
getName());
|
||||
|
||||
if (!IntervalKind::tryParseString(datepart_param, datepart_kind))
|
||||
throw Exception(ErrorCodes::BAD_ARGUMENTS, "{} doesn't look like datepart name in {}", datepart_param, getName());
|
||||
|
||||
intermediate_type_is_date = (datepart_kind == IntervalKind::Year) || (datepart_kind == IntervalKind::Quarter)
|
||||
|| (datepart_kind == IntervalKind::Month) || (datepart_kind == IntervalKind::Week);
|
||||
result_type_is_date = (datepart_kind == IntervalKind::Year)
|
||||
|| (datepart_kind == IntervalKind::Quarter) || (datepart_kind == IntervalKind::Month)
|
||||
|| (datepart_kind == IntervalKind::Week);
|
||||
};
|
||||
|
||||
bool second_argument_is_date = false;
|
||||
auto check_second_argument = [&]
|
||||
{
|
||||
auto check_second_argument = [&] {
|
||||
if (!isDate(arguments[1].type) && !isDateTime(arguments[1].type) && !isDateTime64(arguments[1].type))
|
||||
throw Exception(
|
||||
ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT,
|
||||
"Illegal type {} of 2nd argument of function {}. "
|
||||
"Should be a date or a date with time",
|
||||
arguments[1].type->getName(),
|
||||
getName());
|
||||
throw Exception(ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT, "Illegal type {} of 2nd argument of function {}. "
|
||||
"Should be a date or a date with time", arguments[1].type->getName(), getName());
|
||||
|
||||
second_argument_is_date = isDate(arguments[1].type);
|
||||
|
||||
if (second_argument_is_date
|
||||
&& ((datepart_kind == IntervalKind::Hour) || (datepart_kind == IntervalKind::Minute)
|
||||
|| (datepart_kind == IntervalKind::Second)))
|
||||
if (second_argument_is_date && ((datepart_kind == IntervalKind::Hour)
|
||||
|| (datepart_kind == IntervalKind::Minute) || (datepart_kind == IntervalKind::Second)))
|
||||
throw Exception(ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT, "Illegal type Date of argument for function {}", getName());
|
||||
};
|
||||
|
||||
auto check_timezone_argument = [&]
|
||||
{
|
||||
auto check_timezone_argument = [&] {
|
||||
if (!WhichDataType(arguments[2].type).isString())
|
||||
throw Exception(
|
||||
ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT,
|
||||
"Illegal type {} of argument of function {}. "
|
||||
throw Exception(ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT, "Illegal type {} of argument of function {}. "
|
||||
"This argument is optional and must be a constant string with timezone name",
|
||||
arguments[2].type->getName(),
|
||||
getName());
|
||||
arguments[2].type->getName(), getName());
|
||||
|
||||
if (second_argument_is_date && result_type_is_date)
|
||||
throw Exception(ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT,
|
||||
"The timezone argument of function {} with datepart '{}' "
|
||||
"is allowed only when the 2nd argument has the type DateTime",
|
||||
getName(), datepart_param);
|
||||
};
|
||||
|
||||
if (arguments.size() == 2)
|
||||
@ -106,14 +99,15 @@ public:
|
||||
}
|
||||
else
|
||||
{
|
||||
throw Exception(
|
||||
ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH,
|
||||
throw Exception(ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH,
|
||||
"Number of arguments for function {} doesn't match: passed {}, should be 2 or 3",
|
||||
getName(),
|
||||
arguments.size());
|
||||
getName(), arguments.size());
|
||||
}
|
||||
|
||||
return std::make_shared<DataTypeDateTime>(extractTimeZoneNameFromFunctionArguments(arguments, 2, 1));
|
||||
if (result_type_is_date)
|
||||
return std::make_shared<DataTypeDate>();
|
||||
else
|
||||
return std::make_shared<DataTypeDateTime>(extractTimeZoneNameFromFunctionArguments(arguments, 2, 1));
|
||||
}
|
||||
|
||||
bool useDefaultImplementationForConstants() const override { return true; }
|
||||
@ -130,40 +124,26 @@ public:
|
||||
|
||||
auto to_start_of_interval = FunctionFactory::instance().get("toStartOfInterval", context);
|
||||
|
||||
ColumnPtr truncated_column;
|
||||
auto date_type = std::make_shared<DataTypeDate>();
|
||||
|
||||
if (arguments.size() == 2)
|
||||
truncated_column = to_start_of_interval->build(temp_columns)
|
||||
->execute(temp_columns, intermediate_type_is_date ? date_type : result_type, input_rows_count);
|
||||
else
|
||||
{
|
||||
temp_columns[2] = arguments[2];
|
||||
truncated_column = to_start_of_interval->build(temp_columns)
|
||||
->execute(temp_columns, intermediate_type_is_date ? date_type : result_type, input_rows_count);
|
||||
}
|
||||
return to_start_of_interval->build(temp_columns)->execute(temp_columns, result_type, input_rows_count);
|
||||
|
||||
if (!intermediate_type_is_date)
|
||||
return truncated_column;
|
||||
|
||||
ColumnsWithTypeAndName temp_truncated_column(1);
|
||||
temp_truncated_column[0] = {truncated_column, date_type, ""};
|
||||
|
||||
auto to_date_time_or_default = FunctionFactory::instance().get("toDateTime", context);
|
||||
return to_date_time_or_default->build(temp_truncated_column)->execute(temp_truncated_column, result_type, input_rows_count);
|
||||
temp_columns[2] = arguments[2];
|
||||
return to_start_of_interval->build(temp_columns)->execute(temp_columns, result_type, input_rows_count);
|
||||
}
|
||||
|
||||
bool hasInformationAboutMonotonicity() const override { return true; }
|
||||
bool hasInformationAboutMonotonicity() const override
|
||||
{
|
||||
return true;
|
||||
}
|
||||
|
||||
Monotonicity getMonotonicityForRange(const IDataType &, const Field &, const Field &) const override
|
||||
{
|
||||
return {.is_monotonic = true, .is_always_monotonic = true};
|
||||
return { .is_monotonic = true, .is_always_monotonic = true };
|
||||
}
|
||||
|
||||
private:
|
||||
ContextPtr context;
|
||||
mutable IntervalKind::Kind datepart_kind = IntervalKind::Kind::Second;
|
||||
mutable bool intermediate_type_is_date = false;
|
||||
};
|
||||
|
||||
}
|
||||
|
@ -68,7 +68,8 @@ namespace
if (https)
{
#if USE_SSL
String resolved_host = resolve_host ? DNSResolver::instance().resolveHost(host).toString() : host;
/// Cannot resolve host in advance, otherwise SNI won't work in Poco.
/// For more information about SNI, see the https://en.wikipedia.org/wiki/Server_Name_Indication
auto https_session = std::make_shared<Poco::Net::HTTPSClientSession>(host, port);
if (resolve_host)
https_session->setResolvedHost(DNSResolver::instance().resolveHost(host).toString());
@ -184,6 +185,24 @@ namespace
std::mutex mutex;
std::unordered_map<Key, PoolPtr, Hasher> endpoints_pool;

void updateHostIfIpChanged(Entry & session, const String & new_ip)
{
const auto old_ip = session->getResolvedHost().empty() ? session->getHost() : session->getResolvedHost();

if (new_ip != old_ip)
{
session->reset();
if (session->getResolvedHost().empty())
{
session->setHost(new_ip);
}
else
{
session->setResolvedHost(new_ip);
}
}
}

protected:
HTTPSessionPool() = default;

@ -238,13 +257,7 @@ namespace

if (resolve_host)
{
/// Host can change IP
const auto ip = DNSResolver::instance().resolveHost(host).toString();
if (ip != session->getHost())
{
session->reset();
session->setHost(ip);
}
updateHostIfIpChanged(session, DNSResolver::instance().resolveHost(host).toString());
}
}
/// Reset the message, once it has been printed,
@ -74,12 +74,12 @@ const String & FileCache::getBasePath() const
|
||||
|
||||
String FileCache::getPathInLocalCache(const Key & key, size_t offset, FileSegmentKind segment_kind) const
|
||||
{
|
||||
return metadata.getPathInLocalCache(key, offset, segment_kind);
|
||||
return metadata.getPathForFileSegment(key, offset, segment_kind);
|
||||
}
|
||||
|
||||
String FileCache::getPathInLocalCache(const Key & key) const
|
||||
{
|
||||
return metadata.getPathInLocalCache(key);
|
||||
return metadata.getPathForKey(key);
|
||||
}
|
||||
|
||||
void FileCache::assertInitialized() const
|
||||
@ -650,7 +650,7 @@ bool FileCache::tryReserve(FileSegment & file_segment, const size_t size)
|
||||
}
|
||||
|
||||
ProfileEvents::increment(ProfileEvents::FilesystemCacheEvictedFileSegments);
|
||||
ProfileEvents::increment(ProfileEvents::FilesystemCacheEvictedBytes, segment->range().size());
|
||||
ProfileEvents::increment(ProfileEvents::FilesystemCacheEvictedBytes, segment->getDownloadedSize(false));
|
||||
|
||||
locked_key.removeFileSegment(segment->offset(), segment->lock());
|
||||
return PriorityIterationResult::REMOVE_AND_CONTINUE;
|
||||
@ -1057,7 +1057,7 @@ std::vector<String> FileCache::tryGetCachePaths(const Key & key)
|
||||
for (const auto & [offset, file_segment_metadata] : *locked_key->getKeyMetadata())
|
||||
{
|
||||
if (file_segment_metadata->file_segment->state() == FileSegment::State::DOWNLOADED)
|
||||
cache_paths.push_back(metadata.getPathInLocalCache(key, offset, file_segment_metadata->file_segment->getKind()));
|
||||
cache_paths.push_back(metadata.getPathForFileSegment(key, offset, file_segment_metadata->file_segment->getKind()));
|
||||
}
|
||||
return cache_paths;
|
||||
}
|
||||
|
@ -314,6 +314,8 @@ void FileSegment::write(const char * from, size_t size, size_t offset)
|
||||
if (!size)
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Writing zero size is not allowed");
|
||||
|
||||
const auto file_segment_path = getPathInLocalCache();
|
||||
|
||||
{
|
||||
auto lock = segment_guard.lock();
|
||||
|
||||
@ -352,7 +354,7 @@ void FileSegment::write(const char * from, size_t size, size_t offset)
|
||||
"Cache writer was finalized (downloaded size: {}, state: {})",
|
||||
current_downloaded_size, stateToString(download_state));
|
||||
|
||||
cache_writer = std::make_unique<WriteBufferFromFile>(getPathInLocalCache());
|
||||
cache_writer = std::make_unique<WriteBufferFromFile>(file_segment_path);
|
||||
}
|
||||
}
|
||||
|
||||
@ -366,7 +368,7 @@ void FileSegment::write(const char * from, size_t size, size_t offset)
|
||||
|
||||
downloaded_size += size;
|
||||
|
||||
chassert(std::filesystem::file_size(getPathInLocalCache()) == downloaded_size);
|
||||
chassert(std::filesystem::file_size(file_segment_path) == downloaded_size);
|
||||
}
|
||||
catch (ErrnoException & e)
|
||||
{
|
||||
@ -376,9 +378,10 @@ void FileSegment::write(const char * from, size_t size, size_t offset)
|
||||
int code = e.getErrno();
|
||||
if (code == /* No space left on device */28 || code == /* Quota exceeded */122)
|
||||
{
|
||||
const auto file_size = fs::file_size(getPathInLocalCache());
|
||||
const auto file_size = fs::file_size(file_segment_path);
|
||||
chassert(downloaded_size <= file_size);
|
||||
chassert(reserved_size >= file_size);
|
||||
chassert(file_size <= range().size());
|
||||
if (downloaded_size != file_size)
|
||||
downloaded_size = file_size;
|
||||
}
|
||||
@ -523,8 +526,8 @@ void FileSegment::setDownloadedUnlocked(const FileSegmentGuard::Lock &)
|
||||
remote_file_reader.reset();
|
||||
}
|
||||
|
||||
chassert(getDownloadedSize(false) > 0);
|
||||
chassert(fs::file_size(getPathInLocalCache()) > 0);
|
||||
chassert(downloaded_size > 0);
|
||||
chassert(fs::file_size(getPathInLocalCache()) == downloaded_size);
|
||||
}
|
||||
|
||||
void FileSegment::setDownloadFailedUnlocked(const FileSegmentGuard::Lock & lock)
|
||||
@ -848,7 +851,8 @@ void FileSegment::detach(const FileSegmentGuard::Lock & lock, const LockedKey &)
|
||||
if (download_state == State::DETACHED)
|
||||
return;
|
||||
|
||||
resetDownloaderUnlocked(lock);
|
||||
if (!downloader_id.empty())
|
||||
resetDownloaderUnlocked(lock);
|
||||
setDetachedState(lock);
|
||||
}
|
||||
|
||||
|
@ -360,12 +360,6 @@ struct FileSegmentsHolder : private boost::noncopyable
|
||||
FileSegments::const_iterator begin() const { return file_segments.begin(); }
|
||||
FileSegments::const_iterator end() const { return file_segments.end(); }
|
||||
|
||||
void moveTo(FileSegmentsHolder & holder)
|
||||
{
|
||||
holder.file_segments.insert(holder.file_segments.end(), file_segments.begin(), file_segments.end());
|
||||
file_segments.clear();
|
||||
}
|
||||
|
||||
private:
|
||||
FileSegments file_segments{};
|
||||
const bool complete_on_dtor = true;
|
||||
|
@ -145,15 +145,12 @@ String CacheMetadata::getFileNameForFileSegment(size_t offset, FileSegmentKind s
|
||||
return std::to_string(offset) + file_suffix;
|
||||
}
|
||||
|
||||
String CacheMetadata::getPathInLocalCache(const Key & key, size_t offset, FileSegmentKind segment_kind) const
|
||||
String CacheMetadata::getPathForFileSegment(const Key & key, size_t offset, FileSegmentKind segment_kind) const
|
||||
{
|
||||
String file_suffix;
|
||||
|
||||
const auto key_str = key.toString();
|
||||
return fs::path(path) / key_str.substr(0, 3) / key_str / getFileNameForFileSegment(offset, segment_kind);
|
||||
return fs::path(getPathForKey(key)) / getFileNameForFileSegment(offset, segment_kind);
|
||||
}
|
||||
|
||||
String CacheMetadata::getPathInLocalCache(const Key & key) const
|
||||
String CacheMetadata::getPathForKey(const Key & key) const
|
||||
{
|
||||
const auto key_str = key.toString();
|
||||
return fs::path(path) / key_str.substr(0, 3) / key_str;
|
||||
@ -178,7 +175,7 @@ LockedKeyPtr CacheMetadata::lockKeyMetadata(
|
||||
|
||||
it = emplace(
|
||||
key, std::make_shared<KeyMetadata>(
|
||||
key, getPathInLocalCache(key), *cleanup_queue, is_initial_load)).first;
|
||||
key, getPathForKey(key), *cleanup_queue, is_initial_load)).first;
|
||||
}
|
||||
|
||||
key_metadata = it->second;
|
||||
@ -260,7 +257,7 @@ void CacheMetadata::doCleanup()
|
||||
erase(it);
|
||||
LOG_DEBUG(log, "Key {} is removed from metadata", cleanup_key);
|
||||
|
||||
const fs::path key_directory = getPathInLocalCache(cleanup_key);
|
||||
const fs::path key_directory = getPathForKey(cleanup_key);
|
||||
const fs::path key_prefix_directory = key_directory.parent_path();
|
||||
|
||||
try
|
||||
@ -380,8 +377,14 @@ KeyMetadata::iterator LockedKey::removeFileSegment(size_t offset, const FileSegm
|
||||
file_segment->queue_iterator->annul();
|
||||
|
||||
const auto path = key_metadata->getFileSegmentPath(*file_segment);
|
||||
if (fs::exists(path))
|
||||
bool exists = fs::exists(path);
|
||||
if (exists)
|
||||
{
|
||||
fs::remove(path);
|
||||
LOG_TEST(log, "Removed file segment at path: {}", path);
|
||||
}
|
||||
else if (file_segment->downloaded_size)
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Expected path {} to exist", path);
|
||||
|
||||
file_segment->detach(segment_lock, *this);
|
||||
return key_metadata->erase(it);
|
||||
|
@ -85,12 +85,12 @@ public:
|
||||
|
||||
const String & getBaseDirectory() const { return path; }
|
||||
|
||||
String getPathInLocalCache(
|
||||
String getPathForFileSegment(
|
||||
const Key & key,
|
||||
size_t offset,
|
||||
FileSegmentKind segment_kind) const;
|
||||
|
||||
String getPathInLocalCache(const Key & key) const;
|
||||
String getPathForKey(const Key & key) const;
|
||||
static String getFileNameForFileSegment(size_t offset, FileSegmentKind segment_kind);
|
||||
|
||||
void iterate(IterateCacheMetadataFunc && func);
|
||||
|
@ -170,7 +170,7 @@ public:
else if (getContext()->getSettingsRef().use_index_for_in_with_subqueries)
{
auto external_table = external_storage_holder->getTable();
auto table_out = external_table->write({}, external_table->getInMemoryMetadataPtr(), getContext());
auto table_out = external_table->write({}, external_table->getInMemoryMetadataPtr(), getContext(), /*async_insert=*/false);
auto io = interpreter->execute();
io.pipeline.complete(std::move(table_out));
CompletedPipelineExecutor executor(io.pipeline);
@ -707,8 +707,9 @@ Block HashJoin::prepareRightBlock(const Block & block, const Block & saved_block
for (const auto & sample_column : saved_block_sample_.getColumnsWithTypeAndName())
{
ColumnWithTypeAndName column = block.getByName(sample_column.name);
if (sample_column.column->isNullable())
JoinCommon::convertColumnToNullable(column);

/// There's no optimization for right side const columns. Remove constness if any.
column.column = recursiveRemoveSparse(column.column->convertToFullColumnIfConst());

if (column.column->lowCardinality() && !sample_column.column->lowCardinality())
{
@ -716,8 +717,9 @@ Block HashJoin::prepareRightBlock(const Block & block, const Block & saved_block
column.type = removeLowCardinality(column.type);
}

/// There's no optimization for right side const columns. Remove constness if any.
column.column = recursiveRemoveSparse(column.column->convertToFullColumnIfConst());
if (sample_column.column->isNullable())
JoinCommon::convertColumnToNullable(column);

structured_block.insert(std::move(column));
}
@ -282,7 +282,7 @@ Chain InterpreterInsertQuery::buildSink(
/// Otherwise we'll get duplicates when MV reads same rows again from Kafka.
if (table->noPushingToViews() && !no_destination)
{
auto sink = table->write(query_ptr, metadata_snapshot, context_ptr);
auto sink = table->write(query_ptr, metadata_snapshot, context_ptr, async_insert);
sink->setRuntimeData(thread_status, elapsed_counter_ms);
out.addSource(std::move(sink));
}
@ -290,7 +290,7 @@ Chain InterpreterInsertQuery::buildSink(
{
out = buildPushingToViewsChain(table, metadata_snapshot, context_ptr,
query_ptr, no_destination,
thread_status_holder, running_group, elapsed_counter_ms);
thread_status_holder, running_group, elapsed_counter_ms, async_insert);
}

return out;
@ -160,16 +160,14 @@ static ColumnPtr tryConvertColumnToNullable(ColumnPtr col)
|
||||
|
||||
if (col->lowCardinality())
|
||||
{
|
||||
auto mut_col = IColumn::mutate(std::move(col));
|
||||
ColumnLowCardinality * col_lc = assert_cast<ColumnLowCardinality *>(mut_col.get());
|
||||
if (col_lc->nestedIsNullable())
|
||||
const ColumnLowCardinality & col_lc = assert_cast<const ColumnLowCardinality &>(*col);
|
||||
if (col_lc.nestedIsNullable())
|
||||
{
|
||||
return mut_col;
|
||||
return col;
|
||||
}
|
||||
else if (col_lc->nestedCanBeInsideNullable())
|
||||
else if (col_lc.nestedCanBeInsideNullable())
|
||||
{
|
||||
col_lc->nestedToNullable();
|
||||
return mut_col;
|
||||
return col_lc.cloneNullable();
|
||||
}
|
||||
}
|
||||
else if (const ColumnConst * col_const = checkAndGetColumn<ColumnConst>(*col))
|
||||
@ -232,11 +230,7 @@ void removeColumnNullability(ColumnWithTypeAndName & column)
|
||||
|
||||
if (column.column && column.column->lowCardinality())
|
||||
{
|
||||
auto mut_col = IColumn::mutate(std::move(column.column));
|
||||
ColumnLowCardinality * col_as_lc = typeid_cast<ColumnLowCardinality *>(mut_col.get());
|
||||
if (col_as_lc && col_as_lc->nestedIsNullable())
|
||||
col_as_lc->nestedRemoveNullable();
|
||||
column.column = std::move(mut_col);
|
||||
column.column = assert_cast<const ColumnLowCardinality *>(column.column.get())->cloneWithDefaultOnNull();
|
||||
}
|
||||
}
|
||||
else
|
||||
|
@ -1,144 +0,0 @@
|
||||
#include <Interpreters/OptimizeDateFilterVisitor.h>
|
||||
|
||||
#include <Common/DateLUT.h>
|
||||
#include <Common/DateLUTImpl.h>
|
||||
#include <Parsers/ASTIdentifier.h>
|
||||
#include <Parsers/ASTLiteral.h>
|
||||
#include <Parsers/ASTFunction.h>
|
||||
|
||||
|
||||
namespace DB
|
||||
{
|
||||
|
||||
ASTPtr generateOptimizedDateFilterAST(const String & comparator, const String & converter, const String & column, UInt64 compare_to)
|
||||
{
|
||||
const DateLUTImpl & date_lut = DateLUT::instance();
|
||||
|
||||
String start_date;
|
||||
String end_date;
|
||||
|
||||
if (converter == "toYear")
|
||||
{
|
||||
UInt64 year = compare_to;
|
||||
start_date = date_lut.dateToString(date_lut.makeDayNum(year, 1, 1));
|
||||
end_date = date_lut.dateToString(date_lut.makeDayNum(year, 12, 31));
|
||||
}
|
||||
else if (converter == "toYYYYMM")
|
||||
{
|
||||
UInt64 year = compare_to / 100;
|
||||
UInt64 month = compare_to % 100;
|
||||
|
||||
if (month == 0 || month > 12) return {};
|
||||
|
||||
static constexpr UInt8 days_of_month[] = {31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31};
|
||||
|
||||
bool leap_year = (year & 3) == 0 && (year % 100 || (year % 400 == 0 && year));
|
||||
|
||||
start_date = date_lut.dateToString(date_lut.makeDayNum(year, month, 1));
|
||||
end_date = date_lut.dateToString(date_lut.makeDayNum(year, month, days_of_month[month - 1] + (leap_year && month == 2)));
|
||||
}
|
||||
else
|
||||
{
|
||||
return {};
|
||||
}
|
||||
|
||||
if (comparator == "equals")
|
||||
{
|
||||
return makeASTFunction("and",
|
||||
makeASTFunction("greaterOrEquals",
|
||||
std::make_shared<ASTIdentifier>(column),
|
||||
std::make_shared<ASTLiteral>(start_date)
|
||||
),
|
||||
makeASTFunction("lessOrEquals",
|
||||
std::make_shared<ASTIdentifier>(column),
|
||||
std::make_shared<ASTLiteral>(end_date)
|
||||
)
|
||||
);
|
||||
}
|
||||
else if (comparator == "notEquals")
|
||||
{
|
||||
return makeASTFunction("or",
|
||||
makeASTFunction("less",
|
||||
std::make_shared<ASTIdentifier>(column),
|
||||
std::make_shared<ASTLiteral>(start_date)
|
||||
),
|
||||
makeASTFunction("greater",
|
||||
std::make_shared<ASTIdentifier>(column),
|
||||
std::make_shared<ASTLiteral>(end_date)
|
||||
)
|
||||
);
|
||||
}
|
||||
else if (comparator == "less" || comparator == "greaterOrEquals")
|
||||
{
|
||||
return makeASTFunction(comparator,
|
||||
std::make_shared<ASTIdentifier>(column),
|
||||
std::make_shared<ASTLiteral>(start_date)
|
||||
);
|
||||
}
|
||||
else
|
||||
{
|
||||
return makeASTFunction(comparator,
|
||||
std::make_shared<ASTIdentifier>(column),
|
||||
std::make_shared<ASTLiteral>(end_date)
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
bool rewritePredicateInPlace(ASTFunction & function, ASTPtr & ast)
|
||||
{
|
||||
const static std::unordered_map<String, String> swap_relations = {
|
||||
{"equals", "equals"},
|
||||
{"notEquals", "notEquals"},
|
||||
{"less", "greater"},
|
||||
{"greater", "less"},
|
||||
{"lessOrEquals", "greaterOrEquals"},
|
||||
{"greaterOrEquals", "lessOrEquals"},
|
||||
};
|
||||
|
||||
if (!swap_relations.contains(function.name)) return false;
|
||||
|
||||
if (!function.arguments || function.arguments->children.size() != 2) return false;
|
||||
|
||||
size_t func_id = function.arguments->children.size();
|
||||
|
||||
for (size_t i = 0; i < function.arguments->children.size(); i++)
|
||||
{
|
||||
if (const auto * func = function.arguments->children[i]->as<ASTFunction>(); func)
|
||||
{
|
||||
if (func->name == "toYear" || func->name == "toYYYYMM")
|
||||
{
|
||||
func_id = i;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if (func_id == function.arguments->children.size()) return false;
|
||||
|
||||
size_t literal_id = 1 - func_id;
|
||||
const auto * literal = function.arguments->children[literal_id]->as<ASTLiteral>();
|
||||
|
||||
if (!literal || literal->value.getType() != Field::Types::UInt64) return false;
|
||||
|
||||
UInt64 compare_to = literal->value.get<UInt64>();
|
||||
String comparator = literal_id > func_id ? function.name : swap_relations.at(function.name);
|
||||
|
||||
const auto * func = function.arguments->children[func_id]->as<ASTFunction>();
|
||||
const auto * column_id = func->arguments->children.at(0)->as<ASTIdentifier>();
|
||||
|
||||
if (!column_id) return false;
|
||||
|
||||
String column = column_id->name();
|
||||
|
||||
const auto new_ast = generateOptimizedDateFilterAST(comparator, func->name, column, compare_to);
|
||||
|
||||
if (!new_ast) return false;
|
||||
|
||||
ast = new_ast;
|
||||
return true;
|
||||
}
|
||||
|
||||
void OptimizeDateFilterInPlaceData::visit(ASTFunction & function, ASTPtr & ast) const
|
||||
{
|
||||
rewritePredicateInPlace(function, ast);
|
||||
}
|
||||
}
|
@ -1,20 +0,0 @@
#pragma once

#include <Interpreters/InDepthNodeVisitor.h>

namespace DB
{

class ASTFunction;

/// Rewrite the predicates in place
class OptimizeDateFilterInPlaceData
{
public:
using TypeToVisit = ASTFunction;
void visit(ASTFunction & function, ASTPtr & ast) const;
};

using OptimizeDateFilterInPlaceMatcher = OneTypeMatcher<OptimizeDateFilterInPlaceData>;
using OptimizeDateFilterInPlaceVisitor = InDepthNodeVisitor<OptimizeDateFilterInPlaceMatcher, true>;
}
@ -232,8 +232,17 @@ public:
bool allowParallelHashJoin() const;

bool joinUseNulls() const { return join_use_nulls; }
bool forceNullableRight() const { return join_use_nulls && isLeftOrFull(kind()); }
bool forceNullableLeft() const { return join_use_nulls && isRightOrFull(kind()); }

bool forceNullableRight() const
{
return join_use_nulls && isLeftOrFull(kind());
}

bool forceNullableLeft() const
{
return join_use_nulls && isRightOrFull(kind());
}

size_t defaultMaxBytes() const { return default_max_bytes; }
size_t maxJoinedBlockRows() const { return max_joined_block_rows; }
size_t maxRowsInRightBlock() const { return partial_merge_join_rows_in_right_blocks; }
@ -25,7 +25,6 @@
|
||||
#include <Interpreters/GatherFunctionQuantileVisitor.h>
|
||||
#include <Interpreters/RewriteSumIfFunctionVisitor.h>
|
||||
#include <Interpreters/RewriteArrayExistsFunctionVisitor.h>
|
||||
#include <Interpreters/OptimizeDateFilterVisitor.h>
|
||||
|
||||
#include <Parsers/ASTExpressionList.h>
|
||||
#include <Parsers/ASTFunction.h>
|
||||
@ -678,21 +677,6 @@ void optimizeInjectiveFunctionsInsideUniq(ASTPtr & query, ContextPtr context)
|
||||
RemoveInjectiveFunctionsVisitor(data).visit(query);
|
||||
}
|
||||
|
||||
void optimizeDateFilters(ASTSelectQuery * select_query)
|
||||
{
|
||||
/// Predicates in HAVING clause has been moved to WHERE clause.
|
||||
if (select_query->where())
|
||||
{
|
||||
OptimizeDateFilterInPlaceVisitor::Data data;
|
||||
OptimizeDateFilterInPlaceVisitor(data).visit(select_query->refWhere());
|
||||
}
|
||||
if (select_query->prewhere())
|
||||
{
|
||||
OptimizeDateFilterInPlaceVisitor::Data data;
|
||||
OptimizeDateFilterInPlaceVisitor(data).visit(select_query->refPrewhere());
|
||||
}
|
||||
}
|
||||
|
||||
void transformIfStringsIntoEnum(ASTPtr & query)
|
||||
{
|
||||
std::unordered_set<String> function_names = {"if", "transform"};
|
||||
@ -796,9 +780,6 @@ void TreeOptimizer::apply(ASTPtr & query, TreeRewriterResult & result,
|
||||
tables_with_columns, result.storage_snapshot->metadata, result.storage);
|
||||
}
|
||||
|
||||
/// Rewrite date filters to avoid the calls of converters such as toYear, toYYYYMM, toISOWeek, etc.
|
||||
optimizeDateFilters(select_query);
|
||||
|
||||
/// GROUP BY injective function elimination.
|
||||
optimizeGroupBy(select_query, context);
|
||||
|
||||
|
@ -192,6 +192,22 @@ Field convertFieldToTypeImpl(const Field & src, const IDataType & type, const ID
|
||||
{
|
||||
return static_cast<const DataTypeDateTime &>(type).getTimeZone().fromDayNum(DayNum(src.get<Int32>()));
|
||||
}
|
||||
else if (which_type.isDateTime64() && which_from_type.isDate())
|
||||
{
|
||||
const auto & date_time64_type = static_cast<const DataTypeDateTime64 &>(type);
|
||||
const auto value = date_time64_type.getTimeZone().fromDayNum(DayNum(src.get<UInt16>()));
|
||||
return DecimalField(
|
||||
DecimalUtils::decimalFromComponentsWithMultiplier<DateTime64>(value, 0, date_time64_type.getScaleMultiplier()),
|
||||
date_time64_type.getScale());
|
||||
}
|
||||
else if (which_type.isDateTime64() && which_from_type.isDate32())
|
||||
{
|
||||
const auto & date_time64_type = static_cast<const DataTypeDateTime64 &>(type);
|
||||
const auto value = date_time64_type.getTimeZone().fromDayNum(ExtendedDayNum(static_cast<Int32>(src.get<Int32>())));
|
||||
return DecimalField(
|
||||
DecimalUtils::decimalFromComponentsWithMultiplier<DateTime64>(value, 0, date_time64_type.getScaleMultiplier()),
|
||||
date_time64_type.getScale());
|
||||
}
|
||||
else if (type.isValueRepresentedByNumber() && src.getType() != Field::Types::String)
|
||||
{
|
||||
if (which_type.isUInt8()) return convertNumericType<UInt8>(src, type);
|
||||
@ -534,7 +550,7 @@ Field convertFieldToType(const Field & from_value, const IDataType & to_type, co
|
||||
Field convertFieldToTypeOrThrow(const Field & from_value, const IDataType & to_type, const IDataType * from_type_hint)
|
||||
{
|
||||
bool is_null = from_value.isNull();
|
||||
if (is_null && !to_type.isNullable())
|
||||
if (is_null && !to_type.isNullable() && !to_type.isLowCardinalityNullable())
|
||||
throw Exception(ErrorCodes::TYPE_MISMATCH, "Cannot convert NULL to {}", to_type.getName());
|
||||
|
||||
Field converted = convertFieldToType(from_value, to_type, from_type_hint);
|
||||
|
184 src/Interpreters/tests/gtest_convertFieldToType.cpp Normal file
@ -0,0 +1,184 @@
#include <initializer_list>
|
||||
#include <limits>
|
||||
#include <ostream>
|
||||
#include <Core/Field.h>
|
||||
#include <Core/iostream_debug_helpers.h>
|
||||
#include <Interpreters/convertFieldToType.h>
|
||||
#include <DataTypes/DataTypeFactory.h>
|
||||
|
||||
#include <gtest/gtest.h>
|
||||
#include "base/Decimal.h"
|
||||
#include "base/types.h"
|
||||
|
||||
using namespace DB;
|
||||
|
||||
struct ConvertFieldToTypeTestParams
|
||||
{
|
||||
const char * from_type; // MUST NOT BE NULL
|
||||
const Field from_value;
|
||||
const char * to_type; // MUST NOT BE NULL
|
||||
const std::optional<Field> expected_value;
|
||||
};
|
||||
|
||||
std::ostream & operator << (std::ostream & ostr, const ConvertFieldToTypeTestParams & params)
|
||||
{
|
||||
return ostr << "{"
|
||||
<< "\n\tfrom_type : " << params.from_type
|
||||
<< "\n\tfrom_value : " << params.from_value
|
||||
<< "\n\tto_type : " << params.to_type
|
||||
<< "\n\texpected : " << (params.expected_value ? *params.expected_value : Field())
|
||||
<< "\n}";
|
||||
}
|
||||
|
||||
class ConvertFieldToTypeTest : public ::testing::TestWithParam<ConvertFieldToTypeTestParams>
|
||||
{};
|
||||
|
||||
TEST_P(ConvertFieldToTypeTest, convert)
|
||||
{
|
||||
const auto & params = GetParam();
|
||||
|
||||
ASSERT_NE(nullptr, params.from_type);
|
||||
ASSERT_NE(nullptr, params.to_type);
|
||||
|
||||
const auto & type_factory = DataTypeFactory::instance();
|
||||
const auto from_type = type_factory.get(params.from_type);
|
||||
const auto to_type = type_factory.get(params.to_type);
|
||||
|
||||
if (params.expected_value)
|
||||
{
|
||||
const auto result = convertFieldToType(params.from_value, *to_type, from_type.get());
|
||||
EXPECT_EQ(*params.expected_value, result);
|
||||
}
|
||||
else
|
||||
{
|
||||
EXPECT_ANY_THROW(convertFieldToType(params.from_value, *to_type, from_type.get()));
|
||||
}
|
||||
}
|
||||
|
||||
// Basically, the number of seconds in a day works for UTC here
|
||||
const Int64 Day = 24 * 60 * 60;
|
||||
|
||||
// 123 is arbitrary value here
|
||||
|
||||
INSTANTIATE_TEST_SUITE_P(
|
||||
DateToDateTime64,
|
||||
ConvertFieldToTypeTest,
|
||||
::testing::ValuesIn(std::initializer_list<ConvertFieldToTypeTestParams>{
|
||||
// min value of Date
|
||||
{
|
||||
"Date",
|
||||
Field(0),
|
||||
"DateTime64(0, 'UTC')",
|
||||
DecimalField(DateTime64(0), 0)
|
||||
},
|
||||
// Max value of Date
|
||||
{
|
||||
"Date",
|
||||
Field(std::numeric_limits<DB::UInt16>::max()),
|
||||
"DateTime64(0, 'UTC')",
|
||||
DecimalField(DateTime64(std::numeric_limits<DB::UInt16>::max() * Day), 0)
|
||||
},
|
||||
// check that scale is respected
|
||||
{
|
||||
"Date",
|
||||
Field(123),
|
||||
"DateTime64(0, 'UTC')",
|
||||
DecimalField(DateTime64(123 * Day), 0)
|
||||
},
|
||||
{
|
||||
"Date",
|
||||
Field(1),
|
||||
"DateTime64(1, 'UTC')",
|
||||
DecimalField(DateTime64(Day * 10), 1)
|
||||
},
|
||||
{
|
||||
"Date",
|
||||
Field(123),
|
||||
"DateTime64(3, 'UTC')",
|
||||
DecimalField(DateTime64(123 * Day * 1000), 3)
|
||||
},
|
||||
{
|
||||
"Date",
|
||||
Field(123),
|
||||
"DateTime64(6, 'UTC')",
|
||||
DecimalField(DateTime64(123 * Day * 1'000'000), 6)
|
||||
},
|
||||
})
|
||||
);
|
||||
|
||||
INSTANTIATE_TEST_SUITE_P(
|
||||
Date32ToDateTime64,
|
||||
ConvertFieldToTypeTest,
|
||||
::testing::ValuesIn(std::initializer_list<ConvertFieldToTypeTestParams>{
|
||||
// min value of Date32: 1st Jan 1900 (see DATE_LUT_MIN_YEAR)
|
||||
{
|
||||
"Date32",
|
||||
Field(-25'567),
|
||||
"DateTime64(0, 'UTC')",
|
||||
DecimalField(DateTime64(-25'567 * Day), 0)
|
||||
},
|
||||
// max value of Date32: 31 Dec 2299 (see DATE_LUT_MAX_YEAR)
|
||||
{
|
||||
"Date32",
|
||||
Field(120'529),
|
||||
"DateTime64(0, 'UTC')",
|
||||
DecimalField(DateTime64(120'529 * Day), 0)
|
||||
},
|
||||
// check that scale is respected
|
||||
{
|
||||
"Date32",
|
||||
Field(123),
|
||||
"DateTime64(0, 'UTC')",
|
||||
DecimalField(DateTime64(123 * Day), 0)
|
||||
},
|
||||
{
|
||||
"Date32",
|
||||
Field(123),
|
||||
"DateTime64(1, 'UTC')",
|
||||
DecimalField(DateTime64(123 * Day * 10), 1)
|
||||
},
|
||||
{
|
||||
"Date32",
|
||||
Field(123),
|
||||
"DateTime64(3, 'UTC')",
|
||||
DecimalField(DateTime64(123 * Day * 1000), 3)
|
||||
},
|
||||
{
|
||||
"Date32",
|
||||
Field(123),
|
||||
"DateTime64(6, 'UTC')",
|
||||
DecimalField(DateTime64(123 * Day * 1'000'000), 6)
|
||||
}
|
||||
})
|
||||
);
|
||||
|
||||
INSTANTIATE_TEST_SUITE_P(
|
||||
DateTimeToDateTime64,
|
||||
ConvertFieldToTypeTest,
|
||||
::testing::ValuesIn(std::initializer_list<ConvertFieldToTypeTestParams>{
|
||||
{
|
||||
"DateTime",
|
||||
Field(1),
|
||||
"DateTime64(0, 'UTC')",
|
||||
DecimalField(DateTime64(1), 0)
|
||||
},
|
||||
{
|
||||
"DateTime",
|
||||
Field(1),
|
||||
"DateTime64(1, 'UTC')",
|
||||
DecimalField(DateTime64(1'0), 1)
|
||||
},
|
||||
{
|
||||
"DateTime",
|
||||
Field(123),
|
||||
"DateTime64(3, 'UTC')",
|
||||
DecimalField(DateTime64(123'000), 3)
|
||||
},
|
||||
{
|
||||
"DateTime",
|
||||
Field(123),
|
||||
"DateTime64(6, 'UTC')",
|
||||
DecimalField(DateTime64(123'000'000), 6)
|
||||
},
|
||||
})
|
||||
);
|
@ -44,8 +44,9 @@ bool ParserDropQuery::parseImpl(IParser::Pos & pos, ASTPtr & node, Expected & ex
bool if_exists = false;
bool is_truncate = false;

if (s_truncate.ignore(pos, expected) && s_table.ignore(pos, expected))
if (s_truncate.ignore(pos, expected))
{
s_table.ignore(pos, expected);
is_truncate = true;
query->kind = ASTDropQuery::Kind::Table;
ASTDropQuery::QualifiedName name;
@ -36,17 +36,17 @@ bool ParserCreateIndexDeclaration::parseImpl(Pos & pos, ASTPtr & node, Expected
if (!data_type_p.parse(pos, type, expected))
return false;

if (!s_granularity.ignore(pos, expected))
return false;

if (!granularity_p.parse(pos, granularity, expected))
return false;
if (s_granularity.ignore(pos, expected))
{
if (!granularity_p.parse(pos, granularity, expected))
return false;
}

auto index = std::make_shared<ASTIndexDeclaration>();
index->part_of_create_index_query = true;
index->granularity = granularity->as<ASTLiteral &>().value.safeGet<UInt64>();
index->set(index->expr, expr);
index->set(index->type, type);
index->granularity = granularity ? granularity->as<ASTLiteral &>().value.safeGet<UInt64>() : 1;
node = index;

return true;
@ -139,9 +139,9 @@ bool ParserIndexDeclaration::parseImpl(Pos & pos, ASTPtr & node, Expected & expe

auto index = std::make_shared<ASTIndexDeclaration>();
index->name = name->as<ASTIdentifier &>().name();
index->granularity = granularity ? granularity->as<ASTLiteral &>().value.safeGet<UInt64>() : 1;
index->set(index->expr, expr);
index->set(index->type, type);
index->granularity = granularity ? granularity->as<ASTLiteral &>().value.safeGet<UInt64>() : 1;
node = index;

return true;
@ -28,7 +28,7 @@ bool ParserShowIndexesQuery::parseImpl(Pos & pos, ASTPtr & node, Expected & expe
if (ParserKeyword("EXTENDED").ignore(pos, expected))
query->extended = true;

if (!(ParserKeyword("INDEX").ignore(pos, expected) || ParserKeyword("INDEXES").ignore(pos, expected) || ParserKeyword("KEYS").ignore(pos, expected)))
if (!(ParserKeyword("INDEX").ignore(pos, expected) || ParserKeyword("INDEXES").ignore(pos, expected) || ParserKeyword("INDICES").ignore(pos, expected) || ParserKeyword("KEYS").ignore(pos, expected)))
return false;

if (ParserKeyword("FROM").ignore(pos, expected) || ParserKeyword("IN").ignore(pos, expected))
@ -67,7 +67,8 @@ public:
|
||||
planner_context.registerSet(set_key, PlannerSet(FutureSet(std::move(set))));
|
||||
}
|
||||
else if (in_second_argument_node_type == QueryTreeNodeType::QUERY ||
|
||||
in_second_argument_node_type == QueryTreeNodeType::UNION)
|
||||
in_second_argument_node_type == QueryTreeNodeType::UNION ||
|
||||
in_second_argument_node_type == QueryTreeNodeType::TABLE)
|
||||
{
|
||||
planner_context.registerSet(set_key, PlannerSet(in_second_argument));
|
||||
}
|
||||
|
@ -43,6 +43,7 @@
|
||||
#include <Storages/IStorage.h>
|
||||
|
||||
#include <Analyzer/Utils.h>
|
||||
#include <Analyzer/ColumnNode.h>
|
||||
#include <Analyzer/ConstantNode.h>
|
||||
#include <Analyzer/FunctionNode.h>
|
||||
#include <Analyzer/SortNode.h>
|
||||
@ -909,12 +910,42 @@ void addBuildSubqueriesForSetsStepIfNeeded(QueryPlan & query_plan,
|
||||
if (!planner_set)
|
||||
continue;
|
||||
|
||||
if (planner_set->getSet().isCreated() || !planner_set->getSubqueryNode())
|
||||
auto subquery_to_execute = planner_set->getSubqueryNode();
|
||||
|
||||
if (planner_set->getSet().isCreated() || !subquery_to_execute)
|
||||
continue;
|
||||
|
||||
if (auto * table_node = subquery_to_execute->as<TableNode>())
|
||||
{
|
||||
auto storage_snapshot = table_node->getStorageSnapshot();
|
||||
auto columns_to_select = storage_snapshot->getColumns(GetColumnsOptions(GetColumnsOptions::Ordinary));
|
||||
|
||||
size_t columns_to_select_size = columns_to_select.size();
|
||||
|
||||
auto column_nodes_to_select = std::make_shared<ListNode>();
|
||||
column_nodes_to_select->getNodes().reserve(columns_to_select_size);
|
||||
|
||||
NamesAndTypes projection_columns;
|
||||
projection_columns.reserve(columns_to_select_size);
|
||||
|
||||
for (auto & column : columns_to_select)
|
||||
{
|
||||
column_nodes_to_select->getNodes().emplace_back(std::make_shared<ColumnNode>(column, subquery_to_execute));
|
||||
projection_columns.emplace_back(column.name, column.type);
|
||||
}
|
||||
|
||||
auto subquery_for_table = std::make_shared<QueryNode>(Context::createCopy(planner_context->getQueryContext()));
|
||||
subquery_for_table->setIsSubquery(true);
|
||||
subquery_for_table->getProjectionNode() = std::move(column_nodes_to_select);
|
||||
subquery_for_table->getJoinTree() = std::move(subquery_to_execute);
|
||||
subquery_for_table->resolveProjectionColumns(std::move(projection_columns));
|
||||
|
||||
subquery_to_execute = std::move(subquery_for_table);
|
||||
}
|
||||
|
||||
auto subquery_options = select_query_options.subquery();
|
||||
Planner subquery_planner(
|
||||
planner_set->getSubqueryNode(),
|
||||
subquery_to_execute,
|
||||
subquery_options,
|
||||
planner_context->getGlobalPlannerContext());
|
||||
subquery_planner.buildQueryPlanIfNeeded();
|
||||
|
@ -19,18 +19,10 @@ const ColumnIdentifier & GlobalPlannerContext::createColumnIdentifier(const Quer
|
||||
return createColumnIdentifier(column_node_typed.getColumn(), column_source_node);
|
||||
}
|
||||
|
||||
const ColumnIdentifier & GlobalPlannerContext::createColumnIdentifier(const NameAndTypePair & column, const QueryTreeNodePtr & column_source_node)
|
||||
const ColumnIdentifier & GlobalPlannerContext::createColumnIdentifier(const NameAndTypePair & column, const QueryTreeNodePtr & /*column_source_node*/)
|
||||
{
|
||||
std::string column_identifier;
|
||||
|
||||
if (column_source_node->hasAlias())
|
||||
column_identifier += column_source_node->getAlias();
|
||||
else if (const auto * table_source_node = column_source_node->as<TableNode>())
|
||||
column_identifier += table_source_node->getStorageID().getFullNameNotQuoted();
|
||||
|
||||
if (!column_identifier.empty())
|
||||
column_identifier += '.';
|
||||
|
||||
column_identifier += column.name;
|
||||
column_identifier += '_' + std::to_string(column_identifiers.size());
|
||||
|
||||
@ -137,7 +129,8 @@ void PlannerContext::registerSet(const SetKey & key, PlannerSet planner_set)
|
||||
auto node_type = subquery_node->getNodeType();
|
||||
|
||||
if (node_type != QueryTreeNodeType::QUERY &&
|
||||
node_type != QueryTreeNodeType::UNION)
|
||||
node_type != QueryTreeNodeType::UNION &&
|
||||
node_type != QueryTreeNodeType::TABLE)
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR,
|
||||
"Invalid node for set table expression. Expected query or union. Actual {}",
|
||||
subquery_node->formatASTForErrorMessage());
|
||||
|
@ -106,7 +106,11 @@ void checkAccessRights(const TableNode & table_node, const Names & column_names,
|
||||
storage_id.getFullTableName());
|
||||
}
|
||||
|
||||
query_context->checkAccess(AccessType::SELECT, storage_id, column_names);
|
||||
// In case of cross-replication we don't know what database is used for the table.
|
||||
// `storage_id.hasDatabase()` can return false only on the initiator node.
|
||||
// Each shard will use the default database (in the case of cross-replication shards may have different defaults).
|
||||
if (storage_id.hasDatabase())
|
||||
query_context->checkAccess(AccessType::SELECT, storage_id, column_names);
|
||||
}
|
||||
|
||||
NameAndTypePair chooseSmallestColumnToReadFromStorage(const StoragePtr & storage, const StorageSnapshotPtr & storage_snapshot)
|
||||
@ -873,10 +877,11 @@ JoinTreeQueryPlan buildQueryPlanForJoinNode(const QueryTreeNodePtr & join_table_
|
||||
|
||||
JoinClausesAndActions join_clauses_and_actions;
|
||||
JoinKind join_kind = join_node.getKind();
|
||||
JoinStrictness join_strictness = join_node.getStrictness();
|
||||
|
||||
std::optional<bool> join_constant;
|
||||
|
||||
if (join_node.getStrictness() == JoinStrictness::All)
|
||||
if (join_strictness == JoinStrictness::All)
|
||||
join_constant = tryExtractConstantFromJoinNode(join_table_expression);
|
||||
|
||||
if (join_constant)
|
||||
|
@ -107,7 +107,10 @@ Block buildCommonHeaderForUnion(const Blocks & queries_headers, SelectUnionMode
|
||||
ASTPtr queryNodeToSelectQuery(const QueryTreeNodePtr & query_node)
|
||||
{
|
||||
auto & query_node_typed = query_node->as<QueryNode &>();
|
||||
auto result_ast = query_node_typed.toAST();
|
||||
|
||||
// In case of cross-replication we don't know what database is used for the table.
|
||||
// Each shard will use the default database (in the case of cross-replication shards may have different defaults).
|
||||
auto result_ast = query_node_typed.toAST({ .qualify_indentifiers_with_database = false });
|
||||
|
||||
while (true)
|
||||
{
|
||||
|
@ -176,13 +176,16 @@ static AvroDeserializer::DeserializeFn createDecimalDeserializeFn(const avro::No
{
static constexpr size_t field_type_size = sizeof(typename DecimalType::FieldType);
decoder.decodeString(tmp);
if (tmp.size() != field_type_size)
if (tmp.size() > field_type_size)
throw ParsingException(
ErrorCodes::CANNOT_PARSE_UUID,
"Cannot parse type {}, expected binary data with size {}, got {}",
"Cannot parse type {}, expected binary data with size equal to or less than {}, got {}",
target_type->getName(),
field_type_size,
tmp.size());
else if (tmp.size() != field_type_size)
/// Add padding with 0-bytes.
tmp = std::string(field_type_size - tmp.size(), '\0') + tmp;

typename DecimalType::FieldType field;
ReadBufferFromString buf(tmp);
@ -126,7 +126,9 @@ std::pair<std::vector<Values>, std::vector<RangesInDataParts>> split(RangesInDat
|
||||
return marks_in_current_layer < intersected_parts * 2;
|
||||
};
|
||||
|
||||
result_layers.emplace_back();
|
||||
auto & current_layer = result_layers.emplace_back();
|
||||
/// Map part_idx into index inside layer, used to merge marks from the same part into one reader
|
||||
std::unordered_map<size_t, size_t> part_idx_in_layer;
|
||||
|
||||
while (rows_in_current_layer < rows_per_layer || layers_intersection_is_too_big() || result_layers.size() == max_layers)
|
||||
{
|
||||
@ -140,11 +142,16 @@ std::pair<std::vector<Values>, std::vector<RangesInDataParts>> split(RangesInDat
|
||||
|
||||
if (current.event == PartsRangesIterator::EventType::RangeEnd)
|
||||
{
|
||||
result_layers.back().emplace_back(
|
||||
parts[part_idx].data_part,
|
||||
parts[part_idx].alter_conversions,
|
||||
parts[part_idx].part_index_in_query,
|
||||
MarkRanges{{current_part_range_begin[part_idx], current.range.end}});
|
||||
const auto & mark = MarkRange{current_part_range_begin[part_idx], current.range.end};
|
||||
auto it = part_idx_in_layer.emplace(std::make_pair(part_idx, current_layer.size()));
|
||||
if (it.second)
|
||||
current_layer.emplace_back(
|
||||
parts[part_idx].data_part,
|
||||
parts[part_idx].alter_conversions,
|
||||
parts[part_idx].part_index_in_query,
|
||||
MarkRanges{mark});
|
||||
else
|
||||
current_layer[it.first->second].ranges.push_back(mark);
|
||||
|
||||
current_part_range_begin.erase(part_idx);
|
||||
current_part_range_end.erase(part_idx);
|
||||
@ -170,11 +177,17 @@ std::pair<std::vector<Values>, std::vector<RangesInDataParts>> split(RangesInDat
|
||||
}
|
||||
for (const auto & [part_idx, last_mark] : current_part_range_end)
|
||||
{
|
||||
result_layers.back().emplace_back(
|
||||
parts[part_idx].data_part,
|
||||
parts[part_idx].alter_conversions,
|
||||
parts[part_idx].part_index_in_query,
|
||||
MarkRanges{{current_part_range_begin[part_idx], last_mark + 1}});
|
||||
const auto & mark = MarkRange{current_part_range_begin[part_idx], last_mark + 1};
|
||||
auto it = part_idx_in_layer.emplace(std::make_pair(part_idx, current_layer.size()));
|
||||
|
||||
if (it.second)
|
||||
result_layers.back().emplace_back(
|
||||
parts[part_idx].data_part,
|
||||
parts[part_idx].alter_conversions,
|
||||
parts[part_idx].part_index_in_query,
|
||||
MarkRanges{mark});
|
||||
else
|
||||
current_layer[it.first->second].ranges.push_back(mark);
|
||||
|
||||
current_part_range_begin[part_idx] = current_part_range_end[part_idx];
|
||||
}
|
||||
|
@ -91,7 +91,7 @@ void CreatingSetsTransform::startSubquery()

if (subquery.table)
/// TODO: make via port
table_out = QueryPipeline(subquery.table->write({}, subquery.table->getInMemoryMetadataPtr(), getContext()));
table_out = QueryPipeline(subquery.table->write({}, subquery.table->getInMemoryMetadataPtr(), getContext(), /*async_insert=*/false));

done_with_set = !subquery.set_in_progress;
done_with_table = !subquery.table;
@ -196,6 +196,7 @@ Chain buildPushingToViewsChain(
|
||||
ThreadStatusesHolderPtr thread_status_holder,
|
||||
ThreadGroupPtr running_group,
|
||||
std::atomic_uint64_t * elapsed_counter_ms,
|
||||
bool async_insert,
|
||||
const Block & live_view_header)
|
||||
{
|
||||
checkStackSize();
|
||||
@ -347,7 +348,7 @@ Chain buildPushingToViewsChain(
|
||||
out = buildPushingToViewsChain(
|
||||
view, view_metadata_snapshot, insert_context, ASTPtr(),
|
||||
/* no_destination= */ true,
|
||||
thread_status_holder, running_group, view_counter_ms, storage_header);
|
||||
thread_status_holder, running_group, view_counter_ms, async_insert, storage_header);
|
||||
}
|
||||
else if (auto * window_view = dynamic_cast<StorageWindowView *>(view.get()))
|
||||
{
|
||||
@ -356,13 +357,13 @@ Chain buildPushingToViewsChain(
|
||||
out = buildPushingToViewsChain(
|
||||
view, view_metadata_snapshot, insert_context, ASTPtr(),
|
||||
/* no_destination= */ true,
|
||||
thread_status_holder, running_group, view_counter_ms);
|
||||
thread_status_holder, running_group, view_counter_ms, async_insert);
|
||||
}
|
||||
else
|
||||
out = buildPushingToViewsChain(
|
||||
view, view_metadata_snapshot, insert_context, ASTPtr(),
|
||||
/* no_destination= */ false,
|
||||
thread_status_holder, running_group, view_counter_ms);
|
||||
thread_status_holder, running_group, view_counter_ms, async_insert);
|
||||
|
||||
views_data->views.emplace_back(ViewRuntimeData{
|
||||
std::move(query),
|
||||
@ -444,7 +445,7 @@ Chain buildPushingToViewsChain(
|
||||
/// Do not push to destination table if the flag is set
|
||||
else if (!no_destination)
|
||||
{
|
||||
auto sink = storage->write(query_ptr, metadata_snapshot, context);
|
||||
auto sink = storage->write(query_ptr, metadata_snapshot, context, async_insert);
|
||||
metadata_snapshot->check(sink->getHeader().getColumnsWithTypeAndName());
|
||||
sink->setRuntimeData(thread_status, elapsed_counter_ms);
|
||||
result_chain.addSource(std::move(sink));
|
||||
|
@ -69,6 +69,8 @@ Chain buildPushingToViewsChain(
|
||||
ThreadGroupPtr running_group,
|
||||
/// Counter to measure time spent separately per view. Should be improved.
|
||||
std::atomic_uint64_t * elapsed_counter_ms,
|
||||
/// True if it's part of async insert flush
|
||||
bool async_insert,
|
||||
/// LiveView executes query itself, it needs source block structure.
|
||||
const Block & live_view_header = {});
|
||||
|
||||
|
@ -1101,7 +1101,7 @@ namespace
{
/// The data will be written directly to the table.
auto metadata_snapshot = storage->getInMemoryMetadataPtr();
auto sink = storage->write(ASTPtr(), metadata_snapshot, query_context);
auto sink = storage->write(ASTPtr(), metadata_snapshot, query_context, /*async_insert=*/false);

std::unique_ptr<ReadBuffer> buf = std::make_unique<ReadBufferFromMemory>(external_table.data().data(), external_table.data().size());
buf = wrapReadBufferWithCompressionMethod(std::move(buf), chooseCompressionMethod("", external_table.compression_type()));

@ -1692,7 +1692,7 @@ bool TCPHandler::receiveData(bool scalar)
}
auto metadata_snapshot = storage->getInMemoryMetadataPtr();
/// The data will be written directly to the table.
QueryPipeline temporary_table_out(storage->write(ASTPtr(), metadata_snapshot, query_context));
QueryPipeline temporary_table_out(storage->write(ASTPtr(), metadata_snapshot, query_context, /*async_insert=*/false));
PushingPipelineExecutor executor(temporary_table_out);
executor.start();
executor.push(block);
@ -624,7 +624,7 @@ Pipe StorageHDFS::read(
return Pipe::unitePipes(std::move(pipes));
}

SinkToStoragePtr StorageHDFS::write(const ASTPtr & query, const StorageMetadataPtr & metadata_snapshot, ContextPtr context_)
SinkToStoragePtr StorageHDFS::write(const ASTPtr & query, const StorageMetadataPtr & metadata_snapshot, ContextPtr context_, bool /*async_insert*/)
{
String current_uri = uris.back();

@ -41,7 +41,7 @@ public:
size_t max_block_size,
size_t num_streams) override;

SinkToStoragePtr write(const ASTPtr & query, const StorageMetadataPtr & metadata_snapshot, ContextPtr context) override;
SinkToStoragePtr write(const ASTPtr & query, const StorageMetadataPtr & metadata_snapshot, ContextPtr context, bool async_insert) override;
|
||||
void truncate(
|
||||
const ASTPtr & query,
|
||||
|
@ -905,7 +905,7 @@ HiveFiles StorageHive::collectHiveFiles(
|
||||
return hive_files;
|
||||
}
|
||||
|
||||
SinkToStoragePtr StorageHive::write(const ASTPtr & /*query*/, const StorageMetadataPtr & /* metadata_snapshot*/, ContextPtr /*context*/)
|
||||
SinkToStoragePtr StorageHive::write(const ASTPtr & /*query*/, const StorageMetadataPtr & /* metadata_snapshot*/, ContextPtr /*context*/, bool /*async_insert*/)
|
||||
{
|
||||
throw Exception(ErrorCodes::NOT_IMPLEMENTED, "Method write is not implemented for StorageHive");
|
||||
}
|
||||
|
@ -61,7 +61,7 @@ public:
|
||||
size_t max_block_size,
|
||||
size_t num_streams) override;
|
||||
|
||||
SinkToStoragePtr write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr /*context*/) override;
|
||||
SinkToStoragePtr write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr /*context*/, bool async_insert) override;
|
||||
|
||||
NamesAndTypesList getVirtuals() const override;
|
||||
|
||||
|
@ -402,11 +402,14 @@ public:
|
||||
* passed in all parts of the returned streams. Storage metadata can be
|
||||
* changed during lifetime of the returned streams, but the snapshot is
|
||||
* guaranteed to be immutable.
|
||||
*
|
||||
* async_insert - set to true if the write is part of async insert flushing
|
||||
*/
|
||||
virtual SinkToStoragePtr write(
|
||||
const ASTPtr & /*query*/,
|
||||
const StorageMetadataPtr & /*metadata_snapshot*/,
|
||||
ContextPtr /*context*/)
|
||||
ContextPtr /*context*/,
|
||||
bool /*async_insert*/)
|
||||
{
|
||||
throw Exception(ErrorCodes::NOT_IMPLEMENTED, "Method write is not supported by storage {}", getName());
|
||||
}
|
||||
|
@ -374,7 +374,7 @@ Pipe StorageKafka::read(
|
||||
}
|
||||
|
||||
|
||||
SinkToStoragePtr StorageKafka::write(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, ContextPtr local_context)
|
||||
SinkToStoragePtr StorageKafka::write(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, ContextPtr local_context, bool /*async_insert*/)
|
||||
{
|
||||
auto modified_context = Context::createCopy(local_context);
|
||||
modified_context->applySettingsChanges(settings_adjustments);
|
||||
|
@ -60,7 +60,8 @@ public:
|
||||
SinkToStoragePtr write(
|
||||
const ASTPtr & query,
|
||||
const StorageMetadataPtr & /*metadata_snapshot*/,
|
||||
ContextPtr context) override;
|
||||
ContextPtr context,
|
||||
bool async_insert) override;
|
||||
|
||||
/// We want to control the number of rows in a chunk inserted into Kafka
|
||||
bool prefersLargeBlocks() const override { return false; }
|
||||
|
@ -137,7 +137,7 @@ Pipe StorageMeiliSearch::read(
|
||||
return Pipe(std::make_shared<MeiliSearchSource>(config, sample_block, max_block_size, route, kv_pairs_params));
|
||||
}
|
||||
|
||||
SinkToStoragePtr StorageMeiliSearch::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr local_context)
|
||||
SinkToStoragePtr StorageMeiliSearch::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr local_context, bool /*async_insert*/)
|
||||
{
|
||||
LOG_TRACE(log, "Trying update index: {}", config.index);
|
||||
return std::make_shared<SinkMeiliSearch>(config, metadata_snapshot->getSampleBlock(), local_context);
|
||||
|
@ -26,7 +26,7 @@ public:
|
||||
size_t max_block_size,
|
||||
size_t num_streams) override;
|
||||
|
||||
SinkToStoragePtr write(const ASTPtr & query, const StorageMetadataPtr & metadata_snapshot, ContextPtr local_context) override;
|
||||
SinkToStoragePtr write(const ASTPtr & query, const StorageMetadataPtr & metadata_snapshot, ContextPtr local_context, bool async_insert) override;
|
||||
|
||||
static MeiliSearchConfiguration getConfiguration(ASTs engine_args, ContextPtr context);
|
||||
|
||||
|
@ -1,17 +1,15 @@
|
||||
#include <Storages/MergeTree/CommonANNIndexes.h>
|
||||
#include <Storages/MergeTree/KeyCondition.h>
|
||||
#include <Storages/MergeTree/ApproximateNearestNeighborIndexesCommon.h>
|
||||
|
||||
#include <Interpreters/Context.h>
|
||||
#include <Parsers/ASTFunction.h>
|
||||
#include <Parsers/ASTIdentifier.h>
|
||||
#include <Parsers/ASTLiteral.h>
|
||||
#include <Parsers/ASTOrderByElement.h>
|
||||
#include <Parsers/ASTSelectQuery.h>
|
||||
#include <Parsers/ASTSetQuery.h>
|
||||
|
||||
#include <Storages/MergeTree/KeyCondition.h>
|
||||
#include <Storages/MergeTree/MergeTreeSettings.h>
|
||||
|
||||
#include <Interpreters/Context.h>
|
||||
|
||||
namespace DB
|
||||
{
|
||||
|
||||
@ -24,208 +22,166 @@ namespace ErrorCodes
|
||||
namespace
|
||||
{
|
||||
|
||||
namespace ANN = ApproximateNearestNeighbour;
|
||||
|
||||
template <typename Literal>
|
||||
void extractTargetVectorFromLiteral(ANN::ANNQueryInformation::Embedding & target, Literal literal)
|
||||
void extractReferenceVectorFromLiteral(ApproximateNearestNeighborInformation::Embedding & reference_vector, Literal literal)
|
||||
{
|
||||
Float64 float_element_of_target_vector;
|
||||
Int64 int_element_of_target_vector;
|
||||
Float64 float_element_of_reference_vector;
|
||||
Int64 int_element_of_reference_vector;
|
||||
|
||||
for (const auto & value : literal.value())
|
||||
{
|
||||
if (value.tryGet(float_element_of_target_vector))
|
||||
{
|
||||
target.emplace_back(float_element_of_target_vector);
|
||||
}
|
||||
else if (value.tryGet(int_element_of_target_vector))
|
||||
{
|
||||
target.emplace_back(static_cast<float>(int_element_of_target_vector));
|
||||
}
|
||||
if (value.tryGet(float_element_of_reference_vector))
|
||||
reference_vector.emplace_back(float_element_of_reference_vector);
|
||||
else if (value.tryGet(int_element_of_reference_vector))
|
||||
reference_vector.emplace_back(static_cast<float>(int_element_of_reference_vector));
|
||||
else
|
||||
{
|
||||
throw Exception(ErrorCodes::INCORRECT_QUERY, "Wrong type of elements in target vector. Only float or int are supported.");
|
||||
}
|
||||
throw Exception(ErrorCodes::INCORRECT_QUERY, "Wrong type of elements in reference vector. Only float or int are supported.");
|
||||
}
|
||||
}
|
||||
|
||||
ANN::ANNQueryInformation::Metric castMetricFromStringToType(String metric_name)
|
||||
ApproximateNearestNeighborInformation::Metric stringToMetric(std::string_view metric)
|
||||
{
|
||||
if (metric_name == "L2Distance")
|
||||
return ANN::ANNQueryInformation::Metric::L2;
|
||||
if (metric_name == "LpDistance")
|
||||
return ANN::ANNQueryInformation::Metric::Lp;
|
||||
return ANN::ANNQueryInformation::Metric::Unknown;
|
||||
if (metric == "L2Distance")
|
||||
return ApproximateNearestNeighborInformation::Metric::L2;
|
||||
else if (metric == "LpDistance")
|
||||
return ApproximateNearestNeighborInformation::Metric::Lp;
|
||||
else
|
||||
return ApproximateNearestNeighborInformation::Metric::Unknown;
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
namespace ApproximateNearestNeighbour
|
||||
{
|
||||
ApproximateNearestNeighborCondition::ApproximateNearestNeighborCondition(const SelectQueryInfo & query_info, ContextPtr context)
|
||||
: block_with_constants(KeyCondition::getBlockWithConstants(query_info.query, query_info.syntax_analyzer_result, context))
|
||||
, index_granularity(context->getMergeTreeSettings().index_granularity)
|
||||
, max_limit_for_ann_queries(context->getSettings().max_limit_for_ann_queries)
|
||||
, index_is_useful(checkQueryStructure(query_info))
|
||||
{}
|
||||
|
||||
ANNCondition::ANNCondition(const SelectQueryInfo & query_info,
|
||||
ContextPtr context) :
|
||||
block_with_constants{KeyCondition::getBlockWithConstants(query_info.query, query_info.syntax_analyzer_result, context)},
|
||||
ann_index_select_query_params{context->getSettings().get("ann_index_select_query_params").get<String>()},
|
||||
index_granularity{context->getMergeTreeSettings().get("index_granularity").get<UInt64>()},
|
||||
limit_restriction{context->getSettings().get("max_limit_for_ann_queries").get<UInt64>()},
|
||||
index_is_useful{checkQueryStructure(query_info)} {}
|
||||
|
||||
bool ANNCondition::alwaysUnknownOrTrue(String metric_name) const
|
||||
bool ApproximateNearestNeighborCondition::alwaysUnknownOrTrue(String metric) const
|
||||
{
|
||||
if (!index_is_useful)
|
||||
{
|
||||
return true; // Query isn't supported
|
||||
}
|
||||
// If query is supported, check metrics for match
|
||||
return !(castMetricFromStringToType(metric_name) == query_information->metric);
|
||||
return !(stringToMetric(metric) == query_information->metric);
|
||||
}
|
||||
|
||||
float ANNCondition::getComparisonDistanceForWhereQuery() const
|
||||
float ApproximateNearestNeighborCondition::getComparisonDistanceForWhereQuery() const
|
||||
{
|
||||
if (index_is_useful && query_information.has_value()
|
||||
&& query_information->query_type == ANNQueryInformation::Type::Where)
|
||||
{
|
||||
&& query_information->type == ApproximateNearestNeighborInformation::Type::Where)
|
||||
return query_information->distance;
|
||||
}
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Not supported method for this query type");
|
||||
}
|
||||
|
||||
UInt64 ANNCondition::getLimit() const
|
||||
UInt64 ApproximateNearestNeighborCondition::getLimit() const
|
||||
{
|
||||
if (index_is_useful && query_information.has_value())
|
||||
{
|
||||
return query_information->limit;
|
||||
}
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "No LIMIT section in query, not supported");
|
||||
}
|
||||
|
||||
std::vector<float> ANNCondition::getTargetVector() const
|
||||
std::vector<float> ApproximateNearestNeighborCondition::getReferenceVector() const
|
||||
{
|
||||
if (index_is_useful && query_information.has_value())
|
||||
{
|
||||
return query_information->target;
|
||||
}
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Target vector was requested for useless or uninitialized index.");
|
||||
return query_information->reference_vector;
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Reference vector was requested for useless or uninitialized index.");
|
||||
}
|
||||
|
||||
size_t ANNCondition::getNumOfDimensions() const
|
||||
size_t ApproximateNearestNeighborCondition::getNumOfDimensions() const
|
||||
{
|
||||
if (index_is_useful && query_information.has_value())
|
||||
{
|
||||
return query_information->target.size();
|
||||
}
|
||||
return query_information->reference_vector.size();
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Number of dimensions was requested for useless or uninitialized index.");
|
||||
}
|
||||
|
||||
String ANNCondition::getColumnName() const
|
||||
String ApproximateNearestNeighborCondition::getColumnName() const
|
||||
{
|
||||
if (index_is_useful && query_information.has_value())
|
||||
{
|
||||
return query_information->column_name;
|
||||
}
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Column name was requested for useless or uninitialized index.");
|
||||
}
|
||||
|
||||
ANNQueryInformation::Metric ANNCondition::getMetricType() const
|
||||
ApproximateNearestNeighborInformation::Metric ApproximateNearestNeighborCondition::getMetricType() const
|
||||
{
|
||||
if (index_is_useful && query_information.has_value())
|
||||
{
|
||||
return query_information->metric;
|
||||
}
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Metric name was requested for useless or uninitialized index.");
|
||||
}
|
||||
|
||||
float ANNCondition::getPValueForLpDistance() const
|
||||
float ApproximateNearestNeighborCondition::getPValueForLpDistance() const
|
||||
{
|
||||
if (index_is_useful && query_information.has_value())
|
||||
{
|
||||
return query_information->p_for_lp_dist;
|
||||
}
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "P from LPDistance was requested for useless or uninitialized index.");
|
||||
}
|
||||
|
||||
ANNQueryInformation::Type ANNCondition::getQueryType() const
|
||||
ApproximateNearestNeighborInformation::Type ApproximateNearestNeighborCondition::getQueryType() const
|
||||
{
|
||||
if (index_is_useful && query_information.has_value())
|
||||
{
|
||||
return query_information->query_type;
|
||||
}
|
||||
return query_information->type;
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Query type was requested for useless or uninitialized index.");
|
||||
}
|
||||
|
||||
bool ANNCondition::checkQueryStructure(const SelectQueryInfo & query)
|
||||
bool ApproximateNearestNeighborCondition::checkQueryStructure(const SelectQueryInfo & query)
|
||||
{
|
||||
// RPN-s for different sections of the query
|
||||
/// RPN-s for different sections of the query
|
||||
RPN rpn_prewhere_clause;
|
||||
RPN rpn_where_clause;
|
||||
RPN rpn_order_by_clause;
|
||||
RPNElement rpn_limit;
|
||||
UInt64 limit;
|
||||
|
||||
ANNQueryInformation prewhere_info;
|
||||
ANNQueryInformation where_info;
|
||||
ANNQueryInformation order_by_info;
|
||||
ApproximateNearestNeighborInformation prewhere_info;
|
||||
ApproximateNearestNeighborInformation where_info;
|
||||
ApproximateNearestNeighborInformation order_by_info;
|
||||
|
||||
// Build rpns for query sections
|
||||
/// Build rpns for query sections
|
||||
const auto & select = query.query->as<ASTSelectQuery &>();
|
||||
|
||||
if (select.prewhere()) // If query has PREWHERE clause
|
||||
{
|
||||
/// If query has PREWHERE clause
|
||||
if (select.prewhere())
|
||||
traverseAST(select.prewhere(), rpn_prewhere_clause);
|
||||
}
|
||||
|
||||
if (select.where()) // If query has WHERE clause
|
||||
{
|
||||
/// If query has WHERE clause
|
||||
if (select.where())
|
||||
traverseAST(select.where(), rpn_where_clause);
|
||||
}
|
||||
|
||||
if (select.limitLength()) // If query has LIMIT clause
|
||||
{
|
||||
/// If query has LIMIT clause
|
||||
if (select.limitLength())
|
||||
traverseAtomAST(select.limitLength(), rpn_limit);
|
||||
}
|
||||
|
||||
if (select.orderBy()) // If query has ORDERBY clause
|
||||
{
|
||||
traverseOrderByAST(select.orderBy(), rpn_order_by_clause);
|
||||
}
|
||||
|
||||
// Reverse RPNs for conveniences during parsing
|
||||
/// Reverse RPNs for conveniences during parsing
|
||||
std::reverse(rpn_prewhere_clause.begin(), rpn_prewhere_clause.end());
|
||||
std::reverse(rpn_where_clause.begin(), rpn_where_clause.end());
|
||||
std::reverse(rpn_order_by_clause.begin(), rpn_order_by_clause.end());
|
||||
|
||||
// Match rpns with supported types and extract information
|
||||
/// Match rpns with supported types and extract information
|
||||
const bool prewhere_is_valid = matchRPNWhere(rpn_prewhere_clause, prewhere_info);
|
||||
const bool where_is_valid = matchRPNWhere(rpn_where_clause, where_info);
|
||||
const bool order_by_is_valid = matchRPNOrderBy(rpn_order_by_clause, order_by_info);
|
||||
const bool limit_is_valid = matchRPNLimit(rpn_limit, limit);
|
||||
|
||||
// Query without a LIMIT clause or with a limit greater than a restriction is not supported
|
||||
if (!limit_is_valid || limit_restriction < limit)
|
||||
{
|
||||
/// Query without a LIMIT clause or with a limit greater than a restriction is not supported
|
||||
if (!limit_is_valid || max_limit_for_ann_queries < limit)
|
||||
return false;
|
||||
}
|
||||
|
||||
// Search type query in both sections isn't supported
|
||||
/// Search type query in both sections isn't supported
|
||||
if (prewhere_is_valid && where_is_valid)
|
||||
{
|
||||
return false;
|
||||
}
|
||||
|
||||
// Search type should be in WHERE or PREWHERE clause
|
||||
/// Search type should be in WHERE or PREWHERE clause
|
||||
if (prewhere_is_valid || where_is_valid)
|
||||
{
|
||||
query_information = std::move(prewhere_is_valid ? prewhere_info : where_info);
|
||||
}
|
||||
|
||||
if (order_by_is_valid)
|
||||
{
|
||||
// Query with valid where and order by type is not supported
|
||||
/// Query with valid where and order by type is not supported
|
||||
if (query_information.has_value())
|
||||
{
|
||||
return false;
|
||||
}
|
||||
|
||||
query_information = std::move(order_by_info);
|
||||
}
|
||||
@ -236,7 +192,7 @@ bool ANNCondition::checkQueryStructure(const SelectQueryInfo & query)
|
||||
return query_information.has_value();
|
||||
}
|
||||
|
||||
void ANNCondition::traverseAST(const ASTPtr & node, RPN & rpn)
|
||||
void ApproximateNearestNeighborCondition::traverseAST(const ASTPtr & node, RPN & rpn)
|
||||
{
|
||||
// If the node is ASTFunction, it may have children nodes
|
||||
if (const auto * func = node->as<ASTFunction>())
|
||||
@ -244,27 +200,23 @@ void ANNCondition::traverseAST(const ASTPtr & node, RPN & rpn)
|
||||
const ASTs & children = func->arguments->children;
|
||||
// Traverse children nodes
|
||||
for (const auto& child : children)
|
||||
{
|
||||
traverseAST(child, rpn);
|
||||
}
|
||||
}
|
||||
|
||||
RPNElement element;
|
||||
// Get the data behind node
|
||||
/// Get the data behind node
|
||||
if (!traverseAtomAST(node, element))
|
||||
{
|
||||
element.function = RPNElement::FUNCTION_UNKNOWN;
|
||||
}
|
||||
|
||||
rpn.emplace_back(std::move(element));
|
||||
}
|
||||
|
||||
bool ANNCondition::traverseAtomAST(const ASTPtr & node, RPNElement & out)
|
||||
bool ApproximateNearestNeighborCondition::traverseAtomAST(const ASTPtr & node, RPNElement & out)
|
||||
{
|
||||
// Match Functions
|
||||
/// Match Functions
|
||||
if (const auto * function = node->as<ASTFunction>())
|
||||
{
|
||||
// Set the name
|
||||
/// Set the name
|
||||
out.func_name = function->name;
|
||||
|
||||
if (function->name == "L1Distance" ||
|
||||
@ -273,36 +225,24 @@ bool ANNCondition::traverseAtomAST(const ASTPtr & node, RPNElement & out)
|
||||
function->name == "cosineDistance" ||
|
||||
function->name == "dotProduct" ||
|
||||
function->name == "LpDistance")
|
||||
{
|
||||
out.function = RPNElement::FUNCTION_DISTANCE;
|
||||
}
|
||||
else if (function->name == "tuple")
|
||||
{
|
||||
out.function = RPNElement::FUNCTION_TUPLE;
|
||||
}
|
||||
else if (function->name == "array")
|
||||
{
|
||||
out.function = RPNElement::FUNCTION_ARRAY;
|
||||
}
|
||||
else if (function->name == "less" ||
|
||||
function->name == "greater" ||
|
||||
function->name == "lessOrEquals" ||
|
||||
function->name == "greaterOrEquals")
|
||||
{
|
||||
out.function = RPNElement::FUNCTION_COMPARISON;
|
||||
}
|
||||
else if (function->name == "_CAST")
|
||||
{
|
||||
out.function = RPNElement::FUNCTION_CAST;
|
||||
}
|
||||
else
|
||||
{
|
||||
return false;
|
||||
}
|
||||
|
||||
return true;
|
||||
}
|
||||
// Match identifier
|
||||
/// Match identifier
|
||||
else if (const auto * identifier = node->as<ASTIdentifier>())
|
||||
{
|
||||
out.function = RPNElement::FUNCTION_IDENTIFIER;
|
||||
@ -312,11 +252,11 @@ bool ANNCondition::traverseAtomAST(const ASTPtr & node, RPNElement & out)
|
||||
return true;
|
||||
}
|
||||
|
||||
// Check if we have constants behind the node
|
||||
/// Check if we have constants behind the node
|
||||
return tryCastToConstType(node, out);
|
||||
}
|
||||
|
||||
bool ANNCondition::tryCastToConstType(const ASTPtr & node, RPNElement & out)
|
||||
bool ApproximateNearestNeighborCondition::tryCastToConstType(const ASTPtr & node, RPNElement & out)
|
||||
{
|
||||
Field const_value;
|
||||
DataTypePtr const_type;
|
||||
@ -375,37 +315,29 @@ bool ANNCondition::tryCastToConstType(const ASTPtr & node, RPNElement & out)
|
||||
return false;
|
||||
}
|
||||
|
||||
void ANNCondition::traverseOrderByAST(const ASTPtr & node, RPN & rpn)
|
||||
void ApproximateNearestNeighborCondition::traverseOrderByAST(const ASTPtr & node, RPN & rpn)
|
||||
{
|
||||
if (const auto * expr_list = node->as<ASTExpressionList>())
|
||||
{
|
||||
if (const auto * order_by_element = expr_list->children.front()->as<ASTOrderByElement>())
|
||||
{
|
||||
traverseAST(order_by_element->children.front(), rpn);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Returns true and stores ANNQueryInformation if the query has valid WHERE clause
|
||||
bool ANNCondition::matchRPNWhere(RPN & rpn, ANNQueryInformation & expr)
|
||||
/// Returns true and stores ApproximateNearestNeighborInformation if the query has valid WHERE clause
|
||||
bool ApproximateNearestNeighborCondition::matchRPNWhere(RPN & rpn, ApproximateNearestNeighborInformation & ann_info)
|
||||
{
|
||||
/// Fill query type field
|
||||
expr.query_type = ANNQueryInformation::Type::Where;
|
||||
ann_info.type = ApproximateNearestNeighborInformation::Type::Where;
|
||||
|
||||
// WHERE section must have at least 5 expressions
// Operator->Distance(float)->DistanceFunc->Column->Tuple(Array)Func(TargetVector(floats))
/// WHERE section must have at least 5 expressions
/// Operator->Distance(float)->DistanceFunc->Column->Tuple(Array)Func(ReferenceVector(floats))
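    /// Illustrative shape (an assumption about a typical query, not taken from this patch):
    /// WHERE L2Distance(vec, [0.1, 0.2]) < 1.0 is expected to arrive here, after the reversal in
    /// checkQueryStructure, roughly as: less -> 1.0 -> L2Distance -> vec -> [0.1, 0.2]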
if (rpn.size() < 5)
|
||||
{
|
||||
return false;
|
||||
}
|
||||
|
||||
auto iter = rpn.begin();
|
||||
|
||||
// Query starts from operator less
|
||||
/// Query starts from operator less
|
||||
if (iter->function != RPNElement::FUNCTION_COMPARISON)
|
||||
{
|
||||
return false;
|
||||
}
|
||||
|
||||
const bool greater_case = iter->func_name == "greater" || iter->func_name == "greaterOrEquals";
|
||||
const bool less_case = iter->func_name == "less" || iter->func_name == "lessOrEquals";
|
||||
@ -415,64 +347,54 @@ bool ANNCondition::matchRPNWhere(RPN & rpn, ANNQueryInformation & expr)
|
||||
if (less_case)
|
||||
{
|
||||
if (iter->function != RPNElement::FUNCTION_FLOAT_LITERAL)
|
||||
{
|
||||
return false;
|
||||
}
|
||||
|
||||
expr.distance = getFloatOrIntLiteralOrPanic(iter);
|
||||
if (expr.distance < 0)
|
||||
throw Exception(ErrorCodes::INCORRECT_QUERY, "Distance can't be negative. Got {}", expr.distance);
|
||||
ann_info.distance = getFloatOrIntLiteralOrPanic(iter);
|
||||
if (ann_info.distance < 0)
|
||||
throw Exception(ErrorCodes::INCORRECT_QUERY, "Distance can't be negative. Got {}", ann_info.distance);
|
||||
|
||||
++iter;
|
||||
|
||||
}
|
||||
else if (!greater_case)
|
||||
{
|
||||
return false;
|
||||
}
|
||||
|
||||
auto end = rpn.end();
|
||||
if (!matchMainParts(iter, end, expr))
|
||||
{
|
||||
if (!matchMainParts(iter, end, ann_info))
|
||||
return false;
|
||||
}
|
||||
|
||||
if (greater_case)
|
||||
{
|
||||
if (expr.target.size() < 2)
|
||||
{
|
||||
if (ann_info.reference_vector.size() < 2)
|
||||
return false;
|
||||
}
|
||||
expr.distance = expr.target.back();
|
||||
if (expr.distance < 0)
|
||||
throw Exception(ErrorCodes::INCORRECT_QUERY, "Distance can't be negative. Got {}", expr.distance);
|
||||
expr.target.pop_back();
|
||||
ann_info.distance = ann_info.reference_vector.back();
|
||||
if (ann_info.distance < 0)
|
||||
throw Exception(ErrorCodes::INCORRECT_QUERY, "Distance can't be negative. Got {}", ann_info.distance);
|
||||
ann_info.reference_vector.pop_back();
|
||||
}
|
||||
|
||||
// query is ok
|
||||
/// query is ok
|
||||
return true;
|
||||
}
|
||||
|
||||
// Returns true and stores ANNExpr if the query has valid ORDERBY clause
|
||||
bool ANNCondition::matchRPNOrderBy(RPN & rpn, ANNQueryInformation & expr)
|
||||
/// Returns true and stores ANNExpr if the query has valid ORDERBY clause
|
||||
bool ApproximateNearestNeighborCondition::matchRPNOrderBy(RPN & rpn, ApproximateNearestNeighborInformation & ann_info)
|
||||
{
|
||||
/// Fill query type field
|
||||
expr.query_type = ANNQueryInformation::Type::OrderBy;
|
||||
ann_info.type = ApproximateNearestNeighborInformation::Type::OrderBy;
|
||||
|
||||
// ORDER BY clause must have at least 3 expressions
|
||||
if (rpn.size() < 3)
|
||||
{
|
||||
return false;
|
||||
}
|
||||
|
||||
auto iter = rpn.begin();
|
||||
auto end = rpn.end();
|
||||
|
||||
return ANNCondition::matchMainParts(iter, end, expr);
|
||||
return ApproximateNearestNeighborCondition::matchMainParts(iter, end, ann_info);
|
||||
}
|
||||
|
||||
// Returns true and stores Length if we have valid LIMIT clause in query
|
||||
bool ANNCondition::matchRPNLimit(RPNElement & rpn, UInt64 & limit)
|
||||
/// Returns true and stores Length if we have valid LIMIT clause in query
|
||||
bool ApproximateNearestNeighborCondition::matchRPNLimit(RPNElement & rpn, UInt64 & limit)
|
||||
{
|
||||
if (rpn.function == RPNElement::FUNCTION_INT_LITERAL)
|
||||
{
|
||||
@ -483,52 +405,46 @@ bool ANNCondition::matchRPNLimit(RPNElement & rpn, UInt64 & limit)
|
||||
return false;
|
||||
}
|
||||
|
||||
/* Matches dist function, target vector, column name */
bool ANNCondition::matchMainParts(RPN::iterator & iter, const RPN::iterator & end, ANNQueryInformation & expr)
/// Matches dist function, reference vector, column name
bool ApproximateNearestNeighborCondition::matchMainParts(RPN::iterator & iter, const RPN::iterator & end, ApproximateNearestNeighborInformation & ann_info)
{
|
||||
bool identifier_found = false;
|
||||
|
||||
// Matches DistanceFunc->[Column]->[Tuple(array)Func]->TargetVector(floats)->[Column]
|
||||
/// Matches DistanceFunc->[Column]->[Tuple(array)Func]->ReferenceVector(floats)->[Column]
|
||||
if (iter->function != RPNElement::FUNCTION_DISTANCE)
|
||||
{
|
||||
return false;
|
||||
}
|
||||
|
||||
expr.metric = castMetricFromStringToType(iter->func_name);
|
||||
ann_info.metric = stringToMetric(iter->func_name);
|
||||
++iter;
|
||||
|
||||
if (expr.metric == ANN::ANNQueryInformation::Metric::Lp)
|
||||
if (ann_info.metric == ApproximateNearestNeighborInformation::Metric::Lp)
|
||||
{
|
||||
if (iter->function != RPNElement::FUNCTION_FLOAT_LITERAL &&
|
||||
iter->function != RPNElement::FUNCTION_INT_LITERAL)
|
||||
{
|
||||
return false;
|
||||
}
|
||||
expr.p_for_lp_dist = getFloatOrIntLiteralOrPanic(iter);
|
||||
ann_info.p_for_lp_dist = getFloatOrIntLiteralOrPanic(iter);
|
||||
++iter;
|
||||
}
|
||||
|
||||
if (iter->function == RPNElement::FUNCTION_IDENTIFIER)
|
||||
{
|
||||
identifier_found = true;
|
||||
expr.column_name = std::move(iter->identifier.value());
|
||||
ann_info.column_name = std::move(iter->identifier.value());
|
||||
++iter;
|
||||
}
|
||||
|
||||
if (iter->function == RPNElement::FUNCTION_TUPLE || iter->function == RPNElement::FUNCTION_ARRAY)
|
||||
{
|
||||
++iter;
|
||||
}
|
||||
|
||||
if (iter->function == RPNElement::FUNCTION_LITERAL_TUPLE)
|
||||
{
|
||||
extractTargetVectorFromLiteral(expr.target, iter->tuple_literal);
|
||||
extractReferenceVectorFromLiteral(ann_info.reference_vector, iter->tuple_literal);
|
||||
++iter;
|
||||
}
|
||||
|
||||
if (iter->function == RPNElement::FUNCTION_LITERAL_ARRAY)
|
||||
{
|
||||
extractTargetVectorFromLiteral(expr.target, iter->array_literal);
|
||||
extractReferenceVectorFromLiteral(ann_info.reference_vector, iter->array_literal);
|
||||
++iter;
|
||||
}
|
||||
|
||||
@ -539,68 +455,52 @@ bool ANNCondition::matchMainParts(RPN::iterator & iter, const RPN::iterator & en
|
||||
++iter;
|
||||
/// Cast should be made to array or tuple
|
||||
if (!iter->func_name.starts_with("Array") && !iter->func_name.starts_with("Tuple"))
|
||||
{
|
||||
return false;
|
||||
}
|
||||
++iter;
|
||||
if (iter->function == RPNElement::FUNCTION_LITERAL_TUPLE)
|
||||
{
|
||||
extractTargetVectorFromLiteral(expr.target, iter->tuple_literal);
|
||||
extractReferenceVectorFromLiteral(ann_info.reference_vector, iter->tuple_literal);
|
||||
++iter;
|
||||
}
|
||||
else if (iter->function == RPNElement::FUNCTION_LITERAL_ARRAY)
|
||||
{
|
||||
extractTargetVectorFromLiteral(expr.target, iter->array_literal);
|
||||
extractReferenceVectorFromLiteral(ann_info.reference_vector, iter->array_literal);
|
||||
++iter;
|
||||
}
|
||||
else
|
||||
{
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
while (iter != end)
|
||||
{
|
||||
if (iter->function == RPNElement::FUNCTION_FLOAT_LITERAL ||
|
||||
iter->function == RPNElement::FUNCTION_INT_LITERAL)
|
||||
{
|
||||
expr.target.emplace_back(getFloatOrIntLiteralOrPanic(iter));
|
||||
}
|
||||
ann_info.reference_vector.emplace_back(getFloatOrIntLiteralOrPanic(iter));
|
||||
else if (iter->function == RPNElement::FUNCTION_IDENTIFIER)
|
||||
{
|
||||
if (identifier_found)
|
||||
{
|
||||
return false;
|
||||
}
|
||||
expr.column_name = std::move(iter->identifier.value());
|
||||
ann_info.column_name = std::move(iter->identifier.value());
|
||||
identifier_found = true;
|
||||
}
|
||||
else
|
||||
{
|
||||
return false;
|
||||
}
|
||||
|
||||
++iter;
|
||||
}
|
||||
|
||||
// Final checks of correctness
|
||||
return identifier_found && !expr.target.empty();
|
||||
/// Final checks of correctness
|
||||
return identifier_found && !ann_info.reference_vector.empty();
|
||||
}
|
||||
|
||||
// Gets float or int from AST node
|
||||
float ANNCondition::getFloatOrIntLiteralOrPanic(const RPN::iterator& iter)
|
||||
/// Gets float or int from AST node
|
||||
float ApproximateNearestNeighborCondition::getFloatOrIntLiteralOrPanic(const RPN::iterator& iter)
|
||||
{
|
||||
if (iter->float_literal.has_value())
|
||||
{
|
||||
return iter->float_literal.value();
|
||||
}
|
||||
if (iter->int_literal.has_value())
|
||||
{
|
||||
return static_cast<float>(iter->int_literal.value());
|
||||
}
|
||||
throw Exception(ErrorCodes::INCORRECT_QUERY, "Wrong parsed AST in buildRPN\n");
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
}
|
223 src/Storages/MergeTree/ApproximateNearestNeighborIndexesCommon.h Normal file
@@ -0,0 +1,223 @@
#pragma once

#include <Storages/MergeTree/MergeTreeIndices.h>
#include "base/types.h"

#include <optional>
#include <vector>

namespace DB
{

/// Approximate Nearest Neighbour queries have a similar structure:
/// - reference vector from which all distances are calculated
/// - metric name (e.g L2Distance, LpDistance, etc.)
/// - name of column with embeddings
/// - type of query
/// - maximum number of returned elements (LIMIT)
///
/// And two optional parameters:
/// - p for LpDistance function
/// - distance to compare with (only for where queries)
///
/// This struct holds all these components.
struct ApproximateNearestNeighborInformation
{
    using Embedding = std::vector<float>;
    Embedding reference_vector;

    enum class Metric
    {
        Unknown,
        L2,
        Lp
    };
    Metric metric;

    String column_name;
    UInt64 limit;

    enum class Type
    {
        OrderBy,
        Where
    };
    Type type;

    float p_for_lp_dist = -1.0;
    float distance = -1.0;
};

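/// For illustration only (assumed values, not taken from this patch): for a query such as
///     SELECT id FROM tab ORDER BY L2Distance(vec, [0.1, 0.2]) LIMIT 5
/// this struct would roughly hold reference_vector = {0.1, 0.2}, metric = Metric::L2,
/// column_name = "vec", limit = 5 and type = Type::OrderBy.
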
/// Class ApproximateNearestNeighborCondition is responsible for recognizing whether the query is an ANN query which can utilize ANN indexes. It parses the SQL query
/// and checks if it matches ANNIndexes. Method alwaysUnknownOrTrue returns false if we can speed up the query, and true otherwise. It has
/// only one argument, the name of the metric with which the index was built. Two main patterns of queries are supported
///
/// - 1. WHERE queries:
///    SELECT * FROM * WHERE DistanceFunc(column, reference_vector) < floatLiteral LIMIT count
///
/// - 2. ORDER BY queries:
///    SELECT * FROM * WHERE * ORDER BY DistanceFunc(column, reference_vector) LIMIT count
///
/// Queries without a LIMIT count are not supported.
/// If the query is both of type 1. and 2., then we can't use the index and alwaysUnknownOrTrue returns true.
/// reference_vector should have float coordinates, e.g. (0.2, 0.1, .., 0.5)
///
/// If the query matches one of these two types, then this class extracts the main information needed for ANN indexes from the query.
///
/// From a matching query it extracts
/// - referenceVector
/// - metricName (DistanceFunction)
/// - dimension size if the query uses LpDistance
/// - distance to compare with (ONLY for WHERE queries, otherwise you get an exception)
/// - spaceDimension (which is referenceVector's component count)
/// - column
/// - objects count from the LIMIT clause (for both query types)
/// - queryHasOrderByClause and queryHasWhereClause return true if the query matches the type
///
/// The search (WHERE) query type is also recognized for the PREWHERE clause.
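/// A minimal usage sketch (illustrative only, not part of this patch; assumes an index built with the
/// "L2Distance" metric and a query_info / context pair already available at the call site):
///
///     ApproximateNearestNeighborCondition ann_condition(query_info, context);
///     if (!ann_condition.alwaysUnknownOrTrue("L2Distance"))
///     {
///         const std::vector<float> reference = ann_condition.getReferenceVector();
///         const UInt64 limit = ann_condition.getLimit();
///         /// ... ask the concrete ANN index for the granules closest to `reference`, at most `limit` hits ...
///     }
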
class ApproximateNearestNeighborCondition
|
||||
{
|
||||
public:
|
||||
ApproximateNearestNeighborCondition(const SelectQueryInfo & query_info, ContextPtr context);
|
||||
|
||||
/// Returns false if query can be speeded up by an ANN index, true otherwise.
|
||||
bool alwaysUnknownOrTrue(String metric) const;
|
||||
|
||||
/// Returns the distance to compare with for search query
|
||||
float getComparisonDistanceForWhereQuery() const;
|
||||
|
||||
/// Distance should be calculated regarding to referenceVector
|
||||
std::vector<float> getReferenceVector() const;
|
||||
|
||||
/// Reference vector's dimension size
|
||||
size_t getNumOfDimensions() const;
|
||||
|
||||
String getColumnName() const;
|
||||
|
||||
ApproximateNearestNeighborInformation::Metric getMetricType() const;
|
||||
|
||||
/// The P- value if the metric is 'LpDistance'
|
||||
float getPValueForLpDistance() const;
|
||||
|
||||
ApproximateNearestNeighborInformation::Type getQueryType() const;
|
||||
|
||||
UInt64 getIndexGranularity() const { return index_granularity; }
|
||||
|
||||
/// Length's value from LIMIT clause
|
||||
UInt64 getLimit() const;
|
||||
|
||||
private:
|
||||
struct RPNElement
|
||||
{
|
||||
enum Function
|
||||
{
|
||||
/// DistanceFunctions
|
||||
FUNCTION_DISTANCE,
|
||||
|
||||
//tuple(0.1, ..., 0.1)
|
||||
FUNCTION_TUPLE,
|
||||
|
||||
//array(0.1, ..., 0.1)
|
||||
FUNCTION_ARRAY,
|
||||
|
||||
/// Operators <, >, <=, >=
|
||||
FUNCTION_COMPARISON,
|
||||
|
||||
/// Numeric float value
|
||||
FUNCTION_FLOAT_LITERAL,
|
||||
|
||||
/// Numeric int value
|
||||
FUNCTION_INT_LITERAL,
|
||||
|
||||
/// Column identifier
|
||||
FUNCTION_IDENTIFIER,
|
||||
|
||||
/// Unknown, can be any value
|
||||
FUNCTION_UNKNOWN,
|
||||
|
||||
/// (0.1, ...., 0.1) vector without word 'tuple'
|
||||
FUNCTION_LITERAL_TUPLE,
|
||||
|
||||
/// [0.1, ...., 0.1] vector without word 'array'
|
||||
FUNCTION_LITERAL_ARRAY,
|
||||
|
||||
/// if client parameters are used, cast will always be in the query
|
||||
FUNCTION_CAST,
|
||||
|
||||
/// name of type in cast function
|
||||
FUNCTION_STRING_LITERAL,
|
||||
};
|
||||
|
||||
explicit RPNElement(Function function_ = FUNCTION_UNKNOWN)
|
||||
: function(function_)
|
||||
, func_name("Unknown")
|
||||
, float_literal(std::nullopt)
|
||||
, identifier(std::nullopt)
|
||||
{}
|
||||
|
||||
Function function;
|
||||
String func_name;
|
||||
|
||||
std::optional<float> float_literal;
|
||||
std::optional<String> identifier;
|
||||
std::optional<int64_t> int_literal;
|
||||
|
||||
std::optional<Tuple> tuple_literal;
|
||||
std::optional<Array> array_literal;
|
||||
|
||||
UInt32 dim = 0;
|
||||
};
|
||||
|
||||
using RPN = std::vector<RPNElement>;
|
||||
|
||||
bool checkQueryStructure(const SelectQueryInfo & query);
|
||||
|
||||
/// Util functions for the traversal of AST, parses AST and builds rpn
|
||||
void traverseAST(const ASTPtr & node, RPN & rpn);
|
||||
/// Return true if we can identify our node type
|
||||
bool traverseAtomAST(const ASTPtr & node, RPNElement & out);
|
||||
/// Checks if the AST stores ConstType expression
|
||||
bool tryCastToConstType(const ASTPtr & node, RPNElement & out);
|
||||
/// Traverses the AST of ORDERBY section
|
||||
void traverseOrderByAST(const ASTPtr & node, RPN & rpn);
|
||||
|
||||
/// Returns true and stores ANNExpr if the query has valid WHERE section
|
||||
static bool matchRPNWhere(RPN & rpn, ApproximateNearestNeighborInformation & ann_info);
|
||||
|
||||
/// Returns true and stores ANNExpr if the query has valid ORDERBY section
|
||||
static bool matchRPNOrderBy(RPN & rpn, ApproximateNearestNeighborInformation & ann_info);
|
||||
|
||||
/// Returns true and stores Length if we have valid LIMIT clause in query
|
||||
static bool matchRPNLimit(RPNElement & rpn, UInt64 & limit);
|
||||
|
||||
/* Matches dist function, reference vector, column name */
|
||||
static bool matchMainParts(RPN::iterator & iter, const RPN::iterator & end, ApproximateNearestNeighborInformation & ann_info);
|
||||
|
||||
/// Gets float or int from AST node
|
||||
static float getFloatOrIntLiteralOrPanic(const RPN::iterator& iter);
|
||||
|
||||
Block block_with_constants;
|
||||
|
||||
/// true if we have one of two supported query types
|
||||
std::optional<ApproximateNearestNeighborInformation> query_information;
|
||||
|
||||
// Get from settings ANNIndex parameters
|
||||
const UInt64 index_granularity;
|
||||
|
||||
/// only queries with a lower limit can be considered to avoid memory overflow
|
||||
const UInt64 max_limit_for_ann_queries;
|
||||
|
||||
bool index_is_useful = false;
|
||||
};
|
||||
|
||||
|
||||
/// Common interface of ANN indexes.
|
||||
class IMergeTreeIndexConditionApproximateNearestNeighbor : public IMergeTreeIndexCondition
|
||||
{
|
||||
public:
|
||||
/// Returns vector of indexes of ranges in granule which are useful for query.
|
||||
virtual std::vector<size_t> getUsefulRanges(MergeTreeIndexGranulePtr idx_granule) const = 0;
|
||||
};
|
||||
|
||||
}
|
@ -1,236 +0,0 @@
|
||||
#pragma once
|
||||
|
||||
#include <Storages/MergeTree/MergeTreeIndices.h>
|
||||
#include "base/types.h"
|
||||
|
||||
#include <optional>
|
||||
#include <vector>
|
||||
|
||||
namespace DB
|
||||
{
|
||||
|
||||
namespace ApproximateNearestNeighbour
|
||||
{
|
||||
|
||||
/**
|
||||
* Queries for Approximate Nearest Neighbour Search
|
||||
* have similar structure:
|
||||
* 1) target vector from which all distances are calculated
|
||||
* 2) metric name (e.g L2Distance, LpDistance, etc.)
|
||||
* 3) name of column with embeddings
|
||||
* 4) type of query
|
||||
* 5) Number of elements, that should be taken (limit)
|
||||
*
|
||||
* And two optional parameters:
|
||||
* 1) p for LpDistance function
|
||||
* 2) distance to compare with (only for where queries)
|
||||
*/
|
||||
struct ANNQueryInformation
|
||||
{
|
||||
using Embedding = std::vector<float>;
|
||||
|
||||
// Extracted data from valid query
|
||||
Embedding target;
|
||||
enum class Metric
|
||||
{
|
||||
Unknown,
|
||||
L2,
|
||||
Lp
|
||||
} metric;
|
||||
String column_name;
|
||||
UInt64 limit;
|
||||
|
||||
enum class Type
|
||||
{
|
||||
OrderBy,
|
||||
Where
|
||||
} query_type;
|
||||
|
||||
float p_for_lp_dist = -1.0;
|
||||
float distance = -1.0;
|
||||
};
|
||||
|
||||
/**
|
||||
Class ANNCondition, is responsible for recognizing special query types which
|
||||
can be speeded up by ANN Indexes. It parses the SQL query and checks
|
||||
if it matches ANNIndexes. The recognizing method - alwaysUnknownOrTrue
|
||||
returns false if we can speed up the query, and true otherwise.
|
||||
It has only one argument, name of the metric with which index was built.
|
||||
There are two main patterns of queries being supported
|
||||
|
||||
1) Search query type
|
||||
SELECT * FROM * WHERE DistanceFunc(column, target_vector) < floatLiteral LIMIT count
|
||||
|
||||
2) OrderBy query type
|
||||
SELECT * FROM * WHERE * ORDERBY DistanceFunc(column, target_vector) LIMIT count
|
||||
|
||||
*Query without LIMIT count is not supported*
|
||||
|
||||
target_vector(should have float coordinates) examples:
|
||||
tuple(0.1, 0.1, ...., 0.1) or (0.1, 0.1, ...., 0.1)
|
||||
[the word tuple is not needed]
|
||||
|
||||
If the query matches one of these two types, than the class extracts useful information
|
||||
from the query. If the query has both 1 and 2 types, than we can't speed and alwaysUnknownOrTrue
|
||||
returns true.
|
||||
|
||||
From matching query it extracts
|
||||
* targetVector
|
||||
* metricName(DistanceFunction)
|
||||
* dimension size if query uses LpDistance
|
||||
* distance to compare(ONLY for search types, otherwise you get exception)
|
||||
* spaceDimension(which is targetVector's components count)
|
||||
* column
|
||||
* objects count from LIMIT clause(for both queries)
|
||||
* settings str, if query has settings section with new 'ann_index_select_query_params' value,
|
||||
than you can get the new value(empty by default) calling method getSettingsStr
|
||||
* queryHasOrderByClause and queryHasWhereClause return true if query matches the type
|
||||
|
||||
Search query type is also recognized for PREWHERE clause
|
||||
*/
|
||||
|
||||
class ANNCondition
|
||||
{
|
||||
public:
|
||||
ANNCondition(const SelectQueryInfo & query_info,
|
||||
ContextPtr context);
|
||||
|
||||
// false if query can be speeded up, true otherwise
|
||||
bool alwaysUnknownOrTrue(String metric_name) const;
|
||||
|
||||
// returns the distance to compare with for search query
|
||||
float getComparisonDistanceForWhereQuery() const;
|
||||
|
||||
// distance should be calculated regarding to targetVector
|
||||
std::vector<float> getTargetVector() const;
|
||||
|
||||
// targetVector dimension size
|
||||
size_t getNumOfDimensions() const;
|
||||
|
||||
String getColumnName() const;
|
||||
|
||||
ANNQueryInformation::Metric getMetricType() const;
|
||||
|
||||
// the P- value if the metric is 'LpDistance'
|
||||
float getPValueForLpDistance() const;
|
||||
|
||||
ANNQueryInformation::Type getQueryType() const;
|
||||
|
||||
UInt64 getIndexGranularity() const { return index_granularity; }
|
||||
|
||||
// length's value from LIMIT clause
|
||||
UInt64 getLimit() const;
|
||||
|
||||
// value of 'ann_index_select_query_params' if have in SETTINGS clause, empty string otherwise
|
||||
String getParamsStr() const { return ann_index_select_query_params; }
|
||||
|
||||
private:
|
||||
|
||||
struct RPNElement
|
||||
{
|
||||
enum Function
|
||||
{
|
||||
// DistanceFunctions
|
||||
FUNCTION_DISTANCE,
|
||||
|
||||
//tuple(0.1, ..., 0.1)
|
||||
FUNCTION_TUPLE,
|
||||
|
||||
//array(0.1, ..., 0.1)
|
||||
FUNCTION_ARRAY,
|
||||
|
||||
// Operators <, >, <=, >=
|
||||
FUNCTION_COMPARISON,
|
||||
|
||||
// Numeric float value
|
||||
FUNCTION_FLOAT_LITERAL,
|
||||
|
||||
// Numeric int value
|
||||
FUNCTION_INT_LITERAL,
|
||||
|
||||
// Column identifier
|
||||
FUNCTION_IDENTIFIER,
|
||||
|
||||
// Unknown, can be any value
|
||||
FUNCTION_UNKNOWN,
|
||||
|
||||
// (0.1, ...., 0.1) vector without word 'tuple'
|
||||
FUNCTION_LITERAL_TUPLE,
|
||||
|
||||
// [0.1, ...., 0.1] vector without word 'array'
|
||||
FUNCTION_LITERAL_ARRAY,
|
||||
|
||||
// if client parameters are used, cast will always be in the query
|
||||
FUNCTION_CAST,
|
||||
|
||||
// name of type in cast function
|
||||
FUNCTION_STRING_LITERAL,
|
||||
};
|
||||
|
||||
explicit RPNElement(Function function_ = FUNCTION_UNKNOWN)
|
||||
: function(function_), func_name("Unknown"), float_literal(std::nullopt), identifier(std::nullopt) {}
|
||||
|
||||
Function function;
|
||||
String func_name;
|
||||
|
||||
std::optional<float> float_literal;
|
||||
std::optional<String> identifier;
|
||||
std::optional<int64_t> int_literal;
|
||||
|
||||
std::optional<Tuple> tuple_literal;
|
||||
std::optional<Array> array_literal;
|
||||
|
||||
UInt32 dim = 0;
|
||||
};
|
||||
|
||||
using RPN = std::vector<RPNElement>;
|
||||
|
||||
bool checkQueryStructure(const SelectQueryInfo & query);
|
||||
|
||||
// Util functions for the traversal of AST, parses AST and builds rpn
|
||||
void traverseAST(const ASTPtr & node, RPN & rpn);
|
||||
// Return true if we can identify our node type
|
||||
bool traverseAtomAST(const ASTPtr & node, RPNElement & out);
|
||||
// Checks if the AST stores ConstType expression
|
||||
bool tryCastToConstType(const ASTPtr & node, RPNElement & out);
|
||||
// Traverses the AST of ORDERBY section
|
||||
void traverseOrderByAST(const ASTPtr & node, RPN & rpn);
|
||||
|
||||
// Returns true and stores ANNExpr if the query has valid WHERE section
|
||||
static bool matchRPNWhere(RPN & rpn, ANNQueryInformation & expr);
|
||||
|
||||
// Returns true and stores ANNExpr if the query has valid ORDERBY section
|
||||
static bool matchRPNOrderBy(RPN & rpn, ANNQueryInformation & expr);
|
||||
|
||||
// Returns true and stores Length if we have valid LIMIT clause in query
|
||||
static bool matchRPNLimit(RPNElement & rpn, UInt64 & limit);
|
||||
|
||||
/* Matches dist function, target vector, column name */
|
||||
static bool matchMainParts(RPN::iterator & iter, const RPN::iterator & end, ANNQueryInformation & expr);
|
||||
|
||||
// Gets float or int from AST node
|
||||
static float getFloatOrIntLiteralOrPanic(const RPN::iterator& iter);
|
||||
|
||||
Block block_with_constants;
|
||||
|
||||
// true if we have one of two supported query types
|
||||
std::optional<ANNQueryInformation> query_information;
|
||||
|
||||
// Get from settings ANNIndex parameters
|
||||
String ann_index_select_query_params;
|
||||
UInt64 index_granularity;
|
||||
/// only queries with a lower limit can be considered to avoid memory overflow
|
||||
UInt64 limit_restriction;
|
||||
bool index_is_useful = false;
|
||||
};
|
||||
|
||||
// condition interface for Ann indexes. Returns vector of indexes of ranges in granule which are useful for query.
|
||||
class IMergeTreeIndexConditionAnn : public IMergeTreeIndexCondition
|
||||
{
|
||||
public:
|
||||
virtual std::vector<size_t> getUsefulRanges(MergeTreeIndexGranulePtr idx_granule) const = 0;
|
||||
};
|
||||
|
||||
}
|
||||
|
||||
}
|
@ -7154,6 +7154,9 @@ QueryProcessingStage::Enum MergeTreeData::getQueryProcessingStage(
|
||||
/// Parallel replicas
|
||||
if (query_context->canUseParallelReplicasOnInitiator() && to_stage >= QueryProcessingStage::WithMergeableState)
|
||||
{
|
||||
if (!canUseParallelReplicasBasedOnPKAnalysis(query_context, storage_snapshot, query_info))
|
||||
return QueryProcessingStage::Enum::FetchColumns;
|
||||
|
||||
/// ReplicatedMergeTree
|
||||
if (supportsReplication())
|
||||
return QueryProcessingStage::Enum::WithMergeableState;
|
||||
@ -7179,6 +7182,42 @@ QueryProcessingStage::Enum MergeTreeData::getQueryProcessingStage(
|
||||
}
|
||||
|
||||
|
||||
bool MergeTreeData::canUseParallelReplicasBasedOnPKAnalysis(
|
||||
ContextPtr query_context,
|
||||
const StorageSnapshotPtr & storage_snapshot,
|
||||
SelectQueryInfo & query_info) const
|
||||
{
|
||||
const auto & snapshot_data = assert_cast<const MergeTreeData::SnapshotData &>(*storage_snapshot->data);
|
||||
const auto & parts = snapshot_data.parts;
|
||||
|
||||
MergeTreeDataSelectExecutor reader(*this);
|
||||
auto result_ptr = reader.estimateNumMarksToRead(
|
||||
parts,
|
||||
query_info.prewhere_info,
|
||||
storage_snapshot->getMetadataForQuery()->getColumns().getAll().getNames(),
|
||||
storage_snapshot->metadata,
|
||||
storage_snapshot->metadata,
|
||||
query_info,
|
||||
/*added_filter_nodes*/ActionDAGNodes{},
|
||||
query_context,
|
||||
query_context->getSettingsRef().max_threads);
|
||||
|
||||
if (result_ptr->error())
|
||||
std::rethrow_exception(std::get<std::exception_ptr>(result_ptr->result));
|
||||
|
||||
LOG_TRACE(log, "Estimated number of granules to read is {}", result_ptr->marks());
|
||||
|
||||
bool decision = result_ptr->marks() >= query_context->getSettingsRef().parallel_replicas_min_number_of_granules_to_enable;
|
||||
|
||||
if (!decision)
|
||||
LOG_DEBUG(log, "Parallel replicas will be disabled, because the estimated number of granules to read {} is less than the threshold which is {}",
|
||||
result_ptr->marks(),
|
||||
query_context->getSettingsRef().parallel_replicas_min_number_of_granules_to_enable);
|
||||
|
||||
return decision;
|
||||
}
|
||||
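/// Rough illustration of the decision above (numbers are assumed, not taken from this patch): with
/// parallel_replicas_min_number_of_granules_to_enable = 1000, an estimated 4000 marks keeps parallel
/// replicas enabled for the query, while an estimated 150 marks disables them and logs the reason.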
|
||||
|
||||
MergeTreeData & MergeTreeData::checkStructureAndGetMergeTreeData(IStorage & source_table, const StorageMetadataPtr & src_snapshot, const StorageMetadataPtr & my_snapshot) const
|
||||
{
|
||||
MergeTreeData * src_data = dynamic_cast<MergeTreeData *>(&source_table);
|
||||
|
@ -1536,6 +1536,13 @@ private:
|
||||
static MutableDataPartPtr asMutableDeletingPart(const DataPartPtr & part);
|
||||
|
||||
mutable TemporaryParts temporary_parts;
|
||||
|
||||
/// Estimate the number of marks to read to make a decision whether to enable parallel replicas (distributed processing) or not
|
||||
/// Note: it could be very rough.
|
||||
bool canUseParallelReplicasBasedOnPKAnalysis(
|
||||
ContextPtr query_context,
|
||||
const StorageSnapshotPtr & storage_snapshot,
|
||||
SelectQueryInfo & query_info) const;
|
||||
};
|
||||
|
||||
/// RAII struct to record big parts that are submerging or emerging.
|
||||
|
@ -16,6 +16,7 @@
|
||||
#include <Parsers/ASTLiteral.h>
|
||||
#include <Parsers/ASTFunction.h>
|
||||
#include <Parsers/ASTSampleRatio.h>
|
||||
#include <Parsers/ExpressionListParsers.h>
|
||||
#include <Parsers/parseIdentifierOrStringLiteral.h>
|
||||
#include <Interpreters/ExpressionAnalyzer.h>
|
||||
#include <Interpreters/InterpreterSelectQuery.h>
|
||||
@ -45,7 +46,7 @@
|
||||
|
||||
#include <IO/WriteBufferFromOStream.h>
|
||||
|
||||
#include <Storages/MergeTree/CommonANNIndexes.h>
|
||||
#include <Storages/MergeTree/ApproximateNearestNeighborIndexesCommon.h>
|
||||
|
||||
namespace CurrentMetrics
|
||||
{
|
||||
@ -948,25 +949,52 @@ RangesInDataParts MergeTreeDataSelectExecutor::filterPartsByPrimaryKeyAndSkipInd
|
||||
|
||||
std::list<DataSkippingIndexAndCondition> useful_indices;
|
||||
std::map<std::pair<String, size_t>, MergedDataSkippingIndexAndCondition> merged_indices;
|
||||
std::unordered_set<std::string> ignored_index_names;
|
||||
|
||||
    if (use_skip_indexes && settings.ignore_data_skipping_indices.changed)
    {
        const auto & indices = settings.ignore_data_skipping_indices.toString();
        Tokens tokens(indices.data(), indices.data() + indices.size(), settings.max_query_size);
        IParser::Pos pos(tokens, static_cast<unsigned>(settings.max_parser_depth));
        Expected expected;

        /// Use an unordered list rather than string vector
        auto parse_single_id_or_literal = [&]
        {
            String str;
            if (!parseIdentifierOrStringLiteral(pos, expected, str))
                return false;

            ignored_index_names.insert(std::move(str));
            return true;
        };

        if (!ParserList::parseUtil(pos, expected, parse_single_id_or_literal, false))
            throw Exception(ErrorCodes::CANNOT_PARSE_TEXT, "Cannot parse ignore_data_skipping_indices ('{}')", indices);
    }
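    /// Illustrative usage of the setting parsed above (assumed query, not taken from this patch):
    ///     SELECT * FROM tab WHERE x = 1 SETTINGS ignore_data_skipping_indices = 'idx_a, idx_b'
    /// would put "idx_a" and "idx_b" into ignored_index_names, so these indexes are skipped in the loop below.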
|
||||
if (use_skip_indexes)
|
||||
{
|
||||
for (const auto & index : metadata_snapshot->getSecondaryIndices())
|
||||
{
|
||||
auto index_helper = MergeTreeIndexFactory::instance().get(index);
|
||||
if (index_helper->isMergeable())
|
||||
{
|
||||
auto [it, inserted] = merged_indices.try_emplace({index_helper->index.type, index_helper->getGranularity()});
|
||||
if (inserted)
|
||||
it->second.condition = index_helper->createIndexMergedCondition(query_info, metadata_snapshot);
|
||||
|
||||
it->second.addIndex(index_helper);
|
||||
}
|
||||
else
|
||||
auto index_helper = MergeTreeIndexFactory::instance().get(index);
|
||||
if (!ignored_index_names.contains(index.name))
|
||||
{
|
||||
auto condition = index_helper->createIndexCondition(query_info, context);
|
||||
if (!condition->alwaysUnknownOrTrue())
|
||||
useful_indices.emplace_back(index_helper, condition);
|
||||
if (index_helper->isMergeable())
|
||||
{
|
||||
auto [it, inserted] = merged_indices.try_emplace({index_helper->index.type, index_helper->getGranularity()});
|
||||
if (inserted)
|
||||
it->second.condition = index_helper->createIndexMergedCondition(query_info, metadata_snapshot);
|
||||
|
||||
it->second.addIndex(index_helper);
|
||||
}
|
||||
else
|
||||
{
|
||||
auto condition = index_helper->createIndexCondition(query_info, context);
|
||||
if (!condition->alwaysUnknownOrTrue())
|
||||
useful_indices.emplace_back(index_helper, condition);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
@ -1686,17 +1714,14 @@ MarkRanges MergeTreeDataSelectExecutor::filterMarksUsingIndex(
|
||||
{
|
||||
if (index_mark != index_range.begin || !granule || last_index_mark != index_range.begin)
|
||||
granule = reader.read();
|
||||
const auto * gin_filter_condition = dynamic_cast<const MergeTreeConditionInverted *>(&*condition);
|
||||
// Cast to Ann condition
|
||||
auto ann_condition = std::dynamic_pointer_cast<ApproximateNearestNeighbour::IMergeTreeIndexConditionAnn>(condition);
|
||||
auto ann_condition = std::dynamic_pointer_cast<IMergeTreeIndexConditionApproximateNearestNeighbor>(condition);
|
||||
if (ann_condition != nullptr)
|
||||
{
|
||||
// vector of indexes of useful ranges
|
||||
auto result = ann_condition->getUsefulRanges(granule);
|
||||
if (result.empty())
|
||||
{
|
||||
++granules_dropped;
|
||||
}
|
||||
|
||||
for (auto range : result)
|
||||
{
|
||||
@ -1714,6 +1739,7 @@ MarkRanges MergeTreeDataSelectExecutor::filterMarksUsingIndex(
|
||||
}
|
||||
|
||||
bool result = false;
|
||||
const auto * gin_filter_condition = dynamic_cast<const MergeTreeConditionInverted *>(&*condition);
|
||||
if (!gin_filter_condition)
|
||||
result = condition->mayBeTrueOnGranule(granule);
|
||||
else
|
||||
|
@ -42,7 +42,7 @@ void MergeTreeIndexAggregatorBloomFilter::update(const Block & block, size_t * p
|
||||
{
|
||||
if (*pos >= block.rows())
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "The provided position is not less than the number of block rows. "
|
||||
"Position: {}, Block rows: {}.", toString(*pos), toString(block.rows()));
|
||||
"Position: {}, Block rows: {}.", *pos, block.rows());
|
||||
|
||||
Block granule_index_block;
|
||||
size_t max_read_rows = std::min(block.rows() - *pos, limit);
|
||||
|
@ -2,26 +2,40 @@
|
||||
|
||||
#include <Storages/MergeTree/MergeTreeIndexAnnoy.h>
|
||||
|
||||
#include <Columns/ColumnArray.h>
|
||||
#include <Common/typeid_cast.h>
|
||||
#include <Core/Field.h>
|
||||
#include <IO/ReadHelpers.h>
|
||||
#include <IO/WriteHelpers.h>
|
||||
#include <Interpreters/castColumn.h>
|
||||
#include <Columns/ColumnArray.h>
|
||||
#include <DataTypes/DataTypeArray.h>
|
||||
#include <DataTypes/DataTypeTuple.h>
|
||||
#include <IO/ReadHelpers.h>
|
||||
#include <IO/WriteHelpers.h>
|
||||
#include <Interpreters/Context.h>
|
||||
#include <Interpreters/castColumn.h>
|
||||
|
||||
|
||||
namespace DB
|
||||
{
|
||||
|
||||
namespace ApproximateNearestNeighbour
|
||||
namespace ErrorCodes
|
||||
{
|
||||
extern const int ILLEGAL_COLUMN;
|
||||
extern const int INCORRECT_DATA;
|
||||
extern const int INCORRECT_NUMBER_OF_COLUMNS;
|
||||
extern const int INCORRECT_QUERY;
|
||||
extern const int LOGICAL_ERROR;
|
||||
}
|
||||
|
||||
template<typename Dist>
|
||||
void AnnoyIndex<Dist>::serialize(WriteBuffer& ostr) const
|
||||
|
||||
template <typename Distance>
|
||||
AnnoyIndexWithSerialization<Distance>::AnnoyIndexWithSerialization(uint64_t dim)
|
||||
: Base::AnnoyIndex(dim)
|
||||
{
|
||||
assert(Base::_built);
|
||||
}
|
||||
|
||||
template<typename Distance>
|
||||
void AnnoyIndexWithSerialization<Distance>::serialize(WriteBuffer& ostr) const
|
||||
{
|
||||
chassert(Base::_built);
|
||||
writeIntBinary(Base::_s, ostr);
|
||||
writeIntBinary(Base::_n_items, ostr);
|
||||
writeIntBinary(Base::_n_nodes, ostr);
|
||||
@ -32,10 +46,10 @@ void AnnoyIndex<Dist>::serialize(WriteBuffer& ostr) const
|
||||
ostr.write(reinterpret_cast<const char*>(Base::_nodes), Base::_s * Base::_n_nodes);
|
||||
}
|
||||
|
||||
template<typename Dist>
|
||||
void AnnoyIndex<Dist>::deserialize(ReadBuffer& istr)
|
||||
template<typename Distance>
|
||||
void AnnoyIndexWithSerialization<Distance>::deserialize(ReadBuffer& istr)
|
||||
{
|
||||
assert(!Base::_built);
|
||||
chassert(!Base::_built);
|
||||
readIntBinary(Base::_s, istr);
|
||||
readIntBinary(Base::_n_items, istr);
|
||||
readIntBinary(Base::_n_nodes, istr);
|
||||
@ -54,24 +68,12 @@ void AnnoyIndex<Dist>::deserialize(ReadBuffer& istr)
|
||||
Base::_built = true;
|
||||
}
|
||||
|
||||
template<typename Dist>
|
||||
uint64_t AnnoyIndex<Dist>::getNumOfDimensions() const
|
||||
template<typename Distance>
|
||||
uint64_t AnnoyIndexWithSerialization<Distance>::getNumOfDimensions() const
|
||||
{
|
||||
return Base::get_f();
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
|
||||
namespace ErrorCodes
|
||||
{
|
||||
extern const int ILLEGAL_COLUMN;
|
||||
extern const int INCORRECT_DATA;
|
||||
extern const int INCORRECT_NUMBER_OF_COLUMNS;
|
||||
extern const int INCORRECT_QUERY;
|
||||
extern const int LOGICAL_ERROR;
|
||||
extern const int BAD_ARGUMENTS;
|
||||
}
|
||||
|
||||
template <typename Distance>
|
||||
MergeTreeIndexGranuleAnnoy<Distance>::MergeTreeIndexGranuleAnnoy(const String & index_name_, const Block & index_sample_block_)
|
||||
@ -84,16 +86,16 @@ template <typename Distance>
|
||||
MergeTreeIndexGranuleAnnoy<Distance>::MergeTreeIndexGranuleAnnoy(
|
||||
const String & index_name_,
|
||||
const Block & index_sample_block_,
|
||||
AnnoyIndexPtr index_base_)
|
||||
AnnoyIndexWithSerializationPtr<Distance> index_)
|
||||
: index_name(index_name_)
|
||||
, index_sample_block(index_sample_block_)
|
||||
, index(std::move(index_base_))
|
||||
, index(std::move(index_))
|
||||
{}
|
||||
|
||||
template <typename Distance>
|
||||
void MergeTreeIndexGranuleAnnoy<Distance>::serializeBinary(WriteBuffer & ostr) const
|
||||
{
|
||||
/// number of dimensions is required in the constructor,
|
||||
/// Number of dimensions is required in the index constructor,
|
||||
/// so it must be written and read separately from the other part
|
||||
writeIntBinary(index->getNumOfDimensions(), ostr); // write dimension
|
||||
index->serialize(ostr);
|
||||
@ -104,7 +106,7 @@ void MergeTreeIndexGranuleAnnoy<Distance>::deserializeBinary(ReadBuffer & istr,
|
||||
{
|
||||
uint64_t dimension;
|
||||
readIntBinary(dimension, istr);
|
||||
index = std::make_shared<AnnoyIndex>(dimension);
|
||||
index = std::make_shared<AnnoyIndexWithSerialization<Distance>>(dimension);
|
||||
index->deserialize(istr);
|
||||
}
|
||||
|
||||
@ -112,18 +114,18 @@ template <typename Distance>
|
||||
MergeTreeIndexAggregatorAnnoy<Distance>::MergeTreeIndexAggregatorAnnoy(
|
||||
const String & index_name_,
|
||||
const Block & index_sample_block_,
|
||||
uint64_t number_of_trees_)
|
||||
uint64_t trees_)
|
||||
: index_name(index_name_)
|
||||
, index_sample_block(index_sample_block_)
|
||||
, number_of_trees(number_of_trees_)
|
||||
, trees(trees_)
|
||||
{}
|
||||
|
||||
template <typename Distance>
|
||||
MergeTreeIndexGranulePtr MergeTreeIndexAggregatorAnnoy<Distance>::getGranuleAndReset()
|
||||
{
|
||||
// NOLINTNEXTLINE(*)
|
||||
index->build(static_cast<int>(number_of_trees), /*number_of_threads=*/1);
|
||||
auto granule = std::make_shared<MergeTreeIndexGranuleAnnoy<Distance> >(index_name, index_sample_block, index);
|
||||
index->build(static_cast<int>(trees), /*number_of_threads=*/1);
|
||||
auto granule = std::make_shared<MergeTreeIndexGranuleAnnoy<Distance>>(index_name, index_sample_block, index);
|
||||
index = nullptr;
|
||||
return granule;
|
||||
}
|
||||
@ -135,270 +137,255 @@ void MergeTreeIndexAggregatorAnnoy<Distance>::update(const Block & block, size_t
|
||||
throw Exception(
|
||||
ErrorCodes::LOGICAL_ERROR,
|
||||
"The provided position is not less than the number of block rows. Position: {}, Block rows: {}.",
|
||||
toString(*pos), toString(block.rows()));
|
||||
*pos, block.rows());
|
||||
|
||||
size_t rows_read = std::min(limit, block.rows() - *pos);
|
||||
|
||||
if (rows_read == 0)
|
||||
return;
|
||||
|
||||
if (index_sample_block.columns() > 1)
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Only one column is supported");
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Expected block with single column");
|
||||
|
||||
auto index_column_name = index_sample_block.getByPosition(0).name;
|
||||
const auto & column_cut = block.getByName(index_column_name).column->cut(*pos, rows_read);
|
||||
const auto & column_array = typeid_cast<const ColumnArray*>(column_cut.get());
|
||||
if (column_array)
|
||||
const String & index_column_name = index_sample_block.getByPosition(0).name;
|
||||
ColumnPtr column_cut = block.getByName(index_column_name).column->cut(*pos, rows_read);
|
||||
|
||||
if (const auto & column_array = typeid_cast<const ColumnArray *>(column_cut.get()))
|
||||
{
|
||||
const auto & data = column_array->getData();
|
||||
const auto & array = typeid_cast<const ColumnFloat32&>(data).getData();
|
||||
const auto & array = typeid_cast<const ColumnFloat32 &>(data).getData();
|
||||
|
||||
if (array.empty())
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Array has 0 rows, {} rows expected", rows_read);
|
||||
|
||||
const auto & offsets = column_array->getOffsets();
|
||||
size_t num_rows = offsets.size();
|
||||
const size_t num_rows = offsets.size();
|
||||
|
||||
/// Check all sizes are the same
|
||||
size_t size = offsets[0];
|
||||
for (size_t i = 0; i < num_rows - 1; ++i)
|
||||
if (offsets[i + 1] - offsets[i] != size)
|
||||
throw Exception(ErrorCodes::INCORRECT_DATA, "Arrays should have same length");
|
||||
throw Exception(ErrorCodes::INCORRECT_DATA, "All arrays in column {} must have equal length", index_column_name);
|
||||
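        /// Example for the offsets check above (assumed data, not taken from this patch): three rows of
        /// Array(Float32) of length 2 give offsets {2, 4, 6}, every difference equals offsets[0] == 2 and
        /// the check passes; offsets {2, 5, 7} would throw INCORRECT_DATA.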
|
||||
index = std::make_shared<AnnoyIndex>(size);
|
||||
index = std::make_shared<AnnoyIndexWithSerialization<Distance>>(size);
|
||||
|
||||
/// Add all rows of block
|
||||
index->add_item(index->get_n_items(), array.data());
|
||||
/// add all rows from 1 to num_rows - 1 (this is the same as the beginning of the last element)
|
||||
for (size_t current_row = 1; current_row < num_rows; ++current_row)
|
||||
index->add_item(index->get_n_items(), &array[offsets[current_row - 1]]);
|
||||
}
|
||||
else
|
||||
else if (const auto & column_tuple = typeid_cast<const ColumnTuple *>(column_cut.get()))
|
||||
{
|
||||
/// Other possible type of column is Tuple
|
||||
const auto & column_tuple = typeid_cast<const ColumnTuple*>(column_cut.get());
|
||||
|
||||
if (!column_tuple)
|
||||
throw Exception(ErrorCodes::INCORRECT_QUERY, "Wrong type was given to index.");
|
||||
|
||||
const auto & columns = column_tuple->getColumns();
|
||||
|
||||
/// TODO check if calling index->add_item() directly on the block's tuples is faster than materializing everything
|
||||
std::vector<std::vector<Float32>> data{column_tuple->size(), std::vector<Float32>()};
|
||||
for (const auto& column : columns)
|
||||
for (const auto & column : columns)
|
||||
{
|
||||
const auto& pod_array = typeid_cast<const ColumnFloat32*>(column.get())->getData();
|
||||
const auto & pod_array = typeid_cast<const ColumnFloat32 *>(column.get())->getData();
|
||||
for (size_t i = 0; i < pod_array.size(); ++i)
|
||||
data[i].push_back(pod_array[i]);
|
||||
}
|
||||
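        /// Illustration of the transposition above (assumed values, not taken from this patch): for a
        /// Tuple(Float32, Float32) column with rows (1, 4), (2, 5) and (3, 6), the per-column arrays are
        /// {1, 2, 3} and {4, 5, 6}, and data becomes {{1, 4}, {2, 5}, {3, 6}}, i.e. one embedding per row.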
assert(!data.empty());
|
||||
if (!index)
|
||||
index = std::make_shared<AnnoyIndex>(data[0].size());
|
||||
for (const auto& item : data)
|
||||
|
||||
if (data.empty())
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Tuple has 0 rows, {} rows expected", rows_read);
|
||||
|
||||
index = std::make_shared<AnnoyIndexWithSerialization<Distance>>(data[0].size());
|
||||
|
||||
for (const auto & item : data)
|
||||
index->add_item(index->get_n_items(), item.data());
|
||||
}
|
||||
else
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Expected Array or Tuple column");
|
||||
|
||||
*pos += rows_read;
|
||||
}
|
||||
|
||||
|
||||
MergeTreeIndexConditionAnnoy::MergeTreeIndexConditionAnnoy(
|
||||
const IndexDescription & /*index*/,
|
||||
const IndexDescription & /*index_description*/,
|
||||
const SelectQueryInfo & query,
|
||||
ContextPtr context,
|
||||
const String& distance_name_)
|
||||
: condition(query, context), distance_name(distance_name_)
|
||||
const String & distance_function_,
|
||||
ContextPtr context)
|
||||
: ann_condition(query, context)
|
||||
, distance_function(distance_function_)
|
||||
, search_k(context->getSettings().annoy_index_search_k_nodes)
|
||||
{}
|
||||
|
||||
|
||||
bool MergeTreeIndexConditionAnnoy::mayBeTrueOnGranule(MergeTreeIndexGranulePtr /* idx_granule */) const
|
||||
bool MergeTreeIndexConditionAnnoy::mayBeTrueOnGranule(MergeTreeIndexGranulePtr /*idx_granule*/) const
|
||||
{
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "mayBeTrueOnGranule is not supported for ANN skip indexes");
|
||||
}
|
||||
|
||||
bool MergeTreeIndexConditionAnnoy::alwaysUnknownOrTrue() const
|
||||
{
|
||||
return condition.alwaysUnknownOrTrue(distance_name);
|
||||
return ann_condition.alwaysUnknownOrTrue(distance_function);
|
||||
}
|
||||
|
||||
std::vector<size_t> MergeTreeIndexConditionAnnoy::getUsefulRanges(MergeTreeIndexGranulePtr idx_granule) const
|
||||
{
|
||||
if (distance_name == "L2Distance")
|
||||
{
|
||||
return getUsefulRangesImpl<::Annoy::Euclidean>(idx_granule);
|
||||
}
|
||||
else if (distance_name == "cosineDistance")
|
||||
{
|
||||
return getUsefulRangesImpl<::Annoy::Angular>(idx_granule);
|
||||
}
|
||||
else
|
||||
{
|
||||
throw Exception(ErrorCodes::BAD_ARGUMENTS, "Unknown distance name. Must be 'L2Distance' or 'cosineDistance'. Got {}", distance_name);
|
||||
}
|
||||
if (distance_function == "L2Distance")
|
||||
return getUsefulRangesImpl<Annoy::Euclidean>(idx_granule);
|
||||
else if (distance_function == "cosineDistance")
|
||||
return getUsefulRangesImpl<Annoy::Angular>(idx_granule);
|
||||
std::unreachable();
|
||||
}
|
||||
|
||||
|
||||
template <typename Distance>
|
||||
std::vector<size_t> MergeTreeIndexConditionAnnoy::getUsefulRangesImpl(MergeTreeIndexGranulePtr idx_granule) const
|
||||
{
|
||||
UInt64 limit = condition.getLimit();
|
||||
UInt64 index_granularity = condition.getIndexGranularity();
|
||||
std::optional<float> comp_dist = condition.getQueryType() == ApproximateNearestNeighbour::ANNQueryInformation::Type::Where ?
|
||||
std::optional<float>(condition.getComparisonDistanceForWhereQuery()) : std::nullopt;
|
||||
const UInt64 limit = ann_condition.getLimit();
|
||||
const UInt64 index_granularity = ann_condition.getIndexGranularity();
|
||||
const std::optional<float> comparison_distance = ann_condition.getQueryType() == ApproximateNearestNeighborInformation::Type::Where
|
||||
? std::optional<float>(ann_condition.getComparisonDistanceForWhereQuery())
|
||||
: std::nullopt;
|
||||
|
||||
if (comp_dist && comp_dist.value() < 0)
|
||||
if (comparison_distance && comparison_distance.value() < 0)
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Attempt to optimize query with where without distance");
|
||||
|
||||
std::vector<float> target_vec = condition.getTargetVector();
|
||||
const std::vector<float> reference_vector = ann_condition.getReferenceVector();
|
||||
|
||||
auto granule = std::dynamic_pointer_cast<MergeTreeIndexGranuleAnnoy<Distance> >(idx_granule);
|
||||
const auto granule = std::dynamic_pointer_cast<MergeTreeIndexGranuleAnnoy<Distance>>(idx_granule);
|
||||
if (granule == nullptr)
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Granule has the wrong type");
|
||||
|
||||
auto annoy = granule->index;
|
||||
const AnnoyIndexWithSerializationPtr<Distance> annoy = granule->index;
|
||||
|
||||
if (condition.getNumOfDimensions() != annoy->getNumOfDimensions())
|
||||
if (ann_condition.getNumOfDimensions() != annoy->getNumOfDimensions())
|
||||
throw Exception(ErrorCodes::INCORRECT_QUERY, "The dimension of the space in the request ({}) "
|
||||
"does not match with the dimension in the index ({})",
|
||||
toString(condition.getNumOfDimensions()), toString(annoy->getNumOfDimensions()));
|
||||
"does not match the dimension in the index ({})",
|
||||
ann_condition.getNumOfDimensions(), annoy->getNumOfDimensions());
|
||||
|
||||
/// neighbors contain indexes of dots which were closest to target vector
|
||||
std::vector<UInt64> neighbors;
|
||||
std::vector<UInt64> neighbors; /// indexes of dots which were closest to the reference vector
|
||||
std::vector<Float32> distances;
|
||||
neighbors.reserve(limit);
|
||||
distances.reserve(limit);
|
||||
|
||||
int k_search = -1;
|
||||
String params_str = condition.getParamsStr();
|
||||
if (!params_str.empty())
|
||||
{
|
||||
try
|
||||
{
|
||||
/// k_search=... (algorithm will inspect up to search_k nodes which defaults to n_trees * n if not provided)
|
||||
k_search = std::stoi(params_str.data() + 9);
|
||||
}
|
||||
catch (...)
|
||||
{
|
||||
throw Exception(ErrorCodes::INCORRECT_QUERY, "Setting of the annoy index should be int");
|
||||
}
|
||||
}
|
||||
annoy->get_nns_by_vector(target_vec.data(), limit, k_search, &neighbors, &distances);
|
||||
std::unordered_set<size_t> granule_numbers;
|
||||
annoy->get_nns_by_vector(reference_vector.data(), limit, static_cast<int>(search_k), &neighbors, &distances);
|
||||
|
||||
chassert(neighbors.size() == distances.size());
|
||||
|
||||
std::vector<size_t> granule_numbers;
|
||||
granule_numbers.reserve(neighbors.size());
|
||||
for (size_t i = 0; i < neighbors.size(); ++i)
|
||||
{
|
||||
if (comp_dist && distances[i] > comp_dist)
|
||||
if (comparison_distance && distances[i] > comparison_distance)
|
||||
continue;
|
||||
granule_numbers.insert(neighbors[i] / index_granularity);
|
||||
granule_numbers.push_back(neighbors[i] / index_granularity);
|
||||
}
|
||||
|
||||
std::vector<size_t> result_vector;
|
||||
result_vector.reserve(granule_numbers.size());
|
||||
for (auto granule_number : granule_numbers)
|
||||
result_vector.push_back(granule_number);
|
||||
/// make unique
|
||||
std::sort(granule_numbers.begin(), granule_numbers.end());
|
||||
granule_numbers.erase(std::unique(granule_numbers.begin(), granule_numbers.end()), granule_numbers.end());
|
||||
|
||||
return result_vector;
|
||||
return granule_numbers;
|
||||
}
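getUsefulRangesImpl above reduces the approximate-nearest-neighbour result to granules in two steps: every matching row id is mapped to its granule with an integer division by the index granularity, and the granule numbers are then sorted and deduplicated. A minimal standalone sketch of that step; rowsToGranules, the sample row ids and the granularity of 8192 below are illustrative values, not part of the commit:

#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>

/// Map matched row ids to the granules that contain them and drop duplicates,
/// mirroring the sort/unique step of getUsefulRangesImpl.
std::vector<size_t> rowsToGranules(const std::vector<uint64_t> & neighbor_rows, uint64_t index_granularity)
{
    std::vector<size_t> granules;
    granules.reserve(neighbor_rows.size());
    for (uint64_t row : neighbor_rows)
        granules.push_back(row / index_granularity); /// each row belongs to exactly one granule

    std::sort(granules.begin(), granules.end());
    granules.erase(std::unique(granules.begin(), granules.end()), granules.end());
    return granules;
}

int main()
{
    /// Rows 3, 8190 and 8191 all fall into granule 0, row 16384 into granule 2: prints "0 2".
    for (size_t granule : rowsToGranules({3, 8190, 8191, 16384}, 8192))
        std::cout << granule << ' ';
    std::cout << '\n';
}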

+MergeTreeIndexAnnoy::MergeTreeIndexAnnoy(const IndexDescription & index_, uint64_t trees_, const String & distance_function_)
+    : IMergeTreeIndex(index_)
+    , trees(trees_)
+    , distance_function(distance_function_)
+{}


MergeTreeIndexGranulePtr MergeTreeIndexAnnoy::createIndexGranule() const
{
-    if (distance_name == "L2Distance")
-    {
-        return std::make_shared<MergeTreeIndexGranuleAnnoy<::Annoy::Euclidean> >(index.name, index.sample_block);
-    }
-    if (distance_name == "cosineDistance")
-    {
-        return std::make_shared<MergeTreeIndexGranuleAnnoy<::Annoy::Angular> >(index.name, index.sample_block);
-    }
-    throw Exception(ErrorCodes::BAD_ARGUMENTS, "Unknown distance name. Must be 'L2Distance' or 'cosineDistance'. Got {}", distance_name);
+    if (distance_function == "L2Distance")
+        return std::make_shared<MergeTreeIndexGranuleAnnoy<Annoy::Euclidean>>(index.name, index.sample_block);
+    else if (distance_function == "cosineDistance")
+        return std::make_shared<MergeTreeIndexGranuleAnnoy<Annoy::Angular>>(index.name, index.sample_block);
+    std::unreachable();
}

MergeTreeIndexAggregatorPtr MergeTreeIndexAnnoy::createIndexAggregator() const
{
-    if (distance_name == "L2Distance")
-    {
-        return std::make_shared<MergeTreeIndexAggregatorAnnoy<::Annoy::Euclidean> >(index.name, index.sample_block, number_of_trees);
-    }
-    if (distance_name == "cosineDistance")
-    {
-        return std::make_shared<MergeTreeIndexAggregatorAnnoy<::Annoy::Angular> >(index.name, index.sample_block, number_of_trees);
-    }
-    throw Exception(ErrorCodes::BAD_ARGUMENTS, "Unknown distance name. Must be 'L2Distance' or 'cosineDistance'. Got {}", distance_name);
+    /// TODO: Support more metrics. Available metrics: https://github.com/spotify/annoy/blob/master/src/annoymodule.cc#L151-L171
+    if (distance_function == "L2Distance")
+        return std::make_shared<MergeTreeIndexAggregatorAnnoy<Annoy::Euclidean>>(index.name, index.sample_block, trees);
+    else if (distance_function == "cosineDistance")
+        return std::make_shared<MergeTreeIndexAggregatorAnnoy<Annoy::Angular>>(index.name, index.sample_block, trees);
+    std::unreachable();
}

-MergeTreeIndexConditionPtr MergeTreeIndexAnnoy::createIndexCondition(
-    const SelectQueryInfo & query, ContextPtr context) const
+MergeTreeIndexConditionPtr MergeTreeIndexAnnoy::createIndexCondition(const SelectQueryInfo & query, ContextPtr context) const
{
-    return std::make_shared<MergeTreeIndexConditionAnnoy>(index, query, context, distance_name);
+    return std::make_shared<MergeTreeIndexConditionAnnoy>(index, query, distance_function, context);
};

MergeTreeIndexPtr annoyIndexCreator(const IndexDescription & index)
{
-    uint64_t param = 100;
-    String distance_name = "L2Distance";
-    if (!index.arguments.empty() && !index.arguments[0].tryGet<uint64_t>(param))
-    {
-        if (!index.arguments[0].tryGet<String>(distance_name))
-        {
-            throw Exception(ErrorCodes::INCORRECT_DATA, "Can't parse first argument");
-        }
-    }
-    if (index.arguments.size() > 1 && !index.arguments[1].tryGet<String>(distance_name))
-    {
-        throw Exception(ErrorCodes::INCORRECT_DATA, "Can't parse second argument");
-    }
-    return std::make_shared<MergeTreeIndexAnnoy>(index, param, distance_name);
-}
-
-static void assertIndexColumnsType(const Block & header)
-{
-    DataTypePtr column_data_type_ptr = header.getDataTypes()[0];
-
-    if (const auto * array_type = typeid_cast<const DataTypeArray *>(column_data_type_ptr.get()))
-    {
-        TypeIndex nested_type_index = array_type->getNestedType()->getTypeId();
-        if (!WhichDataType(nested_type_index).isFloat32())
-            throw Exception(
-                ErrorCodes::ILLEGAL_COLUMN,
-                "Unexpected type {} of Annoy index. Only Array(Float32) and Tuple(Float32) are supported.",
-                column_data_type_ptr->getName());
-    }
-    else if (const auto * tuple_type = typeid_cast<const DataTypeTuple *>(column_data_type_ptr.get()))
-    {
-        const DataTypes & nested_types = tuple_type->getElements();
-        for (const auto & type : nested_types)
-        {
-            TypeIndex nested_type_index = type->getTypeId();
-            if (!WhichDataType(nested_type_index).isFloat32())
-                throw Exception(
-                    ErrorCodes::ILLEGAL_COLUMN,
-                    "Unexpected type {} of Annoy index. Only Array(Float32) and Tuple(Float32) are supported.",
-                    column_data_type_ptr->getName());
-        }
-    }
-    else
-        throw Exception(
-            ErrorCodes::ILLEGAL_COLUMN,
-            "Unexpected type {} of Annoy index. Only Array(Float32) and Tuple(Float32) are supported.",
-            column_data_type_ptr->getName());
+    static constexpr auto default_trees = 100uz;
+    static constexpr auto default_distance_function = "L2Distance";
+
+    String distance_function = default_distance_function;
+    if (!index.arguments.empty())
+        distance_function = index.arguments[0].get<String>();
+
+    uint64_t trees = default_trees;
+    if (index.arguments.size() > 1)
+        trees = index.arguments[1].get<uint64_t>();
+
+    return std::make_shared<MergeTreeIndexAnnoy>(index, trees, distance_function);
}

void annoyIndexValidator(const IndexDescription & index, bool /* attach */)
{
+    /// Check number and type of Annoy index arguments:
+
    if (index.arguments.size() > 2)
-    {
        throw Exception(ErrorCodes::INCORRECT_QUERY, "Annoy index must not have more than two parameters");
-    }
-    if (!index.arguments.empty() && index.arguments[0].getType() != Field::Types::UInt64
-        && index.arguments[0].getType() != Field::Types::String)
-    {
-        throw Exception(ErrorCodes::INCORRECT_QUERY, "Annoy index first argument must be UInt64 or String.");
-    }
-    if (index.arguments.size() > 1 && index.arguments[1].getType() != Field::Types::String)
-    {
-        throw Exception(ErrorCodes::INCORRECT_QUERY, "Annoy index second argument must be String.");
-    }
+
+    if (!index.arguments.empty() && index.arguments[0].getType() != Field::Types::String)
+        throw Exception(ErrorCodes::INCORRECT_QUERY, "Distance function argument of Annoy index must be of type String");
+
+    if (index.arguments.size() > 1 && index.arguments[1].getType() != Field::Types::UInt64)
+        throw Exception(ErrorCodes::INCORRECT_QUERY, "Number of trees argument of Annoy index must be UInt64");
+
+    /// Check that the index is created on a single column

    if (index.column_names.size() != 1 || index.data_types.size() != 1)
        throw Exception(ErrorCodes::INCORRECT_NUMBER_OF_COLUMNS, "Annoy indexes must be created on a single column");

-    assertIndexColumnsType(index.sample_block);
+    /// Check that a supported metric was passed as first argument
+
+    if (!index.arguments.empty())
+    {
+        String distance_name = index.arguments[0].get<String>();
+        if (distance_name != "L2Distance" && distance_name != "cosineDistance")
+            throw Exception(ErrorCodes::INCORRECT_DATA, "Annoy index supports only distance functions 'L2Distance' and 'cosineDistance'. Given distance function: {}", distance_name);
+    }
+
+    /// Check data type of indexed column:
+
+    auto throw_unsupported_underlying_column_exception = [](DataTypePtr data_type)
+    {
+        throw Exception(
+            ErrorCodes::ILLEGAL_COLUMN,
+            "Annoy indexes can only be created on columns of type Array(Float32) and Tuple(Float32). Given type: {}",
+            data_type->getName());
+    };
+
+    DataTypePtr data_type = index.sample_block.getDataTypes()[0];
+
+    if (const auto * data_type_array = typeid_cast<const DataTypeArray *>(data_type.get()))
+    {
+        TypeIndex nested_type_index = data_type_array->getNestedType()->getTypeId();
+        if (!WhichDataType(nested_type_index).isFloat32())
+            throw_unsupported_underlying_column_exception(data_type);
+    }
+    else if (const auto * data_type_tuple = typeid_cast<const DataTypeTuple *>(data_type.get()))
+    {
+        const DataTypes & inner_types = data_type_tuple->getElements();
+        for (const auto & inner_type : inner_types)
+        {
+            TypeIndex nested_type_index = inner_type->getTypeId();
+            if (!WhichDataType(nested_type_index).isFloat32())
+                throw_unsupported_underlying_column_exception(data_type);
+        }
+    }
+    else
+        throw_unsupported_underlying_column_exception(data_type);
}

}

-#endif // ENABLE_ANNOY
+#endif
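The rewritten annoyIndexCreator and annoyIndexValidator above pin down the argument convention for the index: the optional first argument is the distance function (a String, 'L2Distance' by default) and the optional second argument is the number of trees (a UInt64, 100 by default). A small self-contained sketch of that convention using a plain std::variant instead of ClickHouse's Field; parseAnnoyArguments, IndexArgument and AnnoyParameters are made-up names for illustration, not part of the commit:

#include <cstdint>
#include <iostream>
#include <stdexcept>
#include <string>
#include <variant>
#include <vector>

using IndexArgument = std::variant<std::string, uint64_t>;

struct AnnoyParameters
{
    std::string distance_function = "L2Distance"; /// default distance function
    uint64_t trees = 100;                         /// default number of trees
};

AnnoyParameters parseAnnoyArguments(const std::vector<IndexArgument> & arguments)
{
    if (arguments.size() > 2)
        throw std::invalid_argument("Annoy index must not have more than two parameters");

    AnnoyParameters params;
    if (!arguments.empty())
        params.distance_function = std::get<std::string>(arguments[0]); /// throws if the first argument is not a string
    if (arguments.size() > 1)
        params.trees = std::get<uint64_t>(arguments[1]);                /// throws if the second argument is not an integer
    if (params.distance_function != "L2Distance" && params.distance_function != "cosineDistance")
        throw std::invalid_argument("Only 'L2Distance' and 'cosineDistance' are supported");
    return params;
}

int main()
{
    const auto params = parseAnnoyArguments({std::string("cosineDistance"), uint64_t(200)});
    std::cout << params.distance_function << " " << params.trees << "\n"; /// prints "cosineDistance 200"
}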
@@ -2,7 +2,7 @@

#ifdef ENABLE_ANNOY

-#include <Storages/MergeTree/CommonANNIndexes.h>
+#include <Storages/MergeTree/ApproximateNearestNeighborIndexesCommon.h>

#include <annoylib.h>
#include <kissrandom.h>

@@ -10,36 +10,26 @@
namespace DB
{

-// auxiliary namespace for working with spotify-annoy library
-// mainly for serialization and deserialization of the index
-namespace ApproximateNearestNeighbour
-{
-using AnnoyIndexThreadedBuildPolicy = ::Annoy::AnnoyIndexMultiThreadedBuildPolicy;
-// TODO: Support different metrics. List of available metrics can be taken from here:
-// https://github.com/spotify/annoy/blob/master/src/annoymodule.cc#L151-L171
-template <typename Distance>
-class AnnoyIndex : public ::Annoy::AnnoyIndex<UInt64, Float32, Distance, ::Annoy::Kiss64Random, AnnoyIndexThreadedBuildPolicy>
-{
-    using Base = ::Annoy::AnnoyIndex<UInt64, Float32, Distance, ::Annoy::Kiss64Random, AnnoyIndexThreadedBuildPolicy>;
-public:
-    explicit AnnoyIndex(const uint64_t dim) : Base::AnnoyIndex(dim) {}
-    void serialize(WriteBuffer& ostr) const;
-    void deserialize(ReadBuffer& istr);
-    uint64_t getNumOfDimensions() const;
-};
-}
+template <typename Distance>
+class AnnoyIndexWithSerialization : public Annoy::AnnoyIndex<UInt64, Float32, Distance, Annoy::Kiss64Random, Annoy::AnnoyIndexMultiThreadedBuildPolicy>
+{
+    using Base = Annoy::AnnoyIndex<UInt64, Float32, Distance, Annoy::Kiss64Random, Annoy::AnnoyIndexMultiThreadedBuildPolicy>;
+
+public:
+    explicit AnnoyIndexWithSerialization(uint64_t dim);
+    void serialize(WriteBuffer& ostr) const;
+    void deserialize(ReadBuffer& istr);
+    uint64_t getNumOfDimensions() const;
+};
+
+template <typename Distance>
+using AnnoyIndexWithSerializationPtr = std::shared_ptr<AnnoyIndexWithSerialization<Distance>>;

template <typename Distance>
struct MergeTreeIndexGranuleAnnoy final : public IMergeTreeIndexGranule
{
-    using AnnoyIndex = ApproximateNearestNeighbour::AnnoyIndex<Distance>;
-    using AnnoyIndexPtr = std::shared_ptr<AnnoyIndex>;
-
    MergeTreeIndexGranuleAnnoy(const String & index_name_, const Block & index_sample_block_);
-    MergeTreeIndexGranuleAnnoy(
-        const String & index_name_,
-        const Block & index_sample_block_,
-        AnnoyIndexPtr index_base_);
+    MergeTreeIndexGranuleAnnoy(const String & index_name_, const Block & index_sample_block_, AnnoyIndexWithSerializationPtr<Distance> index_);

    ~MergeTreeIndexGranuleAnnoy() override = default;

@@ -48,54 +38,50 @@ struct MergeTreeIndexGranuleAnnoy final : public IMergeTreeIndexGranule

    bool empty() const override { return !index.get(); }

-    String index_name;
-    Block index_sample_block;
-    AnnoyIndexPtr index;
+    const String index_name;
+    const Block index_sample_block;
+    AnnoyIndexWithSerializationPtr<Distance> index;
};

template <typename Distance>
struct MergeTreeIndexAggregatorAnnoy final : IMergeTreeIndexAggregator
{
-    using AnnoyIndex = ApproximateNearestNeighbour::AnnoyIndex<Distance>;
-    using AnnoyIndexPtr = std::shared_ptr<AnnoyIndex>;
-
-    MergeTreeIndexAggregatorAnnoy(const String & index_name_, const Block & index_sample_block, uint64_t number_of_trees);
+    MergeTreeIndexAggregatorAnnoy(const String & index_name_, const Block & index_sample_block, uint64_t trees);
    ~MergeTreeIndexAggregatorAnnoy() override = default;

    bool empty() const override { return !index || index->get_n_items() == 0; }
    MergeTreeIndexGranulePtr getGranuleAndReset() override;
    void update(const Block & block, size_t * pos, size_t limit) override;

-    String index_name;
-    Block index_sample_block;
-    const uint64_t number_of_trees;
-    AnnoyIndexPtr index;
+    const String index_name;
+    const Block index_sample_block;
+    const uint64_t trees;
+    AnnoyIndexWithSerializationPtr<Distance> index;
};


-class MergeTreeIndexConditionAnnoy final : public ApproximateNearestNeighbour::IMergeTreeIndexConditionAnn
+class MergeTreeIndexConditionAnnoy final : public IMergeTreeIndexConditionApproximateNearestNeighbor
{
public:
    MergeTreeIndexConditionAnnoy(
-        const IndexDescription & index,
+        const IndexDescription & index_description,
        const SelectQueryInfo & query,
-        ContextPtr context,
-        const String& distance_name);
-
-    bool alwaysUnknownOrTrue() const override;
-
-    bool mayBeTrueOnGranule(MergeTreeIndexGranulePtr idx_granule) const override;
-
-    std::vector<size_t> getUsefulRanges(MergeTreeIndexGranulePtr idx_granule) const override;
+        const String & distance_function,
+        ContextPtr context);
+
+    ~MergeTreeIndexConditionAnnoy() override = default;
+
+    bool alwaysUnknownOrTrue() const override;
+    bool mayBeTrueOnGranule(MergeTreeIndexGranulePtr idx_granule) const override;
+    std::vector<size_t> getUsefulRanges(MergeTreeIndexGranulePtr idx_granule) const override;

private:
    template <typename Distance>
    std::vector<size_t> getUsefulRangesImpl(MergeTreeIndexGranulePtr idx_granule) const;

-    ApproximateNearestNeighbour::ANNCondition condition;
-    const String distance_name;
+    const ApproximateNearestNeighborCondition ann_condition;
+    const String distance_function;
+    const Int64 search_k;
};


@@ -103,28 +89,22 @@ class MergeTreeIndexAnnoy : public IMergeTreeIndex
{
public:

-    MergeTreeIndexAnnoy(const IndexDescription & index_, uint64_t number_of_trees_, const String& distance_name_)
-        : IMergeTreeIndex(index_)
-        , number_of_trees(number_of_trees_)
-        , distance_name(distance_name_)
-    {}
+    MergeTreeIndexAnnoy(const IndexDescription & index_, uint64_t trees_, const String & distance_function_);

    ~MergeTreeIndexAnnoy() override = default;

    MergeTreeIndexGranulePtr createIndexGranule() const override;
    MergeTreeIndexAggregatorPtr createIndexAggregator() const override;

-    MergeTreeIndexConditionPtr createIndexCondition(
-        const SelectQueryInfo & query, ContextPtr context) const override;
+    MergeTreeIndexConditionPtr createIndexCondition(const SelectQueryInfo & query, ContextPtr context) const override;

    bool mayBenefitFromIndexForIn(const ASTPtr & /*node*/) const override { return false; }

private:
-    const uint64_t number_of_trees;
-    const String distance_name;
+    const uint64_t trees;
+    const String distance_function;
};


}

-#endif // ENABLE_ANNOY
+#endif
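For readers unfamiliar with the underlying library: the AnnoyIndexWithSerialization wrapper declared above only adds (de)serialization and a dimension getter on top of the stock Annoy index, whose build and query calls are the ones the aggregator and condition use (add_item, get_nns_by_vector). A minimal standalone sketch against the bundled annoylib headers; the build() call and the three-dimensional sample data are assumptions for illustration, and search_k = -1 asks Annoy for its default (roughly n_trees * n, as the removed comment above noted):

#include <cstdint>
#include <iostream>
#include <vector>

#include <annoylib.h>
#include <kissrandom.h>

int main()
{
    using Index = Annoy::AnnoyIndex<uint64_t, float, Annoy::Euclidean, Annoy::Kiss64Random, Annoy::AnnoyIndexMultiThreadedBuildPolicy>;

    Index index(3); /// three-dimensional vectors, e.g. a Tuple(Float32, Float32, Float32) column

    const std::vector<std::vector<float>> rows = {{0, 0, 0}, {1, 0, 0}, {0, 1, 0}, {5, 5, 5}};
    for (uint64_t i = 0; i < rows.size(); ++i)
        index.add_item(i, rows[i].data()); /// same call the aggregator issues per materialized row

    index.build(/*n_trees=*/100); /// assumption: build with the default tree count used by annoyIndexCreator

    std::vector<uint64_t> neighbors;
    std::vector<float> distances;
    const float reference[3] = {0.9f, 0.1f, 0.0f};
    index.get_nns_by_vector(reference, /*n=*/2, /*search_k=*/-1, &neighbors, &distances);

    for (size_t i = 0; i < neighbors.size(); ++i)
        std::cout << neighbors[i] << " " << distances[i] << "\n"; /// closest rows first
}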
@@ -92,7 +92,7 @@ void MergeTreeIndexAggregatorFullText::update(const Block & block, size_t * pos,
{
    if (*pos >= block.rows())
        throw Exception(ErrorCodes::LOGICAL_ERROR, "The provided position is not less than the number of block rows. "
-            "Position: {}, Block rows: {}.", toString(*pos), toString(block.rows()));
+            "Position: {}, Block rows: {}.", *pos, block.rows());

    size_t rows_read = std::min(limit, block.rows() - *pos);

@@ -123,7 +123,7 @@ void MergeTreeIndexAggregatorInverted::update(const Block & block, size_t * pos,
{
    if (*pos >= block.rows())
        throw Exception(ErrorCodes::LOGICAL_ERROR, "The provided position is not less than the number of block rows. "
-            "Position: {}, Block rows: {}.", toString(*pos), toString(block.rows()));
+            "Position: {}, Block rows: {}.", *pos, block.rows());

    size_t rows_read = std::min(limit, block.rows() - *pos);
    auto row_id = store->getNextRowIDRange(rows_read);

@@ -122,7 +122,7 @@ void MergeTreeIndexAggregatorMinMax::update(const Block & block, size_t * pos, s
{
    if (*pos >= block.rows())
        throw Exception(ErrorCodes::LOGICAL_ERROR, "The provided position is not less than the number of block rows. "
-            "Position: {}, Block rows: {}.", toString(*pos), toString(block.rows()));
+            "Position: {}, Block rows: {}.", *pos, block.rows());

    size_t rows_read = std::min(limit, block.rows() - *pos);

@@ -146,7 +146,7 @@ void MergeTreeIndexAggregatorSet::update(const Block & block, size_t * pos, size
{
    if (*pos >= block.rows())
        throw Exception(ErrorCodes::LOGICAL_ERROR, "The provided position is not less than the number of block rows. "
-            "Position: {}, Block rows: {}.", toString(*pos), toString(block.rows()));
+            "Position: {}, Block rows: {}.", *pos, block.rows());

    size_t rows_read = std::min(limit, block.rows() - *pos);

@@ -353,7 +353,7 @@ void StorageNATS::read(
}

-SinkToStoragePtr StorageNATS::write(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, ContextPtr local_context)
+SinkToStoragePtr StorageNATS::write(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, ContextPtr local_context, bool /*async_insert*/)
{
    auto modified_context = addSettings(local_context);
    std::string subject = modified_context->getSettingsRef().stream_like_engine_insert_queue.changed

@@ -51,7 +51,7 @@ public:
        size_t /* max_block_size */,
        size_t /* num_streams */) override;

-    SinkToStoragePtr write(const ASTPtr & query, const StorageMetadataPtr & metadata_snapshot, ContextPtr context) override;
+    SinkToStoragePtr write(const ASTPtr & query, const StorageMetadataPtr & metadata_snapshot, ContextPtr context, bool async_insert) override;

    /// We want to control the number of rows in a chunk inserted into NATS
    bool prefersLargeBlocks() const override { return false; }

@@ -764,7 +764,7 @@ void StorageRabbitMQ::read(
}

-SinkToStoragePtr StorageRabbitMQ::write(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, ContextPtr local_context)
+SinkToStoragePtr StorageRabbitMQ::write(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, ContextPtr local_context, bool /*async_insert*/)
{
    auto producer = std::make_unique<RabbitMQProducer>(
        configuration, routing_keys, exchange_name, exchange_type, producer_id.fetch_add(1), persistent, shutdown_called, log);

@@ -57,7 +57,8 @@ public:
    SinkToStoragePtr write(
        const ASTPtr & query,
        const StorageMetadataPtr & metadata_snapshot,
-        ContextPtr context) override;
+        ContextPtr context,
+        bool async_insert) override;

    /// We want to control the number of rows in a chunk inserted into RabbitMQ
    bool prefersLargeBlocks() const override { return false; }

@@ -461,7 +461,7 @@ Pipe StorageEmbeddedRocksDB::read(
}

SinkToStoragePtr StorageEmbeddedRocksDB::write(
-    const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr /*context*/)
+    const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr /*context*/, bool /*async_insert*/)
{
    return std::make_shared<EmbeddedRocksDBSink>(*this, metadata_snapshot);
}

@@ -48,7 +48,7 @@ public:
        size_t max_block_size,
        size_t num_streams) override;

-    SinkToStoragePtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr context) override;
+    SinkToStoragePtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr context, bool async_insert) override;
    void truncate(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, ContextPtr, TableExclusiveLockHolder &) override;

    void checkMutationIsPossible(const MutationCommands & commands, const Settings & settings) const override;

@@ -656,7 +656,7 @@ private:
};

-SinkToStoragePtr StorageBuffer::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr /*context*/)
+SinkToStoragePtr StorageBuffer::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr /*context*/, bool /*async_insert*/)
{
    return std::make_shared<BufferSink>(*this, metadata_snapshot);
}

@@ -88,7 +88,7 @@ public:

    bool supportsSubcolumns() const override { return true; }

-    SinkToStoragePtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr context) override;
+    SinkToStoragePtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr context, bool /*async_insert*/) override;

    void startup() override;
    /// Flush all buffers into the subordinate table and stop background thread.

@@ -847,7 +847,7 @@ private:
/** Execute subquery node and put result in mutable context temporary table.
  * Returns table node that is initialized with temporary table storage.
  */
-QueryTreeNodePtr executeSubqueryNode(const QueryTreeNodePtr & subquery_node,
+TableNodePtr executeSubqueryNode(const QueryTreeNodePtr & subquery_node,
    ContextMutablePtr & mutable_context,
    size_t subquery_depth)
{

@@ -897,7 +897,7 @@ QueryTreeNodePtr executeSubqueryNode(const QueryTreeNodePtr & subquery_node,
    auto temporary_table_expression_node = std::make_shared<TableNode>(external_storage, mutable_context);
    temporary_table_expression_node->setTemporaryTableName(temporary_table_name);

-    auto table_out = external_storage->write({}, external_storage->getInMemoryMetadataPtr(), mutable_context);
+    auto table_out = external_storage->write({}, external_storage->getInMemoryMetadataPtr(), mutable_context, /*async_insert=*/false);
    auto io = interpreter.execute();
    io.pipeline.complete(std::move(table_out));
    CompletedPipelineExecutor executor(io.pipeline);

@@ -943,8 +943,14 @@ QueryTreeNodePtr buildQueryTreeDistributed(SelectQueryInfo & query_info,
    }
    else
    {
-        auto resolved_remote_storage_id = query_context->resolveStorageID(remote_storage_id);
-        auto storage = std::make_shared<StorageDummy>(resolved_remote_storage_id, distributed_storage_snapshot->metadata->getColumns());
+        auto resolved_remote_storage_id = remote_storage_id;
+        // In case of cross-replication we don't know what database is used for the table.
+        // `storage_id.hasDatabase()` can return false only on the initiator node.
+        // Each shard will use the default database (in the case of cross-replication shards may have different defaults).
+        if (remote_storage_id.hasDatabase())
+            resolved_remote_storage_id = query_context->resolveStorageID(remote_storage_id);
+
+        auto storage = std::make_shared<StorageDummy>(resolved_remote_storage_id, distributed_storage_snapshot->metadata->getColumns(), distributed_storage_snapshot->object_columns);
        auto table_node = std::make_shared<TableNode>(std::move(storage), query_context);

        if (table_expression_modifiers)

@@ -1001,6 +1007,7 @@ QueryTreeNodePtr buildQueryTreeDistributed(SelectQueryInfo & query_info,
            planner_context->getMutableQueryContext(),
            global_in_or_join_node.subquery_depth);
+        temporary_table_expression_node->setAlias(join_right_table_expression->getAlias());

        replacement_map.emplace(join_right_table_expression.get(), std::move(temporary_table_expression_node));
        continue;
    }

@@ -1014,6 +1021,7 @@ QueryTreeNodePtr buildQueryTreeDistributed(SelectQueryInfo & query_info,
        auto temporary_table_expression_node = executeSubqueryNode(in_function_subquery_node,
            planner_context->getMutableQueryContext(),
            global_in_or_join_node.subquery_depth);

        in_function_subquery_node = std::move(temporary_table_expression_node);
    }
    else

@@ -1057,9 +1065,8 @@ void StorageDistributed::read(
        storage_snapshot,
        remote_storage_id,
        remote_table_function_ptr);

-    header = InterpreterSelectQueryAnalyzer::getSampleBlock(query_tree_distributed, local_context, SelectQueryOptions(processed_stage).analyze());
+    query_ast = queryNodeToSelectQuery(query_tree_distributed);
+    header = InterpreterSelectQueryAnalyzer::getSampleBlock(query_ast, local_context, SelectQueryOptions(processed_stage).analyze());
}
else
{

@@ -1132,7 +1139,7 @@ void StorageDistributed::read(
}

-SinkToStoragePtr StorageDistributed::write(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, ContextPtr local_context)
+SinkToStoragePtr StorageDistributed::write(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, ContextPtr local_context, bool /*async_insert*/)
{
    auto cluster = getCluster();
    const auto & settings = local_context->getSettingsRef();

@@ -118,7 +118,7 @@ public:
    bool supportsParallelInsert() const override { return true; }
    std::optional<UInt64> totalBytes(const Settings &) const override;

-    SinkToStoragePtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr context) override;
+    SinkToStoragePtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr context, bool /*async_insert*/) override;

    std::optional<QueryPipeline> distributedWrite(const ASTInsertQuery & query, ContextPtr context) override;

@@ -9,8 +9,9 @@
namespace DB
{

-StorageDummy::StorageDummy(const StorageID & table_id_, const ColumnsDescription & columns_)
+StorageDummy::StorageDummy(const StorageID & table_id_, const ColumnsDescription & columns_, ColumnsDescription object_columns_)
    : IStorage(table_id_)
+    , object_columns(std::move(object_columns_))
{
    StorageInMemoryMetadata storage_metadata;
    storage_metadata.setColumns(columns_);

@@ -11,7 +11,7 @@ namespace DB
class StorageDummy : public IStorage
{
public:
-    StorageDummy(const StorageID & table_id_, const ColumnsDescription & columns_);
+    StorageDummy(const StorageID & table_id_, const ColumnsDescription & columns_, ColumnsDescription object_columns_ = {});

    std::string getName() const override { return "StorageDummy"; }

@@ -22,6 +22,11 @@ public:
    bool supportsDynamicSubcolumns() const override { return true; }
    bool canMoveConditionsToPrewhere() const override { return false; }

+    StorageSnapshotPtr getStorageSnapshot(const StorageMetadataPtr & metadata_snapshot, ContextPtr /*query_context*/) const override
+    {
+        return std::make_shared<StorageSnapshot>(*this, metadata_snapshot, object_columns);
+    }
+
    QueryProcessingStage::Enum getQueryProcessingStage(
        ContextPtr local_context,
        QueryProcessingStage::Enum to_stage,

@@ -37,6 +42,8 @@ public:
        QueryProcessingStage::Enum processed_stage,
        size_t max_block_size,
        size_t num_streams) override;
+private:
+    const ColumnsDescription object_columns;
};

class ReadFromDummy : public SourceStepWithFilter

@@ -1049,7 +1049,8 @@ private:
SinkToStoragePtr StorageFile::write(
    const ASTPtr & query,
    const StorageMetadataPtr & metadata_snapshot,
-    ContextPtr context)
+    ContextPtr context,
+    bool /*async_insert*/)
{
    if (format_name == "Distributed")
        throw Exception(ErrorCodes::NOT_IMPLEMENTED, "Method write is not implemented for Distributed format");

@@ -50,7 +50,8 @@ public:
    SinkToStoragePtr write(
        const ASTPtr & query,
        const StorageMetadataPtr & /*metadata_snapshot*/,
-        ContextPtr context) override;
+        ContextPtr context,
+        bool async_insert) override;

    void truncate(
        const ASTPtr & /*query*/,
Some files were not shown because too many files have changed in this diff.