diff --git a/CHANGELOG.md b/CHANGELOG.md index 1ccd4f9846d..72372c8fac4 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,4 +1,5 @@ ### Table of Contents +**[ClickHouse release v23.5, 2023-06-08](#235)**
**[ClickHouse release v23.4, 2023-04-26](#234)**
**[ClickHouse release v23.3 LTS, 2023-03-30](#233)**
**[ClickHouse release v23.2, 2023-02-23](#232)**
@@ -7,6 +8,258 @@ # 2023 Changelog +### ClickHouse release 23.5, 2023-06-08 + +#### Upgrade Notes +* Compress marks and primary key by default. It significantly reduces the cold query time. Upgrade notes: support for compressed marks and primary key was added in version 22.9. If you turned on compressed marks or primary key, or installed version 23.5 or newer, which has compressed marks and primary key on by default, you will not be able to downgrade to version 22.8 or earlier. You can also explicitly disable compressed marks or primary keys by specifying the `compress_marks` and `compress_primary_key` settings in the `<merge_tree>` section of the server configuration file. **Upgrade notes:** If you upgrade from versions prior to 22.9, you should either upgrade all replicas at once or disable the compression before the upgrade, or upgrade through an intermediate version where the compressed marks are supported but not enabled by default, such as 23.3. [#42587](https://github.com/ClickHouse/ClickHouse/pull/42587) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Make local object storage work consistently with S3 object storage, fix a problem with append (closes [#48465](https://github.com/ClickHouse/ClickHouse/issues/48465)), and make it configurable as an independent storage. The change is backward incompatible because the cache on top of local object storage is not compatible with previous versions. [#48791](https://github.com/ClickHouse/ClickHouse/pull/48791) ([Kseniia Sumarokova](https://github.com/kssenii)). +* The experimental feature "in-memory data parts" is removed. The data format is still supported, but the settings are no-ops, and compact or wide parts will be used instead. This closes [#45409](https://github.com/ClickHouse/ClickHouse/issues/45409). [#49429](https://github.com/ClickHouse/ClickHouse/pull/49429) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Changed default values of settings `parallelize_output_from_storages` and `input_format_parquet_preserve_order`. This allows ClickHouse to reorder rows when reading from files (e.g. CSV or Parquet), greatly improving performance in many cases. To restore the old behavior of preserving order, use `parallelize_output_from_storages = 0`, `input_format_parquet_preserve_order = 1`. [#49479](https://github.com/ClickHouse/ClickHouse/pull/49479) ([Michael Kolupaev](https://github.com/al13n321)). +* Make projections production-ready. Add the `optimize_use_projections` setting to control whether projections will be selected for SELECT queries. The setting `allow_experimental_projection_optimization` is obsolete and does nothing. [#49719](https://github.com/ClickHouse/ClickHouse/pull/49719) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Mark `joinGet` as non-deterministic (like `dictGet`). This allows using them in mutations without an extra setting. [#49843](https://github.com/ClickHouse/ClickHouse/pull/49843) ([Azat Khuzhin](https://github.com/azat)). +* Revert the "`groupArray` returns cannot be nullable" change (due to binary compatibility breakage for `groupArray`/`groupArrayLast`/`groupArraySample` over `Nullable` types, which would likely lead to `TOO_LARGE_ARRAY_SIZE` or `CANNOT_READ_ALL_DATA`). [#49971](https://github.com/ClickHouse/ClickHouse/pull/49971) ([Azat Khuzhin](https://github.com/azat)). +* Setting `enable_memory_bound_merging_of_aggregation_results` is enabled by default. If you update from a version prior to 22.12, we recommend setting this flag to `false` until the update is finished. 
[#50319](https://github.com/ClickHouse/ClickHouse/pull/50319) ([Nikita Taranov](https://github.com/nickitat)). + +#### New Feature +* Added the storage engine AzureBlobStorage and the azureBlobStorage table function. The supported set of features is very similar to the S3 storage/table function. [#50604](https://github.com/ClickHouse/ClickHouse/pull/50604) ([alesapin](https://github.com/alesapin)) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)). +* Added a native ClickHouse Keeper CLI client; it is available as `clickhouse keeper-client`. [#47414](https://github.com/ClickHouse/ClickHouse/pull/47414) ([pufit](https://github.com/pufit)). +* Add the `urlCluster` table function. Refactor all *Cluster table functions to reduce code duplication. Make schema inference work for all possible *Cluster function signatures and for named collections. Closes [#38499](https://github.com/ClickHouse/ClickHouse/issues/38499). [#45427](https://github.com/ClickHouse/ClickHouse/pull/45427) ([attack204](https://github.com/attack204)), Pavel Kruglov. +* The query cache can now be used for production workloads. [#47977](https://github.com/ClickHouse/ClickHouse/pull/47977) ([Robert Schulze](https://github.com/rschu1ze)). The query cache now supports queries with the totals and extremes modifiers. [#48853](https://github.com/ClickHouse/ClickHouse/pull/48853) ([Robert Schulze](https://github.com/rschu1ze)). Mark the `allow_experimental_query_cache` setting as obsolete for backward compatibility. It was removed in https://github.com/ClickHouse/ClickHouse/pull/47977. [#49934](https://github.com/ClickHouse/ClickHouse/pull/49934) ([Timur Solodovnikov](https://github.com/tsolodov)). +* Geographical data types (`Point`, `Ring`, `Polygon`, and `MultiPolygon`) are production-ready. [#50022](https://github.com/ClickHouse/ClickHouse/pull/50022) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Add schema inference to the PostgreSQL, MySQL, MeiliSearch, and SQLite table engines. Closes [#49972](https://github.com/ClickHouse/ClickHouse/issues/49972). [#50000](https://github.com/ClickHouse/ClickHouse/pull/50000) ([Nikolay Degterinsky](https://github.com/evillique)). +* The password type in queries like `CREATE USER u IDENTIFIED BY 'p'` will be set automatically according to the `default_password_type` setting in `config.xml` on the server. Closes [#42915](https://github.com/ClickHouse/ClickHouse/issues/42915). [#44674](https://github.com/ClickHouse/ClickHouse/pull/44674) ([Nikolay Degterinsky](https://github.com/evillique)). +* Add the bcrypt password authentication type. Closes [#34599](https://github.com/ClickHouse/ClickHouse/issues/34599). [#44905](https://github.com/ClickHouse/ClickHouse/pull/44905) ([Nikolay Degterinsky](https://github.com/evillique)). +* Introduce the new keyword `INTO OUTFILE 'file.txt' APPEND`. [#48880](https://github.com/ClickHouse/ClickHouse/pull/48880) ([alekar](https://github.com/alekar)). +* Added the `system.zookeeper_connection` table that shows information about Keeper connections. [#45245](https://github.com/ClickHouse/ClickHouse/pull/45245) ([mateng915](https://github.com/mateng0915)). +* Add a new function `generateRandomStructure` that generates a random table structure. It can be used in combination with the table function `generateRandom`. [#47409](https://github.com/ClickHouse/ClickHouse/pull/47409) ([Kruglov Pavel](https://github.com/Avogar)). +* Allow the use of `CASE` without an `ELSE` branch and extend `transform` to deal with more types. 
Also fix some issues that made `transform` return incorrect results when decimal types were mixed with other numeric types. [#48300](https://github.com/ClickHouse/ClickHouse/pull/48300) ([Salvatore Mesoraca](https://github.com/aiven-sal)). This closes #2655. This closes #9596. This closes #38666. +* Added [server-side encryption using KMS keys](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingKMSEncryption.html) with S3 tables, and the `header` setting with S3 disks. Closes [#48723](https://github.com/ClickHouse/ClickHouse/issues/48723). [#48724](https://github.com/ClickHouse/ClickHouse/pull/48724) ([Johann Gan](https://github.com/johanngan)). +* Add a MemoryTracker for the background tasks (merges and mutations). Introduces the `merges_mutations_memory_usage_soft_limit` and `merges_mutations_memory_usage_to_ram_ratio` settings that represent the soft memory limit for merges and mutations. If this limit is reached, ClickHouse won't schedule new merge or mutation tasks. Also, the `MergesMutationsMemoryTracking` metric is introduced to allow observing the current memory usage of background tasks. Resubmit of [#46089](https://github.com/ClickHouse/ClickHouse/issues/46089). Closes [#48774](https://github.com/ClickHouse/ClickHouse/issues/48774). [#48787](https://github.com/ClickHouse/ClickHouse/pull/48787) ([Dmitry Novik](https://github.com/novikd)). +* Function `dotProduct` now works for arrays. [#49050](https://github.com/ClickHouse/ClickHouse/pull/49050) ([FFFFFFFHHHHHHH](https://github.com/FFFFFFFHHHHHHH)). +* Support the `SHOW INDEX` statement to improve compatibility with MySQL. [#49158](https://github.com/ClickHouse/ClickHouse/pull/49158) ([Robert Schulze](https://github.com/rschu1ze)). +* Add support for the virtual columns `_file` and `_path` to the table function `url`, and improve the error message for table function `url`. Resolves [#49231](https://github.com/ClickHouse/ClickHouse/issues/49231) and [#49232](https://github.com/ClickHouse/ClickHouse/issues/49232). [#49356](https://github.com/ClickHouse/ClickHouse/pull/49356) ([Ziyi Tan](https://github.com/Ziy1-Tan)). +* Add the `grants` field to the users.xml file, which allows specifying grants for users. [#49381](https://github.com/ClickHouse/ClickHouse/pull/49381) ([pufit](https://github.com/pufit)). +* Support FULL/RIGHT JOIN with the grace hash join algorithm. [#49483](https://github.com/ClickHouse/ClickHouse/pull/49483) ([lgbo](https://github.com/lgbo-ustc)). +* The `WITH FILL` modifier groups filling by sorting prefix. Controlled by the `use_with_fill_by_sorting_prefix` setting (enabled by default). Related to [#33203](https://github.com/ClickHouse/ClickHouse/issues/33203)#issuecomment-1418736794. [#49503](https://github.com/ClickHouse/ClickHouse/pull/49503) ([Igor Nikonov](https://github.com/devcrafter)). +* clickhouse-client now accepts queries after `--multiquery` when `--query` (or `-q`) is absent. Example: `clickhouse-client --multiquery "select 1; select 2;"`. [#49870](https://github.com/ClickHouse/ClickHouse/pull/49870) ([Alexey Gerasimchuk](https://github.com/Demilivor)). +* Add a separate `handshake_timeout` for receiving the Hello packet from a replica. Closes [#48854](https://github.com/ClickHouse/ClickHouse/issues/48854). [#49948](https://github.com/ClickHouse/ClickHouse/pull/49948) ([Kruglov Pavel](https://github.com/Avogar)). +* Added a function `space` which repeats a space as many times as specified. [#50103](https://github.com/ClickHouse/ClickHouse/pull/50103) ([Robert Schulze](https://github.com/rschu1ze)). 
+* Added --input_format_csv_trim_whitespaces option. [#50215](https://github.com/ClickHouse/ClickHouse/pull/50215) ([Alexey Gerasimchuk](https://github.com/Demilivor)). +* Allow the `dictGetAll` function for regexp tree dictionaries to return values from multiple matches as arrays. Closes [#50254](https://github.com/ClickHouse/ClickHouse/issues/50254). [#50255](https://github.com/ClickHouse/ClickHouse/pull/50255) ([Johann Gan](https://github.com/johanngan)). +* Added `toLastDayOfWeek` function to round a date or a date with time up to the nearest Saturday or Sunday. [#50315](https://github.com/ClickHouse/ClickHouse/pull/50315) ([Victor Krasnov](https://github.com/sirvickr)). +* Ability to ignore a skip index by specifying `ignore_data_skipping_indices`. [#50329](https://github.com/ClickHouse/ClickHouse/pull/50329) ([Boris Kuschel](https://github.com/bkuschel)). +* Add `system.user_processes` table and `SHOW USER PROCESSES` query to show memory info and ProfileEvents on user level. [#50492](https://github.com/ClickHouse/ClickHouse/pull/50492) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)). +* Add server and format settings `display_secrets_in_show_and_select` for displaying secrets of tables, databases, table functions, and dictionaries. Add privilege `displaySecretsInShowAndSelect` controlling which users can view secrets. [#46528](https://github.com/ClickHouse/ClickHouse/pull/46528) ([Mike Kot](https://github.com/myrrc)). +* Allow to set up a ROW POLICY for all tables that belong to a DATABASE. [#47640](https://github.com/ClickHouse/ClickHouse/pull/47640) ([Ilya Golshtein](https://github.com/ilejn)). + +#### Performance Improvement +* Compress marks and primary key by default. It significantly reduces the cold query time. Upgrade notes: the support for compressed marks and primary key has been added in version 22.9. If you turned on compressed marks or primary key or installed version 23.5 or newer, which has compressed marks or primary key on by default, you will not be able to downgrade to version 22.8 or earlier. You can also explicitly disable compressed marks or primary keys by specifying the `compress_marks` and `compress_primary_key` settings in the `` section of the server configuration file. [#42587](https://github.com/ClickHouse/ClickHouse/pull/42587) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* New setting s3_max_inflight_parts_for_one_file sets the limit of concurrently loaded parts with multipart upload request in scope of one file. [#49961](https://github.com/ClickHouse/ClickHouse/pull/49961) ([Sema Checherinda](https://github.com/CheSema)). +* When reading from multiple files reduce parallel parsing threads for each file. Resolves [#42192](https://github.com/ClickHouse/ClickHouse/issues/42192). [#46661](https://github.com/ClickHouse/ClickHouse/pull/46661) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)). +* Use aggregate projection only if it reads fewer granules than normal reading. It should help in case if query hits the PK of the table, but not the projection. Fixes [#49150](https://github.com/ClickHouse/ClickHouse/issues/49150). [#49417](https://github.com/ClickHouse/ClickHouse/pull/49417) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Do not store blocks in `ANY` hash join if nothing is inserted. [#48633](https://github.com/ClickHouse/ClickHouse/pull/48633) ([vdimir](https://github.com/vdimir)). +* Fixes aggregate combinator `-If` when JIT compiled, and enable JIT compilation for aggregate functions. 
Closes [#48120](https://github.com/ClickHouse/ClickHouse/issues/48120). [#49083](https://github.com/ClickHouse/ClickHouse/pull/49083) ([Igor Nikonov](https://github.com/devcrafter)). +* For reading from remote tables we use smaller tasks (instead of reading the whole part) to make tasks stealing work * task size is determined by size of columns to read * always use 1mb buffers for reading from s3 * boundaries of cache segments aligned to 1mb so they have decent size even with small tasks. it also should prevent fragmentation. [#49287](https://github.com/ClickHouse/ClickHouse/pull/49287) ([Nikita Taranov](https://github.com/nickitat)). +* Introduced settings: - `merge_max_block_size_bytes` to limit the amount of memory used for background operations. - `vertical_merge_algorithm_min_bytes_to_activate` to add another condition to activate vertical merges. [#49313](https://github.com/ClickHouse/ClickHouse/pull/49313) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)). +* Default size of a read buffer for reading from local filesystem changed to a slightly better value. Also two new settings are introduced: `max_read_buffer_size_local_fs` and `max_read_buffer_size_remote_fs`. [#49321](https://github.com/ClickHouse/ClickHouse/pull/49321) ([Nikita Taranov](https://github.com/nickitat)). +* Improve memory usage and speed of `SPARSE_HASHED`/`HASHED` dictionaries (e.g. `SPARSE_HASHED` now eats 2.6x less memory, and is ~2x faster). [#49380](https://github.com/ClickHouse/ClickHouse/pull/49380) ([Azat Khuzhin](https://github.com/azat)). +* Optimize the `system.query_log` and `system.query_thread_log` tables by applying `LowCardinality` when appropriate. The queries over these tables will be faster. [#49530](https://github.com/ClickHouse/ClickHouse/pull/49530) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Better performance when reading local `Parquet` files (through parallel reading). [#49539](https://github.com/ClickHouse/ClickHouse/pull/49539) ([Michael Kolupaev](https://github.com/al13n321)). +* Improve the performance of `RIGHT/FULL JOIN` by up to 2 times in certain scenarios, especially when joining a small left table with a large right table. [#49585](https://github.com/ClickHouse/ClickHouse/pull/49585) ([lgbo](https://github.com/lgbo-ustc)). +* Improve performance of BLAKE3 by 11% by enabling LTO for Rust. [#49600](https://github.com/ClickHouse/ClickHouse/pull/49600) ([Azat Khuzhin](https://github.com/azat)). Now it is on par with C++. +* Optimize the structure of the `system.opentelemetry_span_log`. Use `LowCardinality` where appropriate. Although this table is generally stupid (it is using the Map data type even for common attributes), it will be slightly better. [#49647](https://github.com/ClickHouse/ClickHouse/pull/49647) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Try to reserve hash table's size in `grace_hash` join. [#49816](https://github.com/ClickHouse/ClickHouse/pull/49816) ([lgbo](https://github.com/lgbo-ustc)). +* Parallel merge of `uniqExactIf` states. Closes [#49885](https://github.com/ClickHouse/ClickHouse/issues/49885). [#50285](https://github.com/ClickHouse/ClickHouse/pull/50285) ([flynn](https://github.com/ucasfl)). +* Keeper improvement: add `CheckNotExists` request to Keeper, which allows to improve the performance of Replicated tables. [#48897](https://github.com/ClickHouse/ClickHouse/pull/48897) ([Antonio Andelic](https://github.com/antonio2368)). 
+* Keeper performance improvements: avoid serializing the same request twice while processing, and cache deserialization results of large requests. Controlled by the new coordination setting `min_request_size_for_cache`. [#49004](https://github.com/ClickHouse/ClickHouse/pull/49004) ([Antonio Andelic](https://github.com/antonio2368)). +* Reduced the number of `List` ZooKeeper requests when selecting parts to merge while a lot of partitions have nothing to merge. [#49637](https://github.com/ClickHouse/ClickHouse/pull/49637) ([Alexander Tokmakov](https://github.com/tavplubix)). +* Rework locking in the FS cache. [#44985](https://github.com/ClickHouse/ClickHouse/pull/44985) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Disable pure parallel replicas if the trivial count optimization is possible. [#50594](https://github.com/ClickHouse/ClickHouse/pull/50594) ([Raúl Marín](https://github.com/Algunenano)). +* Don't send a HEAD request for all keys in Iceberg schema inference, only for keys that are used for reading data. [#50203](https://github.com/ClickHouse/ClickHouse/pull/50203) ([Kruglov Pavel](https://github.com/Avogar)). +* Setting `enable_memory_bound_merging_of_aggregation_results` is enabled by default. [#50319](https://github.com/ClickHouse/ClickHouse/pull/50319) ([Nikita Taranov](https://github.com/nickitat)). + +#### Experimental Feature +* The `DEFLATE_QPL` codec lowers the minimum SIMD requirement to SSE 4.2 ([doc change in QPL](https://github.com/intel/qpl/commit/3f8f5cea27739f5261e8fd577dc233ffe88bf679)). Intel® QPL relies on a run-time kernel dispatcher and CPUID check to choose the best available implementation (SSE/AVX2/AVX-512). The CMake file for the QPL build in ClickHouse was restructured to align with the latest upstream QPL. [#49811](https://github.com/ClickHouse/ClickHouse/pull/49811) ([jasperzhu](https://github.com/jinjunzh)). +* Add initial support for JOINs with pure parallel replicas. [#49544](https://github.com/ClickHouse/ClickHouse/pull/49544) ([Raúl Marín](https://github.com/Algunenano)). +* More parallelism on `Outdated` parts removal with "zero-copy replication". [#49630](https://github.com/ClickHouse/ClickHouse/pull/49630) ([Alexander Tokmakov](https://github.com/tavplubix)). +* Parallel Replicas: 1) Fixed an error `NOT_FOUND_COLUMN_IN_BLOCK` when using parallel replicas with non-replicated storage and the setting `parallel_replicas_for_non_replicated_merge_tree` disabled. 2) Now `allow_experimental_parallel_reading_from_replicas` has 3 possible values: 0 - disabled; 1 - enabled, silently disabled in case of failure (e.g. FINAL or JOIN); 2 - enabled, throw an exception in case of failure. 3) If the FINAL modifier is used in a SELECT query and parallel replicas are enabled, ClickHouse will try to disable them if `allow_experimental_parallel_reading_from_replicas` is set to 1 and throw an exception otherwise. [#50195](https://github.com/ClickHouse/ClickHouse/pull/50195) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)). +* When parallel replicas are enabled, they will always skip unavailable servers (the behavior is controlled by the setting `skip_unavailable_shards`, which is enabled by default and can only be disabled). This closes [#48565](https://github.com/ClickHouse/ClickHouse/issues/48565). [#50293](https://github.com/ClickHouse/ClickHouse/pull/50293) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)). + +#### Improvement +* The `BACKUP` command will not decrypt data from encrypted disks while making a backup. 
Instead the data will be stored in a backup in encrypted form. Such backups can be restored only to an encrypted disk with the same (or extended) list of encryption keys. [#48896](https://github.com/ClickHouse/ClickHouse/pull/48896) ([Vitaly Baranov](https://github.com/vitlibar)). +* Added possibility to use temporary tables in FROM part of ATTACH PARTITION FROM and REPLACE PARTITION FROM. [#49436](https://github.com/ClickHouse/ClickHouse/pull/49436) ([Roman Vasin](https://github.com/rvasin)). +* Added setting `async_insert` for `MergeTree` tables. It has the same meaning as query-level setting `async_insert` and enables asynchronous inserts for specific table. Note: it doesn't take effect for insert queries from `clickhouse-client`, use query-level setting in that case. [#49122](https://github.com/ClickHouse/ClickHouse/pull/49122) ([Anton Popov](https://github.com/CurtizJ)). +* Add support for size suffixes in quota creation statement parameters. [#49087](https://github.com/ClickHouse/ClickHouse/pull/49087) ([Eridanus](https://github.com/Eridanus117)). +* Extend `first_value` and `last_value` to accept NULL. [#46467](https://github.com/ClickHouse/ClickHouse/pull/46467) ([lgbo](https://github.com/lgbo-ustc)). +* Add alias `str_to_map` and `mapFromString` for `extractKeyValuePairs`. closes https://github.com/clickhouse/clickhouse/issues/47185. [#49466](https://github.com/ClickHouse/ClickHouse/pull/49466) ([flynn](https://github.com/ucasfl)). +* Add support for CGroup version 2 for asynchronous metrics about the memory usage and availability. This closes [#37983](https://github.com/ClickHouse/ClickHouse/issues/37983). [#45999](https://github.com/ClickHouse/ClickHouse/pull/45999) ([sichenzhao](https://github.com/sichenzhao)). +* Cluster table functions should always skip unavailable shards. close [#46314](https://github.com/ClickHouse/ClickHouse/issues/46314). [#46765](https://github.com/ClickHouse/ClickHouse/pull/46765) ([zk_kiger](https://github.com/zk-kiger)). +* Allow CSV file to contain empty columns in its header. [#47496](https://github.com/ClickHouse/ClickHouse/pull/47496) ([你不要过来啊](https://github.com/iiiuwioajdks)). +* Add Google Cloud Storage S3 compatible table function `gcs`. Like the `oss` and `cosn` functions, it is just an alias over the `s3` table function, and it does not bring any new features. [#47815](https://github.com/ClickHouse/ClickHouse/pull/47815) ([Kuba Kaflik](https://github.com/jkaflik)). +* Add ability to use strict parts size for S3 (compatibility with CloudFlare R2 S3 Storage). [#48492](https://github.com/ClickHouse/ClickHouse/pull/48492) ([Azat Khuzhin](https://github.com/azat)). +* Added new columns with info about `Replicated` database replicas to `system.clusters`: `database_shard_name`, `database_replica_name`, `is_active`. Added an optional `FROM SHARD` clause to `SYSTEM DROP DATABASE REPLICA` query. [#48548](https://github.com/ClickHouse/ClickHouse/pull/48548) ([Alexander Tokmakov](https://github.com/tavplubix)). +* Add a new column `zookeeper_name` in system.replicas, to indicate on which (auxiliary) zookeeper cluster the replicated table's metadata is stored. [#48549](https://github.com/ClickHouse/ClickHouse/pull/48549) ([cangyin](https://github.com/cangyin)). +* `IN` operator support the comparison of `Date` and `Date32`. Closes [#48736](https://github.com/ClickHouse/ClickHouse/issues/48736). [#48806](https://github.com/ClickHouse/ClickHouse/pull/48806) ([flynn](https://github.com/ucasfl)). 
+* Support for erasure codes in `HDFS`. Authors: @M1eyu2018, @tomscut. [#48833](https://github.com/ClickHouse/ClickHouse/pull/48833) ([M1eyu](https://github.com/M1eyu2018)). +* Implement `SYSTEM DROP REPLICA` from auxiliary ZooKeeper clusters, which may close [#48931](https://github.com/ClickHouse/ClickHouse/issues/48931). [#48932](https://github.com/ClickHouse/ClickHouse/pull/48932) ([wangxiaobo](https://github.com/wzb5212)). +* Add the `Array` data type to MongoDB. Closes [#48598](https://github.com/ClickHouse/ClickHouse/issues/48598). [#48983](https://github.com/ClickHouse/ClickHouse/pull/48983) ([Nikolay Degterinsky](https://github.com/evillique)). +* Support storing `Interval` data types in tables. [#49085](https://github.com/ClickHouse/ClickHouse/pull/49085) ([larryluogit](https://github.com/larryluogit)). +* Allow using the `ntile` window function without an explicit window frame definition: `ntile(3) OVER (ORDER BY a)`. Closes [#46763](https://github.com/ClickHouse/ClickHouse/issues/46763). [#49093](https://github.com/ClickHouse/ClickHouse/pull/49093) ([vdimir](https://github.com/vdimir)). +* Added settings (`number_of_mutations_to_delay`, `number_of_mutations_to_throw`) to delay or throw `ALTER` queries that create mutations (`ALTER UPDATE`, `ALTER DELETE`, `ALTER MODIFY COLUMN`, ...) in case a table already has a lot of unfinished mutations. [#49117](https://github.com/ClickHouse/ClickHouse/pull/49117) ([Anton Popov](https://github.com/CurtizJ)). +* Catch the exception from `create_directories` in the filesystem cache. [#49203](https://github.com/ClickHouse/ClickHouse/pull/49203) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Copy embedded examples to a new field `example` in `system.functions` to supplement the field `description`. [#49222](https://github.com/ClickHouse/ClickHouse/pull/49222) ([Dan Roscigno](https://github.com/DanRoscigno)). +* Enable connection options for the MongoDB dictionary, e.g. `ssl=true` via the `<options>` element of the dictionary source configuration. [#49225](https://github.com/ClickHouse/ClickHouse/pull/49225) ([MikhailBurdukov](https://github.com/MikhailBurdukov)). +* Added an alias `asymptotic` for the `asymp` computational method of `kolmogorovSmirnovTest`. Improved documentation. [#49286](https://github.com/ClickHouse/ClickHouse/pull/49286) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)). +* Aggregation functions `groupBitAnd`/`groupBitOr`/`groupBitXor` now work on signed integer data. This makes them consistent with the behavior of the scalar functions `bitAnd`/`bitOr`/`bitXor`. [#49292](https://github.com/ClickHouse/ClickHouse/pull/49292) ([exmy](https://github.com/exmy)). +* Split function documentation into more fine-grained fields. [#49300](https://github.com/ClickHouse/ClickHouse/pull/49300) ([Robert Schulze](https://github.com/rschu1ze)). +* Use multiple threads shared between all tables within a server to load outdated data parts. The sizes of the pool and its queue are controlled by the `max_outdated_parts_loading_thread_pool_size` and `outdated_part_loading_thread_pool_queue_size` settings. [#49317](https://github.com/ClickHouse/ClickHouse/pull/49317) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)). +* Don't overestimate the size of processed data for `LowCardinality` columns when they share dictionaries between blocks. This closes [#49322](https://github.com/ClickHouse/ClickHouse/issues/49322). See also [#48745](https://github.com/ClickHouse/ClickHouse/issues/48745). 
[#49323](https://github.com/ClickHouse/ClickHouse/pull/49323) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Parquet writer now uses reasonable row group size when invoked through `OUTFILE`. [#49325](https://github.com/ClickHouse/ClickHouse/pull/49325) ([Michael Kolupaev](https://github.com/al13n321)). +* Allow restricted keywords like `ARRAY` as an alias if the alias is quoted. Closes [#49324](https://github.com/ClickHouse/ClickHouse/issues/49324). [#49360](https://github.com/ClickHouse/ClickHouse/pull/49360) ([Nikolay Degterinsky](https://github.com/evillique)). +* Data parts loading and deletion jobs were moved to shared server-wide pools instead of per-table pools. Pools sizes are controlled via settings `max_active_parts_loading_thread_pool_size`, `max_outdated_parts_loading_thread_pool_size` and `max_parts_cleaning_thread_pool_size` in top-level config. Table-level settings `max_part_loading_threads` and `max_part_removal_threads` became obsolete. [#49474](https://github.com/ClickHouse/ClickHouse/pull/49474) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)). +* Allow `?password=pass` in URL of the Play UI. Password is replaced in browser history. [#49505](https://github.com/ClickHouse/ClickHouse/pull/49505) ([Mike Kot](https://github.com/myrrc)). +* Allow reading zero-size objects from remote filesystems. (because empty files are not backup'd, so we might end up with zero blobs in metadata file). Closes [#49480](https://github.com/ClickHouse/ClickHouse/issues/49480). [#49519](https://github.com/ClickHouse/ClickHouse/pull/49519) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Attach thread MemoryTracker to `total_memory_tracker` after `ThreadGroup` detached. [#49527](https://github.com/ClickHouse/ClickHouse/pull/49527) ([Dmitry Novik](https://github.com/novikd)). +* Fix parameterized views when a query parameter is used multiple times in the query. [#49556](https://github.com/ClickHouse/ClickHouse/pull/49556) ([Azat Khuzhin](https://github.com/azat)). +* Release memory allocated for the last sent ProfileEvents snapshot in the context of a query. Followup [#47564](https://github.com/ClickHouse/ClickHouse/issues/47564). [#49561](https://github.com/ClickHouse/ClickHouse/pull/49561) ([Dmitry Novik](https://github.com/novikd)). +* Function "makeDate" now provides a MySQL-compatible overload (year & day of the year argument). [#49603](https://github.com/ClickHouse/ClickHouse/pull/49603) ([Robert Schulze](https://github.com/rschu1ze)). +* Support `dictionary` table function for `RegExpTreeDictionary`. [#49666](https://github.com/ClickHouse/ClickHouse/pull/49666) ([Han Fei](https://github.com/hanfei1991)). +* Added weighted fair IO scheduling policy. Added dynamic resource manager, which allows IO scheduling hierarchy to be updated in runtime w/o server restarts. [#49671](https://github.com/ClickHouse/ClickHouse/pull/49671) ([Sergei Trifonov](https://github.com/serxa)). +* Add compose request after multipart upload to GCS. This enables the usage of copy operation on objects uploaded with the multipart upload. It's recommended to set `s3_strict_upload_part_size` to some value because compose request can fail on objects created with parts of different sizes. [#49693](https://github.com/ClickHouse/ClickHouse/pull/49693) ([Antonio Andelic](https://github.com/antonio2368)). +* For the `extractKeyValuePairs` function: improve the "best-effort" parsing logic to accept `key_value_delimiter` as a valid part of the value. 
This also simplifies branching and might even speed things up a bit. [#49760](https://github.com/ClickHouse/ClickHouse/pull/49760) ([Arthur Passos](https://github.com/arthurpassos)). +* Add the `initial_query_id` field to `system.processors_profile_log`. [#49777](https://github.com/ClickHouse/ClickHouse/pull/49777) ([helifu](https://github.com/helifu)). +* System log tables can now have custom sorting keys. [#49778](https://github.com/ClickHouse/ClickHouse/pull/49778) ([helifu](https://github.com/helifu)). +* A new field `partitions` in `system.query_log` indicates which partitions participated in the calculation. [#49779](https://github.com/ClickHouse/ClickHouse/pull/49779) ([helifu](https://github.com/helifu)). +* Added the `enable_the_endpoint_id_with_zookeeper_name_prefix` setting for `ReplicatedMergeTree` (disabled by default). When enabled, it adds the ZooKeeper cluster name to the table's interserver communication endpoint. It avoids `Duplicate interserver IO endpoint` errors when having replicated tables with the same path but different auxiliary ZooKeepers. [#49780](https://github.com/ClickHouse/ClickHouse/pull/49780) ([helifu](https://github.com/helifu)). +* Add query parameters to `clickhouse-local`. Closes [#46561](https://github.com/ClickHouse/ClickHouse/issues/46561). [#49785](https://github.com/ClickHouse/ClickHouse/pull/49785) ([Nikolay Degterinsky](https://github.com/evillique)). +* Allow loading dictionaries and functions from YAML by default. In previous versions, it required editing the `dictionaries_config` or `user_defined_executable_functions_config` in the configuration file, as they expected `*.xml` files. [#49812](https://github.com/ClickHouse/ClickHouse/pull/49812) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* The Kafka table engine now allows using alias columns. [#49824](https://github.com/ClickHouse/ClickHouse/pull/49824) ([Aleksandr Musorin](https://github.com/AVMusorin)). +* Add a setting to limit the maximum number of pairs produced by `extractKeyValuePairs`, a safeguard to avoid using way too much memory. [#49836](https://github.com/ClickHouse/ClickHouse/pull/49836) ([Arthur Passos](https://github.com/arthurpassos)). +* Add support for the (unusual) case where the arguments in the `IN` operator are single-element tuples. [#49844](https://github.com/ClickHouse/ClickHouse/pull/49844) ([MikhailBurdukov](https://github.com/MikhailBurdukov)). +* The `bitHammingDistance` function supports the `String` and `FixedString` data types. Closes [#48827](https://github.com/ClickHouse/ClickHouse/issues/48827). [#49858](https://github.com/ClickHouse/ClickHouse/pull/49858) ([flynn](https://github.com/ucasfl)). +* Fix timeout resetting errors in the client on OS X. [#49863](https://github.com/ClickHouse/ClickHouse/pull/49863) ([alekar](https://github.com/alekar)). +* Add support for big integers, such as UInt128, Int128, UInt256, and Int256, in the function `bitCount`. This enables Hamming distance over large bit masks for AI applications. [#49867](https://github.com/ClickHouse/ClickHouse/pull/49867) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Fingerprints are used instead of key IDs for encrypted disks. This simplifies the configuration of encrypted disks. [#49882](https://github.com/ClickHouse/ClickHouse/pull/49882) ([Vitaly Baranov](https://github.com/vitlibar)). +* Add the UUID data type to PostgreSQL. Closes [#49739](https://github.com/ClickHouse/ClickHouse/issues/49739). 
[#49894](https://github.com/ClickHouse/ClickHouse/pull/49894) ([Nikolay Degterinsky](https://github.com/evillique)). +* Function `toUnixTimestamp` now accepts `Date` and `Date32` arguments. [#49989](https://github.com/ClickHouse/ClickHouse/pull/49989) ([Victor Krasnov](https://github.com/sirvickr)). +* Charge only server memory for dictionaries. [#49995](https://github.com/ClickHouse/ClickHouse/pull/49995) ([Azat Khuzhin](https://github.com/azat)). +* The server will allow using the `SQL_*` settings such as `SQL_AUTO_IS_NULL` as no-ops for MySQL compatibility. This closes [#49927](https://github.com/ClickHouse/ClickHouse/issues/49927). [#50013](https://github.com/ClickHouse/ClickHouse/pull/50013) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Preserve the `initial_query_id` for ON CLUSTER queries, which is useful for introspection (under `distributed_ddl_entry_format_version=5`). [#50015](https://github.com/ClickHouse/ClickHouse/pull/50015) ([Azat Khuzhin](https://github.com/azat)). +* Preserve backward compatibility for renamed settings by using aliases (`allow_experimental_projection_optimization` for `optimize_use_projections`, `allow_experimental_lightweight_delete` for `enable_lightweight_delete`). [#50044](https://github.com/ClickHouse/ClickHouse/pull/50044) ([Azat Khuzhin](https://github.com/azat)). +* Support passing an FQDN through the `my_hostname` setting to register a cluster node in Keeper. Add the `invisible` setting to support multiple compute groups; a compute group, as a cluster, is invisible to other compute groups. [#50186](https://github.com/ClickHouse/ClickHouse/pull/50186) ([Yangkuan Liu](https://github.com/LiuYangkuan)). +* Fix PostgreSQL reading all the data even though `LIMIT n` was specified. [#50187](https://github.com/ClickHouse/ClickHouse/pull/50187) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Add new profile events for queries with subqueries (`QueriesWithSubqueries`/`SelectQueriesWithSubqueries`/`InsertQueriesWithSubqueries`). [#50204](https://github.com/ClickHouse/ClickHouse/pull/50204) ([Azat Khuzhin](https://github.com/azat)). +* Add the `roles` field to the users.xml file, which allows specifying roles with grants via a config file. [#50278](https://github.com/ClickHouse/ClickHouse/pull/50278) ([pufit](https://github.com/pufit)). +* Report `CGroupCpuCfsPeriod` and `CGroupCpuCfsQuota` in AsynchronousMetrics. Respect cgroup v2 memory limits during server startup. [#50379](https://github.com/ClickHouse/ClickHouse/pull/50379) ([alekar](https://github.com/alekar)). +* Add a signal handler for SIGQUIT to work the same way as SIGINT. Closes [#50298](https://github.com/ClickHouse/ClickHouse/issues/50298). [#50435](https://github.com/ClickHouse/ClickHouse/pull/50435) ([Nikolay Degterinsky](https://github.com/evillique)). +* If JSON parsing fails due to the large size of the object, output the last position to allow debugging. [#50474](https://github.com/ClickHouse/ClickHouse/pull/50474) ([Valentin Alexeev](https://github.com/valentinalexeev)). +* Support decimals with non-fixed size. Closes [#49130](https://github.com/ClickHouse/ClickHouse/issues/49130). [#50586](https://github.com/ClickHouse/ClickHouse/pull/50586) ([Kruglov Pavel](https://github.com/Avogar)). + +#### Build/Testing/Packaging Improvement +* New and improved `keeper-bench`. 
Everything can be customized from a YAML/XML file: - request generator - each type of request generator can have a specific set of fields - multi requests can be generated just by doing the same under a `multi` key - for each request or subrequest in multi, a `weight` field can be defined to control distribution - define trees that need to be set up for a test run - hosts can be defined with all timeouts customizable, and it's possible to control how many sessions to generate for each host - integers defined with `min_value` and `max_value` fields are random number generators. [#48547](https://github.com/ClickHouse/ClickHouse/pull/48547) ([Antonio Andelic](https://github.com/antonio2368)). +* io_uring is not supported on macOS; don't choose it when running tests locally to avoid occasional failures. [#49250](https://github.com/ClickHouse/ClickHouse/pull/49250) ([Frank Chen](https://github.com/FrankChen021)). +* Support named fault injection for testing. [#49361](https://github.com/ClickHouse/ClickHouse/pull/49361) ([Han Fei](https://github.com/hanfei1991)). +* Allow running ClickHouse in an OS where the `prctl` (process control) syscall is not available, such as AWS Lambda. [#49538](https://github.com/ClickHouse/ClickHouse/pull/49538) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Fixed a build conflict between contrib/isa-l and isa-l in QPL. [#49296](https://github.com/ClickHouse/ClickHouse/issues/49296). [#49584](https://github.com/ClickHouse/ClickHouse/pull/49584) ([jasperzhu](https://github.com/jinjunzh)). +* Utilities are now only built if explicitly requested (`-DENABLE_UTILS=1`) instead of by default; this reduces link times in typical development builds. [#49620](https://github.com/ClickHouse/ClickHouse/pull/49620) ([Robert Schulze](https://github.com/rschu1ze)). +* Pull the build description of idxd-config into a separate CMake file to avoid accidental removal in the future. [#49651](https://github.com/ClickHouse/ClickHouse/pull/49651) ([jasperzhu](https://github.com/jinjunzh)). +* Add a CI check with the analyzer enabled in master. Follow-up to [#49562](https://github.com/ClickHouse/ClickHouse/issues/49562). [#49668](https://github.com/ClickHouse/ClickHouse/pull/49668) ([Dmitry Novik](https://github.com/novikd)). +* Switch to LLVM/clang 16. [#49678](https://github.com/ClickHouse/ClickHouse/pull/49678) ([Azat Khuzhin](https://github.com/azat)). +* Allow building ClickHouse with clang-17. [#49851](https://github.com/ClickHouse/ClickHouse/pull/49851) ([Alexey Milovidov](https://github.com/alexey-milovidov)). [#50410](https://github.com/ClickHouse/ClickHouse/pull/50410) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* ClickHouse is now easier to integrate into other CMake projects. [#49991](https://github.com/ClickHouse/ClickHouse/pull/49991) ([Amos Bird](https://github.com/amosbird)). (Which is strongly discouraged - Alexey Milovidov). +* Fix strange additional QEMU logging after [#47151](https://github.com/ClickHouse/ClickHouse/issues/47151), see https://s3.amazonaws.com/clickhouse-test-reports/50078/a4743996ee4f3583884d07bcd6501df0cfdaa346/stateless_tests__release__databasereplicated__[3_4].html. [#50442](https://github.com/ClickHouse/ClickHouse/pull/50442) ([Mikhail f. Shiryaev](https://github.com/Felixoid)). +* ClickHouse can work on Linux RISC-V 6.1.22. This closes [#50456](https://github.com/ClickHouse/ClickHouse/issues/50456). [#50457](https://github.com/ClickHouse/ClickHouse/pull/50457) ([Alexey Milovidov](https://github.com/alexey-milovidov)). 
+* Bump internal protobuf to v3.18 (fixes bogus CVE-2022-1941). [#50400](https://github.com/ClickHouse/ClickHouse/pull/50400) ([Robert Schulze](https://github.com/rschu1ze)). +* Bump internal libxml2 to v2.10.4 (fixes bogus CVE-2023-28484 and bogus CVE-2023-29469). [#50402](https://github.com/ClickHouse/ClickHouse/pull/50402) ([Robert Schulze](https://github.com/rschu1ze)). +* Bump c-ares to v1.19.1 (bogus CVE-2023-32067, bogus CVE-2023-31130, bogus CVE-2023-31147). [#50403](https://github.com/ClickHouse/ClickHouse/pull/50403) ([Robert Schulze](https://github.com/rschu1ze)). +* Fix bogus CVE-2022-2469 in libgsasl. [#50404](https://github.com/ClickHouse/ClickHouse/pull/50404) ([Robert Schulze](https://github.com/rschu1ze)). + +#### Bug Fix (user-visible misbehavior in an official stable release) + +* ActionsDAG: fix wrong optimization [#47584](https://github.com/ClickHouse/ClickHouse/pull/47584) ([Salvatore Mesoraca](https://github.com/aiven-sal)). +* Correctly handle concurrent snapshots in Keeper [#48466](https://github.com/ClickHouse/ClickHouse/pull/48466) ([Antonio Andelic](https://github.com/antonio2368)). +* MergeTreeMarksLoader holds DataPart instead of DataPartStorage [#48515](https://github.com/ClickHouse/ClickHouse/pull/48515) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)). +* Sequence state fix [#48603](https://github.com/ClickHouse/ClickHouse/pull/48603) ([Ilya Golshtein](https://github.com/ilejn)). +* Back/Restore concurrency check on previous fails [#48726](https://github.com/ClickHouse/ClickHouse/pull/48726) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)). +* Fix Attaching a table with non-existent ZK path does not increase the ReadonlyReplica metric [#48954](https://github.com/ClickHouse/ClickHouse/pull/48954) ([wangxiaobo](https://github.com/wzb5212)). +* Fix possible terminate called for uncaught exception in some places [#49112](https://github.com/ClickHouse/ClickHouse/pull/49112) ([Kruglov Pavel](https://github.com/Avogar)). +* Fix key not found error for queries with multiple StorageJoin [#49137](https://github.com/ClickHouse/ClickHouse/pull/49137) ([vdimir](https://github.com/vdimir)). +* Fix wrong query result when using nullable primary key [#49172](https://github.com/ClickHouse/ClickHouse/pull/49172) ([Duc Canh Le](https://github.com/canhld94)). +* Fix reinterpretAs*() on big endian machines [#49198](https://github.com/ClickHouse/ClickHouse/pull/49198) ([Suzy Wang](https://github.com/SuzyWangIBMer)). +* (Experimental zero-copy replication) Lock zero copy parts more atomically [#49211](https://github.com/ClickHouse/ClickHouse/pull/49211) ([alesapin](https://github.com/alesapin)). +* Fix race on Outdated parts loading [#49223](https://github.com/ClickHouse/ClickHouse/pull/49223) ([Alexander Tokmakov](https://github.com/tavplubix)). +* Fix all key value is null and group use rollup return wrong answer [#49282](https://github.com/ClickHouse/ClickHouse/pull/49282) ([Shuai li](https://github.com/loneylee)). +* Fix calculating load_factor for HASHED dictionaries with SHARDS [#49319](https://github.com/ClickHouse/ClickHouse/pull/49319) ([Azat Khuzhin](https://github.com/azat)). +* Disallow configuring compression CODECs for alias columns [#49363](https://github.com/ClickHouse/ClickHouse/pull/49363) ([Timur Solodovnikov](https://github.com/tsolodov)). +* Fix bug in removal of existing part directory [#49365](https://github.com/ClickHouse/ClickHouse/pull/49365) ([alesapin](https://github.com/alesapin)). 
+* Properly fix GCS when HMAC is used [#49390](https://github.com/ClickHouse/ClickHouse/pull/49390) ([Antonio Andelic](https://github.com/antonio2368)). +* Fix fuzz bug when subquery set is not built when reading from remote() [#49425](https://github.com/ClickHouse/ClickHouse/pull/49425) ([Alexander Gololobov](https://github.com/davenger)). +* Invert `shutdown_wait_unfinished_queries` [#49427](https://github.com/ClickHouse/ClickHouse/pull/49427) ([Konstantin Bogdanov](https://github.com/thevar1able)). +* (Experimental zero-copy replication) Fix another zero copy bug [#49473](https://github.com/ClickHouse/ClickHouse/pull/49473) ([alesapin](https://github.com/alesapin)). +* Fix postgres database setting [#49481](https://github.com/ClickHouse/ClickHouse/pull/49481) ([Mal Curtis](https://github.com/snikch)). +* Correctly handle `s3Cluster` arguments [#49490](https://github.com/ClickHouse/ClickHouse/pull/49490) ([Antonio Andelic](https://github.com/antonio2368)). +* Fix bug in TraceCollector destructor. [#49508](https://github.com/ClickHouse/ClickHouse/pull/49508) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)). +* Fix AsynchronousReadIndirectBufferFromRemoteFS breaking on short seeks [#49525](https://github.com/ClickHouse/ClickHouse/pull/49525) ([Michael Kolupaev](https://github.com/al13n321)). +* Fix dictionaries loading order [#49560](https://github.com/ClickHouse/ClickHouse/pull/49560) ([Alexander Tokmakov](https://github.com/tavplubix)). +* Forbid the change of data type of Object('json') column [#49563](https://github.com/ClickHouse/ClickHouse/pull/49563) ([Nikolay Degterinsky](https://github.com/evillique)). +* Fix stress test (Logical error: Expected 7134 >= 11030) [#49623](https://github.com/ClickHouse/ClickHouse/pull/49623) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Fix bug in DISTINCT [#49628](https://github.com/ClickHouse/ClickHouse/pull/49628) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Fix: DISTINCT in order with zero values in non-sorted columns [#49636](https://github.com/ClickHouse/ClickHouse/pull/49636) ([Igor Nikonov](https://github.com/devcrafter)). +* Fix one-off error in big integers found by UBSan with fuzzer [#49645](https://github.com/ClickHouse/ClickHouse/pull/49645) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Fix reading from sparse columns after restart [#49660](https://github.com/ClickHouse/ClickHouse/pull/49660) ([Anton Popov](https://github.com/CurtizJ)). +* Fix assert in SpanHolder::finish() with fibers [#49673](https://github.com/ClickHouse/ClickHouse/pull/49673) ([Kruglov Pavel](https://github.com/Avogar)). +* Fix short circuit functions and mutations with sparse arguments [#49716](https://github.com/ClickHouse/ClickHouse/pull/49716) ([Anton Popov](https://github.com/CurtizJ)). +* Fix writing appended files to incremental backups [#49725](https://github.com/ClickHouse/ClickHouse/pull/49725) ([Vitaly Baranov](https://github.com/vitlibar)). +* Fix "There is no physical column _row_exists in table" error occurring during lightweight delete mutation on a table with Object column. [#49737](https://github.com/ClickHouse/ClickHouse/pull/49737) ([Alexander Gololobov](https://github.com/davenger)). +* Fix msan issue in randomStringUTF8(uneven number) [#49750](https://github.com/ClickHouse/ClickHouse/pull/49750) ([Robert Schulze](https://github.com/rschu1ze)). 
+* Fix aggregate function kolmogorovSmirnovTest [#49768](https://github.com/ClickHouse/ClickHouse/pull/49768) ([FFFFFFFHHHHHHH](https://github.com/FFFFFFFHHHHHHH)). +* Fix settings aliases in native protocol [#49776](https://github.com/ClickHouse/ClickHouse/pull/49776) ([Azat Khuzhin](https://github.com/azat)). +* Fix `arrayMap` with array of tuples with single argument [#49789](https://github.com/ClickHouse/ClickHouse/pull/49789) ([Anton Popov](https://github.com/CurtizJ)). +* Fix per-query IO/BACKUPs throttling settings [#49797](https://github.com/ClickHouse/ClickHouse/pull/49797) ([Azat Khuzhin](https://github.com/azat)). +* Fix setting NULL in profile definition [#49831](https://github.com/ClickHouse/ClickHouse/pull/49831) ([Vitaly Baranov](https://github.com/vitlibar)). +* Fix a bug with projections and the aggregate_functions_null_for_empty setting (for query_plan_optimize_projection) [#49873](https://github.com/ClickHouse/ClickHouse/pull/49873) ([Amos Bird](https://github.com/amosbird)). +* Fix processing pending batch for Distributed async INSERT after restart [#49884](https://github.com/ClickHouse/ClickHouse/pull/49884) ([Azat Khuzhin](https://github.com/azat)). +* Fix assertion in CacheMetadata::doCleanup [#49914](https://github.com/ClickHouse/ClickHouse/pull/49914) ([Kseniia Sumarokova](https://github.com/kssenii)). +* fix `is_prefix` in OptimizeRegularExpression [#49919](https://github.com/ClickHouse/ClickHouse/pull/49919) ([Han Fei](https://github.com/hanfei1991)). +* Fix metrics `WriteBufferFromS3Bytes`, `WriteBufferFromS3Microseconds` and `WriteBufferFromS3RequestsErrors` [#49930](https://github.com/ClickHouse/ClickHouse/pull/49930) ([Aleksandr Musorin](https://github.com/AVMusorin)). +* Fix IPv6 encoding in protobuf [#49933](https://github.com/ClickHouse/ClickHouse/pull/49933) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)). +* Fix possible Logical error on bad Nullable parsing for text formats [#49960](https://github.com/ClickHouse/ClickHouse/pull/49960) ([Kruglov Pavel](https://github.com/Avogar)). +* Add setting output_format_parquet_compliant_nested_types to produce more compatible Parquet files [#50001](https://github.com/ClickHouse/ClickHouse/pull/50001) ([Michael Kolupaev](https://github.com/al13n321)). +* Fix logical error in stress test "Not enough space to add ..." [#50021](https://github.com/ClickHouse/ClickHouse/pull/50021) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Avoid deadlock when starting table in attach thread of `ReplicatedMergeTree` [#50026](https://github.com/ClickHouse/ClickHouse/pull/50026) ([Antonio Andelic](https://github.com/antonio2368)). +* Fix assert in SpanHolder::finish() with fibers attempt 2 [#50034](https://github.com/ClickHouse/ClickHouse/pull/50034) ([Kruglov Pavel](https://github.com/Avogar)). +* Add proper escaping for DDL OpenTelemetry context serialization [#50045](https://github.com/ClickHouse/ClickHouse/pull/50045) ([Azat Khuzhin](https://github.com/azat)). +* Fix reporting broken projection parts [#50052](https://github.com/ClickHouse/ClickHouse/pull/50052) ([Amos Bird](https://github.com/amosbird)). +* JIT compilation not equals NaN fix [#50056](https://github.com/ClickHouse/ClickHouse/pull/50056) ([Maksim Kita](https://github.com/kitaisreal)). +* Fix crashing in case of Replicated database without arguments [#50058](https://github.com/ClickHouse/ClickHouse/pull/50058) ([Azat Khuzhin](https://github.com/azat)). 
+* Fix crash with `multiIf` and constant condition and nullable arguments [#50123](https://github.com/ClickHouse/ClickHouse/pull/50123) ([Anton Popov](https://github.com/CurtizJ)). +* Fix invalid index analysis for date related keys [#50153](https://github.com/ClickHouse/ClickHouse/pull/50153) ([Amos Bird](https://github.com/amosbird)). +* do not allow modify order by when there are no order by cols [#50154](https://github.com/ClickHouse/ClickHouse/pull/50154) ([Han Fei](https://github.com/hanfei1991)). +* Fix broken index analysis when binary operator contains a null constant argument [#50177](https://github.com/ClickHouse/ClickHouse/pull/50177) ([Amos Bird](https://github.com/amosbird)). +* clickhouse-client: disallow usage of `--query` and `--queries-file` at the same time [#50210](https://github.com/ClickHouse/ClickHouse/pull/50210) ([Alexey Gerasimchuk](https://github.com/Demilivor)). +* Fix UB for INTO OUTFILE extensions (APPEND / AND STDOUT) and WATCH EVENTS [#50216](https://github.com/ClickHouse/ClickHouse/pull/50216) ([Azat Khuzhin](https://github.com/azat)). +* Fix skipping spaces at end of row in CustomSeparatedIgnoreSpaces format [#50224](https://github.com/ClickHouse/ClickHouse/pull/50224) ([Kruglov Pavel](https://github.com/Avogar)). +* Fix iceberg metadata parsing [#50232](https://github.com/ClickHouse/ClickHouse/pull/50232) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Fix nested distributed SELECT in WITH clause [#50234](https://github.com/ClickHouse/ClickHouse/pull/50234) ([Azat Khuzhin](https://github.com/azat)). +* Fix msan issue in keyed siphash [#50245](https://github.com/ClickHouse/ClickHouse/pull/50245) ([Robert Schulze](https://github.com/rschu1ze)). +* Fix bugs in Poco sockets in non-blocking mode, use true non-blocking sockets [#50252](https://github.com/ClickHouse/ClickHouse/pull/50252) ([Kruglov Pavel](https://github.com/Avogar)). +* Fix checksum calculation for backup entries [#50264](https://github.com/ClickHouse/ClickHouse/pull/50264) ([Vitaly Baranov](https://github.com/vitlibar)). +* Comparison functions NaN fix [#50287](https://github.com/ClickHouse/ClickHouse/pull/50287) ([Maksim Kita](https://github.com/kitaisreal)). +* JIT aggregation nullable key fix [#50291](https://github.com/ClickHouse/ClickHouse/pull/50291) ([Maksim Kita](https://github.com/kitaisreal)). +* Fix clickhouse-local crashing when writing empty Arrow or Parquet output [#50328](https://github.com/ClickHouse/ClickHouse/pull/50328) ([Michael Kolupaev](https://github.com/al13n321)). +* Fix crash when Pool::Entry::disconnect() is called [#50334](https://github.com/ClickHouse/ClickHouse/pull/50334) ([Val Doroshchuk](https://github.com/valbok)). +* Improved fetch part by holding directory lock longer [#50339](https://github.com/ClickHouse/ClickHouse/pull/50339) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)). +* Fix bitShift* functions with both constant arguments [#50343](https://github.com/ClickHouse/ClickHouse/pull/50343) ([Kruglov Pavel](https://github.com/Avogar)). +* Fix Keeper deadlock on exception when preprocessing requests. [#50387](https://github.com/ClickHouse/ClickHouse/pull/50387) ([frinkr](https://github.com/frinkr)). +* Fix hashing of const integer values [#50421](https://github.com/ClickHouse/ClickHouse/pull/50421) ([Robert Schulze](https://github.com/rschu1ze)). +* Fix merge_tree_min_rows_for_seek/merge_tree_min_bytes_for_seek for data skipping indexes [#50432](https://github.com/ClickHouse/ClickHouse/pull/50432) ([Azat Khuzhin](https://github.com/azat)). 
+* Limit the number of in-flight tasks for loading outdated parts [#50450](https://github.com/ClickHouse/ClickHouse/pull/50450) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)). +* Keeper fix: apply uncommitted state after snapshot install [#50483](https://github.com/ClickHouse/ClickHouse/pull/50483) ([Antonio Andelic](https://github.com/antonio2368)). +* Fix incorrect constant folding [#50536](https://github.com/ClickHouse/ClickHouse/pull/50536) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Fix logical error in stress test (Not enough space to add ...) [#50583](https://github.com/ClickHouse/ClickHouse/pull/50583) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Fix converting Null to LowCardinality(Nullable) in values table function [#50637](https://github.com/ClickHouse/ClickHouse/pull/50637) ([Kruglov Pavel](https://github.com/Avogar)). +* Revert invalid RegExpTreeDictionary optimization [#50642](https://github.com/ClickHouse/ClickHouse/pull/50642) ([Johann Gan](https://github.com/johanngan)). + ### ClickHouse release 23.4, 2023-04-26 #### Backward Incompatible Change diff --git a/SECURITY.md b/SECURITY.md index 75c1a9d7d6a..1864eb6e9e5 100644 --- a/SECURITY.md +++ b/SECURITY.md @@ -13,9 +13,10 @@ The following versions of ClickHouse server are currently being supported with s | Version | Supported | |:-|:-| +| 23.5 | ✔️ | | 23.4 | ✔️ | | 23.3 | ✔️ | -| 23.2 | ✔️ | +| 23.2 | ❌ | | 23.1 | ❌ | | 22.12 | ❌ | | 22.11 | ❌ | diff --git a/base/poco/Foundation/include/Poco/BinaryReader.h b/base/poco/Foundation/include/Poco/BinaryReader.h index 4042b507a2f..2b9bca29944 100644 --- a/base/poco/Foundation/include/Poco/BinaryReader.h +++ b/base/poco/Foundation/include/Poco/BinaryReader.h @@ -117,6 +117,9 @@ public: void readRaw(char * buffer, std::streamsize length); /// Reads length bytes of raw data into buffer. + void readCString(std::string& value); + /// Reads zero-terminated C-string into value. + void readBOM(); /// Reads a byte-order mark from the stream and configures /// the reader for the encountered byte order. diff --git a/base/poco/Foundation/include/Poco/BinaryWriter.h b/base/poco/Foundation/include/Poco/BinaryWriter.h index aa280d4ccab..a35d76d84bc 100644 --- a/base/poco/Foundation/include/Poco/BinaryWriter.h +++ b/base/poco/Foundation/include/Poco/BinaryWriter.h @@ -56,6 +56,8 @@ public: LITTLE_ENDIAN_BYTE_ORDER = 3 /// little-endian byte-order }; + static const std::streamsize DEFAULT_MAX_CSTR_LENGTH { 1024 }; + BinaryWriter(std::ostream & ostr, StreamByteOrder byteOrder = NATIVE_BYTE_ORDER); /// Creates the BinaryWriter. @@ -131,6 +133,9 @@ public: void writeRaw(const char * buffer, std::streamsize length); /// Writes length raw bytes from the given buffer to the stream. + void writeCString(const char* cString, std::streamsize maxLength = DEFAULT_MAX_CSTR_LENGTH); + /// Writes zero-terminated C-string. + void writeBOM(); /// Writes a byte-order mark to the stream. A byte order mark is /// a 16-bit integer with a value of 0xFEFF, written in host byte-order. 
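For context, here is a minimal usage sketch of the `BinaryWriter::writeCString` / `BinaryReader::readCString` API declared in the headers above. It is not part of the patch; the round trip through a `std::stringstream` is an illustrative assumption.

```cpp
#include <iostream>
#include <sstream>
#include <string>

#include "Poco/BinaryReader.h"
#include "Poco/BinaryWriter.h"

int main()
{
    std::stringstream stream;

    // writeCString() writes at most maxLength bytes of the string
    // (DEFAULT_MAX_CSTR_LENGTH = 1024 by default) and appends a terminating '\0'.
    Poco::BinaryWriter writer(stream);
    writer.writeCString("hello");
    writer.writeCString("world");

    // readCString() consumes bytes up to and including the terminating '\0'.
    Poco::BinaryReader reader(stream);
    std::string first, second;
    reader.readCString(first);
    reader.readCString(second);

    std::cout << first << ' ' << second << '\n'; // prints "hello world"
    return 0;
}
```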
diff --git a/base/poco/Foundation/src/BinaryReader.cpp b/base/poco/Foundation/src/BinaryReader.cpp index f2961e03966..37ec2bc9040 100644 --- a/base/poco/Foundation/src/BinaryReader.cpp +++ b/base/poco/Foundation/src/BinaryReader.cpp @@ -274,6 +274,31 @@ void BinaryReader::readRaw(char* buffer, std::streamsize length) } +void BinaryReader::readCString(std::string& value) +{ + value.clear(); + if (!_istr.good()) + { + return; + } + value.reserve(256); + while (true) + { + char c; + _istr.get(c); + if (!_istr.good()) + { + break; + } + if (c == '\0') + { + break; + } + value += c; + } +} + + void BinaryReader::readBOM() { UInt16 bom; diff --git a/base/poco/Foundation/src/BinaryWriter.cpp b/base/poco/Foundation/src/BinaryWriter.cpp index 6db5ab7cb90..c3fcabc4374 100644 --- a/base/poco/Foundation/src/BinaryWriter.cpp +++ b/base/poco/Foundation/src/BinaryWriter.cpp @@ -271,7 +271,7 @@ BinaryWriter& BinaryWriter::operator << (const std::string& value) BinaryWriter& BinaryWriter::operator << (const char* value) { poco_check_ptr (value); - + if (_pTextConverter) { std::string converted; @@ -332,6 +332,15 @@ void BinaryWriter::writeRaw(const char* buffer, std::streamsize length) } +void BinaryWriter::writeCString(const char* cString, std::streamsize maxLength) +{ + const std::size_t len = ::strnlen(cString, maxLength); + writeRaw(cString, len); + static const char zero = '\0'; + _ostr.write(&zero, sizeof(zero)); +} + + void BinaryWriter::writeBOM() { UInt16 value = 0xFEFF; diff --git a/base/poco/MongoDB/CMakeLists.txt b/base/poco/MongoDB/CMakeLists.txt index 8f5c6be2cae..bb6f90ed8f5 100644 --- a/base/poco/MongoDB/CMakeLists.txt +++ b/base/poco/MongoDB/CMakeLists.txt @@ -13,3 +13,4 @@ target_compile_options (_poco_mongodb target_include_directories (_poco_mongodb SYSTEM PUBLIC "include") target_link_libraries (_poco_mongodb PUBLIC Poco::Net) + diff --git a/base/poco/MongoDB/include/Poco/MongoDB/Array.h b/base/poco/MongoDB/include/Poco/MongoDB/Array.h index 4ed9cdd87ee..8a30c785b2d 100644 --- a/base/poco/MongoDB/include/Poco/MongoDB/Array.h +++ b/base/poco/MongoDB/include/Poco/MongoDB/Array.h @@ -33,7 +33,7 @@ namespace MongoDB /// This class represents a BSON Array. { public: - typedef SharedPtr Ptr; + using Ptr = SharedPtr; Array(); /// Creates an empty Array. @@ -41,8 +41,31 @@ namespace MongoDB virtual ~Array(); /// Destroys the Array. + // Document template functions available for backward compatibility + using Document::add; + using Document::get; + template - T get(int pos) const + Document & add(T value) + /// Creates an element with the name from the current pos and value and + /// adds it to the array document. + /// + /// The active document is returned to allow chaining of the add methods. + { + return Document::add(Poco::NumberFormatter::format(size()), value); + } + + Document & add(const char * value) + /// Creates an element with a name from the current pos and value and + /// adds it to the array document. + /// + /// The active document is returned to allow chaining of the add methods. + { + return Document::add(Poco::NumberFormatter::format(size()), value); + } + + template + T get(std::size_t pos) const /// Returns the element at the given index and tries to convert /// it to the template type. If the element is not found, a /// Poco::NotFoundException will be thrown. 
If the element cannot be @@ -52,7 +75,7 @@ namespace MongoDB } template - T get(int pos, const T & deflt) const + T get(std::size_t pos, const T & deflt) const /// Returns the element at the given index and tries to convert /// it to the template type. If the element is not found, or /// has the wrong type, the deflt argument will be returned. @@ -60,12 +83,12 @@ namespace MongoDB return Document::get(Poco::NumberFormatter::format(pos), deflt); } - Element::Ptr get(int pos) const; + Element::Ptr get(std::size_t pos) const; /// Returns the element at the given index. /// An empty element will be returned if the element is not found. template - bool isType(int pos) const + bool isType(std::size_t pos) const /// Returns true if the type of the element equals the TypeId of ElementTrait, /// otherwise false. { @@ -74,6 +97,9 @@ namespace MongoDB std::string toString(int indent = 0) const; /// Returns a string representation of the Array. + + private: + friend void BSONReader::read(Array::Ptr & to); }; diff --git a/base/poco/MongoDB/include/Poco/MongoDB/Binary.h b/base/poco/MongoDB/include/Poco/MongoDB/Binary.h index 1005cb000f5..aad8736e8b6 100644 --- a/base/poco/MongoDB/include/Poco/MongoDB/Binary.h +++ b/base/poco/MongoDB/include/Poco/MongoDB/Binary.h @@ -40,7 +40,7 @@ namespace MongoDB /// A Binary stores its data in a Poco::Buffer. { public: - typedef SharedPtr Ptr; + using Ptr = SharedPtr; Binary(); /// Creates an empty Binary with subtype 0. diff --git a/base/poco/MongoDB/include/Poco/MongoDB/Connection.h b/base/poco/MongoDB/include/Poco/MongoDB/Connection.h index dcb813b75bc..cf679d530aa 100644 --- a/base/poco/MongoDB/include/Poco/MongoDB/Connection.h +++ b/base/poco/MongoDB/include/Poco/MongoDB/Connection.h @@ -18,6 +18,7 @@ #define MongoDB_Connection_INCLUDED +#include "Poco/MongoDB/OpMsgMessage.h" #include "Poco/MongoDB/RequestMessage.h" #include "Poco/MongoDB/ResponseMessage.h" #include "Poco/Mutex.h" @@ -39,7 +40,7 @@ namespace MongoDB /// for more information on the wire protocol. { public: - typedef Poco::SharedPtr Ptr; + using Ptr = Poco::SharedPtr; class MongoDB_API SocketFactory { @@ -90,7 +91,7 @@ namespace MongoDB Poco::Net::SocketAddress address() const; /// Returns the address of the MongoDB server. - + const std::string & uri() const; /// Returns the uri on which the connection was made. @@ -145,6 +146,21 @@ namespace MongoDB /// Use this when a response is expected: only a "query" or "getmore" /// request will return a response. + void sendRequest(OpMsgMessage & request, OpMsgMessage & response); + /// Sends a request to the MongoDB server and receives the response + /// using newer wire protocol with OP_MSG. + + void sendRequest(OpMsgMessage & request); + /// Sends an unacknowledged request to the MongoDB server using newer + /// wire protocol with OP_MSG. + /// No response is sent by the server. + + void readResponse(OpMsgMessage & response); + /// Reads additional response data when previous message's flag moreToCome + /// indicates that server will send more data. + /// NOTE: See comments in OpMsgCursor code. 
+ + protected: void connect(); @@ -164,7 +180,7 @@ namespace MongoDB } inline const std::string & Connection::uri() const { - return _uri; + return _uri; } diff --git a/base/poco/MongoDB/include/Poco/MongoDB/Cursor.h b/base/poco/MongoDB/include/Poco/MongoDB/Cursor.h index 4aed9fe64fb..8849d737a62 100644 --- a/base/poco/MongoDB/include/Poco/MongoDB/Cursor.h +++ b/base/poco/MongoDB/include/Poco/MongoDB/Cursor.h @@ -40,6 +40,9 @@ namespace MongoDB Cursor(const std::string & fullCollectionName, QueryRequest::Flags flags = QueryRequest::QUERY_DEFAULT); /// Creates a Cursor for the given database and collection ("database.collection"), using the specified flags. + Cursor(const Document & aggregationResponse); + /// Creates a Cursor for the given aggregation query response. + virtual ~Cursor(); /// Destroys the Cursor. diff --git a/base/poco/MongoDB/include/Poco/MongoDB/Database.h b/base/poco/MongoDB/include/Poco/MongoDB/Database.h index 62aea632b08..3334a673df6 100644 --- a/base/poco/MongoDB/include/Poco/MongoDB/Database.h +++ b/base/poco/MongoDB/include/Poco/MongoDB/Database.h @@ -26,6 +26,8 @@ #include "Poco/MongoDB/QueryRequest.h" #include "Poco/MongoDB/UpdateRequest.h" +#include "Poco/MongoDB/OpMsgCursor.h" +#include "Poco/MongoDB/OpMsgMessage.h" namespace Poco { @@ -45,6 +47,9 @@ namespace MongoDB virtual ~Database(); /// Destroys the Database. + const std::string & name() const; + /// Database name + bool authenticate( Connection & connection, const std::string & username, @@ -62,34 +67,49 @@ namespace MongoDB /// May throw a Poco::ProtocolException if authentication fails for a reason other than /// invalid credentials. + Document::Ptr queryBuildInfo(Connection & connection) const; + /// Queries server build info (all wire protocols) + + Document::Ptr queryServerHello(Connection & connection) const; + /// Queries hello response from server (all wire protocols) + Int64 count(Connection & connection, const std::string & collectionName) const; - /// Sends a count request for the given collection to MongoDB. + /// Sends a count request for the given collection to MongoDB. (old wire protocol) /// /// If the command fails, -1 is returned. Poco::SharedPtr createCommand() const; - /// Creates a QueryRequest for a command. + /// Creates a QueryRequest for a command. (old wire protocol) Poco::SharedPtr createCountRequest(const std::string & collectionName) const; /// Creates a QueryRequest to count the given collection. - /// The collectionname must not contain the database name. + /// The collectionname must not contain the database name. (old wire protocol) Poco::SharedPtr createDeleteRequest(const std::string & collectionName) const; /// Creates a DeleteRequest to delete documents in the given collection. - /// The collectionname must not contain the database name. + /// The collectionname must not contain the database name. (old wire protocol) Poco::SharedPtr createInsertRequest(const std::string & collectionName) const; /// Creates an InsertRequest to insert new documents in the given collection. - /// The collectionname must not contain the database name. + /// The collectionname must not contain the database name. (old wire protocol) Poco::SharedPtr createQueryRequest(const std::string & collectionName) const; - /// Creates a QueryRequest. + /// Creates a QueryRequest. (old wire protocol) /// The collectionname must not contain the database name. Poco::SharedPtr createUpdateRequest(const std::string & collectionName) const; - /// Creates an UpdateRequest. + /// Creates an UpdateRequest. 
(old wire protocol) /// The collectionname must not contain the database name. + Poco::SharedPtr createOpMsgMessage(const std::string & collectionName) const; + /// Creates OpMsgMessage. (new wire protocol) + + Poco::SharedPtr createOpMsgMessage() const; + /// Creates OpMsgMessage for database commands that do not require collection as an argument. (new wire protocol) + + Poco::SharedPtr createOpMsgCursor(const std::string & collectionName) const; + /// Creates OpMsgCursor. (new wire protocol) + Poco::MongoDB::Document::Ptr ensureIndex( Connection & connection, const std::string & collection, @@ -100,14 +120,16 @@ namespace MongoDB int version = 0, int ttl = 0); /// Creates an index. The document returned is the result of a getLastError call. - /// For more info look at the ensureIndex information on the MongoDB website. + /// For more info look at the ensureIndex information on the MongoDB website. (old wire protocol) Document::Ptr getLastErrorDoc(Connection & connection) const; /// Sends the getLastError command to the database and returns the error document. + /// (old wire protocol) std::string getLastError(Connection & connection) const; /// Sends the getLastError command to the database and returns the err element /// from the error document. When err is null, an empty string is returned. + /// (old wire protocol) static const std::string AUTH_MONGODB_CR; /// Default authentication mechanism prior to MongoDB 3.0. @@ -115,6 +137,27 @@ namespace MongoDB static const std::string AUTH_SCRAM_SHA1; /// Default authentication mechanism for MongoDB 3.0. + enum WireVersion + /// Wire version as reported by the command hello. + /// See details in MongoDB github, repository specifications. + /// @see queryServerHello + { + VER_26 = 1, + VER_26_2 = 2, + VER_30 = 3, + VER_32 = 4, + VER_34 = 5, + VER_36 = 6, ///< First wire version that supports OP_MSG + VER_40 = 7, + VER_42 = 8, + VER_44 = 9, + VER_50 = 13, + VER_51 = 14, ///< First wire version that supports only OP_MSG + VER_52 = 15, + VER_53 = 16, + VER_60 = 17 + }; + protected: bool authCR(Connection & connection, const std::string & username, const std::string & password); bool authSCRAM(Connection & connection, const std::string & username, const std::string & password); @@ -127,6 +170,12 @@ namespace MongoDB // // inlines // + inline const std::string & Database::name() const + { + return _dbname; + } + + inline Poco::SharedPtr Database::createCommand() const { Poco::SharedPtr cmd = createQueryRequest("$cmd"); @@ -158,6 +207,24 @@ namespace MongoDB return new Poco::MongoDB::UpdateRequest(_dbname + '.' + collectionName); } + // -- New wire protocol commands + + inline Poco::SharedPtr Database::createOpMsgMessage(const std::string & collectionName) const + { + return new Poco::MongoDB::OpMsgMessage(_dbname, collectionName); + } + + inline Poco::SharedPtr Database::createOpMsgMessage() const + { + // Collection name for database commands is not needed. 
+ return createOpMsgMessage(""); + } + + inline Poco::SharedPtr Database::createOpMsgCursor(const std::string & collectionName) const + { + return new Poco::MongoDB::OpMsgCursor(_dbname, collectionName); + } + } } // namespace Poco::MongoDB diff --git a/base/poco/MongoDB/include/Poco/MongoDB/Document.h b/base/poco/MongoDB/include/Poco/MongoDB/Document.h index 12889663827..9e1df349e20 100644 --- a/base/poco/MongoDB/include/Poco/MongoDB/Document.h +++ b/base/poco/MongoDB/include/Poco/MongoDB/Document.h @@ -31,6 +31,7 @@ namespace Poco namespace MongoDB { + class Array; class ElementFindByName { @@ -48,8 +49,8 @@ namespace MongoDB /// Represents a MongoDB (BSON) document. { public: - typedef SharedPtr Ptr; - typedef std::vector Vector; + using Ptr = SharedPtr; + using Vector = std::vector; Document(); /// Creates an empty Document. @@ -86,6 +87,10 @@ namespace MongoDB /// Unlike the other add methods, this method returns /// a reference to the new document. + Array & addNewArray(const std::string & name); + /// Create a new array and add it to this document. + /// Method returns a reference to the new array. + void clear(); /// Removes all elements from the document. @@ -95,7 +100,7 @@ namespace MongoDB bool empty() const; /// Returns true if the document doesn't contain any documents. - bool exists(const std::string & name); + bool exists(const std::string & name) const; /// Returns true if the document has an element with the given name. template @@ -158,6 +163,9 @@ namespace MongoDB /// return an Int64. When the element is not found, a /// Poco::NotFoundException will be thrown. + bool remove(const std::string & name); + /// Removes an element from the document. + template bool isType(const std::string & name) const /// Returns true when the type of the element equals the TypeId of ElementTrait. @@ -227,12 +235,23 @@ namespace MongoDB } - inline bool Document::exists(const std::string & name) + inline bool Document::exists(const std::string & name) const { return std::find_if(_elements.begin(), _elements.end(), ElementFindByName(name)) != _elements.end(); } + inline bool Document::remove(const std::string & name) + { + auto it = std::find_if(_elements.begin(), _elements.end(), ElementFindByName(name)); + if (it == _elements.end()) + return false; + + _elements.erase(it); + return true; + } + + inline std::size_t Document::size() const { return _elements.size(); diff --git a/base/poco/MongoDB/include/Poco/MongoDB/Element.h b/base/poco/MongoDB/include/Poco/MongoDB/Element.h index b5592bd0e0b..26525d7d02b 100644 --- a/base/poco/MongoDB/include/Poco/MongoDB/Element.h +++ b/base/poco/MongoDB/include/Poco/MongoDB/Element.h @@ -45,7 +45,7 @@ namespace MongoDB /// Represents an Element of a Document or an Array. { public: - typedef Poco::SharedPtr Ptr; + using Ptr = Poco::SharedPtr; explicit Element(const std::string & name); /// Creates the Element with the given name. @@ -80,7 +80,7 @@ namespace MongoDB } - typedef std::list ElementSet; + using ElementSet = std::list; template @@ -266,7 +266,7 @@ namespace MongoDB } - typedef Nullable NullValue; + using NullValue = Nullable; // BSON Null Value diff --git a/base/poco/MongoDB/include/Poco/MongoDB/JavaScriptCode.h b/base/poco/MongoDB/include/Poco/MongoDB/JavaScriptCode.h index df1edc16817..c0f584b7c19 100644 --- a/base/poco/MongoDB/include/Poco/MongoDB/JavaScriptCode.h +++ b/base/poco/MongoDB/include/Poco/MongoDB/JavaScriptCode.h @@ -35,7 +35,7 @@ namespace MongoDB /// Represents JavaScript type in BSON. 
{ public: - typedef SharedPtr Ptr; + using Ptr = SharedPtr; JavaScriptCode(); /// Creates an empty JavaScriptCode object. diff --git a/base/poco/MongoDB/include/Poco/MongoDB/MessageHeader.h b/base/poco/MongoDB/include/Poco/MongoDB/MessageHeader.h index 2b88e30fc74..98f45e876c1 100644 --- a/base/poco/MongoDB/include/Poco/MongoDB/MessageHeader.h +++ b/base/poco/MongoDB/include/Poco/MongoDB/MessageHeader.h @@ -28,6 +28,9 @@ namespace MongoDB { + class Message; // Required to disambiguate friend declaration in MessageHeader. + + class MongoDB_API MessageHeader /// Represents the message header which is always prepended to a /// MongoDB request or response message. @@ -37,14 +40,18 @@ namespace MongoDB enum OpCode { + // Opcodes deprecated in MongoDB 5.0 OP_REPLY = 1, - OP_MSG = 1000, OP_UPDATE = 2001, OP_INSERT = 2002, OP_QUERY = 2004, OP_GET_MORE = 2005, OP_DELETE = 2006, - OP_KILL_CURSORS = 2007 + OP_KILL_CURSORS = 2007, + + /// Opcodes supported in MongoDB 5.1 and later + OP_COMPRESSED = 2012, + OP_MSG = 2013 }; explicit MessageHeader(OpCode); diff --git a/base/poco/MongoDB/include/Poco/MongoDB/MongoDB.h b/base/poco/MongoDB/include/Poco/MongoDB/MongoDB.h index 253f1f8ab27..de246ddc9dd 100644 --- a/base/poco/MongoDB/include/Poco/MongoDB/MongoDB.h +++ b/base/poco/MongoDB/include/Poco/MongoDB/MongoDB.h @@ -33,6 +33,13 @@ // +#if defined(_WIN32) && defined(POCO_DLL) +# if defined(MongoDB_EXPORTS) +# define MongoDB_API __declspec(dllexport) +# else +# define MongoDB_API __declspec(dllimport) +# endif +#endif #if !defined(MongoDB_API) @@ -47,6 +54,11 @@ // // Automatically link MongoDB library. // +#if defined(_MSC_VER) +# if !defined(POCO_NO_AUTOMATIC_LIBS) && !defined(MongoDB_EXPORTS) +# pragma comment(lib, "PocoMongoDB" POCO_LIB_SUFFIX) +# endif +#endif #endif // MongoDBMongoDB_INCLUDED diff --git a/base/poco/MongoDB/include/Poco/MongoDB/ObjectId.h b/base/poco/MongoDB/include/Poco/MongoDB/ObjectId.h index 76bb190db48..8a335320ea0 100644 --- a/base/poco/MongoDB/include/Poco/MongoDB/ObjectId.h +++ b/base/poco/MongoDB/include/Poco/MongoDB/ObjectId.h @@ -44,7 +44,7 @@ namespace MongoDB /// as its value. { public: - typedef SharedPtr Ptr; + using Ptr = SharedPtr; explicit ObjectId(const std::string & id); /// Creates an ObjectId from a string. diff --git a/base/poco/MongoDB/include/Poco/MongoDB/OpMsgCursor.h b/base/poco/MongoDB/include/Poco/MongoDB/OpMsgCursor.h new file mode 100644 index 00000000000..a465a71bb1c --- /dev/null +++ b/base/poco/MongoDB/include/Poco/MongoDB/OpMsgCursor.h @@ -0,0 +1,96 @@ +// +// OpMsgCursor.h +// +// Library: MongoDB +// Package: MongoDB +// Module: OpMsgCursor +// +// Definition of the OpMsgCursor class. +// +// Copyright (c) 2012, Applied Informatics Software Engineering GmbH. +// and Contributors. +// +// SPDX-License-Identifier: BSL-1.0 +// + + +#ifndef MongoDB_OpMsgCursor_INCLUDED +#define MongoDB_OpMsgCursor_INCLUDED + + +#include "Poco/MongoDB/Connection.h" +#include "Poco/MongoDB/MongoDB.h" +#include "Poco/MongoDB/OpMsgMessage.h" + +namespace Poco +{ +namespace MongoDB +{ + + + class MongoDB_API OpMsgCursor : public Document + /// OpMsgCursor is an helper class for querying multiple documents using OpMsgMessage. + { + public: + OpMsgCursor(const std::string & dbname, const std::string & collectionName); + /// Creates a OpMsgCursor for the given database and collection. + + virtual ~OpMsgCursor(); + /// Destroys the OpMsgCursor. 
+ + void setEmptyFirstBatch(bool empty); + /// Empty first batch is used to get error response faster with little server processing + + bool emptyFirstBatch() const; + + void setBatchSize(Int32 batchSize); + /// Set non-default batch size + + Int32 batchSize() const; + /// Current batch size (zero or negative number indicates default batch size) + + Int64 cursorID() const; + + OpMsgMessage & next(Connection & connection); + /// Tries to get the next documents. As long as response message has a + /// cursor ID next can be called to retrieve the next bunch of documents. + /// + /// The cursor must be killed (see kill()) when not all documents are needed. + + OpMsgMessage & query(); + /// Returns the associated query. + + void kill(Connection & connection); + /// Kills the cursor and reset it so that it can be reused. + + private: + OpMsgMessage _query; + OpMsgMessage _response; + + bool _emptyFirstBatch{false}; + Int32 _batchSize{-1}; + /// Batch size used in the cursor. Zero or negative value means that default shall be used. + + Int64 _cursorID{0}; + }; + + + // + // inlines + // + inline OpMsgMessage & OpMsgCursor::query() + { + return _query; + } + + inline Int64 OpMsgCursor::cursorID() const + { + return _cursorID; + } + + +} +} // namespace Poco::MongoDB + + +#endif // MongoDB_OpMsgCursor_INCLUDED diff --git a/base/poco/MongoDB/include/Poco/MongoDB/OpMsgMessage.h b/base/poco/MongoDB/include/Poco/MongoDB/OpMsgMessage.h new file mode 100644 index 00000000000..699c7fc4e12 --- /dev/null +++ b/base/poco/MongoDB/include/Poco/MongoDB/OpMsgMessage.h @@ -0,0 +1,163 @@ +// +// OpMsgMessage.h +// +// Library: MongoDB +// Package: MongoDB +// Module: OpMsgMessage +// +// Definition of the OpMsgMessage class. +// +// Copyright (c) 2022, Applied Informatics Software Engineering GmbH. +// and Contributors. +// +// SPDX-License-Identifier: BSL-1.0 +// + + +#ifndef MongoDB_OpMsgMessage_INCLUDED +#define MongoDB_OpMsgMessage_INCLUDED + + +#include "Poco/MongoDB/Document.h" +#include "Poco/MongoDB/Message.h" +#include "Poco/MongoDB/MongoDB.h" + +#include + +namespace Poco +{ +namespace MongoDB +{ + + + class MongoDB_API OpMsgMessage : public Message + /// This class represents a request/response (OP_MSG) to send requests and receive responses to/from MongoDB. 
+ { + public: + // Constants for most often used MongoDB commands that can be sent using OP_MSG + // For complete list see: https://www.mongodb.com/docs/manual/reference/command/ + + // Query and write + static const std::string CMD_INSERT; + static const std::string CMD_DELETE; + static const std::string CMD_UPDATE; + static const std::string CMD_FIND; + static const std::string CMD_FIND_AND_MODIFY; + static const std::string CMD_GET_MORE; + + // Aggregation + static const std::string CMD_AGGREGATE; + static const std::string CMD_COUNT; + static const std::string CMD_DISTINCT; + static const std::string CMD_MAP_REDUCE; + + // Replication and administration + static const std::string CMD_HELLO; + static const std::string CMD_REPL_SET_GET_STATUS; + static const std::string CMD_REPL_SET_GET_CONFIG; + + static const std::string CMD_CREATE; + static const std::string CMD_CREATE_INDEXES; + static const std::string CMD_DROP; + static const std::string CMD_DROP_DATABASE; + static const std::string CMD_KILL_CURSORS; + static const std::string CMD_LIST_DATABASES; + static const std::string CMD_LIST_INDEXES; + + // Diagnostic + static const std::string CMD_BUILD_INFO; + static const std::string CMD_COLL_STATS; + static const std::string CMD_DB_STATS; + static const std::string CMD_HOST_INFO; + + + enum Flags : UInt32 + { + MSG_FLAGS_DEFAULT = 0, + + MSG_CHECKSUM_PRESENT = (1 << 0), + + MSG_MORE_TO_COME = (1 << 1), + /// Sender will send another message and is not prepared for overlapping messages + + MSG_EXHAUST_ALLOWED = (1 << 16) + /// Client is prepared for multiple replies (using the moreToCome bit) to this request + }; + + OpMsgMessage(); + /// Creates an OpMsgMessage for response. + + OpMsgMessage(const std::string & databaseName, const std::string & collectionName, UInt32 flags = MSG_FLAGS_DEFAULT); + /// Creates an OpMsgMessage for requests. + + virtual ~OpMsgMessage(); + + const std::string & databaseName() const; + + const std::string & collectionName() const; + + void setCommandName(const std::string & command); + /// Sets the command name and clears the command document + + void setCursor(Poco::Int64 cursorID, Poco::Int32 batchSize = -1); + /// Sets the command "getMore" for the cursor id with batch size (if it is not negative). + + const std::string & commandName() const; + /// Current command name. + + void setAcknowledgedRequest(bool ack); + /// Set false to create request that does not return response. + /// It has effect only for commands that write or delete documents. + /// Default is true (request returns acknowledge response). + + bool acknowledgedRequest() const; + + UInt32 flags() const; + + Document & body(); + /// Access to body document. + /// Additional query arguments shall be added after setting the command name. + + const Document & body() const; + + Document::Vector & documents(); + /// Documents prepared for request or retrieved in response. + + const Document::Vector & documents() const; + /// Documents prepared for request or retrieved in response. + + bool responseOk() const; + /// Reads "ok" status from the response message. + + void clear(); + /// Clears the message. + + void send(std::ostream & ostr); + /// Writes the request to stream. + + void read(std::istream & istr); + /// Reads the response from the stream. 
+ + private: + enum PayloadType : UInt8 + { + PAYLOAD_TYPE_0 = 0, + PAYLOAD_TYPE_1 = 1 + }; + + std::string _databaseName; + std::string _collectionName; + UInt32 _flags{MSG_FLAGS_DEFAULT}; + std::string _commandName; + bool _acknowledged{true}; + + Document _body; + Document::Vector _documents; + }; + + +} +} // namespace Poco::MongoDB + + +#endif // MongoDB_OpMsgMessage_INCLUDED diff --git a/base/poco/MongoDB/include/Poco/MongoDB/PoolableConnectionFactory.h b/base/poco/MongoDB/include/Poco/MongoDB/PoolableConnectionFactory.h index 9d35c728e5e..53f4a5127ef 100644 --- a/base/poco/MongoDB/include/Poco/MongoDB/PoolableConnectionFactory.h +++ b/base/poco/MongoDB/include/Poco/MongoDB/PoolableConnectionFactory.h @@ -94,7 +94,23 @@ namespace MongoDB operator Connection::Ptr() { return _connection; } +#if defined(POCO_ENABLE_CPP11) + // Disable copy to prevent unwanted release of resources: C++11 way + PooledConnection(const PooledConnection &) = delete; + PooledConnection & operator=(const PooledConnection &) = delete; + + // Enable move semantics + PooledConnection(PooledConnection && other) = default; + PooledConnection & operator=(PooledConnection &&) = default; +#endif + private: +#if !defined(POCO_ENABLE_CPP11) + // Disable copy to prevent unwanted release of resources: pre C++11 way + PooledConnection(const PooledConnection &); + PooledConnection & operator=(const PooledConnection &); +#endif + Poco::ObjectPool & _pool; Connection::Ptr _connection; }; diff --git a/base/poco/MongoDB/include/Poco/MongoDB/RegularExpression.h b/base/poco/MongoDB/include/Poco/MongoDB/RegularExpression.h index b9a8694d321..244b8c14163 100644 --- a/base/poco/MongoDB/include/Poco/MongoDB/RegularExpression.h +++ b/base/poco/MongoDB/include/Poco/MongoDB/RegularExpression.h @@ -33,7 +33,7 @@ namespace MongoDB /// Represents a regular expression in BSON format. { public: - typedef SharedPtr Ptr; + using Ptr = SharedPtr; RegularExpression(); /// Creates an empty RegularExpression. diff --git a/base/poco/MongoDB/include/Poco/MongoDB/ResponseMessage.h b/base/poco/MongoDB/include/Poco/MongoDB/ResponseMessage.h index 132859cc75f..9cb92cb16c4 100644 --- a/base/poco/MongoDB/include/Poco/MongoDB/ResponseMessage.h +++ b/base/poco/MongoDB/include/Poco/MongoDB/ResponseMessage.h @@ -38,6 +38,9 @@ namespace MongoDB ResponseMessage(); /// Creates an empty ResponseMessage. + ResponseMessage(const Int64 & cursorID); + /// Creates an ResponseMessage for existing cursor ID. + virtual ~ResponseMessage(); /// Destroys the ResponseMessage. 
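// Minimal usage sketch for the OP_MSG support declared in the headers above:
// building a "find" command and iterating the returned documents. This is an
// illustration, not part of the patch; the server address localhost:27017 and
// the database/collection names "test"/"sample" are hypothetical, and only the
// APIs declared here (plus the existing Connection and Database constructors)
// are used.
#include <iostream>
#include "Poco/MongoDB/Connection.h"
#include "Poco/MongoDB/Database.h"
#include "Poco/MongoDB/OpMsgMessage.h"

static void findWithOpMsg()
{
    Poco::MongoDB::Connection connection("localhost", 27017);
    Poco::MongoDB::Database db("test");

    // The command name must be set first; additional arguments go into the body.
    Poco::SharedPtr<Poco::MongoDB::OpMsgMessage> request = db.createOpMsgMessage("sample");
    request->setCommandName(Poco::MongoDB::OpMsgMessage::CMD_FIND);
    request->body().add("limit", Poco::Int32(10));

    Poco::MongoDB::OpMsgMessage response;
    connection.sendRequest(*request, response);

    if (response.responseOk())
    {
        // Documents from the cursor batch are collected into documents().
        for (const auto & doc : response.documents())
            std::cout << doc->toString(2) << std::endl;
    }
}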
diff --git a/base/poco/MongoDB/src/Array.cpp b/base/poco/MongoDB/src/Array.cpp index c6d96d1371d..6fff0994d82 100644 --- a/base/poco/MongoDB/src/Array.cpp +++ b/base/poco/MongoDB/src/Array.cpp @@ -20,7 +20,7 @@ namespace Poco { namespace MongoDB { -Array::Array(): +Array::Array(): Document() { } @@ -31,7 +31,7 @@ Array::~Array() } -Element::Ptr Array::get(int pos) const +Element::Ptr Array::get(std::size_t pos) const { std::string name = Poco::NumberFormatter::format(pos); return Document::get(name); diff --git a/base/poco/MongoDB/src/Connection.cpp b/base/poco/MongoDB/src/Connection.cpp index 38c31d2250a..fa20887054b 100644 --- a/base/poco/MongoDB/src/Connection.cpp +++ b/base/poco/MongoDB/src/Connection.cpp @@ -319,4 +319,30 @@ void Connection::sendRequest(RequestMessage& request, ResponseMessage& response) } +void Connection::sendRequest(OpMsgMessage& request, OpMsgMessage& response) +{ + Poco::Net::SocketOutputStream sos(_socket); + request.send(sos); + + response.clear(); + readResponse(response); +} + + +void Connection::sendRequest(OpMsgMessage& request) +{ + request.setAcknowledgedRequest(false); + Poco::Net::SocketOutputStream sos(_socket); + request.send(sos); +} + + +void Connection::readResponse(OpMsgMessage& response) +{ + Poco::Net::SocketInputStream sis(_socket); + response.read(sis); +} + + + } } // Poco::MongoDB diff --git a/base/poco/MongoDB/src/Cursor.cpp b/base/poco/MongoDB/src/Cursor.cpp index 69031e0ab65..ef7a4ca961d 100644 --- a/base/poco/MongoDB/src/Cursor.cpp +++ b/base/poco/MongoDB/src/Cursor.cpp @@ -33,6 +33,12 @@ Cursor::Cursor(const std::string& fullCollectionName, QueryRequest::Flags flags) } +Cursor::Cursor(const Document& aggregationResponse) : + _query(aggregationResponse.get("cursor")->get("ns")), + _response(aggregationResponse.get("cursor")->get("id")) +{ +} + Cursor::~Cursor() { try diff --git a/base/poco/MongoDB/src/Database.cpp b/base/poco/MongoDB/src/Database.cpp index 2b31523bdc4..1a0d3cfe559 100644 --- a/base/poco/MongoDB/src/Database.cpp +++ b/base/poco/MongoDB/src/Database.cpp @@ -334,6 +334,50 @@ bool Database::authSCRAM(Connection& connection, const std::string& username, co } +Document::Ptr Database::queryBuildInfo(Connection& connection) const +{ + // build info can be issued on "config" system database + Poco::SharedPtr request = createCommand(); + request->selector().add("buildInfo", 1); + + Poco::MongoDB::ResponseMessage response; + connection.sendRequest(*request, response); + + Document::Ptr buildInfo; + if ( response.documents().size() > 0 ) + { + buildInfo = response.documents()[0]; + } + else + { + throw Poco::ProtocolException("Didn't get a response from the buildinfo command"); + } + return buildInfo; +} + + +Document::Ptr Database::queryServerHello(Connection& connection) const +{ + // hello can be issued on "config" system database + Poco::SharedPtr request = createCommand(); + request->selector().add("hello", 1); + + Poco::MongoDB::ResponseMessage response; + connection.sendRequest(*request, response); + + Document::Ptr hello; + if ( response.documents().size() > 0 ) + { + hello = response.documents()[0]; + } + else + { + throw Poco::ProtocolException("Didn't get a response from the hello command"); + } + return hello; +} + + Int64 Database::count(Connection& connection, const std::string& collectionName) const { Poco::SharedPtr countRequest = createCountRequest(collectionName); @@ -390,7 +434,7 @@ Document::Ptr Database::getLastErrorDoc(Connection& connection) const { Document::Ptr errorDoc; - Poco::SharedPtr request = 
createQueryRequest("$cmd"); + Poco::SharedPtr request = createCommand(); request->setNumberToReturn(1); request->selector().add("getLastError", 1); @@ -420,7 +464,7 @@ std::string Database::getLastError(Connection& connection) const Poco::SharedPtr Database::createCountRequest(const std::string& collectionName) const { - Poco::SharedPtr request = createQueryRequest("$cmd"); + Poco::SharedPtr request = createCommand(); request->setNumberToReturn(1); request->selector().add("count", collectionName); return request; diff --git a/base/poco/MongoDB/src/DeleteRequest.cpp b/base/poco/MongoDB/src/DeleteRequest.cpp index 67a88c33302..ba75beb55fb 100644 --- a/base/poco/MongoDB/src/DeleteRequest.cpp +++ b/base/poco/MongoDB/src/DeleteRequest.cpp @@ -20,8 +20,8 @@ namespace MongoDB { DeleteRequest::DeleteRequest(const std::string& collectionName, DeleteRequest::Flags flags): - RequestMessage(MessageHeader::OP_DELETE), - _flags(flags), + RequestMessage(MessageHeader::OP_DELETE), + _flags(flags), _fullCollectionName(collectionName), _selector() { diff --git a/base/poco/MongoDB/src/Document.cpp b/base/poco/MongoDB/src/Document.cpp index 114fc993891..f7c5c9c5dc6 100644 --- a/base/poco/MongoDB/src/Document.cpp +++ b/base/poco/MongoDB/src/Document.cpp @@ -35,6 +35,14 @@ Document::~Document() } +Array& Document::addNewArray(const std::string& name) +{ + Array::Ptr newArray = new Array(); + add(name, newArray); + return *newArray; +} + + Element::Ptr Document::get(const std::string& name) const { Element::Ptr element; @@ -84,7 +92,7 @@ void Document::read(BinaryReader& reader) while (type != '\0') { Element::Ptr element; - + std::string name = BSONReader(reader).readCString(); switch (type) @@ -198,7 +206,7 @@ void Document::write(BinaryWriter& writer) else { std::stringstream sstream; - Poco::BinaryWriter tempWriter(sstream); + Poco::BinaryWriter tempWriter(sstream, BinaryWriter::LITTLE_ENDIAN_BYTE_ORDER); for (ElementSet::iterator it = _elements.begin(); it != _elements.end(); ++it) { tempWriter << static_cast((*it)->type()); @@ -207,7 +215,7 @@ void Document::write(BinaryWriter& writer) element->write(tempWriter); } tempWriter.flush(); - + Poco::Int32 len = static_cast(5 + sstream.tellp()); /* 5 = sizeof(len) + 0-byte */ writer << len; writer.writeRaw(sstream.str()); diff --git a/base/poco/MongoDB/src/Element.cpp b/base/poco/MongoDB/src/Element.cpp index 89629e0503e..f91ce264493 100644 --- a/base/poco/MongoDB/src/Element.cpp +++ b/base/poco/MongoDB/src/Element.cpp @@ -24,7 +24,7 @@ Element::Element(const std::string& name) : _name(name) } -Element::~Element() +Element::~Element() { } diff --git a/base/poco/MongoDB/src/GetMoreRequest.cpp b/base/poco/MongoDB/src/GetMoreRequest.cpp index f8a6b73c6ad..2c1f6909eb7 100644 --- a/base/poco/MongoDB/src/GetMoreRequest.cpp +++ b/base/poco/MongoDB/src/GetMoreRequest.cpp @@ -21,7 +21,7 @@ namespace MongoDB { GetMoreRequest::GetMoreRequest(const std::string& collectionName, Int64 cursorID): - RequestMessage(MessageHeader::OP_GET_MORE), + RequestMessage(MessageHeader::OP_GET_MORE), _fullCollectionName(collectionName), _numberToReturn(100), _cursorID(cursorID) diff --git a/base/poco/MongoDB/src/InsertRequest.cpp b/base/poco/MongoDB/src/InsertRequest.cpp index ec8dc9cf94a..65be5654b3e 100644 --- a/base/poco/MongoDB/src/InsertRequest.cpp +++ b/base/poco/MongoDB/src/InsertRequest.cpp @@ -20,7 +20,7 @@ namespace MongoDB { InsertRequest::InsertRequest(const std::string& collectionName, Flags flags): - RequestMessage(MessageHeader::OP_INSERT), + 
RequestMessage(MessageHeader::OP_INSERT), _flags(flags), _fullCollectionName(collectionName) { diff --git a/base/poco/MongoDB/src/KillCursorsRequest.cpp b/base/poco/MongoDB/src/KillCursorsRequest.cpp index 6baa0e0be8f..448002aa16a 100644 --- a/base/poco/MongoDB/src/KillCursorsRequest.cpp +++ b/base/poco/MongoDB/src/KillCursorsRequest.cpp @@ -37,7 +37,7 @@ void KillCursorsRequest::buildRequest(BinaryWriter& writer) for (std::vector::iterator it = _cursors.begin(); it != _cursors.end(); ++it) { writer << *it; - } + } } diff --git a/base/poco/MongoDB/src/Message.cpp b/base/poco/MongoDB/src/Message.cpp index c29d282d15a..7b1cb23bab6 100644 --- a/base/poco/MongoDB/src/Message.cpp +++ b/base/poco/MongoDB/src/Message.cpp @@ -19,7 +19,7 @@ namespace Poco { namespace MongoDB { -Message::Message(MessageHeader::OpCode opcode): +Message::Message(MessageHeader::OpCode opcode): _header(opcode) { } diff --git a/base/poco/MongoDB/src/MessageHeader.cpp b/base/poco/MongoDB/src/MessageHeader.cpp index 222121243db..b472bcec465 100644 --- a/base/poco/MongoDB/src/MessageHeader.cpp +++ b/base/poco/MongoDB/src/MessageHeader.cpp @@ -20,10 +20,10 @@ namespace Poco { namespace MongoDB { -MessageHeader::MessageHeader(OpCode opCode): - _messageLength(0), - _requestID(0), - _responseTo(0), +MessageHeader::MessageHeader(OpCode opCode): + _messageLength(0), + _requestID(0), + _responseTo(0), _opCode(opCode) { } @@ -42,7 +42,7 @@ void MessageHeader::read(BinaryReader& reader) Int32 opCode; reader >> opCode; - _opCode = (OpCode) opCode; + _opCode = static_cast(opCode); if (!reader.good()) { @@ -56,7 +56,7 @@ void MessageHeader::write(BinaryWriter& writer) writer << _messageLength; writer << _requestID; writer << _responseTo; - writer << (Int32) _opCode; + writer << static_cast(_opCode); } diff --git a/base/poco/MongoDB/src/ObjectId.cpp b/base/poco/MongoDB/src/ObjectId.cpp index 3065a2ffc30..0125c246c2d 100644 --- a/base/poco/MongoDB/src/ObjectId.cpp +++ b/base/poco/MongoDB/src/ObjectId.cpp @@ -32,7 +32,7 @@ ObjectId::ObjectId(const std::string& id) poco_assert_dbg(id.size() == 24); const char* p = id.c_str(); - for (std::size_t i = 0; i < 12; ++i) + for (std::size_t i = 0; i < 12; ++i) { _id[i] = fromHex(p); p += 2; diff --git a/base/poco/MongoDB/src/OpMsgCursor.cpp b/base/poco/MongoDB/src/OpMsgCursor.cpp new file mode 100644 index 00000000000..bc95851ae33 --- /dev/null +++ b/base/poco/MongoDB/src/OpMsgCursor.cpp @@ -0,0 +1,187 @@ +// +// OpMsgCursor.cpp +// +// Library: MongoDB +// Package: MongoDB +// Module: OpMsgCursor +// +// Copyright (c) 2022, Applied Informatics Software Engineering GmbH. +// and Contributors. +// +// SPDX-License-Identifier: BSL-1.0 +// + + +#include "Poco/MongoDB/OpMsgCursor.h" +#include "Poco/MongoDB/Array.h" + +// +// NOTE: +// +// MongoDB specification indicates that the flag MSG_EXHAUST_ALLOWED shall be +// used in the request when the receiver is ready to receive multiple messages +// without sending additional requests in between. Sender (MongoDB) indicates +// that more messages follow with flag MSG_MORE_TO_COME. +// +// It seems that this does not work properly. MSG_MORE_TO_COME is set and reading +// next messages sometimes works, however often the data is missing in response +// or the message header contains wrong message length and reading blocks. +// Opcode in the header is correct. +// +// Using MSG_EXHAUST_ALLOWED is therefore currently disabled. 
+// +// It seems that related JIRA ticket is: +// +// https://jira.mongodb.org/browse/SERVER-57297 +// +// https://github.com/mongodb/specifications/blob/master/source/message/OP_MSG.rst +// + +#define MONGODB_EXHAUST_ALLOWED_WORKS false + +namespace Poco { +namespace MongoDB { + + +static const std::string keyCursor {"cursor"}; +static const std::string keyFirstBatch {"firstBatch"}; +static const std::string keyNextBatch {"nextBatch"}; + +static Poco::Int64 cursorIdFromResponse(const MongoDB::Document& doc); + + +OpMsgCursor::OpMsgCursor(const std::string& db, const std::string& collection): +#if MONGODB_EXHAUST_ALLOWED_WORKS + _query(db, collection, OpMsgMessage::MSG_EXHAUST_ALLOWED) +#else + _query(db, collection) +#endif +{ +} + +OpMsgCursor::~OpMsgCursor() +{ + try + { + poco_assert_dbg(_cursorID == 0); + } + catch (...) + { + } +} + + +void OpMsgCursor::setEmptyFirstBatch(bool empty) +{ + _emptyFirstBatch = empty; +} + + +bool OpMsgCursor::emptyFirstBatch() const +{ + return _emptyFirstBatch; +} + + +void OpMsgCursor::setBatchSize(Int32 batchSize) +{ + _batchSize = batchSize; +} + + +Int32 OpMsgCursor::batchSize() const +{ + return _batchSize; +} + + +OpMsgMessage& OpMsgCursor::next(Connection& connection) +{ + if (_cursorID == 0) + { + _response.clear(); + + if (_emptyFirstBatch || _batchSize > 0) + { + Int32 bsize = _emptyFirstBatch ? 0 : _batchSize; + if (_query.commandName() == OpMsgMessage::CMD_FIND) + { + _query.body().add("batchSize", bsize); + } + else if (_query.commandName() == OpMsgMessage::CMD_AGGREGATE) + { + auto& cursorDoc = _query.body().addNewDocument("cursor"); + cursorDoc.add("batchSize", bsize); + } + } + + connection.sendRequest(_query, _response); + + const auto& rdoc = _response.body(); + _cursorID = cursorIdFromResponse(rdoc); + } + else + { +#if MONGODB_EXHAUST_ALLOWED_WORKS + std::cout << "Response flags: " << _response.flags() << std::endl; + if (_response.flags() & OpMsgMessage::MSG_MORE_TO_COME) + { + std::cout << "More to come. 
Reading more response: " << std::endl; + _response.clear(); + connection.readResponse(_response); + } + else +#endif + { + _response.clear(); + _query.setCursor(_cursorID, _batchSize); + connection.sendRequest(_query, _response); + } + } + + const auto& rdoc = _response.body(); + _cursorID = cursorIdFromResponse(rdoc); + + return _response; +} + + +void OpMsgCursor::kill(Connection& connection) +{ + _response.clear(); + if (_cursorID != 0) + { + _query.setCommandName(OpMsgMessage::CMD_KILL_CURSORS); + + MongoDB::Array::Ptr cursors = new MongoDB::Array(); + cursors->add(_cursorID); + _query.body().add("cursors", cursors); + + connection.sendRequest(_query, _response); + + const auto killed = _response.body().get("cursorsKilled", nullptr); + if (!killed || killed->size() != 1 || killed->get(0, -1) != _cursorID) + { + throw Poco::ProtocolException("Cursor not killed as expected: " + std::to_string(_cursorID)); + } + + _cursorID = 0; + _query.clear(); + _response.clear(); + } +} + + +Poco::Int64 cursorIdFromResponse(const MongoDB::Document& doc) +{ + Poco::Int64 id {0}; + auto cursorDoc = doc.get(keyCursor, nullptr); + if(cursorDoc) + { + id = cursorDoc->get("id", 0); + } + return id; +} + + +} } // Namespace Poco::MongoDB diff --git a/base/poco/MongoDB/src/OpMsgMessage.cpp b/base/poco/MongoDB/src/OpMsgMessage.cpp new file mode 100644 index 00000000000..2b55772ca59 --- /dev/null +++ b/base/poco/MongoDB/src/OpMsgMessage.cpp @@ -0,0 +1,412 @@ +// +// OpMsgMessage.cpp +// +// Library: MongoDB +// Package: MongoDB +// Module: OpMsgMessage +// +// Copyright (c) 2022, Applied Informatics Software Engineering GmbH. +// and Contributors. +// +// SPDX-License-Identifier: BSL-1.0 +// + +#include "Poco/MongoDB/OpMsgMessage.h" +#include "Poco/MongoDB/MessageHeader.h" +#include "Poco/MongoDB/Array.h" +#include "Poco/StreamCopier.h" +#include "Poco/Logger.h" + +#define POCO_MONGODB_DUMP false + +namespace Poco { +namespace MongoDB { + +// Query and write +const std::string OpMsgMessage::CMD_INSERT { "insert" }; +const std::string OpMsgMessage::CMD_DELETE { "delete" }; +const std::string OpMsgMessage::CMD_UPDATE { "update" }; +const std::string OpMsgMessage::CMD_FIND { "find" }; +const std::string OpMsgMessage::CMD_FIND_AND_MODIFY { "findAndModify" }; +const std::string OpMsgMessage::CMD_GET_MORE { "getMore" }; + +// Aggregation +const std::string OpMsgMessage::CMD_AGGREGATE { "aggregate" }; +const std::string OpMsgMessage::CMD_COUNT { "count" }; +const std::string OpMsgMessage::CMD_DISTINCT { "distinct" }; +const std::string OpMsgMessage::CMD_MAP_REDUCE { "mapReduce" }; + +// Replication and administration +const std::string OpMsgMessage::CMD_HELLO { "hello" }; +const std::string OpMsgMessage::CMD_REPL_SET_GET_STATUS { "replSetGetStatus" }; +const std::string OpMsgMessage::CMD_REPL_SET_GET_CONFIG { "replSetGetConfig" }; + +const std::string OpMsgMessage::CMD_CREATE { "create" }; +const std::string OpMsgMessage::CMD_CREATE_INDEXES { "createIndexes" }; +const std::string OpMsgMessage::CMD_DROP { "drop" }; +const std::string OpMsgMessage::CMD_DROP_DATABASE { "dropDatabase" }; +const std::string OpMsgMessage::CMD_KILL_CURSORS { "killCursors" }; +const std::string OpMsgMessage::CMD_LIST_DATABASES { "listDatabases" }; +const std::string OpMsgMessage::CMD_LIST_INDEXES { "listIndexes" }; + +// Diagnostic +const std::string OpMsgMessage::CMD_BUILD_INFO { "buildInfo" }; +const std::string OpMsgMessage::CMD_COLL_STATS { "collStats" }; +const std::string OpMsgMessage::CMD_DB_STATS { "dbStats" }; +const std::string 
OpMsgMessage::CMD_HOST_INFO { "hostInfo" }; + + +static const std::string& commandIdentifier(const std::string& command); + /// Commands have different names for the payload that is sent in a separate section + + +static const std::string keyCursor {"cursor"}; +static const std::string keyFirstBatch {"firstBatch"}; +static const std::string keyNextBatch {"nextBatch"}; + + +OpMsgMessage::OpMsgMessage() : + Message(MessageHeader::OP_MSG) +{ +} + + +OpMsgMessage::OpMsgMessage(const std::string& databaseName, const std::string& collectionName, UInt32 flags) : + Message(MessageHeader::OP_MSG), + _databaseName(databaseName), + _collectionName(collectionName), + _flags(flags) +{ +} + + +OpMsgMessage::~OpMsgMessage() +{ +} + +const std::string& OpMsgMessage::databaseName() const +{ + return _databaseName; +} + + +const std::string& OpMsgMessage::collectionName() const +{ + return _collectionName; +} + + +void OpMsgMessage::setCommandName(const std::string& command) +{ + _commandName = command; + _body.clear(); + + // IMPORTANT: Command name must be first + if (_collectionName.empty()) + { + // Collection is not specified. It is assumed that this particular command does + // not need it. + _body.add(_commandName, Int32(1)); + } + else + { + _body.add(_commandName, _collectionName); + } + _body.add("$db", _databaseName); +} + + +void OpMsgMessage::setCursor(Poco::Int64 cursorID, Poco::Int32 batchSize) +{ + _commandName = OpMsgMessage::CMD_GET_MORE; + _body.clear(); + + // IMPORTANT: Command name must be first + _body.add(_commandName, cursorID); + _body.add("$db", _databaseName); + _body.add("collection", _collectionName); + if (batchSize > 0) + { + _body.add("batchSize", batchSize); + } +} + + +const std::string& OpMsgMessage::commandName() const +{ + return _commandName; +} + + +void OpMsgMessage::setAcknowledgedRequest(bool ack) +{ + const auto& id = commandIdentifier(_commandName); + if (id.empty()) + return; + + _acknowledged = ack; + + auto writeConcern = _body.get("writeConcern", nullptr); + if (writeConcern) + writeConcern->remove("w"); + + if (ack) + { + _flags = _flags & (~MSG_MORE_TO_COME); + } + else + { + _flags = _flags | MSG_MORE_TO_COME; + if (!writeConcern) + _body.addNewDocument("writeConcern").add("w", 0); + else + writeConcern->add("w", 0); + } + +} + + +bool OpMsgMessage::acknowledgedRequest() const +{ + return _acknowledged; +} + + +UInt32 OpMsgMessage::flags() const +{ + return _flags; +} + + +Document& OpMsgMessage::body() +{ + return _body; +} + + +const Document& OpMsgMessage::body() const +{ + return _body; +} + + +Document::Vector& OpMsgMessage::documents() +{ + return _documents; +} + + +const Document::Vector& OpMsgMessage::documents() const +{ + return _documents; +} + + +bool OpMsgMessage::responseOk() const +{ + Poco::Int64 ok {false}; + if (_body.exists("ok")) + { + ok = _body.getInteger("ok"); + } + return (ok != 0); +} + + +void OpMsgMessage::clear() +{ + _flags = MSG_FLAGS_DEFAULT; + _commandName.clear(); + _body.clear(); + _documents.clear(); +} + + +void OpMsgMessage::send(std::ostream& ostr) +{ + BinaryWriter socketWriter(ostr, BinaryWriter::LITTLE_ENDIAN_BYTE_ORDER); + + // Serialise the body + std::stringstream ss; + BinaryWriter writer(ss, BinaryWriter::LITTLE_ENDIAN_BYTE_ORDER); + writer << _flags; + + writer << PAYLOAD_TYPE_0; + _body.write(writer); + + if (!_documents.empty()) + { + // Serialise attached documents + + std::stringstream ssdoc; + BinaryWriter wdoc(ssdoc, BinaryWriter::LITTLE_ENDIAN_BYTE_ORDER); + for (auto& doc: _documents) + { + 
doc->write(wdoc); + } + wdoc.flush(); + + const std::string& identifier = commandIdentifier(_commandName); + const Poco::Int32 size = static_cast(sizeof(size) + identifier.size() + 1 + ssdoc.tellp()); + writer << PAYLOAD_TYPE_1; + writer << size; + writer.writeCString(identifier.c_str()); + StreamCopier::copyStream(ssdoc, ss); + } + writer.flush(); + +#if POCO_MONGODB_DUMP + const std::string section = ss.str(); + std::string dump; + Logger::formatDump(dump, section.data(), section.length()); + std::cout << dump << std::endl; +#endif + + messageLength(static_cast(ss.tellp())); + + _header.write(socketWriter); + StreamCopier::copyStream(ss, ostr); + + ostr.flush(); +} + + +void OpMsgMessage::read(std::istream& istr) +{ + std::string message; + { + BinaryReader reader(istr, BinaryReader::LITTLE_ENDIAN_BYTE_ORDER); + _header.read(reader); + + poco_assert_dbg(_header.opCode() == _header.OP_MSG); + + const std::streamsize remainingSize {_header.getMessageLength() - _header.MSG_HEADER_SIZE }; + message.reserve(remainingSize); + +#if POCO_MONGODB_DUMP + std::cout + << "Message hdr: " << _header.getMessageLength() << " " << remainingSize << " " + << _header.opCode() << " " << _header.getRequestID() << " " << _header.responseTo() + << std::endl; +#endif + + reader.readRaw(remainingSize, message); + +#if POCO_MONGODB_DUMP + std::string dump; + Logger::formatDump(dump, message.data(), message.length()); + std::cout << dump << std::endl; +#endif + } + // Read complete message and then interpret it. + + std::istringstream msgss(message); + BinaryReader reader(msgss, BinaryReader::LITTLE_ENDIAN_BYTE_ORDER); + + Poco::UInt8 payloadType {0xFF}; + + reader >> _flags; + reader >> payloadType; + poco_assert_dbg(payloadType == PAYLOAD_TYPE_0); + + _body.read(reader); + + // Read next sections from the buffer + while (msgss.good()) + { + // NOTE: Not tested yet with database, because it returns everything in the body. + // Does MongoDB ever return documents as Payload type 1? + reader >> payloadType; + if (!msgss.good()) + { + break; + } + poco_assert_dbg(payloadType == PAYLOAD_TYPE_1); +#if POCO_MONGODB_DUMP + std::cout << "section payload: " << payloadType << std::endl; +#endif + + Poco::Int32 sectionSize {0}; + reader >> sectionSize; + poco_assert_dbg(sectionSize > 0); + +#if POCO_MONGODB_DUMP + std::cout << "section size: " << sectionSize << std::endl; +#endif + std::streamoff offset = sectionSize - sizeof(sectionSize); + std::streampos endOfSection = msgss.tellg() + offset; + + std::string identifier; + reader.readCString(identifier); +#if POCO_MONGODB_DUMP + std::cout << "section identifier: " << identifier << std::endl; +#endif + + // Loop to read documents from this section. + while (msgss.tellg() < endOfSection) + { +#if POCO_MONGODB_DUMP + std::cout << "section doc: " << msgss.tellg() << " " << endOfSection << std::endl; +#endif + Document::Ptr doc = new Document(); + doc->read(reader); + _documents.push_back(doc); + if (msgss.tellg() < 0) + { + break; + } + } + } + + // Extract documents from the cursor batch if they are there. 
+ MongoDB::Array::Ptr batch; + auto curDoc = _body.get(keyCursor, nullptr); + if (curDoc) + { + batch = curDoc->get(keyFirstBatch, nullptr); + if (!batch) + { + batch = curDoc->get(keyNextBatch, nullptr); + } + } + if (batch) + { + for(std::size_t i = 0; i < batch->size(); i++) + { + const auto& d = batch->get(i, nullptr); + if (d) + { + _documents.push_back(d); + } + } + } + +} + +const std::string& commandIdentifier(const std::string& command) +{ + // Names of identifiers for commands that send bulk documents in the request + // The identifier is set in the section type 1. + static std::map identifiers { + { OpMsgMessage::CMD_INSERT, "documents" }, + { OpMsgMessage::CMD_DELETE, "deletes" }, + { OpMsgMessage::CMD_UPDATE, "updates" }, + + // Not sure if create index can send document section + { OpMsgMessage::CMD_CREATE_INDEXES, "indexes" } + }; + + const auto i = identifiers.find(command); + if (i != identifiers.end()) + { + return i->second; + } + + // This likely means that documents are incorrectly set for a command + // that does not send list of documents in section type 1. + static const std::string emptyIdentifier; + return emptyIdentifier; +} + + +} } // namespace Poco::MongoDB diff --git a/base/poco/MongoDB/src/QueryRequest.cpp b/base/poco/MongoDB/src/QueryRequest.cpp index 7044335ba30..6d7d23a8456 100644 --- a/base/poco/MongoDB/src/QueryRequest.cpp +++ b/base/poco/MongoDB/src/QueryRequest.cpp @@ -20,10 +20,10 @@ namespace MongoDB { QueryRequest::QueryRequest(const std::string& collectionName, QueryRequest::Flags flags): - RequestMessage(MessageHeader::OP_QUERY), - _flags(flags), + RequestMessage(MessageHeader::OP_QUERY), + _flags(flags), _fullCollectionName(collectionName), - _numberToSkip(0), + _numberToSkip(0), _numberToReturn(100), _selector(), _returnFieldSelector() diff --git a/base/poco/MongoDB/src/RegularExpression.cpp b/base/poco/MongoDB/src/RegularExpression.cpp index e95e7da82e1..5f7eb6bb51b 100644 --- a/base/poco/MongoDB/src/RegularExpression.cpp +++ b/base/poco/MongoDB/src/RegularExpression.cpp @@ -25,8 +25,8 @@ RegularExpression::RegularExpression() } -RegularExpression::RegularExpression(const std::string& pattern, const std::string& options): - _pattern(pattern), +RegularExpression::RegularExpression(const std::string& pattern, const std::string& options): + _pattern(pattern), _options(options) { } diff --git a/base/poco/MongoDB/src/ReplicaSet.cpp b/base/poco/MongoDB/src/ReplicaSet.cpp index b56fea49311..fce2f2bdada 100644 --- a/base/poco/MongoDB/src/ReplicaSet.cpp +++ b/base/poco/MongoDB/src/ReplicaSet.cpp @@ -21,7 +21,7 @@ namespace Poco { namespace MongoDB { -ReplicaSet::ReplicaSet(const std::vector &addresses): +ReplicaSet::ReplicaSet(const std::vector &addresses): _addresses(addresses) { } @@ -81,8 +81,8 @@ Connection::Ptr ReplicaSet::isMaster(const Net::SocketAddress& address) { conn = 0; } - - return 0; + + return 0; } diff --git a/base/poco/MongoDB/src/RequestMessage.cpp b/base/poco/MongoDB/src/RequestMessage.cpp index 6391d966198..999ed8a6ba1 100644 --- a/base/poco/MongoDB/src/RequestMessage.cpp +++ b/base/poco/MongoDB/src/RequestMessage.cpp @@ -21,7 +21,7 @@ namespace Poco { namespace MongoDB { -RequestMessage::RequestMessage(MessageHeader::OpCode opcode): +RequestMessage::RequestMessage(MessageHeader::OpCode opcode): Message(opcode) { } @@ -35,7 +35,7 @@ RequestMessage::~RequestMessage() void RequestMessage::send(std::ostream& ostr) { std::stringstream ss; - BinaryWriter requestWriter(ss); + BinaryWriter requestWriter(ss, 
BinaryWriter::LITTLE_ENDIAN_BYTE_ORDER); buildRequest(requestWriter); requestWriter.flush(); diff --git a/base/poco/MongoDB/src/ResponseMessage.cpp b/base/poco/MongoDB/src/ResponseMessage.cpp index 3254ace63e6..e8216767494 100644 --- a/base/poco/MongoDB/src/ResponseMessage.cpp +++ b/base/poco/MongoDB/src/ResponseMessage.cpp @@ -21,10 +21,20 @@ namespace MongoDB { ResponseMessage::ResponseMessage(): - Message(MessageHeader::OP_REPLY), - _responseFlags(0), - _cursorID(0), - _startingFrom(0), + Message(MessageHeader::OP_REPLY), + _responseFlags(0), + _cursorID(0), + _startingFrom(0), + _numberReturned(0) +{ +} + + +ResponseMessage::ResponseMessage(const Int64& cursorID): + Message(MessageHeader::OP_REPLY), + _responseFlags(0), + _cursorID(cursorID), + _startingFrom(0), _numberReturned(0) { } @@ -50,7 +60,7 @@ void ResponseMessage::read(std::istream& istr) clear(); BinaryReader reader(istr, BinaryReader::LITTLE_ENDIAN_BYTE_ORDER); - + _header.read(reader); reader >> _responseFlags; diff --git a/base/poco/MongoDB/src/UpdateRequest.cpp b/base/poco/MongoDB/src/UpdateRequest.cpp index 2af4621ff64..7477fc752d5 100644 --- a/base/poco/MongoDB/src/UpdateRequest.cpp +++ b/base/poco/MongoDB/src/UpdateRequest.cpp @@ -20,7 +20,7 @@ namespace MongoDB { UpdateRequest::UpdateRequest(const std::string& collectionName, UpdateRequest::Flags flags): - RequestMessage(MessageHeader::OP_UPDATE), + RequestMessage(MessageHeader::OP_UPDATE), _flags(flags), _fullCollectionName(collectionName), _selector(), diff --git a/base/poco/Net/include/Poco/Net/HTTPClientSession.h b/base/poco/Net/include/Poco/Net/HTTPClientSession.h index 6f67918d2c8..d495d662f75 100644 --- a/base/poco/Net/include/Poco/Net/HTTPClientSession.h +++ b/base/poco/Net/include/Poco/Net/HTTPClientSession.h @@ -127,6 +127,9 @@ namespace Net void setResolvedHost(std::string resolved_host) { _resolved_host.swap(resolved_host); } + std::string getResolvedHost() const { return _resolved_host; } + /// Returns the resolved IP address of the target HTTP server. + Poco::UInt16 getPort() const; /// Returns the port number of the target HTTP server. diff --git a/cmake/autogenerated_versions.txt b/cmake/autogenerated_versions.txt index 462529fbc13..015037b2de6 100644 --- a/cmake/autogenerated_versions.txt +++ b/cmake/autogenerated_versions.txt @@ -2,11 +2,11 @@ # NOTE: has nothing common with DBMS_TCP_PROTOCOL_VERSION, # only DBMS_TCP_PROTOCOL_VERSION should be incremented on protocol changes. 
-SET(VERSION_REVISION 54474) +SET(VERSION_REVISION 54475) SET(VERSION_MAJOR 23) -SET(VERSION_MINOR 5) +SET(VERSION_MINOR 6) SET(VERSION_PATCH 1) -SET(VERSION_GITHASH 3920eb987f7ed837ada5de8907284adf123f0583) -SET(VERSION_DESCRIBE v23.5.1.1-testing) -SET(VERSION_STRING 23.5.1.1) +SET(VERSION_GITHASH 2fec796e73efda10a538a03af3205ce8ffa1b2de) +SET(VERSION_DESCRIBE v23.6.1.1-testing) +SET(VERSION_STRING 23.6.1.1) # end of autochange diff --git a/contrib/NuRaft b/contrib/NuRaft index b56784be1ae..491eaf592d9 160000 --- a/contrib/NuRaft +++ b/contrib/NuRaft @@ -1 +1 @@ -Subproject commit b56784be1aec568fb72aff47f281097c017623cb +Subproject commit 491eaf592d950e0e37accbe8b3f217e068c9fecf diff --git a/contrib/azure-cmake/CMakeLists.txt b/contrib/azure-cmake/CMakeLists.txt index 1e2a4c97824..887122e7653 100644 --- a/contrib/azure-cmake/CMakeLists.txt +++ b/contrib/azure-cmake/CMakeLists.txt @@ -1,6 +1,6 @@ option (ENABLE_AZURE_BLOB_STORAGE "Enable Azure blob storage" ${ENABLE_LIBRARIES}) -if (NOT ENABLE_AZURE_BLOB_STORAGE OR BUILD_STANDALONE_KEEPER OR OS_FREEBSD) +if (NOT ENABLE_AZURE_BLOB_STORAGE OR BUILD_STANDALONE_KEEPER OR OS_FREEBSD OR (NOT ARCH_AMD64)) message(STATUS "Not using Azure blob storage") return() endif() diff --git a/contrib/lz4 b/contrib/lz4 index 4c9431e9af5..e82198428c8 160000 --- a/contrib/lz4 +++ b/contrib/lz4 @@ -1 +1 @@ -Subproject commit 4c9431e9af596af0556e5da0ae99305bafb2b10b +Subproject commit e82198428c8061372d5adef1f9bfff4203f6081e diff --git a/docker/keeper/Dockerfile b/docker/keeper/Dockerfile index 73da4515ff4..44967af4b32 100644 --- a/docker/keeper/Dockerfile +++ b/docker/keeper/Dockerfile @@ -32,7 +32,7 @@ RUN arch=${TARGETARCH:-amd64} \ esac ARG REPOSITORY="https://s3.amazonaws.com/clickhouse-builds/22.4/31c367d3cd3aefd316778601ff6565119fe36682/package_release" -ARG VERSION="23.4.2.11" +ARG VERSION="23.5.2.7" ARG PACKAGES="clickhouse-keeper" # user/group precreated explicitly with fixed uid/gid on purpose. diff --git a/docker/server/Dockerfile.alpine b/docker/server/Dockerfile.alpine index 1a5d2071f6b..8ab9bf7b077 100644 --- a/docker/server/Dockerfile.alpine +++ b/docker/server/Dockerfile.alpine @@ -33,7 +33,7 @@ RUN arch=${TARGETARCH:-amd64} \ # lts / testing / prestable / etc ARG REPO_CHANNEL="stable" ARG REPOSITORY="https://packages.clickhouse.com/tgz/${REPO_CHANNEL}" -ARG VERSION="23.4.2.11" +ARG VERSION="23.5.2.7" ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static" # user/group precreated explicitly with fixed uid/gid on purpose. diff --git a/docker/server/Dockerfile.ubuntu b/docker/server/Dockerfile.ubuntu index 8792d419a16..b3b0cfe1510 100644 --- a/docker/server/Dockerfile.ubuntu +++ b/docker/server/Dockerfile.ubuntu @@ -22,7 +22,7 @@ RUN sed -i "s|http://archive.ubuntu.com|${apt_archive}|g" /etc/apt/sources.list ARG REPO_CHANNEL="stable" ARG REPOSITORY="deb https://packages.clickhouse.com/deb ${REPO_CHANNEL} main" -ARG VERSION="23.4.2.11" +ARG VERSION="23.5.2.7" ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static" # set non-empty deb_location_url url to create a docker image diff --git a/docker/server/README.md b/docker/server/README.md index e6c9ee51fa7..67646a262f5 100644 --- a/docker/server/README.md +++ b/docker/server/README.md @@ -16,6 +16,11 @@ For more information and documentation see https://clickhouse.com/. - The tag `head` is built from the latest commit to the default branch. - Each tag has optional `-alpine` suffix to reflect that it's built on top of `alpine`. 
+### Compatibility + +- The amd64 image requires support for [SSE3 instructions](https://en.wikipedia.org/wiki/SSE3). Virtually all x86 CPUs after 2005 support SSE3. +- The arm64 image requires support for the [ARMv8.2-A architecture](https://en.wikipedia.org/wiki/AArch64#ARMv8.2-A). Most ARM CPUs after 2017 support ARMv8.2-A. A notable exception is Raspberry Pi 4 from 2019 whose CPU only supports ARMv8.0-A. + ## How to use this image ### start server instance diff --git a/docker/test/upgrade/run.sh b/docker/test/upgrade/run.sh index bd0c59a12cd..6f7d3999f1d 100644 --- a/docker/test/upgrade/run.sh +++ b/docker/test/upgrade/run.sh @@ -59,12 +59,6 @@ install_packages previous_release_package_folder # available for dump via clickhouse-local configure -# local_blob_storage disk type does not exist in older versions -sudo cat /etc/clickhouse-server/config.d/storage_conf.xml \ - | sed "s|local_blob_storage|local|" \ - > /etc/clickhouse-server/config.d/storage_conf.xml.tmp -sudo mv /etc/clickhouse-server/config.d/storage_conf.xml.tmp /etc/clickhouse-server/config.d/storage_conf.xml - # it contains some new settings, but we can safely remove it rm /etc/clickhouse-server/config.d/merge_tree.xml @@ -92,11 +86,6 @@ export USE_S3_STORAGE_FOR_MERGE_TREE=1 export ZOOKEEPER_FAULT_INJECTION=0 configure -sudo cat /etc/clickhouse-server/config.d/storage_conf.xml \ - | sed "s|local_blob_storage|local|" \ - > /etc/clickhouse-server/config.d/storage_conf.xml.tmp -sudo mv /etc/clickhouse-server/config.d/storage_conf.xml.tmp /etc/clickhouse-server/config.d/storage_conf.xml - # it contains some new settings, but we can safely remove it rm /etc/clickhouse-server/config.d/merge_tree.xml diff --git a/docs/changelogs/v23.2.7.32-stable.md b/docs/changelogs/v23.2.7.32-stable.md new file mode 100644 index 00000000000..db5e9e76311 --- /dev/null +++ b/docs/changelogs/v23.2.7.32-stable.md @@ -0,0 +1,35 @@ +--- +sidebar_position: 1 +sidebar_label: 2023 +--- + +# 2023 Changelog + +### ClickHouse release v23.2.7.32-stable (934f6a2aa0e) FIXME as compared to v23.2.6.34-stable (570190045b0) + +#### Performance Improvement +* Backported in [#49218](https://github.com/ClickHouse/ClickHouse/issues/49218): Fixed excessive reading in queries with `FINAL`. [#47801](https://github.com/ClickHouse/ClickHouse/pull/47801) ([Nikita Taranov](https://github.com/nickitat)). + +#### Build/Testing/Packaging Improvement +* Backported in [#49208](https://github.com/ClickHouse/ClickHouse/issues/49208): Fix glibc compatibility check: replace `preadv` from musl. [#49144](https://github.com/ClickHouse/ClickHouse/pull/49144) ([alesapin](https://github.com/alesapin)). + +#### Bug Fix (user-visible misbehavior in an official stable release) + +* Fix key not found error for queries with multiple StorageJoin [#49137](https://github.com/ClickHouse/ClickHouse/pull/49137) ([vdimir](https://github.com/vdimir)). +* Fix race on Outdated parts loading [#49223](https://github.com/ClickHouse/ClickHouse/pull/49223) ([Alexander Tokmakov](https://github.com/tavplubix)). +* Fix bug in DISTINCT [#49628](https://github.com/ClickHouse/ClickHouse/pull/49628) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Fix msan issue in randomStringUTF8() [#49750](https://github.com/ClickHouse/ClickHouse/pull/49750) ([Robert Schulze](https://github.com/rschu1ze)). +* Fix IPv6 encoding in protobuf [#49933](https://github.com/ClickHouse/ClickHouse/pull/49933) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)). 
+* Avoid deadlock when starting table in attach thread of `ReplicatedMergeTree` [#50026](https://github.com/ClickHouse/ClickHouse/pull/50026) ([Antonio Andelic](https://github.com/antonio2368)). +* JIT compilation not equals NaN fix [#50056](https://github.com/ClickHouse/ClickHouse/pull/50056) ([Maksim Kita](https://github.com/kitaisreal)). +* Fix crash with `multiIf` and constant condition and nullable arguments [#50123](https://github.com/ClickHouse/ClickHouse/pull/50123) ([Anton Popov](https://github.com/CurtizJ)). +* Fix Keeper deadlock on exception when preprocessing requests. [#50387](https://github.com/ClickHouse/ClickHouse/pull/50387) ([frinkr](https://github.com/frinkr)). + +#### NOT FOR CHANGELOG / INSIGNIFICANT + +* Improve test reports [#49151](https://github.com/ClickHouse/ClickHouse/pull/49151) ([Mikhail f. Shiryaev](https://github.com/Felixoid)). +* Fallback auth gh api [#49314](https://github.com/ClickHouse/ClickHouse/pull/49314) ([Mikhail f. Shiryaev](https://github.com/Felixoid)). +* Improve CI: status commit, auth for get_gh_api [#49388](https://github.com/ClickHouse/ClickHouse/pull/49388) ([Mikhail f. Shiryaev](https://github.com/Felixoid)). +* Update github.com/distribution/distribution [#50114](https://github.com/ClickHouse/ClickHouse/pull/50114) ([Mikhail f. Shiryaev](https://github.com/Felixoid)). +* Catch issues with dockerd during the build [#50700](https://github.com/ClickHouse/ClickHouse/pull/50700) ([Mikhail f. Shiryaev](https://github.com/Felixoid)). + diff --git a/docs/changelogs/v23.5.1.3174-stable.md b/docs/changelogs/v23.5.1.3174-stable.md new file mode 100644 index 00000000000..01e5425de71 --- /dev/null +++ b/docs/changelogs/v23.5.1.3174-stable.md @@ -0,0 +1,599 @@ +--- +sidebar_position: 1 +sidebar_label: 2023 +--- + +# 2023 Changelog + +### ClickHouse release v23.5.1.3174-stable (2fec796e73e) FIXME as compared to v23.4.1.1943-stable (3920eb987f7) + +#### Backward Incompatible Change +* Make local object storage work consistently with s3 object storage, fix problem with append (closes [#48465](https://github.com/ClickHouse/ClickHouse/issues/48465)), make it configurable as independent storage. The change is backward incompatible because the cache on top of local object storage is not compatible with previous versions. [#48791](https://github.com/ClickHouse/ClickHouse/pull/48791) ([Kseniia Sumarokova](https://github.com/kssenii)). +* The `date_trunc` function now always returns the `DateTime` type. [#48851](https://github.com/ClickHouse/ClickHouse/pull/48851) ([Shane Andrade](https://github.com/mauidude)). +* Remove the experimental feature "in-memory data parts". The data format is still supported, but the settings are no-op, and compact or wide parts will be used instead. This closes [#45409](https://github.com/ClickHouse/ClickHouse/issues/45409). [#49429](https://github.com/ClickHouse/ClickHouse/pull/49429) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Changed default values of settings `parallelize_output_from_storages` and `input_format_parquet_preserve_order`. This allows ClickHouse to reorder rows when reading from files (e.g. CSV or Parquet), greatly improving performance in many cases. To restore the old behavior of preserving order, use `parallelize_output_from_storages = 0`, `input_format_parquet_preserve_order = 1`. [#49479](https://github.com/ClickHouse/ClickHouse/pull/49479) ([Michael Kolupaev](https://github.com/al13n321)). +* Make projections production-ready.
Add the `optimize_use_projections` setting to control whether the projections will be selected for SELECT queries. The setting `allow_experimental_projection_optimization` is obsolete and does nothing. [#49719](https://github.com/ClickHouse/ClickHouse/pull/49719) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Mark `joinGet()` as non-deterministic (just like `dictGet()`). [#49843](https://github.com/ClickHouse/ClickHouse/pull/49843) ([Azat Khuzhin](https://github.com/azat)). +* Revert "`groupArray` returns cannot be nullable" (due to binary compatibility breakage for `groupArray`/`groupArrayLast`/`groupArraySample` over `Nullable` types, which likely will lead to `TOO_LARGE_ARRAY_SIZE` or `CANNOT_READ_ALL_DATA`). [#49971](https://github.com/ClickHouse/ClickHouse/pull/49971) ([Azat Khuzhin](https://github.com/azat)). + +#### New Feature +* Password type in queries like `CREATE USER u IDENTIFIED BY 'p'` will be automatically set according to the setting `default_password_type` in the `config.xml` on the server. Closes [#42915](https://github.com/ClickHouse/ClickHouse/issues/42915). [#44674](https://github.com/ClickHouse/ClickHouse/pull/44674) ([Nikolay Degterinsky](https://github.com/evillique)). +* Add bcrypt password authentication type. Closes [#34599](https://github.com/ClickHouse/ClickHouse/issues/34599). [#44905](https://github.com/ClickHouse/ClickHouse/pull/44905) ([Nikolay Degterinsky](https://github.com/evillique)). +* Added `system.zookeeper_connection` table that shows information about ZooKeeper connections. [#45245](https://github.com/ClickHouse/ClickHouse/pull/45245) ([mateng915](https://github.com/mateng0915)). +* Add urlCluster table function. Refactor all *Cluster table functions to reduce code duplication. Make schema inference work for all possible *Cluster function signatures and for named collections. Closes [#38499](https://github.com/ClickHouse/ClickHouse/issues/38499). [#45427](https://github.com/ClickHouse/ClickHouse/pull/45427) ([attack204](https://github.com/attack204)). +* Extend `first_value` and `last_value` to accept NULL. [#46467](https://github.com/ClickHouse/ClickHouse/pull/46467) ([lgbo](https://github.com/lgbo-ustc)). +* Add server and format settings `display_secrets_in_show_and_select` for displaying secrets of tables, databases, table functions, and dictionaries. Add privilege `displaySecretsInShowAndSelect` controlling which users can view secrets. [#46528](https://github.com/ClickHouse/ClickHouse/pull/46528) ([Mike Kot](https://github.com/myrrc)). +* Add new function `generateRandomStructure` that generates random table structure. It can be used in combination with table function `generateRandom`. [#47409](https://github.com/ClickHouse/ClickHouse/pull/47409) ([Kruglov Pavel](https://github.com/Avogar)). +* Added a native ClickHouse Keeper CLI client. [#47414](https://github.com/ClickHouse/ClickHouse/pull/47414) ([pufit](https://github.com/pufit)). +* The query cache can now be used for production workloads. [#47977](https://github.com/ClickHouse/ClickHouse/pull/47977) ([Robert Schulze](https://github.com/rschu1ze)). +* Fix a bug that prevented the use of `CASE` without an `ELSE` branch and extended `transform` to deal with more types. Also fix some bugs that made transform() return incorrect results when decimal types were mixed with other numeric types. [#48300](https://github.com/ClickHouse/ClickHouse/pull/48300) ([Salvatore Mesoraca](https://github.com/aiven-sal)).
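
A minimal sketch of the `CASE` behavior described in the entry above; assuming ClickHouse 23.5 semantics, a `CASE` expression without an `ELSE` branch now simply yields `NULL` for non-matching rows:

```sql
-- CASE without ELSE: rows matching no branch return NULL.
SELECT
    number,
    CASE WHEN number % 2 = 0 THEN 'even' END AS parity
FROM numbers(3);
-- 0 -> 'even', 1 -> NULL, 2 -> 'even'
```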
+* Added [server-side encryption using KMS keys](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingKMSEncryption.html) with S3 tables, and the `header` setting with S3 disks. Closes [#48723](https://github.com/ClickHouse/ClickHouse/issues/48723). [#48724](https://github.com/ClickHouse/ClickHouse/pull/48724) ([Johann Gan](https://github.com/johanngan)). +* Add MemoryTracker for the background tasks (merges and mutation). Introduces `merges_mutations_memory_usage_soft_limit` and `merges_mutations_memory_usage_to_ram_ratio` settings that represent the soft memory limit for merges and mutations. If this limit is reached ClickHouse won't schedule new merge or mutation tasks. Also `MergesMutationsMemoryTracking` metric is introduced to allow observing current memory usage of background tasks. Resubmit [#46089](https://github.com/ClickHouse/ClickHouse/issues/46089). Closes [#48774](https://github.com/ClickHouse/ClickHouse/issues/48774). [#48787](https://github.com/ClickHouse/ClickHouse/pull/48787) ([Dmitry Novik](https://github.com/novikd)). +* Function `dotProduct` now works for arrays. [#49050](https://github.com/ClickHouse/ClickHouse/pull/49050) ([FFFFFFFHHHHHHH](https://github.com/FFFFFFFHHHHHHH)). +* Support statement `SHOW INDEX` to improve compatibility with MySQL. [#49158](https://github.com/ClickHouse/ClickHouse/pull/49158) ([Robert Schulze](https://github.com/rschu1ze)). +* Add virtual columns `_file` and `_path` support to table function `url`. Improve the error message for table function `url`. Resolves [#49231](https://github.com/ClickHouse/ClickHouse/issues/49231), resolves [#49232](https://github.com/ClickHouse/ClickHouse/issues/49232). [#49356](https://github.com/ClickHouse/ClickHouse/pull/49356) ([Ziyi Tan](https://github.com/Ziy1-Tan)). +* Added the `grants` field in the users.xml file, which allows specifying grants for users. [#49381](https://github.com/ClickHouse/ClickHouse/pull/49381) ([pufit](https://github.com/pufit)). +* Add aliases `str_to_map` and `mapFromString` for `extractKeyValuePairs`. Closes [#47185](https://github.com/ClickHouse/ClickHouse/issues/47185). [#49466](https://github.com/ClickHouse/ClickHouse/pull/49466) ([flynn](https://github.com/ucasfl)). +* Support FULL/RIGHT JOIN using the grace hash join algorithm. [#49483](https://github.com/ClickHouse/ClickHouse/pull/49483) ([lgbo](https://github.com/lgbo-ustc)). +* `WITH FILL` modifier groups filling by sorting prefix. Controlled by `use_with_fill_by_sorting_prefix` setting (enabled by default). Related to [#33203](https://github.com/ClickHouse/ClickHouse/issues/33203)#issuecomment-1418736794. [#49503](https://github.com/ClickHouse/ClickHouse/pull/49503) ([Igor Nikonov](https://github.com/devcrafter)). +* Add SQL functions for entropy-learned hashing. [#49656](https://github.com/ClickHouse/ClickHouse/pull/49656) ([Robert Schulze](https://github.com/rschu1ze)). +* `clickhouse-client` now accepts queries after `--multiquery` when `--query` (or `-q`) is absent. Example: `clickhouse-client --multiquery "select 1; select 2;"`. [#49870](https://github.com/ClickHouse/ClickHouse/pull/49870) ([Alexey Gerasimchuck](https://github.com/Demilivor)). +* Add separate `handshake_timeout` for receiving Hello packet from replica. Closes [#48854](https://github.com/ClickHouse/ClickHouse/issues/48854). [#49948](https://github.com/ClickHouse/ClickHouse/pull/49948) ([Kruglov Pavel](https://github.com/Avogar)).
+* The new setting `s3_max_inflight_parts_for_one_file` limits the number of parts that can be uploaded concurrently within a multipart upload of a single file. [#49961](https://github.com/ClickHouse/ClickHouse/pull/49961) ([Sema Checherinda](https://github.com/CheSema)). +* Geographical data types (`Point`, `Ring`, `Polygon`, and `MultiPolygon`) are production-ready. [#50022](https://github.com/ClickHouse/ClickHouse/pull/50022) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Added a function `space()` which repeats a space as many times as specified. [#50103](https://github.com/ClickHouse/ClickHouse/pull/50103) ([Robert Schulze](https://github.com/rschu1ze)). +* Added the `--input_format_csv_trim_whitespaces` option. [#50215](https://github.com/ClickHouse/ClickHouse/pull/50215) ([Alexey Gerasimchuck](https://github.com/Demilivor)). +* Added the `dictGetAll` function for regexp tree dictionaries to return values from multiple matches as arrays. Closes [#50254](https://github.com/ClickHouse/ClickHouse/issues/50254). [#50255](https://github.com/ClickHouse/ClickHouse/pull/50255) ([Johann Gan](https://github.com/johanngan)). +* Added the `toLastDayOfWeek()` function to round a date or a date with time up to the nearest Saturday or Sunday. [#50315](https://github.com/ClickHouse/ClickHouse/pull/50315) ([Victor Krasnov](https://github.com/sirvickr)). +* Ability to ignore a skip index by specifying `ignore_data_skipping_indices`. [#50329](https://github.com/ClickHouse/ClickHouse/pull/50329) ([Boris Kuschel](https://github.com/bkuschel)). +* Revert 'Add SQL functions for entropy-learned hashing'. [#50416](https://github.com/ClickHouse/ClickHouse/pull/50416) ([Robert Schulze](https://github.com/rschu1ze)). +* Add `system.user_processes` table and `SHOW USER PROCESSES` query to show memory info and ProfileEvents on user level. [#50492](https://github.com/ClickHouse/ClickHouse/pull/50492) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)). +* Added storage engine `AzureBlobStorage` and `azure_blob_storage` table function. The supported set of features is very similar to storage/table function `S3`. Implements [#19307](https://github.com/ClickHouse/ClickHouse/issues/19307). [#50604](https://github.com/ClickHouse/ClickHouse/pull/50604) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)). + +#### Performance Improvement +* Compress marks and primary key by default. It significantly reduces the cold query time. Upgrade notes: the support for compressed marks and primary key has been added in version 22.9. If you turned on compressed marks or primary key or installed version 23.5 or newer, which has compressed marks or primary key on by default, you will not be able to downgrade to version 22.8 or earlier. You can also explicitly disable compressed marks or primary keys by specifying the `compress_marks` and `compress_primary_key` settings in the `` section of the server configuration file. **Upgrade notes:** If you upgrade from versions prior to 22.9, you should either upgrade all replicas at once or disable the compression before upgrade, or upgrade through an intermediate version, where the compressed marks are supported but not enabled by default, such as 23.3. [#42587](https://github.com/ClickHouse/ClickHouse/pull/42587) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* When reading from multiple files, reduce the number of parallel parsing threads for each file. Resolves [#42192](https://github.com/ClickHouse/ClickHouse/issues/42192).
[#46661](https://github.com/ClickHouse/ClickHouse/pull/46661) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)). +* Do not store blocks in `ANY` hash join if nothing is inserted. [#48633](https://github.com/ClickHouse/ClickHouse/pull/48633) ([vdimir](https://github.com/vdimir)). +* Fixes the aggregate combinator `-If` when JIT compiled. Closes [#48120](https://github.com/ClickHouse/ClickHouse/issues/48120). [#49083](https://github.com/ClickHouse/ClickHouse/pull/49083) ([Igor Nikonov](https://github.com/devcrafter)). +* For reading from remote tables we use smaller tasks (instead of reading the whole part) to make task stealing work: the task size is determined by the size of the columns to read; 1 MB buffers are always used for reading from S3; boundaries of cache segments are aligned to 1 MB so they have a decent size even with small tasks. This also should prevent fragmentation. [#49287](https://github.com/ClickHouse/ClickHouse/pull/49287) ([Nikita Taranov](https://github.com/nickitat)). +* Default size of a read buffer for reading from local filesystem changed to a slightly better value. Also two new settings are introduced: `max_read_buffer_size_local_fs` and `max_read_buffer_size_remote_fs`. [#49321](https://github.com/ClickHouse/ClickHouse/pull/49321) ([Nikita Taranov](https://github.com/nickitat)). +* Improve memory usage and speed of `SPARSE_HASHED`/`HASHED` dictionaries (e.g. `SPARSE_HASHED` now eats 2.6x less memory, and is ~2x faster). [#49380](https://github.com/ClickHouse/ClickHouse/pull/49380) ([Azat Khuzhin](https://github.com/azat)). +* Use aggregate projection only if it reads fewer granules than normal reading. It should help in case if query hits the PK of the table, but not the projection. Fixes [#49150](https://github.com/ClickHouse/ClickHouse/issues/49150). [#49417](https://github.com/ClickHouse/ClickHouse/pull/49417) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Optimize PODArray::resize_fill() callers. [#49459](https://github.com/ClickHouse/ClickHouse/pull/49459) ([Azat Khuzhin](https://github.com/azat)). +* Optimize the system.query_log and system.query_thread_log tables by applying LowCardinality when appropriate. The queries over these tables will be faster. [#49530](https://github.com/ClickHouse/ClickHouse/pull/49530) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Better performance when reading local Parquet files (through parallel reading). [#49539](https://github.com/ClickHouse/ClickHouse/pull/49539) ([Michael Kolupaev](https://github.com/al13n321)). +* Improve the performance of `RIGHT/FULL JOIN` by up to 2 times in certain scenarios, especially when joining a small left table with a large right table. [#49585](https://github.com/ClickHouse/ClickHouse/pull/49585) ([lgbo](https://github.com/lgbo-ustc)). +* Improve performance of BLAKE3 by 11% by enabling LTO for Rust. [#49600](https://github.com/ClickHouse/ClickHouse/pull/49600) ([Azat Khuzhin](https://github.com/azat)). +* Optimize the structure of the `system.opentelemetry_span_log`. Use `LowCardinality` where appropriate. Although this table is generally stupid (it is using the Map data type even for common attributes), it will be slightly better. [#49647](https://github.com/ClickHouse/ClickHouse/pull/49647) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Try to reserve the hash table's size in the `grace_hash` join. [#49816](https://github.com/ClickHouse/ClickHouse/pull/49816) ([lgbo](https://github.com/lgbo-ustc)).
+* As addressed in issue [#49748](https://github.com/ClickHouse/ClickHouse/issues/49748), predicates with date converters, such as **toYear, toYYYYMM**, can be rewritten as equivalent date (YYYY-MM-DD) comparisons at the AST level. This transformation brings a performance improvement because it avoids the expensive date converter, and the comparison between dates (or integers in the low-level representation) is quite cheap. The [prototype](https://github.com/ZhiguoZh/ClickHouse/commit/c7f1753f0c9363a19d95fa46f1cfed1d9f505ee0) shows that, with all identified date converters optimized, the overall QPS of the 13 queries is enhanced by **~11%** on the ICX server (Intel Xeon Platinum 8380 CPU, 80 cores, 160 threads). [#50062](https://github.com/ClickHouse/ClickHouse/pull/50062) ([Zhiguo Zhou](https://github.com/ZhiguoZh)). +* Parallel merge of `uniqExactIf` states. Closes [#49885](https://github.com/ClickHouse/ClickHouse/issues/49885). [#50285](https://github.com/ClickHouse/ClickHouse/pull/50285) ([flynn](https://github.com/ucasfl)). +* As addressed in issue [#49748](https://github.com/ClickHouse/ClickHouse/issues/49748), predicates with date converters, such as toYear, toYYYYMM, can be rewritten as equivalent date (YYYY-MM-DD) comparisons at the AST level. This transformation brings a performance improvement because it avoids the expensive date converter, and the comparison between dates (or integers in the low-level representation) is quite cheap. [#50307](https://github.com/ClickHouse/ClickHouse/pull/50307) ([Zhiguo Zhou](https://github.com/ZhiguoZh)). +* Parallel merging supported for `uniqExact` with modifiers `-Array`, `-Merge`, `-OrNull`, `-State`. [#50413](https://github.com/ClickHouse/ClickHouse/pull/50413) ([flynn](https://github.com/ucasfl)). +* Enable LZ4_FAST_DEC_LOOP for Arm LZ4 to gain about 5% decompression speed. [#50588](https://github.com/ClickHouse/ClickHouse/pull/50588) ([Daniel Kutenin](https://github.com/danlark1)). + +#### Improvement
* Add support for CGroup version 2 for asynchronous metrics about the memory usage and availability. This closes [#37983](https://github.com/ClickHouse/ClickHouse/issues/37983). [#45999](https://github.com/ClickHouse/ClickHouse/pull/45999) ([sichenzhao](https://github.com/sichenzhao)). +* Cluster table functions should always skip unavailable shards. Close [#46314](https://github.com/ClickHouse/ClickHouse/issues/46314). [#46765](https://github.com/ClickHouse/ClickHouse/pull/46765) ([zk_kiger](https://github.com/zk-kiger)). +* Handle the case when a CSV file contains empty columns. [#47496](https://github.com/ClickHouse/ClickHouse/pull/47496) ([你不要过来啊](https://github.com/iiiuwioajdks)). +* ROW POLICY for all tables that belong to a DATABASE. [#47640](https://github.com/ClickHouse/ClickHouse/pull/47640) ([Ilya Golshtein](https://github.com/ilejn)). +* Add Google Cloud Storage S3 compatible table function `gcs`. Like the `oss` and `cosn` functions, it is just an alias over the `s3` table function, and it does not bring any new features. [#47815](https://github.com/ClickHouse/ClickHouse/pull/47815) ([Kuba Kaflik](https://github.com/jkaflik)). +* Add ability to use strict parts size for S3 (compatibility with CloudFlare R2 S3 Storage). [#48492](https://github.com/ClickHouse/ClickHouse/pull/48492) ([Azat Khuzhin](https://github.com/azat)). +* Added new columns with info about `Replicated` database replicas to `system.clusters`: `database_shard_name`, `database_replica_name`, `is_active`.
Added an optional `FROM SHARD` clause to `SYSTEM DROP DATABASE REPLICA` query. [#48548](https://github.com/ClickHouse/ClickHouse/pull/48548) ([Alexander Tokmakov](https://github.com/tavplubix)). +* Add a new column `zookeeper_name` in system.replicas, to indicate on which (auxiliary) zookeeper cluster the replicated table's metadata is stored. [#48549](https://github.com/ClickHouse/ClickHouse/pull/48549) ([cangyin](https://github.com/cangyin)). +* The `IN` operator supports comparing `Date` and `Date32`. Closes [#48736](https://github.com/ClickHouse/ClickHouse/issues/48736). [#48806](https://github.com/ClickHouse/ClickHouse/pull/48806) ([flynn](https://github.com/ucasfl)). +* Support for erasure codes in HDFS, author: @M1eyu2018, @tomscut. [#48833](https://github.com/ClickHouse/ClickHouse/pull/48833) ([M1eyu](https://github.com/M1eyu2018)). +* The query cache now supports queries with the totals and extremes modifiers. [#48853](https://github.com/ClickHouse/ClickHouse/pull/48853) ([Robert Schulze](https://github.com/rschu1ze)). +* Introduces the new keyword `INTO OUTFILE 'file.txt' APPEND`. [#48880](https://github.com/ClickHouse/ClickHouse/pull/48880) ([alekar](https://github.com/alekar)). +* The `BACKUP` command will not decrypt data from encrypted disks while making a backup. Instead the data will be stored in a backup in encrypted form. Such backups can be restored only to an encrypted disk with the same (or extended) list of encryption keys. [#48896](https://github.com/ClickHouse/ClickHouse/pull/48896) ([Vitaly Baranov](https://github.com/vitlibar)). +* Keeper improvement: add `CheckNotExists` request to Keeper. [#48897](https://github.com/ClickHouse/ClickHouse/pull/48897) ([Antonio Andelic](https://github.com/antonio2368)). +* Implement SYSTEM DROP REPLICA from auxiliary ZooKeeper clusters, may close [#48931](https://github.com/ClickHouse/ClickHouse/issues/48931). [#48932](https://github.com/ClickHouse/ClickHouse/pull/48932) ([wangxiaobo](https://github.com/wzb5212)). +* Add Array data type to MongoDB. Closes [#48598](https://github.com/ClickHouse/ClickHouse/issues/48598). [#48983](https://github.com/ClickHouse/ClickHouse/pull/48983) ([Nikolay Degterinsky](https://github.com/evillique)). +* Keeper performance improvements: avoid serializing the same request twice while processing. Cache deserialization results of large requests. Controlled by new coordination setting `min_request_size_for_cache`. [#49004](https://github.com/ClickHouse/ClickHouse/pull/49004) ([Antonio Andelic](https://github.com/antonio2368)). +* Support storing `Interval` data types in tables. [#49085](https://github.com/ClickHouse/ClickHouse/pull/49085) ([larryluogit](https://github.com/larryluogit)). +* Add support for size suffixes in quota creation statement parameters. [#49087](https://github.com/ClickHouse/ClickHouse/pull/49087) ([Eridanus](https://github.com/Eridanus117)). +* Allow using `ntile` window function without explicit window frame definition: `ntile(3) OVER (ORDER BY a)`, close [#46763](https://github.com/ClickHouse/ClickHouse/issues/46763). [#49093](https://github.com/ClickHouse/ClickHouse/pull/49093) ([vdimir](https://github.com/vdimir)). +* Added settings (`number_of_mutations_to_delay`, `number_of_mutations_to_throw`) to delay or throw `ALTER` queries that create mutations (`ALTER UPDATE`, `ALTER DELETE`, `ALTER MODIFY COLUMN`, ...) in case the table already has a lot of unfinished mutations. [#49117](https://github.com/ClickHouse/ClickHouse/pull/49117) ([Anton Popov](https://github.com/CurtizJ)).
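
A hedged sketch of how the mutation back-pressure settings from the last entry above might be applied, assuming they are table-level MergeTree settings (the threshold values here are arbitrary examples, not recommendations):

```sql
-- Start delaying new ALTER-mutations after 50 unfinished mutations,
-- and reject them outright after 100.
CREATE TABLE events
(
    id UInt64,
    payload String
)
ENGINE = MergeTree
ORDER BY id
SETTINGS number_of_mutations_to_delay = 50,
         number_of_mutations_to_throw = 100;
```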
+* Added setting `async_insert` for `MergeTree` tables. It has the same meaning as the query-level setting `async_insert` and enables asynchronous inserts for a specific table. Note: it doesn't take effect for insert queries from `clickhouse-client`, use query-level setting in that case. [#49122](https://github.com/ClickHouse/ClickHouse/pull/49122) ([Anton Popov](https://github.com/CurtizJ)). +* Catch exception from `create_directories` in filesystem cache. [#49203](https://github.com/ClickHouse/ClickHouse/pull/49203) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Copies embedded examples to a new field `example` in `system.functions` to supplement the field `description`. [#49222](https://github.com/ClickHouse/ClickHouse/pull/49222) ([Dan Roscigno](https://github.com/DanRoscigno)). +* Enable connection options for the MongoDB dictionary (e.g. `ssl=true` in the dictionary source configuration). [#49225](https://github.com/ClickHouse/ClickHouse/pull/49225) ([MikhailBurdukov](https://github.com/MikhailBurdukov)). +* Added an alias `asymptotic` for `asymp` computational method for `kolmogorovSmirnovTest`. Improved documentation. [#49286](https://github.com/ClickHouse/ClickHouse/pull/49286) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)). +* Aggregation function groupBitAnd/Or/Xor now work on signed integer data. This makes them consistent with the behavior of scalar functions bitAnd/Or/Xor. [#49292](https://github.com/ClickHouse/ClickHouse/pull/49292) ([exmy](https://github.com/exmy)). +* Split function-documentation into more fine-granular fields. [#49300](https://github.com/ClickHouse/ClickHouse/pull/49300) ([Robert Schulze](https://github.com/rschu1ze)). +* Introduced settings: `merge_max_block_size_bytes` to limit the amount of memory used for background operations, and `vertical_merge_algorithm_min_bytes_to_activate` to add another condition to activate vertical merges. [#49313](https://github.com/ClickHouse/ClickHouse/pull/49313) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)). +* Use multiple threads shared between all tables within a server to load outdated data parts. The size of the pool and its queue is controlled by `max_outdated_parts_loading_thread_pool_size` and `outdated_part_loading_thread_pool_queue_size` settings. [#49317](https://github.com/ClickHouse/ClickHouse/pull/49317) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)). +* Don't overestimate the size of processed data for `LowCardinality` columns when they share dictionaries between blocks. This closes [#49322](https://github.com/ClickHouse/ClickHouse/issues/49322). See also [#48745](https://github.com/ClickHouse/ClickHouse/issues/48745). [#49323](https://github.com/ClickHouse/ClickHouse/pull/49323) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Parquet writer now uses a reasonable row group size when invoked through OUTFILE. [#49325](https://github.com/ClickHouse/ClickHouse/pull/49325) ([Michael Kolupaev](https://github.com/al13n321)). +* Allow restricted keywords like `ARRAY` as an alias if the alias is quoted. Closes [#49324](https://github.com/ClickHouse/ClickHouse/issues/49324). [#49360](https://github.com/ClickHouse/ClickHouse/pull/49360) ([Nikolay Degterinsky](https://github.com/evillique)). +* Added the possibility to use temporary tables in the FROM part of ATTACH PARTITION FROM and REPLACE PARTITION FROM. [#49436](https://github.com/ClickHouse/ClickHouse/pull/49436) ([Roman Vasin](https://github.com/rvasin)).
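
A rough sketch of the temporary-table partition flow from the last entry above; it assumes the temporary table shares the destination's structure, partition key, and engine, and that the destination is partitioned by month (all table names here are made up for illustration):

```sql
-- Stage a batch in a temporary table, then swap a whole partition
-- into the destination table in one step.
CREATE TEMPORARY TABLE staging AS destination;
INSERT INTO staging SELECT * FROM source_batch;
ALTER TABLE destination REPLACE PARTITION 202306 FROM staging;
```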
+* Data parts loading and deletion jobs were moved to shared server-wide pools instead of per-table pools. Pool sizes are controlled via the settings `max_active_parts_loading_thread_pool_size`, `max_outdated_parts_loading_thread_pool_size` and `max_parts_cleaning_thread_pool_size` in the top-level config. Table-level settings `max_part_loading_threads` and `max_part_removal_threads` became obsolete. [#49474](https://github.com/ClickHouse/ClickHouse/pull/49474) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)). +* Allow `?password=pass` in the URL. The password is replaced in the browser history. [#49505](https://github.com/ClickHouse/ClickHouse/pull/49505) ([Mike Kot](https://github.com/myrrc)). +* Allow zero objects in ReadBufferFromRemoteFSGather (because empty files are not backed up, so we might end up with zero blobs in the metadata file). Closes [#49480](https://github.com/ClickHouse/ClickHouse/issues/49480). [#49519](https://github.com/ClickHouse/ClickHouse/pull/49519) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Attach thread MemoryTracker to `total_memory_tracker` after `ThreadGroup` is detached. [#49527](https://github.com/ClickHouse/ClickHouse/pull/49527) ([Dmitry Novik](https://github.com/novikd)). +* Make `Pretty` formats prettier: squash blocks if not much time has passed since the output of the previous block. This is controlled by a new setting `output_format_pretty_squash_ms` (100ms by default). This closes [#49153](https://github.com/ClickHouse/ClickHouse/issues/49153). [#49537](https://github.com/ClickHouse/ClickHouse/pull/49537) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Add initial support to do JOINs with pure parallel replicas. [#49544](https://github.com/ClickHouse/ClickHouse/pull/49544) ([Raúl Marín](https://github.com/Algunenano)). +* Fix parameterized views when a query parameter is used multiple times in the query. [#49556](https://github.com/ClickHouse/ClickHouse/pull/49556) ([Azat Khuzhin](https://github.com/azat)). +* Release memory allocated for the last sent ProfileEvents snapshot in the context of a query. Followup [#47564](https://github.com/ClickHouse/ClickHouse/issues/47564). [#49561](https://github.com/ClickHouse/ClickHouse/pull/49561) ([Dmitry Novik](https://github.com/novikd)). +* Function "makeDate" now provides a MySQL-compatible overload (year & day of the year argument). [#49603](https://github.com/ClickHouse/ClickHouse/pull/49603) ([Robert Schulze](https://github.com/rschu1ze)). +* More parallelism on `Outdated` parts removal with "zero-copy replication". [#49630](https://github.com/ClickHouse/ClickHouse/pull/49630) ([Alexander Tokmakov](https://github.com/tavplubix)). +* Reduced number of `List` ZooKeeper requests when selecting parts to merge and a lot of partitions do not have anything to merge. [#49637](https://github.com/ClickHouse/ClickHouse/pull/49637) ([Alexander Tokmakov](https://github.com/tavplubix)). +* Support `dictionary` table function for `RegExpTreeDictionary`. [#49666](https://github.com/ClickHouse/ClickHouse/pull/49666) ([Han Fei](https://github.com/hanfei1991)). +* Added weighted fair IO scheduling policy. Added dynamic resource manager, which allows the IO scheduling hierarchy to be updated at runtime without server restarts. [#49671](https://github.com/ClickHouse/ClickHouse/pull/49671) ([Sergei Trifonov](https://github.com/serxa)). +* Add compose request after multipart upload to GCS. This enables the usage of the copy operation on objects uploaded with multipart upload.
It's recommended to set `s3_strict_upload_part_size` to some value because the compose request can fail on objects created with parts of different sizes. [#49693](https://github.com/ClickHouse/ClickHouse/pull/49693) ([Antonio Andelic](https://github.com/antonio2368)). +* Improve the "best-effort" parsing logic to accept `key_value_delimiter` as a valid part of the value. This also simplifies branching and might even speed up things a bit. [#49760](https://github.com/ClickHouse/ClickHouse/pull/49760) ([Arthur Passos](https://github.com/arthurpassos)). +* Facilitate profile data association and aggregation for the same query. [#49777](https://github.com/ClickHouse/ClickHouse/pull/49777) ([helifu](https://github.com/helifu)). +* System log tables can now have custom sorting keys. [#49778](https://github.com/ClickHouse/ClickHouse/pull/49778) ([helifu](https://github.com/helifu)). +* A new field 'partitions' is used to indicate which partitions are participating in the calculation. [#49779](https://github.com/ClickHouse/ClickHouse/pull/49779) ([helifu](https://github.com/helifu)). +* Added `enable_the_endpoint_id_with_zookeeper_name_prefix` setting for `ReplicatedMergeTree` (disabled by default). When enabled, it adds ZooKeeper cluster name to table's interserver communication endpoint. It avoids `Duplicate interserver IO endpoint` errors when having replicated tables with the same path, but different auxiliary ZooKeepers. [#49780](https://github.com/ClickHouse/ClickHouse/pull/49780) ([helifu](https://github.com/helifu)). +* Add query parameters to clickhouse-local. Closes [#46561](https://github.com/ClickHouse/ClickHouse/issues/46561). [#49785](https://github.com/ClickHouse/ClickHouse/pull/49785) ([Nikolay Degterinsky](https://github.com/evillique)). +* The QPL_deflate codec lowers the minimum SIMD requirement to SSE 4.2 ([doc change in qpl](https://github.com/intel/qpl/commit/3f8f5cea27739f5261e8fd577dc233ffe88bf679)). Intel® QPL relies on a run-time kernel dispatcher and a CPUID check to choose the best available implementation (SSE/AVX2/AVX512). The CMake file for the QPL build in ClickHouse was restructured to align with the latest upstream QPL. [#49811](https://github.com/ClickHouse/ClickHouse/pull/49811) ([jasperzhu](https://github.com/jinjunzh)). +* Allow loading dictionaries and functions from YAML by default. In previous versions, it required editing the `dictionaries_config` or `user_defined_executable_functions_config` in the configuration file, as they expected `*.xml` files. [#49812](https://github.com/ClickHouse/ClickHouse/pull/49812) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* The Kafka table engine now allows using alias columns. [#49824](https://github.com/ClickHouse/ClickHouse/pull/49824) ([Aleksandr Musorin](https://github.com/AVMusorin)). +* Add a setting to limit the max number of pairs produced by `extractKeyValuePairs`, as a safeguard to avoid using way too much memory. [#49836](https://github.com/ClickHouse/ClickHouse/pull/49836) ([Arthur Passos](https://github.com/arthurpassos)). +* Add support for the (unusual) case where the arguments in the `IN` operator are single-element tuples. [#49844](https://github.com/ClickHouse/ClickHouse/pull/49844) ([MikhailBurdukov](https://github.com/MikhailBurdukov)). +* The `bitHammingDistance` function supports the `String` and `FixedString` data types. Closes [#48827](https://github.com/ClickHouse/ClickHouse/issues/48827). [#49858](https://github.com/ClickHouse/ClickHouse/pull/49858) ([flynn](https://github.com/ucasfl)).
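
To make the `bitHammingDistance` entry above concrete, a small sketch; it assumes the distance is computed as a byte-wise XOR popcount over equal-length strings:

```sql
-- 'c' (0x63) and 'd' (0x64) differ in three bits; the first two bytes match.
SELECT bitHammingDistance('abc', 'abd');
-- Expected result under that assumption: 3
```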
+* Fix timeout resetting errors in the client on OS X. [#49863](https://github.com/ClickHouse/ClickHouse/pull/49863) ([alekar](https://github.com/alekar)). +* Add support for big integers, such as UInt128, Int128, UInt256, and Int256 in the function `bitCount`. This enables Hamming distance over large bit masks for AI applications. [#49867](https://github.com/ClickHouse/ClickHouse/pull/49867) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Key fingerprints are now used instead of key IDs in encrypted disks. [#49882](https://github.com/ClickHouse/ClickHouse/pull/49882) ([Vitaly Baranov](https://github.com/vitlibar)). +* Add UUID data type to PostgreSQL. Closes [#49739](https://github.com/ClickHouse/ClickHouse/issues/49739). [#49894](https://github.com/ClickHouse/ClickHouse/pull/49894) ([Nikolay Degterinsky](https://github.com/evillique)). +* Make the `allow_experimental_query_cache` setting obsolete for backward compatibility. It was removed in https://github.com/ClickHouse/ClickHouse/pull/47977. [#49934](https://github.com/ClickHouse/ClickHouse/pull/49934) ([Timur Solodovnikov](https://github.com/tsolodov)). +* Function `toUnixTimestamp()` now accepts `Date` and `Date32` arguments (a usage sketch follows below). [#49989](https://github.com/ClickHouse/ClickHouse/pull/49989) ([Victor Krasnov](https://github.com/sirvickr)). +* Charge only server memory for dictionaries. [#49995](https://github.com/ClickHouse/ClickHouse/pull/49995) ([Azat Khuzhin](https://github.com/azat)). +* Add schema inference to PostgreSQL, MySQL, MeiliSearch, and SQLite table engines. Closes [#49972](https://github.com/ClickHouse/ClickHouse/issues/49972). [#50000](https://github.com/ClickHouse/ClickHouse/pull/50000) ([Nikolay Degterinsky](https://github.com/evillique)). +* The server will allow using the `SQL_*` settings such as `SQL_AUTO_IS_NULL` as no-ops for MySQL compatibility. This closes [#49927](https://github.com/ClickHouse/ClickHouse/issues/49927). [#50013](https://github.com/ClickHouse/ClickHouse/pull/50013) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Preserve `initial_query_id` for ON CLUSTER queries, which is useful for introspection (under `distributed_ddl_entry_format_version=5`). [#50015](https://github.com/ClickHouse/ClickHouse/pull/50015) ([Azat Khuzhin](https://github.com/azat)). +* Preserve backward compatibility for renamed settings by using aliases (`allow_experimental_projection_optimization` for `optimize_use_projections`, `allow_experimental_lightweight_delete` for `enable_lightweight_delete`). [#50044](https://github.com/ClickHouse/ClickHouse/pull/50044) ([Azat Khuzhin](https://github.com/azat)). +* Support cross-replication in distributed queries using the new infrastructure. [#50097](https://github.com/ClickHouse/ClickHouse/pull/50097) ([Dmitry Novik](https://github.com/novikd)). +* Support passing an FQDN through the setting `my_hostname` to register a cluster node in Keeper. Add an `invisible` setting to support multiple compute groups. A compute group, as a cluster, is invisible to other compute groups. [#50186](https://github.com/ClickHouse/ClickHouse/pull/50186) ([Yangkuan Liu](https://github.com/LiuYangkuan)). +* Fix PostgreSQL reading all the data even though `LIMIT n` could be specified. [#50187](https://github.com/ClickHouse/ClickHouse/pull/50187) ([Kseniia Sumarokova](https://github.com/kssenii)).
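
As referenced above, a quick sketch of `toUnixTimestamp` accepting a plain `Date` argument (the returned value depends on the server or session time zone, so no exact output is shown):

```sql
-- Previously this required a DateTime argument; a Date now works too.
SELECT toUnixTimestamp(toDate('2023-06-08'));
```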
+* 1) Fixed an error `NOT_FOUND_COLUMN_IN_BLOCK` in case of using parallel replicas with non-replicated storage with disabled setting `parallel_replicas_for_non_replicated_merge_tree` 2) Now `allow_experimental_parallel_reading_from_replicas` has 3 possible values - 0, 1 and 2. 0 - disabled, 1 - enabled, silently disable them in case of failure (in case of FINAL or JOIN), 2 - enabled, throw an exception in case of failure. 3) If FINAL modifier is used in SELECT query and parallel replicas are enabled, ClickHouse will try to disable them if `allow_experimental_parallel_reading_from_replicas` is set to 1 and throw an exception otherwise. [#50195](https://github.com/ClickHouse/ClickHouse/pull/50195) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)). +* Don't send a HEAD request for all keys in Iceberg schema inference, only for keys that are used for reading data. [#50203](https://github.com/ClickHouse/ClickHouse/pull/50203) ([Kruglov Pavel](https://github.com/Avogar)). +* Add new profile events for queries with subqueries (`QueriesWithSubqueries`/`SelectQueriesWithSubqueries`/`InsertQueriesWithSubqueries`). [#50204](https://github.com/ClickHouse/ClickHouse/pull/50204) ([Azat Khuzhin](https://github.com/azat)). +* Added the `roles` field in the users.xml file, which allows specifying roles with grants via a config file. [#50278](https://github.com/ClickHouse/ClickHouse/pull/50278) ([pufit](https://github.com/pufit)). +* When parallel replicas are enabled they will always skip unavailable servers (the behavior is controlled by the setting `skip_unavailable_shards`, enabled by default and can only be disabled). This closes: [#48565](https://github.com/ClickHouse/ClickHouse/issues/48565). [#50293](https://github.com/ClickHouse/ClickHouse/pull/50293) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)). +* Fix a typo. [#50306](https://github.com/ClickHouse/ClickHouse/pull/50306) ([helifu](https://github.com/helifu)). +* Setting `enable_memory_bound_merging_of_aggregation_results` is enabled by default. If you update from a version prior to 22.12, we recommend setting this flag to `false` until the update is finished. [#50319](https://github.com/ClickHouse/ClickHouse/pull/50319) ([Nikita Taranov](https://github.com/nickitat)). +* Report `CGroupCpuCfsPeriod` and `CGroupCpuCfsQuota` in AsynchronousMetrics. Respect cgroup v2 memory limits during server startup. [#50379](https://github.com/ClickHouse/ClickHouse/pull/50379) ([alekar](https://github.com/alekar)). +* Bump internal protobuf to v3.18 (fixes CVE-2022-1941). [#50400](https://github.com/ClickHouse/ClickHouse/pull/50400) ([Robert Schulze](https://github.com/rschu1ze)). +* Bump internal libxml2 to v2.10.4 (fixes CVE-2023-28484 and CVE-2023-29469). [#50402](https://github.com/ClickHouse/ClickHouse/pull/50402) ([Robert Schulze](https://github.com/rschu1ze)). +* Bump c-ares to v1.19.1 (CVE-2023-32067, CVE-2023-31130, CVE-2023-31147). [#50403](https://github.com/ClickHouse/ClickHouse/pull/50403) ([Robert Schulze](https://github.com/rschu1ze)). +* Fix CVE-2022-2469 in libgsasl. [#50404](https://github.com/ClickHouse/ClickHouse/pull/50404) ([Robert Schulze](https://github.com/rschu1ze)). +* Make filter push down through cross join. [#50430](https://github.com/ClickHouse/ClickHouse/pull/50430) ([Han Fei](https://github.com/hanfei1991)). +* Add a signal handler for SIGQUIT to work the same way as SIGINT. Closes [#50298](https://github.com/ClickHouse/ClickHouse/issues/50298).
[#50435](https://github.com/ClickHouse/ClickHouse/pull/50435) ([Nikolay Degterinsky](https://github.com/evillique)). +* If JSON parsing fails due to the large size of the object, output the last position to allow debugging. [#50474](https://github.com/ClickHouse/ClickHouse/pull/50474) ([Valentin Alexeev](https://github.com/valentinalexeev)). +* Support decimals with non-fixed size. Closes [#49130](https://github.com/ClickHouse/ClickHouse/issues/49130). [#50586](https://github.com/ClickHouse/ClickHouse/pull/50586) ([Kruglov Pavel](https://github.com/Avogar)). +* Disable pure parallel replicas if trivial count optimization is possible. [#50594](https://github.com/ClickHouse/ClickHouse/pull/50594) ([Raúl Marín](https://github.com/Algunenano)). +* Added support for TRUNCATE db.table in addition to TRUNCATE TABLE db.table in MaterializedMySQL. [#50624](https://github.com/ClickHouse/ClickHouse/pull/50624) ([Val Doroshchuk](https://github.com/valbok)). +* Disable parallel replicas automatically when the estimated number of granules is less than the threshold. The behavior is controlled by a setting `parallel_replicas_min_number_of_granules_to_enable`. [#50639](https://github.com/ClickHouse/ClickHouse/pull/50639) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)). +* When creating skipping indexes via "ALTER TABLE table ADD INDEX", the "GRANULARITY" clause can now be omitted. In that case, GRANULARITY is assumed to be 1. [#50658](https://github.com/ClickHouse/ClickHouse/pull/50658) ([Robert Schulze](https://github.com/rschu1ze)). +* Fix slow cache in the presence of big inserts. [#50680](https://github.com/ClickHouse/ClickHouse/pull/50680) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Set default max_elements limit in filesystem cache to 10000000. [#50682](https://github.com/ClickHouse/ClickHouse/pull/50682) ([Kseniia Sumarokova](https://github.com/kssenii)). +* SHOW INDICES is now an alias of statement SHOW INDEX/INDEXES/KEYS. [#50713](https://github.com/ClickHouse/ClickHouse/pull/50713) ([Robert Schulze](https://github.com/rschu1ze)). + +#### Build/Testing/Packaging Improvement
* New and improved keeper-bench. Everything can be customized from a YAML/XML file: request generators, where each type of request generator can have a specific set of fields; multi requests can be generated just by doing the same under a `multi` key; for each request or subrequest in a multi request a `weight` field can be defined to control distribution; trees that need to be set up for a test run can be defined; hosts can be defined with all timeouts customizable, and it's possible to control how many sessions to generate for each host; integers defined with `min_value` and `max_value` fields are random number generators. [#48547](https://github.com/ClickHouse/ClickHouse/pull/48547) ([Antonio Andelic](https://github.com/antonio2368)). +* Add a test to check `max_rows_to_read_leaf` behaviour. [#48950](https://github.com/ClickHouse/ClickHouse/pull/48950) ([Sean Haynes](https://github.com/seandhaynes)). +* io_uring is not supported on macOS; don't choose it when running tests locally to avoid occasional failures. [#49250](https://github.com/ClickHouse/ClickHouse/pull/49250) ([Frank Chen](https://github.com/FrankChen021)). +* Support named fault injection for testing. [#49361](https://github.com/ClickHouse/ClickHouse/pull/49361) ([Han Fei](https://github.com/hanfei1991)). +* Fix the 01193_metadata_loading test to match the query execution time specific to s390x.
[#49455](https://github.com/ClickHouse/ClickHouse/pull/49455) ([MeenaRenganathan22](https://github.com/MeenaRenganathan22)). +* Use the RapidJSONParser library to parse the JSON float values in case of s390x. [#49457](https://github.com/ClickHouse/ClickHouse/pull/49457) ([MeenaRenganathan22](https://github.com/MeenaRenganathan22)). +* Allow running ClickHouse in an OS where the `prctl` (process control) syscall is not available, such as AWS Lambda. [#49538](https://github.com/ClickHouse/ClickHouse/pull/49538) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Improve CI check with an enabled analyzer. Now it should be green if only tests from `tests/broken_tests.txt` are broken. [#49562](https://github.com/ClickHouse/ClickHouse/pull/49562) ([Dmitry Novik](https://github.com/novikd)). +* Fixed the issue of build conflict between contrib/isa-l and isa-l in qpl [49296](https://github.com/ClickHouse/ClickHouse/issues/49296). [#49584](https://github.com/ClickHouse/ClickHouse/pull/49584) ([jasperzhu](https://github.com/jinjunzh)). +* Utilities are now only built if explicitly requested ("-DENABLE_UTILS=1") instead of by default; this reduces link times in typical development builds. [#49620](https://github.com/ClickHouse/ClickHouse/pull/49620) ([Robert Schulze](https://github.com/rschu1ze)). +* Pull build description of idxd-config into a separate CMake file to avoid accidental removal in the future. [#49651](https://github.com/ClickHouse/ClickHouse/pull/49651) ([jasperzhu](https://github.com/jinjunzh)). +* Add CI check with an enabled analyzer in the master. Followup [#49562](https://github.com/ClickHouse/ClickHouse/issues/49562). [#49668](https://github.com/ClickHouse/ClickHouse/pull/49668) ([Dmitry Novik](https://github.com/novikd)). +* Switch to LLVM/clang 16. [#49678](https://github.com/ClickHouse/ClickHouse/pull/49678) ([Azat Khuzhin](https://github.com/azat)). +* Fixed DefaultHash64 for non-64 bit integers on s390x. [#49833](https://github.com/ClickHouse/ClickHouse/pull/49833) ([Harry Lee](https://github.com/HarryLeeIBM)). +* Allow building ClickHouse with clang-17. [#49851](https://github.com/ClickHouse/ClickHouse/pull/49851) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* ClickHouse is now easier to integrate into other CMake projects. [#49991](https://github.com/ClickHouse/ClickHouse/pull/49991) ([Amos Bird](https://github.com/amosbird)). +* Link `boost::context` library to `clickhouse_common_io`. This closes: [#50381](https://github.com/ClickHouse/ClickHouse/issues/50381). [#50385](https://github.com/ClickHouse/ClickHouse/pull/50385) ([HaiBo Li](https://github.com/marising)). +* Add support for building with clang-17. [#50410](https://github.com/ClickHouse/ClickHouse/pull/50410) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Fix strange additional QEMU logging after [#47151](https://github.com/ClickHouse/ClickHouse/issues/47151), see https://s3.amazonaws.com/clickhouse-test-reports/50078/a4743996ee4f3583884d07bcd6501df0cfdaa346/stateless_tests__release__databasereplicated__[3_4].html. [#50442](https://github.com/ClickHouse/ClickHouse/pull/50442) ([Mikhail f. Shiryaev](https://github.com/Felixoid)). +* ClickHouse can work on Linux RISC-V 6.1.22. This closes [#50456](https://github.com/ClickHouse/ClickHouse/issues/50456). [#50457](https://github.com/ClickHouse/ClickHouse/pull/50457) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
+ +#### Bug Fix (user-visible misbehavior in an official stable release) + +* ActionsDAG: fix wrong optimization [#47584](https://github.com/ClickHouse/ClickHouse/pull/47584) ([Salvatore Mesoraca](https://github.com/aiven-sal)). +* Correctly handle concurrent snapshots in Keeper [#48466](https://github.com/ClickHouse/ClickHouse/pull/48466) ([Antonio Andelic](https://github.com/antonio2368)). +* MergeTreeMarksLoader holds DataPart instead of DataPartStorage [#48515](https://github.com/ClickHouse/ClickHouse/pull/48515) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)). +* sequence state fix [#48603](https://github.com/ClickHouse/ClickHouse/pull/48603) ([Ilya Golshtein](https://github.com/ilejn)). +* Back/Restore concurrency check on previous fails [#48726](https://github.com/ClickHouse/ClickHouse/pull/48726) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)). +* Fix Attaching a table with non-existent ZK path does not increase the ReadonlyReplica metric [#48954](https://github.com/ClickHouse/ClickHouse/pull/48954) ([wangxiaobo](https://github.com/wzb5212)). +* Fix possible terminate called for uncaught exception in some places [#49112](https://github.com/ClickHouse/ClickHouse/pull/49112) ([Kruglov Pavel](https://github.com/Avogar)). +* Fix key not found error for queries with multiple StorageJoin [#49137](https://github.com/ClickHouse/ClickHouse/pull/49137) ([vdimir](https://github.com/vdimir)). +* Fix wrong query result when using nullable primary key [#49172](https://github.com/ClickHouse/ClickHouse/pull/49172) ([Duc Canh Le](https://github.com/canhld94)). +* Revert "Fix GCS native copy ([#48981](https://github.com/ClickHouse/ClickHouse/issues/48981))" [#49194](https://github.com/ClickHouse/ClickHouse/pull/49194) ([Raúl Marín](https://github.com/Algunenano)). +* Fix reinterpretAs*() on big endian machines [#49198](https://github.com/ClickHouse/ClickHouse/pull/49198) ([Suzy Wang](https://github.com/SuzyWangIBMer)). +* Lock zero copy parts more atomically [#49211](https://github.com/ClickHouse/ClickHouse/pull/49211) ([alesapin](https://github.com/alesapin)). +* Fix race on Outdated parts loading [#49223](https://github.com/ClickHouse/ClickHouse/pull/49223) ([Alexander Tokmakov](https://github.com/tavplubix)). +* Fix all key value is null and group use rollup return wrong answer [#49282](https://github.com/ClickHouse/ClickHouse/pull/49282) ([Shuai li](https://github.com/loneylee)). +* Fix calculating load_factor for HASHED dictionaries with SHARDS [#49319](https://github.com/ClickHouse/ClickHouse/pull/49319) ([Azat Khuzhin](https://github.com/azat)). +* Disallow configuring compression CODECs for alias columns [#49363](https://github.com/ClickHouse/ClickHouse/pull/49363) ([Timur Solodovnikov](https://github.com/tsolodov)). +* Fix bug in removal of existing part directory [#49365](https://github.com/ClickHouse/ClickHouse/pull/49365) ([alesapin](https://github.com/alesapin)). +* Properly fix GCS when HMAC is used [#49390](https://github.com/ClickHouse/ClickHouse/pull/49390) ([Antonio Andelic](https://github.com/antonio2368)). +* Fix fuzz bug when subquery set is not built when reading from remote() [#49425](https://github.com/ClickHouse/ClickHouse/pull/49425) ([Alexander Gololobov](https://github.com/davenger)). +* Invert `shutdown_wait_unfinished_queries` [#49427](https://github.com/ClickHouse/ClickHouse/pull/49427) ([Konstantin Bogdanov](https://github.com/thevar1able)). 
+* Fix another zero copy bug [#49473](https://github.com/ClickHouse/ClickHouse/pull/49473) ([alesapin](https://github.com/alesapin)). +* Fix postgres database setting [#49481](https://github.com/ClickHouse/ClickHouse/pull/49481) ([Mal Curtis](https://github.com/snikch)). +* Correctly handle s3Cluster arguments [#49490](https://github.com/ClickHouse/ClickHouse/pull/49490) ([Antonio Andelic](https://github.com/antonio2368)). +* Fix bug in TraceCollector destructor. [#49508](https://github.com/ClickHouse/ClickHouse/pull/49508) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)). +* Fix AsynchronousReadIndirectBufferFromRemoteFS breaking on short seeks [#49525](https://github.com/ClickHouse/ClickHouse/pull/49525) ([Michael Kolupaev](https://github.com/al13n321)). +* Fix dictionaries loading order [#49560](https://github.com/ClickHouse/ClickHouse/pull/49560) ([Alexander Tokmakov](https://github.com/tavplubix)). +* Forbid the change of data type of Object('json') column [#49563](https://github.com/ClickHouse/ClickHouse/pull/49563) ([Nikolay Degterinsky](https://github.com/evillique)). +* Fix stress test (Logical error: Expected 7134 >= 11030) [#49623](https://github.com/ClickHouse/ClickHouse/pull/49623) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Fix bug in DISTINCT [#49628](https://github.com/ClickHouse/ClickHouse/pull/49628) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Fix: DISTINCT in order with zero values in non-sorted columns [#49636](https://github.com/ClickHouse/ClickHouse/pull/49636) ([Igor Nikonov](https://github.com/devcrafter)). +* Fix one-off error in big integers found by UBSan with fuzzer [#49645](https://github.com/ClickHouse/ClickHouse/pull/49645) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Fix reading from sparse columns after restart [#49660](https://github.com/ClickHouse/ClickHouse/pull/49660) ([Anton Popov](https://github.com/CurtizJ)). +* Fix assert in SpanHolder::finish() with fibers [#49673](https://github.com/ClickHouse/ClickHouse/pull/49673) ([Kruglov Pavel](https://github.com/Avogar)). +* Fix short circuit functions and mutations with sparse arguments [#49716](https://github.com/ClickHouse/ClickHouse/pull/49716) ([Anton Popov](https://github.com/CurtizJ)). +* Fix writing appended files to incremental backups [#49725](https://github.com/ClickHouse/ClickHouse/pull/49725) ([Vitaly Baranov](https://github.com/vitlibar)). +* Ignore LWD column in checkPartDynamicColumns [#49737](https://github.com/ClickHouse/ClickHouse/pull/49737) ([Alexander Gololobov](https://github.com/davenger)). +* Fix msan issue in randomStringUTF8() [#49750](https://github.com/ClickHouse/ClickHouse/pull/49750) ([Robert Schulze](https://github.com/rschu1ze)). +* Fix aggregate function kolmogorovSmirnovTest [#49768](https://github.com/ClickHouse/ClickHouse/pull/49768) ([FFFFFFFHHHHHHH](https://github.com/FFFFFFFHHHHHHH)). +* Fix settings aliases in native protocol [#49776](https://github.com/ClickHouse/ClickHouse/pull/49776) ([Azat Khuzhin](https://github.com/azat)). +* Fix `arrayMap` with array of tuples with single argument [#49789](https://github.com/ClickHouse/ClickHouse/pull/49789) ([Anton Popov](https://github.com/CurtizJ)). +* Fix per-query IO/BACKUPs throttling settings [#49797](https://github.com/ClickHouse/ClickHouse/pull/49797) ([Azat Khuzhin](https://github.com/azat)). +* Fix setting NULL in profile definition [#49831](https://github.com/ClickHouse/ClickHouse/pull/49831) ([Vitaly Baranov](https://github.com/vitlibar)). 
+* Fix a bug with projections and the aggregate_functions_null_for_empty setting (for query_plan_optimize_projection) [#49873](https://github.com/ClickHouse/ClickHouse/pull/49873) ([Amos Bird](https://github.com/amosbird)).
+* Fix processing pending batch for Distributed async INSERT after restart [#49884](https://github.com/ClickHouse/ClickHouse/pull/49884) ([Azat Khuzhin](https://github.com/azat)).
+* Fix assertion in CacheMetadata::doCleanup [#49914](https://github.com/ClickHouse/ClickHouse/pull/49914) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Fix `is_prefix` in OptimizeRegularExpression [#49919](https://github.com/ClickHouse/ClickHouse/pull/49919) ([Han Fei](https://github.com/hanfei1991)).
+* Fix metrics `WriteBufferFromS3Bytes`, `WriteBufferFromS3Microseconds` and `WriteBufferFromS3RequestsErrors` [#49930](https://github.com/ClickHouse/ClickHouse/pull/49930) ([Aleksandr Musorin](https://github.com/AVMusorin)).
+* Fix IPv6 encoding in protobuf [#49933](https://github.com/ClickHouse/ClickHouse/pull/49933) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
+* Fix possible Logical error on bad Nullable parsing for text formats [#49960](https://github.com/ClickHouse/ClickHouse/pull/49960) ([Kruglov Pavel](https://github.com/Avogar)).
+* Add setting output_format_parquet_compliant_nested_types to produce more compatible Parquet files [#50001](https://github.com/ClickHouse/ClickHouse/pull/50001) ([Michael Kolupaev](https://github.com/al13n321)).
+* Fix logical error in stress test "Not enough space to add ..." [#50021](https://github.com/ClickHouse/ClickHouse/pull/50021) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Avoid deadlock when starting table in attach thread of `ReplicatedMergeTree` [#50026](https://github.com/ClickHouse/ClickHouse/pull/50026) ([Antonio Andelic](https://github.com/antonio2368)).
+* Fix assert in SpanHolder::finish() with fibers attempt 2 [#50034](https://github.com/ClickHouse/ClickHouse/pull/50034) ([Kruglov Pavel](https://github.com/Avogar)).
+* Add proper escaping for DDL OpenTelemetry context serialization [#50045](https://github.com/ClickHouse/ClickHouse/pull/50045) ([Azat Khuzhin](https://github.com/azat)).
+* Fix reporting broken projection parts [#50052](https://github.com/ClickHouse/ClickHouse/pull/50052) ([Amos Bird](https://github.com/amosbird)).
+* Fix JIT compilation of the not-equals comparison with NaN [#50056](https://github.com/ClickHouse/ClickHouse/pull/50056) ([Maksim Kita](https://github.com/kitaisreal)).
+* Fix crashing in case of Replicated database without arguments [#50058](https://github.com/ClickHouse/ClickHouse/pull/50058) ([Azat Khuzhin](https://github.com/azat)).
+* Fix crash with `multiIf` and constant condition and nullable arguments [#50123](https://github.com/ClickHouse/ClickHouse/pull/50123) ([Anton Popov](https://github.com/CurtizJ)).
+* Fix invalid index analysis for date-related keys [#50153](https://github.com/ClickHouse/ClickHouse/pull/50153) ([Amos Bird](https://github.com/amosbird)).
+* Do not allow modifying ORDER BY when there are no ORDER BY columns [#50154](https://github.com/ClickHouse/ClickHouse/pull/50154) ([Han Fei](https://github.com/hanfei1991)).
+* Fix broken index analysis when binary operator contains a null constant argument [#50177](https://github.com/ClickHouse/ClickHouse/pull/50177) ([Amos Bird](https://github.com/amosbird)).
+* clickhouse-client: disallow usage of `--query` and `--queries-file` at the same time [#50210](https://github.com/ClickHouse/ClickHouse/pull/50210) ([Alexey Gerasimchuck](https://github.com/Demilivor)). +* Fix UB for INTO OUTFILE extensions (APPEND / AND STDOUT) and WATCH EVENTS [#50216](https://github.com/ClickHouse/ClickHouse/pull/50216) ([Azat Khuzhin](https://github.com/azat)). +* Fix skipping spaces at end of row in CustomSeparatedIgnoreSpaces format [#50224](https://github.com/ClickHouse/ClickHouse/pull/50224) ([Kruglov Pavel](https://github.com/Avogar)). +* Fix iceberg metadata parsing [#50232](https://github.com/ClickHouse/ClickHouse/pull/50232) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Fix nested distributed SELECT in WITH clause [#50234](https://github.com/ClickHouse/ClickHouse/pull/50234) ([Azat Khuzhin](https://github.com/azat)). +* Fix reconnecting of HTTPS session when target host IP was changed [#50240](https://github.com/ClickHouse/ClickHouse/pull/50240) ([Aleksei Filatov](https://github.com/aalexfvk)). +* Fix msan issue in keyed siphash [#50245](https://github.com/ClickHouse/ClickHouse/pull/50245) ([Robert Schulze](https://github.com/rschu1ze)). +* Fix bugs in Poco sockets in non-blocking mode, use true non-blocking sockets [#50252](https://github.com/ClickHouse/ClickHouse/pull/50252) ([Kruglov Pavel](https://github.com/Avogar)). +* Fix checksum calculation for backup entries [#50264](https://github.com/ClickHouse/ClickHouse/pull/50264) ([Vitaly Baranov](https://github.com/vitlibar)). +* Fixed type conversion from Date/Date32 to DateTime64 when querying with DateTime64 index [#50280](https://github.com/ClickHouse/ClickHouse/pull/50280) ([Lucas Chang](https://github.com/lucas-tubi)). +* Comparison functions NaN fix [#50287](https://github.com/ClickHouse/ClickHouse/pull/50287) ([Maksim Kita](https://github.com/kitaisreal)). +* JIT aggregation nullable key fix [#50291](https://github.com/ClickHouse/ClickHouse/pull/50291) ([Maksim Kita](https://github.com/kitaisreal)). +* Fix clickhouse-local crashing when writing empty Arrow or Parquet output [#50328](https://github.com/ClickHouse/ClickHouse/pull/50328) ([Michael Kolupaev](https://github.com/al13n321)). +* Fix crash when Pool::Entry::disconnect() is called [#50334](https://github.com/ClickHouse/ClickHouse/pull/50334) ([Val Doroshchuk](https://github.com/valbok)). +* Improved fetch part by holding directory lock longer [#50339](https://github.com/ClickHouse/ClickHouse/pull/50339) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)). +* Fix bitShift* functions with both constant arguments [#50343](https://github.com/ClickHouse/ClickHouse/pull/50343) ([Kruglov Pavel](https://github.com/Avogar)). +* Fix Keeper deadlock on exception when preprocessing requests. [#50387](https://github.com/ClickHouse/ClickHouse/pull/50387) ([frinkr](https://github.com/frinkr)). +* Fix hashing of const integer values [#50421](https://github.com/ClickHouse/ClickHouse/pull/50421) ([Robert Schulze](https://github.com/rschu1ze)). +* Fix excessive memory usage for FINAL (due to too much streams usage) [#50429](https://github.com/ClickHouse/ClickHouse/pull/50429) ([Azat Khuzhin](https://github.com/azat)). +* Fix merge_tree_min_rows_for_seek/merge_tree_min_bytes_for_seek for data skipping indexes [#50432](https://github.com/ClickHouse/ClickHouse/pull/50432) ([Azat Khuzhin](https://github.com/azat)). 
+* Limit the number of in-flight tasks for loading outdated parts [#50450](https://github.com/ClickHouse/ClickHouse/pull/50450) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)). +* Keeper fix: apply uncommitted state after snapshot install [#50483](https://github.com/ClickHouse/ClickHouse/pull/50483) ([Antonio Andelic](https://github.com/antonio2368)). +* Fix incorrect constant folding [#50536](https://github.com/ClickHouse/ClickHouse/pull/50536) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Fix logical error in stress test (Not enough space to add ...) [#50583](https://github.com/ClickHouse/ClickHouse/pull/50583) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Fix converting Null to LowCardinality(Nullable) in values table function [#50637](https://github.com/ClickHouse/ClickHouse/pull/50637) ([Kruglov Pavel](https://github.com/Avogar)). +* Fix crash in anti/semi join [#50638](https://github.com/ClickHouse/ClickHouse/pull/50638) ([vdimir](https://github.com/vdimir)). +* Revert invalid RegExpTreeDictionary optimization [#50642](https://github.com/ClickHouse/ClickHouse/pull/50642) ([Johann Gan](https://github.com/johanngan)). +* Correctly disable async insert with deduplication when it's not needed [#50663](https://github.com/ClickHouse/ClickHouse/pull/50663) ([Antonio Andelic](https://github.com/antonio2368)). + +#### Build Improvement + +* Fixed Functional Test 00870_t64_codec, 00871_t64_codec_signed, 00872_t64_bit_codec. [#49658](https://github.com/ClickHouse/ClickHouse/pull/49658) ([Sanjam Panda](https://github.com/saitama951)). + +#### NO CL ENTRY + +* NO CL ENTRY: 'Fix user MemoryTracker counter in async inserts'. [#47630](https://github.com/ClickHouse/ClickHouse/pull/47630) ([Dmitry Novik](https://github.com/novikd)). +* NO CL ENTRY: 'Revert "Make `Pretty` formats even prettier."'. [#49850](https://github.com/ClickHouse/ClickHouse/pull/49850) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* NO CL ENTRY: 'Update first_value.md:remove redundant 's''. [#50331](https://github.com/ClickHouse/ClickHouse/pull/50331) ([sslouis](https://github.com/savezed)). +* NO CL ENTRY: 'Revert "less logs in WriteBufferFromS3"'. [#50390](https://github.com/ClickHouse/ClickHouse/pull/50390) ([Alexander Tokmakov](https://github.com/tavplubix)). + +#### NOT FOR CHANGELOG / INSIGNIFICANT + +* Attempt to fix the "system.stack_trace" test [#44627](https://github.com/ClickHouse/ClickHouse/pull/44627) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* rework WriteBufferFromS3, add tests, add abortion [#44869](https://github.com/ClickHouse/ClickHouse/pull/44869) ([Sema Checherinda](https://github.com/CheSema)). +* Rework locking in fs cache [#44985](https://github.com/ClickHouse/ClickHouse/pull/44985) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Update ubuntu_ami_for_ci.sh [#47151](https://github.com/ClickHouse/ClickHouse/pull/47151) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Implement status comment [#48468](https://github.com/ClickHouse/ClickHouse/pull/48468) ([Mikhail f. Shiryaev](https://github.com/Felixoid)). +* Update curl to 8.0.1 (for CVEs) [#48765](https://github.com/ClickHouse/ClickHouse/pull/48765) ([Boris Kuschel](https://github.com/bkuschel)). +* Fix some tests [#48792](https://github.com/ClickHouse/ClickHouse/pull/48792) ([Alexander Tokmakov](https://github.com/tavplubix)). 
+* Bug Fix for 02432_s3_parallel_parts_cleanup.sql with zero copy replication [#48865](https://github.com/ClickHouse/ClickHouse/pull/48865) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)). +* Add AsyncLoader with dependency tracking and runtime prioritization [#48923](https://github.com/ClickHouse/ClickHouse/pull/48923) ([Sergei Trifonov](https://github.com/serxa)). +* Fix incorrect createColumn call on join clause [#48998](https://github.com/ClickHouse/ClickHouse/pull/48998) ([Ongkong](https://github.com/ongkong)). +* Try fix flaky 01346_alter_enum_partition_key_replicated_zookeeper_long [#49099](https://github.com/ClickHouse/ClickHouse/pull/49099) ([Sergei Trifonov](https://github.com/serxa)). +* Fix possible logical error "Cannot cancel. Either no query sent or already cancelled" [#49106](https://github.com/ClickHouse/ClickHouse/pull/49106) ([Kruglov Pavel](https://github.com/Avogar)). +* Refactor ColumnLowCardinality::cutAndCompact [#49111](https://github.com/ClickHouse/ClickHouse/pull/49111) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Fix tests with enabled analyzer [#49116](https://github.com/ClickHouse/ClickHouse/pull/49116) ([Dmitry Novik](https://github.com/novikd)). +* Use `SharedMutex` instead of `UpgradableMutex` [#49139](https://github.com/ClickHouse/ClickHouse/pull/49139) ([Sergei Trifonov](https://github.com/serxa)). +* Don't add metadata_version file if it doesn't exist [#49146](https://github.com/ClickHouse/ClickHouse/pull/49146) ([alesapin](https://github.com/alesapin)). +* clearing s3 between tests in a robust way [#49157](https://github.com/ClickHouse/ClickHouse/pull/49157) ([Sema Checherinda](https://github.com/CheSema)). +* Align connect timeout with aws sdk default [#49161](https://github.com/ClickHouse/ClickHouse/pull/49161) ([Nikita Taranov](https://github.com/nickitat)). +* Fix test_encrypted_disk_replication [#49193](https://github.com/ClickHouse/ClickHouse/pull/49193) ([Vitaly Baranov](https://github.com/vitlibar)). +* Allow using function `concat` with `Map` type [#49200](https://github.com/ClickHouse/ClickHouse/pull/49200) ([Anton Popov](https://github.com/CurtizJ)). +* Slight improvements to coordinator logging [#49204](https://github.com/ClickHouse/ClickHouse/pull/49204) ([Raúl Marín](https://github.com/Algunenano)). +* Fix some typos in conversion functions [#49221](https://github.com/ClickHouse/ClickHouse/pull/49221) ([Raúl Marín](https://github.com/Algunenano)). +* CMake: Remove some GCC-specific code [#49224](https://github.com/ClickHouse/ClickHouse/pull/49224) ([Robert Schulze](https://github.com/rschu1ze)). +* Fix oss-fuzz build errors [#49236](https://github.com/ClickHouse/ClickHouse/pull/49236) ([Nikita Taranov](https://github.com/nickitat)). +* Update version after release [#49237](https://github.com/ClickHouse/ClickHouse/pull/49237) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Update version_date.tsv and changelogs after v23.4.1.1943-stable [#49239](https://github.com/ClickHouse/ClickHouse/pull/49239) ([robot-clickhouse](https://github.com/robot-clickhouse)). +* Merge [#24050](https://github.com/ClickHouse/ClickHouse/issues/24050) [#49240](https://github.com/ClickHouse/ClickHouse/pull/49240) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Add file name to exception raised during decompression [#49241](https://github.com/ClickHouse/ClickHouse/pull/49241) ([Nikolay Degterinsky](https://github.com/evillique)). 
+* Disable ISA-L on aarch64 architectures [#49256](https://github.com/ClickHouse/ClickHouse/pull/49256) ([Jordi Villar](https://github.com/jrdi)). +* Add a comment in FileCache.cpp [#49260](https://github.com/ClickHouse/ClickHouse/pull/49260) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Fix garbage [#48719](https://github.com/ClickHouse/ClickHouse/issues/48719) [#49263](https://github.com/ClickHouse/ClickHouse/pull/49263) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Update build for nasm [#49288](https://github.com/ClickHouse/ClickHouse/pull/49288) ([Robert Schulze](https://github.com/rschu1ze)). +* Fix race in `waitForProcessingQueue` [#49302](https://github.com/ClickHouse/ClickHouse/pull/49302) ([Alexander Tokmakov](https://github.com/tavplubix)). +* Fix stress test [#49309](https://github.com/ClickHouse/ClickHouse/pull/49309) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Fix 02516_join_with_totals_and_subquery_bug with new analyzer [#49310](https://github.com/ClickHouse/ClickHouse/pull/49310) ([Dmitry Novik](https://github.com/novikd)). +* Fallback auth gh api [#49314](https://github.com/ClickHouse/ClickHouse/pull/49314) ([Mikhail f. Shiryaev](https://github.com/Felixoid)). +* Unpoison stack frame ptrs from libunwind for msan [#49316](https://github.com/ClickHouse/ClickHouse/pull/49316) ([Robert Schulze](https://github.com/rschu1ze)). +* Respect projections in 01600_parts [#49318](https://github.com/ClickHouse/ClickHouse/pull/49318) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* move pipe compute into initializePipeline [#49326](https://github.com/ClickHouse/ClickHouse/pull/49326) ([Konstantin Morozov](https://github.com/k-morozov)). +* Fix compiling average example (suppress -Wframe-larger-than) [#49358](https://github.com/ClickHouse/ClickHouse/pull/49358) ([Azat Khuzhin](https://github.com/azat)). +* Fix join_use_nulls in analyzer [#49359](https://github.com/ClickHouse/ClickHouse/pull/49359) ([vdimir](https://github.com/vdimir)). +* Fix 02680_mysql_ast_logical_err in analyzer [#49362](https://github.com/ClickHouse/ClickHouse/pull/49362) ([vdimir](https://github.com/vdimir)). +* Remove wrong assertion in cache [#49376](https://github.com/ClickHouse/ClickHouse/pull/49376) ([Kseniia Sumarokova](https://github.com/kssenii)). +* A better way of excluding ISA-L on non-x86 [#49378](https://github.com/ClickHouse/ClickHouse/pull/49378) ([Robert Schulze](https://github.com/rschu1ze)). +* Fix decimal aggregates test for s390x [#49382](https://github.com/ClickHouse/ClickHouse/pull/49382) ([Harry Lee](https://github.com/HarryLeeIBM)). +* Move logging one line higher [#49387](https://github.com/ClickHouse/ClickHouse/pull/49387) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Improve CI: status commit, auth for get_gh_api [#49388](https://github.com/ClickHouse/ClickHouse/pull/49388) ([Mikhail f. Shiryaev](https://github.com/Felixoid)). +* Fix printing hung queries in clickhouse-test. [#49389](https://github.com/ClickHouse/ClickHouse/pull/49389) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Correctly stop CNF convert for too many atomics in new analyzer [#49402](https://github.com/ClickHouse/ClickHouse/pull/49402) ([Antonio Andelic](https://github.com/antonio2368)). +* Remove 02707_complex_query_fails_analyzer test [#49403](https://github.com/ClickHouse/ClickHouse/pull/49403) ([Dmitry Novik](https://github.com/novikd)). 
+* Update FileSegment.cpp [#49411](https://github.com/ClickHouse/ClickHouse/pull/49411) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Switch Block::NameMap to google::dense_hash_map over HashMap [#49412](https://github.com/ClickHouse/ClickHouse/pull/49412) ([Azat Khuzhin](https://github.com/azat)).
+* Slightly reduce inter-header dependencies [#49413](https://github.com/ClickHouse/ClickHouse/pull/49413) ([Azat Khuzhin](https://github.com/azat)).
+* Update WithFileName.cpp [#49414](https://github.com/ClickHouse/ClickHouse/pull/49414) ([Nikolay Degterinsky](https://github.com/evillique)).
+* Fix some assertions failing in stress test [#49415](https://github.com/ClickHouse/ClickHouse/pull/49415) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Correctly clean up sequential node in ZooKeeperWithFaultInjection [#49418](https://github.com/ClickHouse/ClickHouse/pull/49418) ([vdimir](https://github.com/vdimir)).
+* Throw an exception for non-parametric functions in new analyzer [#49419](https://github.com/ClickHouse/ClickHouse/pull/49419) ([Dmitry Novik](https://github.com/novikd)).
+* Fix some bad error messages [#49420](https://github.com/ClickHouse/ClickHouse/pull/49420) ([Alexander Tokmakov](https://github.com/tavplubix)).
+* Update version_date.tsv and changelogs after v23.4.2.11-stable [#49422](https://github.com/ClickHouse/ClickHouse/pull/49422) ([robot-clickhouse](https://github.com/robot-clickhouse)).
+* Remove trash [#49423](https://github.com/ClickHouse/ClickHouse/pull/49423) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
+* Whitespaces [#49424](https://github.com/ClickHouse/ClickHouse/pull/49424) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
+* Remove dependency on DB::Context in remote/cache readers [#49426](https://github.com/ClickHouse/ClickHouse/pull/49426) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Merging [#49066](https://github.com/ClickHouse/ClickHouse/issues/49066) (Better error handling during loading of parts) [#49430](https://github.com/ClickHouse/ClickHouse/pull/49430) ([Anton Popov](https://github.com/CurtizJ)).
+* Remove all S3 blobs when a merge is aborted; remove a part from a failed fetch without unlocking Keeper [#49432](https://github.com/ClickHouse/ClickHouse/pull/49432) ([Sema Checherinda](https://github.com/CheSema)).
+* Make INSERT do more things in parallel to avoid getting bottlenecked on one thread [#49434](https://github.com/ClickHouse/ClickHouse/pull/49434) ([Michael Kolupaev](https://github.com/al13n321)).
+* Make 'exceptions shorter than 30' test less noisy [#49435](https://github.com/ClickHouse/ClickHouse/pull/49435) ([Michael Kolupaev](https://github.com/al13n321)).
+* Build fixes for ENABLE_LIBRARIES=OFF [#49437](https://github.com/ClickHouse/ClickHouse/pull/49437) ([Azat Khuzhin](https://github.com/azat)).
+* Add image for docker-server jepsen [#49452](https://github.com/ClickHouse/ClickHouse/pull/49452) ([alesapin](https://github.com/alesapin)).
+* Follow-up to [#48792](https://github.com/ClickHouse/ClickHouse/issues/48792) [#49458](https://github.com/ClickHouse/ClickHouse/pull/49458) ([Alexander Tokmakov](https://github.com/tavplubix)).
+* Add method `getCurrentAvailabilityZone` to `AWSEC2MetadataClient` [#49464](https://github.com/ClickHouse/ClickHouse/pull/49464) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Add an integration test for `shutdown_wait_unfinished_queries` [#49469](https://github.com/ClickHouse/ClickHouse/pull/49469) ([Konstantin Bogdanov](https://github.com/thevar1able)).
+* Replace `NO DELAY` with `SYNC` in tests [#49470](https://github.com/ClickHouse/ClickHouse/pull/49470) ([Alexander Tokmakov](https://github.com/tavplubix)). +* Check the PRs body directly in lambda, without rerun. Fix RCE in the CI [#49475](https://github.com/ClickHouse/ClickHouse/pull/49475) ([Mikhail f. Shiryaev](https://github.com/Felixoid)). +* Minor changes for setThreadName [#49476](https://github.com/ClickHouse/ClickHouse/pull/49476) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Static cast std::atomic to uint64_t to serialize. [#49482](https://github.com/ClickHouse/ClickHouse/pull/49482) ([alekar](https://github.com/alekar)). +* Fix logical error in stress test, add some logging [#49491](https://github.com/ClickHouse/ClickHouse/pull/49491) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Fixes in server jepsen image [#49492](https://github.com/ClickHouse/ClickHouse/pull/49492) ([alesapin](https://github.com/alesapin)). +* Fix UserTimeMicroseconds and SystemTimeMicroseconds descriptions [#49521](https://github.com/ClickHouse/ClickHouse/pull/49521) ([Sergei Trifonov](https://github.com/serxa)). +* Remove garbage from HDFS [#49531](https://github.com/ClickHouse/ClickHouse/pull/49531) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Split ReadWriteBufferFromHTTP.h into .h and .cpp file [#49533](https://github.com/ClickHouse/ClickHouse/pull/49533) ([Michael Kolupaev](https://github.com/al13n321)). +* Remove garbage from Pretty format [#49534](https://github.com/ClickHouse/ClickHouse/pull/49534) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Make input_format_parquet_preserve_order imply !parallelize_output_from_storages [#49536](https://github.com/ClickHouse/ClickHouse/pull/49536) ([Michael Kolupaev](https://github.com/al13n321)). +* Remove extra semicolons [#49545](https://github.com/ClickHouse/ClickHouse/pull/49545) ([Bulat Gaifullin](https://github.com/bgaifullin)). +* Fix 00597_push_down_predicate_long for analyzer [#49551](https://github.com/ClickHouse/ClickHouse/pull/49551) ([Igor Nikonov](https://github.com/devcrafter)). +* Fix stress test (assertion 'key_metadata.lock()') [#49554](https://github.com/ClickHouse/ClickHouse/pull/49554) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Fix writeAnyEscapedString if quote_character is a meta character [#49558](https://github.com/ClickHouse/ClickHouse/pull/49558) ([Robert Schulze](https://github.com/rschu1ze)). +* Add CMake option for BOOST_USE_UCONTEXT [#49564](https://github.com/ClickHouse/ClickHouse/pull/49564) ([ltrk2](https://github.com/ltrk2)). +* Fix 01655_plan_optimizations_optimize_read_in_window_order for analyzer [#49565](https://github.com/ClickHouse/ClickHouse/pull/49565) ([Igor Nikonov](https://github.com/devcrafter)). +* Fix `ThreadPool::wait` [#49572](https://github.com/ClickHouse/ClickHouse/pull/49572) ([Anton Popov](https://github.com/CurtizJ)). +* Query cache: disable for internal queries [#49573](https://github.com/ClickHouse/ClickHouse/pull/49573) ([Robert Schulze](https://github.com/rschu1ze)). +* Remove `test_merge_tree_s3_restore` [#49576](https://github.com/ClickHouse/ClickHouse/pull/49576) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Fix bad test [#49578](https://github.com/ClickHouse/ClickHouse/pull/49578) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Remove obsolete test about deprecated feature [#49579](https://github.com/ClickHouse/ClickHouse/pull/49579) ([Alexey Milovidov](https://github.com/alexey-milovidov)). 
+* Avoid error found by AST Fuzzer [#49580](https://github.com/ClickHouse/ClickHouse/pull/49580) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Fix wrong assert [#49581](https://github.com/ClickHouse/ClickHouse/pull/49581) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Flaky test 02723_zookeeper_name.sql [#49592](https://github.com/ClickHouse/ClickHouse/pull/49592) ([Sema Checherinda](https://github.com/CheSema)). +* Query Cache: Safeguard against empty chunks [#49593](https://github.com/ClickHouse/ClickHouse/pull/49593) ([Robert Schulze](https://github.com/rschu1ze)). +* 02723_zookeeper_name: Force a deterministic result order [#49594](https://github.com/ClickHouse/ClickHouse/pull/49594) ([Robert Schulze](https://github.com/rschu1ze)). +* Remove dangerous code (stringstream) [#49595](https://github.com/ClickHouse/ClickHouse/pull/49595) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Remove some code [#49596](https://github.com/ClickHouse/ClickHouse/pull/49596) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Remove "locale" [#49597](https://github.com/ClickHouse/ClickHouse/pull/49597) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* CMake: Cleanup utils build [#49598](https://github.com/ClickHouse/ClickHouse/pull/49598) ([Robert Schulze](https://github.com/rschu1ze)). +* Follow-up for [#49580](https://github.com/ClickHouse/ClickHouse/issues/49580) [#49604](https://github.com/ClickHouse/ClickHouse/pull/49604) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Fix typo [#49605](https://github.com/ClickHouse/ClickHouse/pull/49605) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Fix bad test 01660_system_parts_smoke [#49611](https://github.com/ClickHouse/ClickHouse/pull/49611) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Minor changes [#49612](https://github.com/ClickHouse/ClickHouse/pull/49612) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Follow-up for [#49576](https://github.com/ClickHouse/ClickHouse/issues/49576) [#49615](https://github.com/ClickHouse/ClickHouse/pull/49615) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Fix error in [#48300](https://github.com/ClickHouse/ClickHouse/issues/48300) [#49616](https://github.com/ClickHouse/ClickHouse/pull/49616) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Fix typo: "as much slots" -> "as many slots" [#49617](https://github.com/ClickHouse/ClickHouse/pull/49617) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Better concurrent parts removal with zero copy [#49619](https://github.com/ClickHouse/ClickHouse/pull/49619) ([alesapin](https://github.com/alesapin)). +* CMake: Remove legacy switch for ccache [#49627](https://github.com/ClickHouse/ClickHouse/pull/49627) ([Robert Schulze](https://github.com/rschu1ze)). +* Try to fix integration test 'test_ssl_cert_authentication' [#49632](https://github.com/ClickHouse/ClickHouse/pull/49632) ([Nikolay Degterinsky](https://github.com/evillique)). +* Unflake 01660_system_parts_smoke [#49633](https://github.com/ClickHouse/ClickHouse/pull/49633) ([Robert Schulze](https://github.com/rschu1ze)). +* Add trash [#49634](https://github.com/ClickHouse/ClickHouse/pull/49634) ([Robert Schulze](https://github.com/rschu1ze)). +* Remove commented code [#49635](https://github.com/ClickHouse/ClickHouse/pull/49635) ([Alexey Milovidov](https://github.com/alexey-milovidov)). 
+* Add flaky test [#49646](https://github.com/ClickHouse/ClickHouse/pull/49646) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Fix race in `Context::createCopy` [#49663](https://github.com/ClickHouse/ClickHouse/pull/49663) ([Anton Popov](https://github.com/CurtizJ)). +* Disable 01710_projection_aggregation_in_order.sql [#49667](https://github.com/ClickHouse/ClickHouse/pull/49667) ([Alexander Tokmakov](https://github.com/tavplubix)). +* Fix flaky 02684_bson.sql [#49674](https://github.com/ClickHouse/ClickHouse/pull/49674) ([Kruglov Pavel](https://github.com/Avogar)). +* Some cache cleanup after rework locking [#49675](https://github.com/ClickHouse/ClickHouse/pull/49675) ([Igor Nikonov](https://github.com/devcrafter)). +* Correctly update log pointer during database replica recovery [#49676](https://github.com/ClickHouse/ClickHouse/pull/49676) ([Alexander Tokmakov](https://github.com/tavplubix)). +* Enable distinct in order after fix [#49636](https://github.com/ClickHouse/ClickHouse/issues/49636) [#49677](https://github.com/ClickHouse/ClickHouse/pull/49677) ([Igor Nikonov](https://github.com/devcrafter)). +* Build fixes for RISCV64 [#49688](https://github.com/ClickHouse/ClickHouse/pull/49688) ([Azat Khuzhin](https://github.com/azat)). +* Add some logging [#49690](https://github.com/ClickHouse/ClickHouse/pull/49690) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Fix a wrong built generator removal, use `depth=1` [#49692](https://github.com/ClickHouse/ClickHouse/pull/49692) ([Mikhail f. Shiryaev](https://github.com/Felixoid)). +* Fix member call on null pointer in AST fuzzer [#49696](https://github.com/ClickHouse/ClickHouse/pull/49696) ([Nikolay Degterinsky](https://github.com/evillique)). +* Improve woboq codebrowser pipeline [#49701](https://github.com/ClickHouse/ClickHouse/pull/49701) ([Mikhail f. Shiryaev](https://github.com/Felixoid)). +* Enable `do_not_evict_index_and_mark_files` by default [#49702](https://github.com/ClickHouse/ClickHouse/pull/49702) ([Nikita Taranov](https://github.com/nickitat)). +* Backport fix for UBSan error in musl/logf.c [#49705](https://github.com/ClickHouse/ClickHouse/pull/49705) ([Nikita Taranov](https://github.com/nickitat)). +* Fix flaky test for `kolmogorovSmirnovTest` function [#49710](https://github.com/ClickHouse/ClickHouse/pull/49710) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)). +* Update clickhouse-test [#49712](https://github.com/ClickHouse/ClickHouse/pull/49712) ([Alexander Tokmakov](https://github.com/tavplubix)). +* IBM s390x: ip encoding fix [#49713](https://github.com/ClickHouse/ClickHouse/pull/49713) ([Suzy Wang](https://github.com/SuzyWangIBMer)). +* Remove not used ErrorCodes [#49715](https://github.com/ClickHouse/ClickHouse/pull/49715) ([Sergei Trifonov](https://github.com/serxa)). +* Disable mmap for StorageFile in clickhouse-server [#49717](https://github.com/ClickHouse/ClickHouse/pull/49717) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Fix typo [#49718](https://github.com/ClickHouse/ClickHouse/pull/49718) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Do not launch workflows for PRs w/o "can be tested" [#49726](https://github.com/ClickHouse/ClickHouse/pull/49726) ([Mikhail f. Shiryaev](https://github.com/Felixoid)). +* Move assertions after logging [#49729](https://github.com/ClickHouse/ClickHouse/pull/49729) ([Kseniia Sumarokova](https://github.com/kssenii)). 
+* Docs: Fix sidebar label for dictionary table function [#49730](https://github.com/ClickHouse/ClickHouse/pull/49730) ([Robert Schulze](https://github.com/rschu1ze)).
+* Do not allocate own buffer in CachedOnDiskReadBufferFromFile when `use_external_buffer == true` [#49733](https://github.com/ClickHouse/ClickHouse/pull/49733) ([Nikita Taranov](https://github.com/nickitat)).
+* Fix conversion [#49749](https://github.com/ClickHouse/ClickHouse/pull/49749) ([Sema Checherinda](https://github.com/CheSema)).
+* Fix flaky test 02504_regexp_dictionary_ua_parser [#49753](https://github.com/ClickHouse/ClickHouse/pull/49753) ([Han Fei](https://github.com/hanfei1991)).
+* Fix unit test `ExceptionFromWait` [#49755](https://github.com/ClickHouse/ClickHouse/pull/49755) ([Anton Popov](https://github.com/CurtizJ)).
+* Add forgotten lock (addition to [#49117](https://github.com/ClickHouse/ClickHouse/issues/49117)) [#49757](https://github.com/ClickHouse/ClickHouse/pull/49757) ([Anton Popov](https://github.com/CurtizJ)).
+* Fix typo [#49762](https://github.com/ClickHouse/ClickHouse/pull/49762) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
+* Fix build of `libfiu` on clang-16 [#49766](https://github.com/ClickHouse/ClickHouse/pull/49766) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
+* Update README.md [#49782](https://github.com/ClickHouse/ClickHouse/pull/49782) ([Tyler Hannan](https://github.com/tylerhannan)).
+* Analyzer: fix column not found for optimized prewhere with sample by [#49784](https://github.com/ClickHouse/ClickHouse/pull/49784) ([vdimir](https://github.com/vdimir)).
+* Typo: demange.cpp --> demangle.cpp [#49799](https://github.com/ClickHouse/ClickHouse/pull/49799) ([Robert Schulze](https://github.com/rschu1ze)).
+* Analyzer: apply _CAST to constants only once [#49800](https://github.com/ClickHouse/ClickHouse/pull/49800) ([Dmitry Novik](https://github.com/novikd)).
+* Use CLOCK_MONOTONIC_RAW over CLOCK_MONOTONIC on Linux (fixes non-monotonic clock) [#49819](https://github.com/ClickHouse/ClickHouse/pull/49819) ([Azat Khuzhin](https://github.com/azat)).
+* README.md: 4 --> 5 [#49822](https://github.com/ClickHouse/ClickHouse/pull/49822) ([Robert Schulze](https://github.com/rschu1ze)).
+* Allow ASOF JOIN over nullable right column [#49826](https://github.com/ClickHouse/ClickHouse/pull/49826) ([vdimir](https://github.com/vdimir)).
+* Make 01533_multiple_nested test more reliable [#49828](https://github.com/ClickHouse/ClickHouse/pull/49828) ([Salvatore Mesoraca](https://github.com/aiven-sal)).
+* What happens if I remove everything in msan_suppressions? [#49829](https://github.com/ClickHouse/ClickHouse/pull/49829) ([Robert Schulze](https://github.com/rschu1ze)).
+* Update README.md [#49832](https://github.com/ClickHouse/ClickHouse/pull/49832) ([AnneClickHouse](https://github.com/AnneClickHouse)).
+* Randomize enable_multiple_prewhere_read_steps setting [#49834](https://github.com/ClickHouse/ClickHouse/pull/49834) ([Alexander Gololobov](https://github.com/davenger)).
+* Analyzer: do not optimize GROUP BY keys with ROLLUP and CUBE [#49838](https://github.com/ClickHouse/ClickHouse/pull/49838) ([Dmitry Novik](https://github.com/novikd)).
+* Clearable hash table and zero values [#49846](https://github.com/ClickHouse/ClickHouse/pull/49846) ([Igor Nikonov](https://github.com/devcrafter)).
+* Reset vectorscan reference to an "official" repo [#49848](https://github.com/ClickHouse/ClickHouse/pull/49848) ([Robert Schulze](https://github.com/rschu1ze)).
+* Enable few slow clang-tidy checks for clangd [#49855](https://github.com/ClickHouse/ClickHouse/pull/49855) ([Azat Khuzhin](https://github.com/azat)). +* Update QPL docs [#49857](https://github.com/ClickHouse/ClickHouse/pull/49857) ([Robert Schulze](https://github.com/rschu1ze)). +* Small-ish .clang-tidy update [#49859](https://github.com/ClickHouse/ClickHouse/pull/49859) ([Robert Schulze](https://github.com/rschu1ze)). +* Follow-up for clang-tidy [#49861](https://github.com/ClickHouse/ClickHouse/pull/49861) ([Robert Schulze](https://github.com/rschu1ze)). +* Fix "reference to local binding" after fixes for clang-17 [#49874](https://github.com/ClickHouse/ClickHouse/pull/49874) ([Azat Khuzhin](https://github.com/azat)). +* fix typo [#49876](https://github.com/ClickHouse/ClickHouse/pull/49876) ([JackyWoo](https://github.com/JackyWoo)). +* Log with warning if the server was terminated forcefully [#49881](https://github.com/ClickHouse/ClickHouse/pull/49881) ([Azat Khuzhin](https://github.com/azat)). +* Fix some tests [#49889](https://github.com/ClickHouse/ClickHouse/pull/49889) ([Alexander Tokmakov](https://github.com/tavplubix)). +* use chassert in MergeTreeDeduplicationLog to have better log info [#49891](https://github.com/ClickHouse/ClickHouse/pull/49891) ([Han Fei](https://github.com/hanfei1991)). +* Multiple pools support for AsyncLoader [#49893](https://github.com/ClickHouse/ClickHouse/pull/49893) ([Sergei Trifonov](https://github.com/serxa)). +* Fix stack-use-after-scope in resource manager test [#49908](https://github.com/ClickHouse/ClickHouse/pull/49908) ([Sergei Trifonov](https://github.com/serxa)). +* Retry connection expired in test_rename_column/test.py [#49911](https://github.com/ClickHouse/ClickHouse/pull/49911) ([alesapin](https://github.com/alesapin)). +* Try to fix flaky test_distributed_load_balancing tests [#49912](https://github.com/ClickHouse/ClickHouse/pull/49912) ([Kruglov Pavel](https://github.com/Avogar)). +* Remove unused code [#49918](https://github.com/ClickHouse/ClickHouse/pull/49918) ([alesapin](https://github.com/alesapin)). +* Fix flakiness of test_distributed_load_balancing test [#49921](https://github.com/ClickHouse/ClickHouse/pull/49921) ([Azat Khuzhin](https://github.com/azat)). +* Add some logging [#49925](https://github.com/ClickHouse/ClickHouse/pull/49925) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Support hardlinking parts transactionally [#49931](https://github.com/ClickHouse/ClickHouse/pull/49931) ([Michael Kolupaev](https://github.com/al13n321)). +* Fix for analyzer: 02377_ optimize_sorting_by_input_stream_properties_e… [#49943](https://github.com/ClickHouse/ClickHouse/pull/49943) ([Igor Nikonov](https://github.com/devcrafter)). +* Follow up to [#49429](https://github.com/ClickHouse/ClickHouse/issues/49429) [#49964](https://github.com/ClickHouse/ClickHouse/pull/49964) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Fix flaky test_ssl_cert_authentication to use urllib3 [#49982](https://github.com/ClickHouse/ClickHouse/pull/49982) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)). +* Fix woboq codebrowser build with -Wno-poison-system-directories [#49992](https://github.com/ClickHouse/ClickHouse/pull/49992) ([Azat Khuzhin](https://github.com/azat)). +* test for [#46128](https://github.com/ClickHouse/ClickHouse/issues/46128) [#49993](https://github.com/ClickHouse/ClickHouse/pull/49993) ([Denny Crane](https://github.com/den-crane)). 
+* Fix test_insert_same_partition_and_merge failing if one Azure request attempt fails [#49996](https://github.com/ClickHouse/ClickHouse/pull/49996) ([Michael Kolupaev](https://github.com/al13n321)). +* Check return value of `ftruncate` in Keeper [#50020](https://github.com/ClickHouse/ClickHouse/pull/50020) ([Antonio Andelic](https://github.com/antonio2368)). +* Add some assertions [#50025](https://github.com/ClickHouse/ClickHouse/pull/50025) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Update 02441_alter_delete_and_drop_column.sql [#50027](https://github.com/ClickHouse/ClickHouse/pull/50027) ([Alexander Tokmakov](https://github.com/tavplubix)). +* Move some common code to common [#50028](https://github.com/ClickHouse/ClickHouse/pull/50028) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Add method getCredentials() to S3::Client [#50030](https://github.com/ClickHouse/ClickHouse/pull/50030) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Update query_log.md [#50032](https://github.com/ClickHouse/ClickHouse/pull/50032) ([Sergei Trifonov](https://github.com/serxa)). +* Get rid of indirect write buffer in object storages [#50033](https://github.com/ClickHouse/ClickHouse/pull/50033) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Load balancing bugfixes [#50036](https://github.com/ClickHouse/ClickHouse/pull/50036) ([Sergei Trifonov](https://github.com/serxa)). +* Update S3 sdk to v1.11.61 [#50037](https://github.com/ClickHouse/ClickHouse/pull/50037) ([Nikita Taranov](https://github.com/nickitat)). +* Fix 02735_system_zookeeper_connection for DatabaseReplicated [#50047](https://github.com/ClickHouse/ClickHouse/pull/50047) ([Azat Khuzhin](https://github.com/azat)). +* Add more profile events for distributed connections [#50051](https://github.com/ClickHouse/ClickHouse/pull/50051) ([Sergei Trifonov](https://github.com/serxa)). +* FileCache: simple tryReserve() cleanup [#50059](https://github.com/ClickHouse/ClickHouse/pull/50059) ([Igor Nikonov](https://github.com/devcrafter)). +* Fix hashed/sparse_hashed dictionaries max_load_factor upper range [#50065](https://github.com/ClickHouse/ClickHouse/pull/50065) ([Azat Khuzhin](https://github.com/azat)). +* Clearer coordinator log [#50101](https://github.com/ClickHouse/ClickHouse/pull/50101) ([Raúl Marín](https://github.com/Algunenano)). +* Analyzer: Do not execute table functions multiple times [#50105](https://github.com/ClickHouse/ClickHouse/pull/50105) ([Dmitry Novik](https://github.com/novikd)). +* Update default settings for Replicated database [#50108](https://github.com/ClickHouse/ClickHouse/pull/50108) ([Alexander Tokmakov](https://github.com/tavplubix)). +* Make async prefetched buffer work with arbitrary impl [#50109](https://github.com/ClickHouse/ClickHouse/pull/50109) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Update github.com/distribution/distribution [#50114](https://github.com/ClickHouse/ClickHouse/pull/50114) ([Mikhail f. Shiryaev](https://github.com/Felixoid)). +* Docs: Update clickhouse-local arguments [#50138](https://github.com/ClickHouse/ClickHouse/pull/50138) ([Robert Schulze](https://github.com/rschu1ze)). +* Change fields destruction order in AsyncTaskExecutor [#50151](https://github.com/ClickHouse/ClickHouse/pull/50151) ([Kruglov Pavel](https://github.com/Avogar)). +* Follow-up to [#49889](https://github.com/ClickHouse/ClickHouse/issues/49889) [#50152](https://github.com/ClickHouse/ClickHouse/pull/50152) ([Alexander Tokmakov](https://github.com/tavplubix)). 
+* Clarification comment on retries controller behavior [#50155](https://github.com/ClickHouse/ClickHouse/pull/50155) ([Igor Nikonov](https://github.com/devcrafter)). +* Switch to upstream repository of vectorscan [#50159](https://github.com/ClickHouse/ClickHouse/pull/50159) ([Azat Khuzhin](https://github.com/azat)). +* Refactor lambdas, prepare to prio runners [#50160](https://github.com/ClickHouse/ClickHouse/pull/50160) ([Mikhail f. Shiryaev](https://github.com/Felixoid)). +* Speed-up the shellcheck with parallel xargs [#50164](https://github.com/ClickHouse/ClickHouse/pull/50164) ([Mikhail f. Shiryaev](https://github.com/Felixoid)). +* Update an exception message [#50180](https://github.com/ClickHouse/ClickHouse/pull/50180) ([Alexander Tokmakov](https://github.com/tavplubix)). +* Upgrade boost submodule [#50188](https://github.com/ClickHouse/ClickHouse/pull/50188) ([ltrk2](https://github.com/ltrk2)). +* Implement a uniform way to query processor core IDs [#50190](https://github.com/ClickHouse/ClickHouse/pull/50190) ([ltrk2](https://github.com/ltrk2)). +* Don't replicate delete through DDL worker if there is just 1 shard [#50193](https://github.com/ClickHouse/ClickHouse/pull/50193) ([Alexander Gololobov](https://github.com/davenger)). +* Fix codebrowser by using clang-15 image [#50197](https://github.com/ClickHouse/ClickHouse/pull/50197) ([Mikhail f. Shiryaev](https://github.com/Felixoid)). +* Add comments to build reports [#50200](https://github.com/ClickHouse/ClickHouse/pull/50200) ([Alexander Tokmakov](https://github.com/tavplubix)). +* Automatic backports of important fixes to cloud-release [#50202](https://github.com/ClickHouse/ClickHouse/pull/50202) ([Mikhail f. Shiryaev](https://github.com/Felixoid)). +* Unify priorities: lower value means higher priority [#50205](https://github.com/ClickHouse/ClickHouse/pull/50205) ([Sergei Trifonov](https://github.com/serxa)). +* Use transactions for encrypted disks [#50206](https://github.com/ClickHouse/ClickHouse/pull/50206) ([alesapin](https://github.com/alesapin)). +* Get detailed error instead of unknown error for function test [#50207](https://github.com/ClickHouse/ClickHouse/pull/50207) ([Suzy Wang](https://github.com/SuzyWangIBMer)). +* README.md: Remove Berlin Meetup from upcoming events [#50218](https://github.com/ClickHouse/ClickHouse/pull/50218) ([Robert Schulze](https://github.com/rschu1ze)). +* Minor adjustment of clickhouse-client/local parameter docs [#50219](https://github.com/ClickHouse/ClickHouse/pull/50219) ([Robert Schulze](https://github.com/rschu1ze)). +* Unify priorities: rework IO scheduling subsystem [#50231](https://github.com/ClickHouse/ClickHouse/pull/50231) ([Sergei Trifonov](https://github.com/serxa)). +* Add new metrics BrokenDistributedBytesToInsert/DistributedBytesToInsert [#50238](https://github.com/ClickHouse/ClickHouse/pull/50238) ([Azat Khuzhin](https://github.com/azat)). +* Fix URL in backport comment [#50241](https://github.com/ClickHouse/ClickHouse/pull/50241) ([pufit](https://github.com/pufit)). +* Fix `02535_max_parallel_replicas_custom_key` [#50242](https://github.com/ClickHouse/ClickHouse/pull/50242) ([Antonio Andelic](https://github.com/antonio2368)). +* Fixes for MergeTree with readonly disks [#50244](https://github.com/ClickHouse/ClickHouse/pull/50244) ([Alexander Tokmakov](https://github.com/tavplubix)). +* Yet another refactoring [#50257](https://github.com/ClickHouse/ClickHouse/pull/50257) ([Anton Popov](https://github.com/CurtizJ)). 
+* Unify priorities: rework AsyncLoader [#50272](https://github.com/ClickHouse/ClickHouse/pull/50272) ([Sergei Trifonov](https://github.com/serxa)).
+* buffers d-tor finalize free [#50275](https://github.com/ClickHouse/ClickHouse/pull/50275) ([Sema Checherinda](https://github.com/CheSema)).
+* Fix 02767_into_outfile_extensions_msan under analyzer [#50290](https://github.com/ClickHouse/ClickHouse/pull/50290) ([Azat Khuzhin](https://github.com/azat)).
+* QPL: Add a comment about isal [#50308](https://github.com/ClickHouse/ClickHouse/pull/50308) ([Robert Schulze](https://github.com/rschu1ze)).
+* Avoid clang 15 crash [#50310](https://github.com/ClickHouse/ClickHouse/pull/50310) ([Raúl Marín](https://github.com/Algunenano)).
+* Cleanup Annoy index [#50312](https://github.com/ClickHouse/ClickHouse/pull/50312) ([Robert Schulze](https://github.com/rschu1ze)).
+* Fix flaky `AsyncLoader.StaticPriorities` unit test [#50313](https://github.com/ClickHouse/ClickHouse/pull/50313) ([Sergei Trifonov](https://github.com/serxa)).
+* Update gtest_async_loader.cpp [#50317](https://github.com/ClickHouse/ClickHouse/pull/50317) ([Nikita Taranov](https://github.com/nickitat)).
+* Fix IS (NOT) NULL operator priority [#50327](https://github.com/ClickHouse/ClickHouse/pull/50327) ([Nikolay Degterinsky](https://github.com/evillique)).
+* Update README.md [#50340](https://github.com/ClickHouse/ClickHouse/pull/50340) ([Tyler Hannan](https://github.com/tylerhannan)).
+* Do not fix the event list in test [#50342](https://github.com/ClickHouse/ClickHouse/pull/50342) ([Sema Checherinda](https://github.com/CheSema)).
+* less logs in WriteBufferFromS3 [#50347](https://github.com/ClickHouse/ClickHouse/pull/50347) ([Sema Checherinda](https://github.com/CheSema)).
+* Remove legacy install scripts superseded by universal.sh [#50360](https://github.com/ClickHouse/ClickHouse/pull/50360) ([Robert Schulze](https://github.com/rschu1ze)).
+* Fail perf tests when too many queries slowed down [#50361](https://github.com/ClickHouse/ClickHouse/pull/50361) ([Nikita Taranov](https://github.com/nickitat)).
+* Fix after [#50109](https://github.com/ClickHouse/ClickHouse/issues/50109) [#50362](https://github.com/ClickHouse/ClickHouse/pull/50362) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Fix log message [#50363](https://github.com/ClickHouse/ClickHouse/pull/50363) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Compare functions NaN update test [#50366](https://github.com/ClickHouse/ClickHouse/pull/50366) ([Maksim Kita](https://github.com/kitaisreal)).
+* Add re-creation for cherry-pick PRs [#50373](https://github.com/ClickHouse/ClickHouse/pull/50373) ([pufit](https://github.com/pufit)).
+* Fix mismatched block structure caused by not applying `prepareRightBlock` [#50383](https://github.com/ClickHouse/ClickHouse/pull/50383) ([lgbo](https://github.com/lgbo-ustc)).
+* Fix hang in unit tests [#50391](https://github.com/ClickHouse/ClickHouse/pull/50391) ([Sema Checherinda](https://github.com/CheSema)).
+* Fix poll timeout in MaterializedMySQL [#50392](https://github.com/ClickHouse/ClickHouse/pull/50392) ([Alexander Tokmakov](https://github.com/tavplubix)).
+* Enable compilation of aggregate expressions by default [#50401](https://github.com/ClickHouse/ClickHouse/pull/50401) ([Maksim Kita](https://github.com/kitaisreal)).
+* Update app.py [#50407](https://github.com/ClickHouse/ClickHouse/pull/50407) ([Nikita Taranov](https://github.com/nickitat)).
+* reuse s3_mocks, rewrite test test_paranoid_check_in_logs [#50408](https://github.com/ClickHouse/ClickHouse/pull/50408) ([Sema Checherinda](https://github.com/CheSema)). +* test for [#42610](https://github.com/ClickHouse/ClickHouse/issues/42610) [#50409](https://github.com/ClickHouse/ClickHouse/pull/50409) ([Denny Crane](https://github.com/den-crane)). +* Remove something [#50411](https://github.com/ClickHouse/ClickHouse/pull/50411) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Mark the builds without results as pending [#50415](https://github.com/ClickHouse/ClickHouse/pull/50415) ([Mikhail f. Shiryaev](https://github.com/Felixoid)). +* Revert "Fix msan issue in keyed siphash" [#50426](https://github.com/ClickHouse/ClickHouse/pull/50426) ([Alexander Tokmakov](https://github.com/tavplubix)). +* Revert "Revert "less logs in WriteBufferFromS3" ([#50390](https://github.com/ClickHouse/ClickHouse/issues/50390))" [#50444](https://github.com/ClickHouse/ClickHouse/pull/50444) ([Sema Checherinda](https://github.com/CheSema)). +* Paranoid fix for removing parts from ZooKeeper [#50448](https://github.com/ClickHouse/ClickHouse/pull/50448) ([Alexander Tokmakov](https://github.com/tavplubix)). +* Add timeout for unit tests [#50449](https://github.com/ClickHouse/ClickHouse/pull/50449) ([Alexander Tokmakov](https://github.com/tavplubix)). +* Changes related to an internal feature [#50453](https://github.com/ClickHouse/ClickHouse/pull/50453) ([Michael Kolupaev](https://github.com/al13n321)). +* Don't crash if config doesn't have logger section [#50455](https://github.com/ClickHouse/ClickHouse/pull/50455) ([Michael Kolupaev](https://github.com/al13n321)). +* Update function docs [#50466](https://github.com/ClickHouse/ClickHouse/pull/50466) ([Robert Schulze](https://github.com/rschu1ze)). +* Revert "make filter push down through cross join" [#50467](https://github.com/ClickHouse/ClickHouse/pull/50467) ([Alexander Tokmakov](https://github.com/tavplubix)). +* Add some assertions [#50470](https://github.com/ClickHouse/ClickHouse/pull/50470) ([Kseniia Sumarokova](https://github.com/kssenii)). +* CI: Enable aspell on nested docs [#50476](https://github.com/ClickHouse/ClickHouse/pull/50476) ([Robert Schulze](https://github.com/rschu1ze)). +* Try fix flaky test test_async_query_sending [#50480](https://github.com/ClickHouse/ClickHouse/pull/50480) ([Kruglov Pavel](https://github.com/Avogar)). +* Disable 00534_functions_bad_arguments with msan [#50481](https://github.com/ClickHouse/ClickHouse/pull/50481) ([Alexander Tokmakov](https://github.com/tavplubix)). +* Typos: Follow-up to [#50476](https://github.com/ClickHouse/ClickHouse/issues/50476) [#50482](https://github.com/ClickHouse/ClickHouse/pull/50482) ([Robert Schulze](https://github.com/rschu1ze)). +* Remove unneeded Keeper test [#50485](https://github.com/ClickHouse/ClickHouse/pull/50485) ([Antonio Andelic](https://github.com/antonio2368)). +* Fix KeyError in cherry-pick [#50493](https://github.com/ClickHouse/ClickHouse/pull/50493) ([pufit](https://github.com/pufit)). +* Make typeid_cast for pointers noexcept [#50495](https://github.com/ClickHouse/ClickHouse/pull/50495) ([Sergey Kazmin ](https://github.com/yerseg)). +* less traces in logs [#50518](https://github.com/ClickHouse/ClickHouse/pull/50518) ([Sema Checherinda](https://github.com/CheSema)). +* Implement endianness-independent serialization for UUID [#50519](https://github.com/ClickHouse/ClickHouse/pull/50519) ([ltrk2](https://github.com/ltrk2)). 
+* Remove strange object storage methods [#50521](https://github.com/ClickHouse/ClickHouse/pull/50521) ([alesapin](https://github.com/alesapin)).
+* Fix low-quality code around metadata in RocksDB (experimental feature never used in production) [#50527](https://github.com/ClickHouse/ClickHouse/pull/50527) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
+* Constant folding for the `if` function [#50529](https://github.com/ClickHouse/ClickHouse/pull/50529) ([Maksim Kita](https://github.com/kitaisreal)).
+* Add profile events for fs cache eviction [#50533](https://github.com/ClickHouse/ClickHouse/pull/50533) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* QueryNode small fix [#50535](https://github.com/ClickHouse/ClickHouse/pull/50535) ([Maksim Kita](https://github.com/kitaisreal)).
+* Control memory usage in generateRandom [#50538](https://github.com/ClickHouse/ClickHouse/pull/50538) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
+* Disable skim (Rust library) under memory sanitizer [#50539](https://github.com/ClickHouse/ClickHouse/pull/50539) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
+* MSan support for Rust [#50541](https://github.com/ClickHouse/ClickHouse/pull/50541) ([Azat Khuzhin](https://github.com/azat)).
+* Make 01565_query_loop_after_client_error slightly more robust [#50542](https://github.com/ClickHouse/ClickHouse/pull/50542) ([Azat Khuzhin](https://github.com/azat)).
+* Resize BufferFromVector's underlying vector only when pos_offset == vector.size() [#50546](https://github.com/ClickHouse/ClickHouse/pull/50546) ([auxten](https://github.com/auxten)).
+* Add async iteration to object storage [#50548](https://github.com/ClickHouse/ClickHouse/pull/50548) ([alesapin](https://github.com/alesapin)).
+* Skip extracting the Darwin toolchain in the builder when unnecessary [#50550](https://github.com/ClickHouse/ClickHouse/pull/50550) ([SuperDJY](https://github.com/cmsxbc)).
+* Remove flaky test [#50558](https://github.com/ClickHouse/ClickHouse/pull/50558) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
+* Revert "Disable skim (Rust library) under memory sanitizer" [#50574](https://github.com/ClickHouse/ClickHouse/pull/50574) ([Azat Khuzhin](https://github.com/azat)).
+* Analyzer: fix 01487_distributed_in_not_default_db [#50587](https://github.com/ClickHouse/ClickHouse/pull/50587) ([Dmitry Novik](https://github.com/novikd)).
+* Fix commit for DiskObjectStorage [#50599](https://github.com/ClickHouse/ClickHouse/pull/50599) ([alesapin](https://github.com/alesapin)).
+* Fix Jepsen runs in PRs [#50615](https://github.com/ClickHouse/ClickHouse/pull/50615) ([Antonio Andelic](https://github.com/antonio2368)).
+* Revert incorrect optimizations [#50629](https://github.com/ClickHouse/ClickHouse/pull/50629) ([Raúl Marín](https://github.com/Algunenano)).
+* Disable 01676_clickhouse_client_autocomplete under UBSan [#50636](https://github.com/ClickHouse/ClickHouse/pull/50636) ([Nikita Taranov](https://github.com/nickitat)).
+* Merging [#50329](https://github.com/ClickHouse/ClickHouse/issues/50329) [#50660](https://github.com/ClickHouse/ClickHouse/pull/50660) ([Anton Popov](https://github.com/CurtizJ)).
+* Revert "date_trunc function to always return DateTime type" [#50670](https://github.com/ClickHouse/ClickHouse/pull/50670) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
+* Fix flaky test 02461_prewhere_row_level_policy_lightweight_delete [#50674](https://github.com/ClickHouse/ClickHouse/pull/50674) ([Alexander Gololobov](https://github.com/davenger)).
+* Fix asan issue with analyzer and prewhere [#50685](https://github.com/ClickHouse/ClickHouse/pull/50685) ([Alexander Gololobov](https://github.com/davenger)).
+* Catch issues with dockerd during the build [#50700](https://github.com/ClickHouse/ClickHouse/pull/50700) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
+* Temporarily disable annoy index tests (flaky for analyzer) [#50714](https://github.com/ClickHouse/ClickHouse/pull/50714) ([Robert Schulze](https://github.com/rschu1ze)).
+* Fix assertion from stress test [#50718](https://github.com/ClickHouse/ClickHouse/pull/50718) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Fix flaky unit test [#50719](https://github.com/ClickHouse/ClickHouse/pull/50719) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Show correct sharing state in system.query_cache [#50728](https://github.com/ClickHouse/ClickHouse/pull/50728) ([Robert Schulze](https://github.com/rschu1ze)).
+
diff --git a/docs/changelogs/v23.5.2.7-stable.md b/docs/changelogs/v23.5.2.7-stable.md
new file mode 100644
index 00000000000..2e4931c64e0
--- /dev/null
+++ b/docs/changelogs/v23.5.2.7-stable.md
@@ -0,0 +1,18 @@
+---
+sidebar_position: 1
+sidebar_label: 2023
+---
+
+# 2023 Changelog
+
+### ClickHouse release v23.5.2.7-stable (5751aa1ab9f) FIXME as compared to v23.5.1.3174-stable (2fec796e73e)
+
+#### Bug Fix (user-visible misbehavior in an official stable release)
+
+* Do not read all the columns from the right GLOBAL JOIN table. [#50721](https://github.com/ClickHouse/ClickHouse/pull/50721) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
+
+#### NOT FOR CHANGELOG / INSIGNIFICANT
+
+* Fix build for aarch64 (temporarily disable Azure) [#50770](https://github.com/ClickHouse/ClickHouse/pull/50770) ([alesapin](https://github.com/alesapin)).
+* Rename azure_blob_storage to azureBlobStorage [#50812](https://github.com/ClickHouse/ClickHouse/pull/50812) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
+
diff --git a/docs/en/engines/table-engines/integrations/azureBlobStorage.md b/docs/en/engines/table-engines/integrations/azureBlobStorage.md
new file mode 100644
index 00000000000..b8e621fd513
--- /dev/null
+++ b/docs/en/engines/table-engines/integrations/azureBlobStorage.md
@@ -0,0 +1,51 @@
+---
+slug: /en/engines/table-engines/integrations/azureBlobStorage
+sidebar_label: Azure Blob Storage
+---
+
+# AzureBlobStorage Table Engine
+
+This engine provides integration with the [Azure Blob Storage](https://azure.microsoft.com/en-us/products/storage/blobs) ecosystem.
+
+## Create Table
+
+``` sql
+CREATE TABLE azure_blob_storage_table (name String, value UInt32)
+    ENGINE = AzureBlobStorage(connection_string|storage_account_url, container_name, blobpath, [account_name, account_key, format, compression])
+    [PARTITION BY expr]
+    [SETTINGS ...]
+```
+
+### Engine parameters
+
+- `connection_string|storage_account_url` — The connection string, which includes the account name and key ([Create connection string](https://learn.microsoft.com/en-us/azure/storage/common/storage-configure-connection-string?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json&bc=%2Fazure%2Fstorage%2Fblobs%2Fbreadcrumb%2Ftoc.json#configure-a-connection-string-for-an-azure-storage-account)). Alternatively, you can provide the storage account URL here and pass the account name and account key as separate parameters (see the `account_name` and `account_key` parameters).
+- `container_name` - Container name.
+- `blobpath` - File path. 
Supports following wildcards in readonly mode: `*`, `?`, `{abc,def}` and `{N..M}` where `N`, `M` — numbers, `'abc'`, `'def'` — strings. +- `account_name` - if storage_account_url is used, then account name can be specified here +- `account_key` - if storage_account_url is used, then account key can be specified here +- `format` — The [format](/docs/en/interfaces/formats.md) of the file. +- `compression` — Supported values: `none`, `gzip/gz`, `brotli/br`, `xz/LZMA`, `zstd/zst`. By default, it will autodetect compression by file extension. (same as setting to `auto`). + +**Example** + +``` sql +CREATE TABLE test_table (key UInt64, data String) + ENGINE = AzureBlobStorage('DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://azurite1:10000/devstoreaccount1/;', + 'test_container', 'test_table', 'CSV'); + +INSERT INTO test_table VALUES (1, 'a'), (2, 'b'), (3, 'c'); + +SELECT * FROM test_table; +``` + +```text +┌─key──┬─data──┐ +│ 1 │ a │ +│ 2 │ b │ +│ 3 │ c │ +└──────┴───────┘ +``` + +## See also + +[Azure Blob Storage Table Function](/docs/en/sql-reference/table-functions/azureBlobStorage.md) diff --git a/docs/en/engines/table-engines/integrations/index.md b/docs/en/engines/table-engines/integrations/index.md index b321a644d32..b9171d9033b 100644 --- a/docs/en/engines/table-engines/integrations/index.md +++ b/docs/en/engines/table-engines/integrations/index.md @@ -6,24 +6,5 @@ sidebar_label: Integrations # Table Engines for Integrations -ClickHouse provides various means for integrating with external systems, including table engines. Like with all other table engines, the configuration is done using `CREATE TABLE` or `ALTER TABLE` queries. Then from a user perspective, the configured integration looks like a normal table, but queries to it are proxied to the external system. This transparent querying is one of the key advantages of this approach over alternative integration methods, like dictionaries or table functions, which require to use custom query methods on each use. +ClickHouse provides various means for integrating with external systems, including table engines. Like with all other table engines, the configuration is done using `CREATE TABLE` or `ALTER TABLE` queries. Then from a user perspective, the configured integration looks like a normal table, but queries to it are proxied to the external system. This transparent querying is one of the key advantages of this approach over alternative integration methods, like dictionaries or table functions, which require the use of custom query methods on each use. 
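As a minimal illustration of this transparent querying (a sketch, not part of the patch itself; the MySQL host, credentials, database and table names below are placeholders), an integration table is defined once and then queried like any local table:

```sql
-- The engine definition stores the connection details; subsequent SELECTs
-- are forwarded to the remote MySQL table `shop.orders`.
CREATE TABLE mysql_orders (id UInt64, amount Float64)
    ENGINE = MySQL('mysql-host:3306', 'shop', 'orders', 'user', 'password');

-- Executed against the remote server, returned as an ordinary result set.
SELECT count() FROM mysql_orders;
```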
-List of supported integrations: - -- [ODBC](../../../engines/table-engines/integrations/odbc.md) -- [JDBC](../../../engines/table-engines/integrations/jdbc.md) -- [MySQL](../../../engines/table-engines/integrations/mysql.md) -- [MongoDB](../../../engines/table-engines/integrations/mongodb.md) -- [HDFS](../../../engines/table-engines/integrations/hdfs.md) -- [S3](../../../engines/table-engines/integrations/s3.md) -- [Kafka](../../../engines/table-engines/integrations/kafka.md) -- [EmbeddedRocksDB](../../../engines/table-engines/integrations/embedded-rocksdb.md) -- [RabbitMQ](../../../engines/table-engines/integrations/rabbitmq.md) -- [PostgreSQL](../../../engines/table-engines/integrations/postgresql.md) -- [SQLite](../../../engines/table-engines/integrations/sqlite.md) -- [Hive](../../../engines/table-engines/integrations/hive.md) -- [ExternalDistributed](../../../engines/table-engines/integrations/ExternalDistributed.md) -- [MaterializedPostgreSQL](../../../engines/table-engines/integrations/materialized-postgresql.md) -- [NATS](../../../engines/table-engines/integrations/nats.md) -- [DeltaLake](../../../engines/table-engines/integrations/deltalake.md) -- [Hudi](../../../engines/table-engines/integrations/hudi.md) diff --git a/docs/en/interfaces/schema-inference.md b/docs/en/interfaces/schema-inference.md index c448d0aee47..bef858eaba0 100644 --- a/docs/en/interfaces/schema-inference.md +++ b/docs/en/interfaces/schema-inference.md @@ -1192,7 +1192,7 @@ DESC format(JSONEachRow, '{"id" : 1, "age" : 25, "name" : "Josh", "status" : nul #### schema_inference_make_columns_nullable Controls making inferred types `Nullable` in schema inference for formats without information about nullability. -If the setting is enabled, all inferred type will be `Nullable`, if disabled, the inferred type will be `Nullable` only if the column contains `NULL` in a sample that is parsed during schema inference. +If the setting is enabled, all inferred type will be `Nullable`, if disabled, the inferred type will be `Nullable` only if `input_format_null_as_default` is disabled and the column contains `NULL` in a sample that is parsed during schema inference. Enabled by default. 
@@ -1215,7 +1215,8 @@ DESC format(JSONEachRow, $$ └─────────┴─────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘ ``` ```sql -SET schema_inference_make_columns_nullable = 0 +SET schema_inference_make_columns_nullable = 0; +SET input_format_null_as_default = 0; DESC format(JSONEachRow, $$ {"id" : 1, "age" : 25, "name" : "Josh", "status" : null, "hobbies" : ["football", "cooking"]} {"id" : 2, "age" : 19, "name" : "Alan", "status" : "married", "hobbies" : ["tennis", "art"]} @@ -1232,6 +1233,25 @@ DESC format(JSONEachRow, $$ └─────────┴──────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘ ``` +```sql +SET schema_inference_make_columns_nullable = 0; +SET input_format_null_as_default = 1; +DESC format(JSONEachRow, $$ + {"id" : 1, "age" : 25, "name" : "Josh", "status" : null, "hobbies" : ["football", "cooking"]} + {"id" : 2, "age" : 19, "name" : "Alan", "status" : "married", "hobbies" : ["tennis", "art"]} + $$) +``` +```response + +┌─name────┬─type──────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐ +│ id │ Int64 │ │ │ │ │ │ +│ age │ Int64 │ │ │ │ │ │ +│ name │ String │ │ │ │ │ │ +│ status │ String │ │ │ │ │ │ +│ hobbies │ Array(String) │ │ │ │ │ │ +└─────────┴───────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘ +``` + #### input_format_try_infer_integers If enabled, ClickHouse will try to infer integers instead of floats in schema inference for text formats. diff --git a/docs/en/operations/server-configuration-parameters/settings.md b/docs/en/operations/server-configuration-parameters/settings.md index f93ab264511..3398c7afb67 100644 --- a/docs/en/operations/server-configuration-parameters/settings.md +++ b/docs/en/operations/server-configuration-parameters/settings.md @@ -1881,6 +1881,32 @@ The default server configuration file `config.xml` contains the following settin ``` +## asynchronous_insert_log {#server_configuration_parameters-asynchronous_insert_log} + +Settings for the [asynchronous_insert_log](../../operations/system-tables/asynchronous_insert_log.md#system_tables-asynchronous_insert_log) system table for logging async inserts. + +Parameters: + +- `database` — Database name. +- `table` — Table name. +- `partition_by` — [Custom partitioning key](../../engines/table-engines/mergetree-family/custom-partitioning-key.md) for a system table. Can't be used if `engine` defined. +- `engine` - [MergeTree Engine Definition](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) for a system table. Can't be used if `partition_by` defined. +- `flush_interval_milliseconds` — Interval for flushing data from the buffer in memory to the table. +- `storage_policy` – Name of storage policy to use for the table (optional) + +**Example** +```xml + + + system + asynchronous_insert_log
+        <flush_interval_milliseconds>7500</flush_interval_milliseconds>
+        <partition_by>toYYYYMM(event_date)</partition_by>
+    </asynchronous_insert_log>
+</clickhouse>
+``` + ## query_masking_rules {#query-masking-rules} Regexp-based rules, which will be applied to queries as well as all log messages before storing them in server logs, diff --git a/docs/en/operations/settings/settings.md b/docs/en/operations/settings/settings.md index 5b0c6b3c8c2..cb38a6eecc5 100644 --- a/docs/en/operations/settings/settings.md +++ b/docs/en/operations/settings/settings.md @@ -535,8 +535,6 @@ Possible values: The first phase of a grace join reads the right table and splits it into N buckets depending on the hash value of key columns (initially, N is `grace_hash_join_initial_buckets`). This is done in a way to ensure that each bucket can be processed independently. Rows from the first bucket are added to an in-memory hash table while the others are saved to disk. If the hash table grows beyond the memory limit (e.g., as set by [`max_bytes_in_join`](/docs/en/operations/settings/query-complexity.md/#settings-max_bytes_in_join)), the number of buckets is increased and the assigned bucket for each row. Any rows which don’t belong to the current bucket are flushed and reassigned. - Supports `INNER/LEFT/RIGHT/FULL ALL/ANY JOIN`. - - hash [Hash join algorithm](https://en.wikipedia.org/wiki/Hash_join) is used. The most generic implementation that supports all combinations of kind and strictness and multiple join keys that are combined with `OR` in the `JOIN ON` section. @@ -3238,7 +3236,7 @@ Possible values: - Positive integer. - 0 or 1 — Disabled. `SELECT` queries are executed in a single thread. -Default value: `16`. +Default value: `max_threads`. ## opentelemetry_start_trace_probability {#opentelemetry-start-trace-probability} diff --git a/docs/en/operations/system-tables/asynchronous_insert_log.md b/docs/en/operations/system-tables/asynchronous_insert_log.md new file mode 100644 index 00000000000..c3aaa8e6c41 --- /dev/null +++ b/docs/en/operations/system-tables/asynchronous_insert_log.md @@ -0,0 +1,64 @@ +--- +slug: /en/operations/system-tables/asynchronous_insert_log +--- +# asynchronous_insert_log + +Contains information about async inserts. Each entry represents an insert query buffered into an async insert query. + +To start logging configure parameters in the [asynchronous_insert_log](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-asynchronous_insert_log) section. + +The flushing period of data is set in `flush_interval_milliseconds` parameter of the [asynchronous_insert_log](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-asynchronous_insert_log) server settings section. To force flushing, use the [SYSTEM FLUSH LOGS](../../sql-reference/statements/system.md#query_language-system-flush_logs) query. + +ClickHouse does not delete data from the table automatically. See [Introduction](../../operations/system-tables/index.md#system-tables-introduction) for more details. + +Columns: + +- `event_date` ([Date](../../sql-reference/data-types/date.md)) — The date when the async insert happened. +- `event_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — The date and time when the async insert finished execution. +- `event_time_microseconds` ([DateTime64](../../sql-reference/data-types/datetime64.md)) — The date and time when the async insert finished execution with microseconds precision. +- `query` ([String](../../sql-reference/data-types/string.md)) — Query string. +- `database` ([String](../../sql-reference/data-types/string.md)) — The name of the database the table is in. 
+- `table` ([String](../../sql-reference/data-types/string.md)) — Table name. +- `format` ([String](/docs/en/sql-reference/data-types/string.md)) — Format name. +- `query_id` ([String](../../sql-reference/data-types/string.md)) — ID of the initial query. +- `bytes` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Number of inserted bytes. +- `exception` ([String](../../sql-reference/data-types/string.md)) — Exception message. +- `status` ([Enum8](../../sql-reference/data-types/enum.md)) — Status of the view. Values: + - `'Ok' = 1` — Successful insert. + - `'ParsingError' = 2` — Exception when parsing the data. + - `'FlushError' = 3` — Exception when flushing the data. +- `flush_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — The date and time when the flush happened. +- `flush_time_microseconds` ([DateTime64](../../sql-reference/data-types/datetime64.md)) — The date and time when the flush happened with microseconds precision. +- `flush_query_id` ([String](../../sql-reference/data-types/string.md)) — ID of the flush query. + +**Example** + +Query: + +``` sql +SELECT * FROM system.asynchronous_insert_log LIMIT 1 \G; +``` + +Result: + +``` text +event_date: 2023-06-08 +event_time: 2023-06-08 10:08:53 +event_time_microseconds: 2023-06-08 10:08:53.199516 +query: INSERT INTO public.data_guess (user_id, datasource_id, timestamp, path, type, num, str) FORMAT CSV +database: public +table: data_guess +format: CSV +query_id: b46cd4c4-0269-4d0b-99f5-d27668c6102e +bytes: 133223 +exception: +status: Ok +flush_time: 2023-06-08 10:08:55 +flush_time_microseconds: 2023-06-08 10:08:55.139676 +flush_query_id: cd2c1e43-83f5-49dc-92e4-2fbc7f8d3716 +``` + +**See Also** + +- [system.query_log](../../operations/system-tables/query_log.md#system_tables-query_log) — Description of the `query_log` system table which contains common information about queries execution. +- [system.asynchronous_inserts](../../operations/system-tables/asynchronous_inserts.md#system_tables-asynchronous_inserts) — This table contains information about pending asynchronous inserts in queue. diff --git a/docs/en/operations/system-tables/asynchronous_inserts.md b/docs/en/operations/system-tables/asynchronous_inserts.md new file mode 100644 index 00000000000..8fd5f2bb520 --- /dev/null +++ b/docs/en/operations/system-tables/asynchronous_inserts.md @@ -0,0 +1,45 @@ +--- +slug: /en/operations/system-tables/asynchronous_inserts +--- +# asynchronous_inserts + +Contains information about pending asynchronous inserts in queue. + +Columns: + +- `query` ([String](../../sql-reference/data-types/string.md)) — Query string. +- `database` ([String](../../sql-reference/data-types/string.md)) — The name of the database the table is in. +- `table` ([String](../../sql-reference/data-types/string.md)) — Table name. +- `format` ([String](/docs/en/sql-reference/data-types/string.md)) — Format name. +- `first_update` ([DateTime64](../../sql-reference/data-types/datetime64.md)) — First insert time with microseconds resolution. +- `total_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Total number of bytes waiting in the queue. +- `entries.query_id` ([Array(String)](../../sql-reference/data-types/array.md)) - Array of query ids of the inserts waiting in the queue. +- `entries.bytes` ([Array(UInt64)](../../sql-reference/data-types/array.md)) - Array of bytes of each insert query waiting in the queue. 
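Entries appear in `system.asynchronous_inserts` (and, after flushing, in `system.asynchronous_insert_log`) only for queries that use asynchronous insertion. A minimal sketch, assuming a destination table `t` already exists; the per-query settings shown are the standard way to enable async inserts:

```sql
-- Buffer the insert server-side instead of creating a part per query.
INSERT INTO t SETTINGS async_insert = 1, wait_for_async_insert = 0 VALUES (1, 'a');

-- While the data is still buffered, it is visible in the pending queue.
SELECT query, total_bytes, entries.query_id FROM system.asynchronous_inserts;
```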
+ +**Example** + +Query: + +``` sql +SELECT * FROM system.asynchronous_inserts LIMIT 1 \G; +``` + +Result: + +``` text +Row 1: +────── +query: INSERT INTO public.data_guess (user_id, datasource_id, timestamp, path, type, num, str) FORMAT CSV +database: public +table: data_guess +format: CSV +first_update: 2023-06-08 10:08:54.199606 +total_bytes: 133223 +entries.query_id: ['b46cd4c4-0269-4d0b-99f5-d27668c6102e'] +entries.bytes: [133223] +``` + +**See Also** + +- [system.query_log](../../operations/system-tables/query_log.md#system_tables-query_log) — Description of the `query_log` system table which contains common information about queries execution. +- [system.asynchronous_insert_log](../../operations/system-tables/asynchronous_insert_log.md#system_tables-asynchronous_insert_log) — This table contains information about async inserts performed. diff --git a/docs/en/sql-reference/statements/alter/projection.md b/docs/en/sql-reference/statements/alter/projection.md index 030e9352a00..b7399442d41 100644 --- a/docs/en/sql-reference/statements/alter/projection.md +++ b/docs/en/sql-reference/statements/alter/projection.md @@ -142,11 +142,11 @@ The following operations with [projections](/docs/en/engines/table-engines/merge ## ADD PROJECTION -`ALTER TABLE [db].name ADD PROJECTION name ( SELECT [GROUP BY] [ORDER BY] )` - Adds projection description to tables metadata. +`ALTER TABLE [db].name ADD PROJECTION [IF NOT EXISTS] name ( SELECT [GROUP BY] [ORDER BY] )` - Adds projection description to tables metadata. ## DROP PROJECTION -`ALTER TABLE [db].name DROP PROJECTION name` - Removes projection description from tables metadata and deletes projection files from disk. Implemented as a [mutation](/docs/en/sql-reference/statements/alter/index.md#mutations). +`ALTER TABLE [db].name DROP PROJECTION [IF EXISTS] name` - Removes projection description from tables metadata and deletes projection files from disk. Implemented as a [mutation](/docs/en/sql-reference/statements/alter/index.md#mutations). ## MATERIALIZE PROJECTION @@ -154,7 +154,7 @@ The following operations with [projections](/docs/en/engines/table-engines/merge ## CLEAR PROJECTION -`ALTER TABLE [db.]table CLEAR PROJECTION name IN PARTITION partition_name` - Deletes projection files from disk without removing description. Implemented as a [mutation](/docs/en/sql-reference/statements/alter/index.md#mutations). +`ALTER TABLE [db.]table CLEAR PROJECTION [IF EXISTS] name IN PARTITION partition_name` - Deletes projection files from disk without removing description. Implemented as a [mutation](/docs/en/sql-reference/statements/alter/index.md#mutations). The commands `ADD`, `DROP` and `CLEAR` are lightweight in a sense that they only change metadata or remove files. diff --git a/docs/en/sql-reference/statements/show.md b/docs/en/sql-reference/statements/show.md index 21c0010498a..f96eb55aa45 100644 --- a/docs/en/sql-reference/statements/show.md +++ b/docs/en/sql-reference/statements/show.md @@ -273,7 +273,7 @@ SHOW DICTIONARIES FROM db LIKE '%reg%' LIMIT 2 Displays a list of primary and data skipping indexes of a table. ```sql -SHOW [EXTENDED] {INDEX | INDEXES | KEYS } {FROM | IN} [{FROM | IN} ] [WHERE ] [INTO OUTFILE ] [FORMAT ] +SHOW [EXTENDED] {INDEX | INDEXES | INDICES | KEYS } {FROM | IN}
<table> [{FROM | IN} <db>] [WHERE <expr>] [INTO OUTFILE <filename>] [FORMAT <format>]
```

The database and table name can be specified in abbreviated form as `<db>.<table>
`, i.e. `FROM tab FROM db` and `FROM db.tab` are diff --git a/docs/en/sql-reference/table-functions/azureBlobStorage.md b/docs/en/sql-reference/table-functions/azureBlobStorage.md new file mode 100644 index 00000000000..369bf7a964d --- /dev/null +++ b/docs/en/sql-reference/table-functions/azureBlobStorage.md @@ -0,0 +1,71 @@ +--- +slug: /en/sql-reference/table-functions/azure_blob_storage +sidebar_label: azure_blob_storage +keywords: [azure blob storage] +--- + +# azure\_blob\_storage Table Function + +Provides a table-like interface to select/insert files in [Azure Blob Storage](https://azure.microsoft.com/en-us/products/storage/blobs). This table function is similar to the [s3 function](../../sql-reference/table-functions/s3.md). + +**Syntax** + +``` sql +azureBlobStorage(- connection_string|storage_account_url, container_name, blobpath, [account_name, account_key, format, compression, structure]) +``` + +**Arguments** + +- `connection_string|storage_account_url` — connection_string includes account name & key ([Create connection string](https://learn.microsoft.com/en-us/azure/storage/common/storage-configure-connection-string?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json&bc=%2Fazure%2Fstorage%2Fblobs%2Fbreadcrumb%2Ftoc.json#configure-a-connection-string-for-an-azure-storage-account)) or you could also provide the storage account url here and account name & account key as separate parameters (see parameters account_name & account_key) +- `container_name` - Container name +- `blobpath` - file path. Supports following wildcards in readonly mode: `*`, `?`, `{abc,def}` and `{N..M}` where `N`, `M` — numbers, `'abc'`, `'def'` — strings. +- `account_name` - if storage_account_url is used, then account name can be specified here +- `account_key` - if storage_account_url is used, then account key can be specified here +- `format` — The [format](../../interfaces/formats.md#formats) of the file. +- `compression` — Supported values: `none`, `gzip/gz`, `brotli/br`, `xz/LZMA`, `zstd/zst`. By default, it will autodetect compression by file extension. (same as setting to `auto`). +- `structure` — Structure of the table. Format `'column1_name column1_type, column2_name column2_type, ...'`. + +**Returned value** + +A table with the specified structure for reading or writing data in the specified file. 
+ +**Examples** + +Write data into azure blob storage using the following : + +```sql +INSERT INTO TABLE FUNCTION azureBlobStorage('http://azurite1:10000/devstoreaccount1', + 'test_container', 'test_{_partition_id}.csv', 'devstoreaccount1', 'Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==', + 'CSV', 'auto', 'column1 UInt32, column2 UInt32, column3 UInt32') PARTITION BY column3 VALUES (1, 2, 3), (3, 2, 1), (78, 43, 3); +``` + +And then it can be read using + +```sql +SELECT * FROM azureBlobStorage('http://azurite1:10000/devstoreaccount1', + 'test_container', 'test_1.csv', 'devstoreaccount1', 'Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==', + 'CSV', 'auto', 'column1 UInt32, column2 UInt32, column3 UInt32'); +``` + +```response +┌───column1─┬────column2─┬───column3─┐ +│ 3 │ 2 │ 1 │ +└───────────┴────────────┴───────────┘ +``` + +or using connection_string + +```sql +SELECT count(*) FROM azureBlobStorage('DefaultEndpointsProtocol=https;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;EndPointSuffix=core.windows.net', + 'test_container', 'test_3.csv', 'CSV', 'auto' , 'column1 UInt32, column2 UInt32, column3 UInt32'); +``` + +``` text +┌─count()─┐ +│ 2 │ +└─────────┘ +``` + +**See Also** + +- [AzureBlobStorage Table Engine](/docs/en/engines/table-engines/integrations/azureBlobStorage.md) diff --git a/docs/ru/sql-reference/statements/alter/projection.md b/docs/ru/sql-reference/statements/alter/projection.md index 63b068611ec..33e52b93add 100644 --- a/docs/ru/sql-reference/statements/alter/projection.md +++ b/docs/ru/sql-reference/statements/alter/projection.md @@ -8,13 +8,13 @@ sidebar_label: PROJECTION Доступны следующие операции с [проекциями](../../../engines/table-engines/mergetree-family/mergetree.md#projections): -- `ALTER TABLE [db].name ADD PROJECTION name ( SELECT [GROUP BY] [ORDER BY] )` — добавляет описание проекции в метаданные. +- `ALTER TABLE [db].name ADD PROJECTION [IF NOT EXISTS] name ( SELECT [GROUP BY] [ORDER BY] )` — добавляет описание проекции в метаданные. -- `ALTER TABLE [db].name DROP PROJECTION name` — удаляет описание проекции из метаданных и удаляет файлы проекции с диска. +- `ALTER TABLE [db].name DROP PROJECTION [IF EXISTS] name` — удаляет описание проекции из метаданных и удаляет файлы проекции с диска. - `ALTER TABLE [db.]table MATERIALIZE PROJECTION name IN PARTITION partition_name` — перестраивает проекцию в указанной партиции. Реализовано как [мутация](../../../sql-reference/statements/alter/index.md#mutations). -- `ALTER TABLE [db.]table CLEAR PROJECTION name IN PARTITION partition_name` — удаляет файлы проекции с диска без удаления описания. +- `ALTER TABLE [db.]table CLEAR PROJECTION [IF EXISTS] name IN PARTITION partition_name` — удаляет файлы проекции с диска без удаления описания. Команды `ADD`, `DROP` и `CLEAR` — легковесны, поскольку они только меняют метаданные или удаляют файлы. @@ -22,4 +22,4 @@ sidebar_label: PROJECTION :::note Манипуляции с проекциями поддерживаются только для таблиц с движком [`*MergeTree`](../../../engines/table-engines/mergetree-family/mergetree.md) (включая [replicated](../../../engines/table-engines/mergetree-family/replication.md) варианты). 
- ::: \ No newline at end of file + ::: diff --git a/src/Access/Common/AccessType.h b/src/Access/Common/AccessType.h index 6394c0279a7..84c99939f2d 100644 --- a/src/Access/Common/AccessType.h +++ b/src/Access/Common/AccessType.h @@ -210,6 +210,7 @@ enum class AccessType M(HDFS, "", GLOBAL, SOURCES) \ M(S3, "", GLOBAL, SOURCES) \ M(HIVE, "", GLOBAL, SOURCES) \ + M(AZURE, "", GLOBAL, SOURCES) \ M(SOURCES, "", GROUP, ALL) \ \ M(CLUSTER, "", GLOBAL, ALL) /* ON CLUSTER queries */ \ diff --git a/src/AggregateFunctions/UniqExactSet.h b/src/AggregateFunctions/UniqExactSet.h index 916dfe4a424..90cfe700179 100644 --- a/src/AggregateFunctions/UniqExactSet.h +++ b/src/AggregateFunctions/UniqExactSet.h @@ -1,10 +1,11 @@ #pragma once +#include #include #include #include -#include #include +#include namespace DB @@ -48,30 +49,38 @@ public: } else { - auto next_bucket_to_merge = std::make_shared(0); - - auto thread_func = [&lhs, &rhs, next_bucket_to_merge, thread_group = CurrentThread::getGroup()]() + try { - SCOPE_EXIT_SAFE( - if (thread_group) - CurrentThread::detachFromGroupIfNotDetached(); - ); - if (thread_group) - CurrentThread::attachToGroupIfDetached(thread_group); - setThreadName("UniqExactMerger"); + auto next_bucket_to_merge = std::make_shared(0); - while (true) + auto thread_func = [&lhs, &rhs, next_bucket_to_merge, thread_group = CurrentThread::getGroup()]() { - const auto bucket = next_bucket_to_merge->fetch_add(1); - if (bucket >= rhs.NUM_BUCKETS) - return; - lhs.impls[bucket].merge(rhs.impls[bucket]); - } - }; + SCOPE_EXIT_SAFE( + if (thread_group) + CurrentThread::detachFromGroupIfNotDetached(); + ); + if (thread_group) + CurrentThread::attachToGroupIfDetached(thread_group); + setThreadName("UniqExactMerger"); - for (size_t i = 0; i < std::min(thread_pool->getMaxThreads(), rhs.NUM_BUCKETS); ++i) - thread_pool->scheduleOrThrowOnError(thread_func); - thread_pool->wait(); + while (true) + { + const auto bucket = next_bucket_to_merge->fetch_add(1); + if (bucket >= rhs.NUM_BUCKETS) + return; + lhs.impls[bucket].merge(rhs.impls[bucket]); + } + }; + + for (size_t i = 0; i < std::min(thread_pool->getMaxThreads(), rhs.NUM_BUCKETS); ++i) + thread_pool->scheduleOrThrowOnError(thread_func); + thread_pool->wait(); + } + catch (...) + { + thread_pool->wait(); + throw; + } } } } diff --git a/src/Columns/ColumnLowCardinality.cpp b/src/Columns/ColumnLowCardinality.cpp index 4f9ab8215be..9269ea4ee4d 100644 --- a/src/Columns/ColumnLowCardinality.cpp +++ b/src/Columns/ColumnLowCardinality.cpp @@ -313,6 +313,11 @@ MutableColumnPtr ColumnLowCardinality::cloneResized(size_t size) const MutableColumnPtr ColumnLowCardinality::cloneNullable() const { auto res = cloneFinalized(); + /* Compact required not to share dictionary. + * If `shared` flag is not set `cloneFinalized` will return shallow copy + * and `nestedToNullable` will mutate source column. 
+ */ + assert_cast(*res).compactInplace(); assert_cast(*res).nestedToNullable(); return res; } diff --git a/src/Columns/tests/gtest_low_cardinality.cpp b/src/Columns/tests/gtest_low_cardinality.cpp index 3ffc88f6a7d..5e01279b7df 100644 --- a/src/Columns/tests/gtest_low_cardinality.cpp +++ b/src/Columns/tests/gtest_low_cardinality.cpp @@ -48,3 +48,16 @@ TEST(ColumnLowCardinality, Insert) testLowCardinalityNumberInsert(std::make_shared()); testLowCardinalityNumberInsert(std::make_shared()); } + +TEST(ColumnLowCardinality, Clone) +{ + auto data_type = std::make_shared(); + auto low_cardinality_type = std::make_shared(data_type); + auto column = low_cardinality_type->createColumn(); + ASSERT_FALSE(assert_cast(*column).nestedIsNullable()); + + auto nullable_column = assert_cast(*column).cloneNullable(); + + ASSERT_TRUE(assert_cast(*nullable_column).nestedIsNullable()); + ASSERT_FALSE(assert_cast(*column).nestedIsNullable()); +} diff --git a/src/Common/ProfileEvents.cpp b/src/Common/ProfileEvents.cpp index fdee9902634..f66f7bc6465 100644 --- a/src/Common/ProfileEvents.cpp +++ b/src/Common/ProfileEvents.cpp @@ -348,6 +348,9 @@ The server successfully detected this situation and will download merged part fr M(S3PutObject, "Number of S3 API PutObject calls.") \ M(S3GetObject, "Number of S3 API GetObject calls.") \ \ + M(AzureDeleteObjects, "Number of Azure blob storage API DeleteObject(s) calls.") \ + M(AzureListObjects, "Number of Azure blob storage API ListObjects calls.") \ + \ M(DiskS3DeleteObjects, "Number of DiskS3 API DeleteObject(s) calls.") \ M(DiskS3CopyObject, "Number of DiskS3 API CopyObject calls.") \ M(DiskS3ListObjects, "Number of DiskS3 API ListObjects calls.") \ diff --git a/src/Coordination/Changelog.cpp b/src/Coordination/Changelog.cpp index 894fd93cfa7..c0dfbc2cbc3 100644 --- a/src/Coordination/Changelog.cpp +++ b/src/Coordination/Changelog.cpp @@ -13,6 +13,7 @@ #include #include #include +#include namespace DB @@ -479,7 +480,7 @@ public: continue; /// Create log entry for read data - auto log_entry = nuraft::cs_new(record.header.term, record.blob, record.header.value_type); + auto log_entry = nuraft::cs_new(record.header.term, record.blob, static_cast(record.header.value_type)); if (result.first_read_index == 0) result.first_read_index = record.header.index; diff --git a/src/Coordination/Changelog.h b/src/Coordination/Changelog.h index 56b0475ba8b..3c09370182d 100644 --- a/src/Coordination/Changelog.h +++ b/src/Coordination/Changelog.h @@ -39,7 +39,7 @@ struct ChangelogRecordHeader ChangelogVersion version = CURRENT_CHANGELOG_VERSION; uint64_t index = 0; /// entry log number uint64_t term = 0; - nuraft::log_val_type value_type{}; + int32_t value_type{}; uint64_t blob_size = 0; }; diff --git a/src/Coordination/KeeperServer.cpp b/src/Coordination/KeeperServer.cpp index 45db9e85fa5..6e47412cd3a 100644 --- a/src/Coordination/KeeperServer.cpp +++ b/src/Coordination/KeeperServer.cpp @@ -21,6 +21,7 @@ #include #include #include +#include #include #include #include @@ -607,12 +608,30 @@ nuraft::cb_func::ReturnCode KeeperServer::callbackFunc(nuraft::cb_func::Type typ } } + const auto follower_preappend = [&](const auto & entry) + { + if (entry->get_val_type() != nuraft::app_log) + return nuraft::cb_func::ReturnCode::Ok; + + try + { + state_machine->parseRequest(entry->get_buf(), /*final=*/false); + } + catch (...) 
+ { + tryLogCurrentException(log, "Failed to parse request from log entry"); + throw; + } + return nuraft::cb_func::ReturnCode::Ok; + + }; + if (initialized_flag) { switch (type) { // This event is called before a single log is appended to the entry on the leader node - case nuraft::cb_func::PreAppendLog: + case nuraft::cb_func::PreAppendLogLeader: { // we are relying on the fact that request are being processed under a mutex // and not a RW lock @@ -665,7 +684,12 @@ nuraft::cb_func::ReturnCode KeeperServer::callbackFunc(nuraft::cb_func::Type typ if (request_for_session->digest->version != KeeperStorage::NO_DIGEST) writeIntBinary(request_for_session->digest->value, write_buf); - break; + return nuraft::cb_func::ReturnCode::Ok; + } + case nuraft::cb_func::PreAppendLogFollower: + { + const auto & entry = *static_cast(param->ctx); + return follower_preappend(entry); } case nuraft::cb_func::AppendLogFailed: { @@ -678,13 +702,11 @@ nuraft::cb_func::ReturnCode KeeperServer::callbackFunc(nuraft::cb_func::Type typ auto & entry_buf = entry->get_buf(); auto request_for_session = state_machine->parseRequest(entry_buf, true); state_machine->rollbackRequest(*request_for_session, true); - break; + return nuraft::cb_func::ReturnCode::Ok; } default: - break; + return nuraft::cb_func::ReturnCode::Ok; } - - return nuraft::cb_func::ReturnCode::Ok; } size_t last_commited = state_machine->last_commit_index(); @@ -737,6 +759,11 @@ nuraft::cb_func::ReturnCode KeeperServer::callbackFunc(nuraft::cb_func::Type typ initial_batch_committed = true; return nuraft::cb_func::ReturnCode::Ok; } + case nuraft::cb_func::PreAppendLogFollower: + { + const auto & entry = *static_cast(param->ctx); + return follower_preappend(entry); + } default: /// ignore other events return nuraft::cb_func::ReturnCode::Ok; } diff --git a/src/Coordination/KeeperStateMachine.cpp b/src/Coordination/KeeperStateMachine.cpp index 65abee44050..7d251ad48b9 100644 --- a/src/Coordination/KeeperStateMachine.cpp +++ b/src/Coordination/KeeperStateMachine.cpp @@ -272,9 +272,8 @@ bool KeeperStateMachine::preprocess(const KeeperStorage::RequestForSession & req } catch (...) { - tryLogCurrentException(__PRETTY_FUNCTION__); - rollbackRequestNoLock(request_for_session, true); - throw; + tryLogCurrentException(__PRETTY_FUNCTION__, "Failed to preprocess stored log, aborting to avoid inconsistent state"); + std::abort(); } if (keeper_context->digest_enabled && request_for_session.digest) diff --git a/src/Core/Settings.h b/src/Core/Settings.h index b6050ffcd2c..14abdb440c0 100644 --- a/src/Core/Settings.h +++ b/src/Core/Settings.h @@ -81,7 +81,9 @@ class IColumn; M(UInt64, s3_upload_part_size_multiply_parts_count_threshold, 500, "Each time this number of parts was uploaded to S3 s3_min_upload_part_size multiplied by s3_upload_part_size_multiply_factor.", 0) \ M(UInt64, s3_max_inflight_parts_for_one_file, 20, "The maximum number of a concurrent loaded parts in multipart upload request. 0 means unlimited. 
You ", 0) \ M(UInt64, s3_max_single_part_upload_size, 32*1024*1024, "The maximum size of object to upload using singlepart upload to S3.", 0) \ + M(UInt64, azure_max_single_part_upload_size, 100*1024*1024, "The maximum size of object to upload using singlepart upload to Azure blob storage.", 0) \ M(UInt64, s3_max_single_read_retries, 4, "The maximum number of retries during single S3 read.", 0) \ + M(UInt64, azure_max_single_read_retries, 4, "The maximum number of retries during single Azure blob storage read.", 0) \ M(UInt64, s3_max_unexpected_write_error_retries, 4, "The maximum number of retries in case of unexpected errors during S3 write.", 0) \ M(UInt64, s3_max_redirects, 10, "Max number of S3 redirects hops allowed.", 0) \ M(UInt64, s3_max_connections, 1024, "The maximum number of connections per server.", 0) \ @@ -90,8 +92,11 @@ class IColumn; M(UInt64, s3_max_put_rps, 0, "Limit on S3 PUT request per second rate before throttling. Zero means unlimited.", 0) \ M(UInt64, s3_max_put_burst, 0, "Max number of requests that can be issued simultaneously before hitting request per second limit. By default (0) equals to `s3_max_put_rps`", 0) \ M(UInt64, s3_list_object_keys_size, 1000, "Maximum number of files that could be returned in batch by ListObject request", 0) \ + M(UInt64, azure_list_object_keys_size, 1000, "Maximum number of files that could be returned in batch by ListObject request", 0) \ M(Bool, s3_truncate_on_insert, false, "Enables or disables truncate before insert in s3 engine tables.", 0) \ + M(Bool, azure_truncate_on_insert, false, "Enables or disables truncate before insert in azure engine tables.", 0) \ M(Bool, s3_create_new_file_on_insert, false, "Enables or disables creating a new file on each insert in s3 engine tables", 0) \ + M(Bool, azure_create_new_file_on_insert, false, "Enables or disables creating a new file on each insert in azure engine tables", 0) \ M(Bool, s3_check_objects_after_upload, false, "Check each uploaded object to s3 with head request to be sure that upload was successful", 0) \ M(Bool, s3_allow_parallel_part_upload, true, "Use multiple threads for s3 multipart upload. 
It may lead to slightly higher memory usage", 0) \ M(Bool, s3_throw_on_zero_files_match, false, "Throw an error, when ListObjects request cannot match any files", 0) \ @@ -147,6 +152,7 @@ class IColumn; M(Bool, enable_memory_bound_merging_of_aggregation_results, true, "Enable memory bound merging strategy for aggregation.", 0) \ M(Bool, enable_positional_arguments, true, "Enable positional arguments in ORDER BY, GROUP BY and LIMIT BY", 0) \ M(Bool, enable_extended_results_for_datetime_functions, false, "Enable date functions like toLastDayOfMonth return Date32 results (instead of Date results) for Date32/DateTime64 arguments.", 0) \ + M(Bool, allow_nonconst_timezone_arguments, false, "Allow non-const timezone arguments in certain time-related functions like toTimeZone(), fromUnixTimestamp*(), snowflakeToDateTime*()", 0) \ \ M(Bool, group_by_use_nulls, false, "Treat columns mentioned in ROLLUP, CUBE or GROUPING SETS as Nullable", 0) \ \ @@ -708,6 +714,7 @@ class IColumn; \ M(Bool, schema_inference_use_cache_for_file, true, "Use cache in schema inference while using file table function", 0) \ M(Bool, schema_inference_use_cache_for_s3, true, "Use cache in schema inference while using s3 table function", 0) \ + M(Bool, schema_inference_use_cache_for_azure, true, "Use cache in schema inference while using azure table function", 0) \ M(Bool, schema_inference_use_cache_for_hdfs, true, "Use cache in schema inference while using hdfs table function", 0) \ M(Bool, schema_inference_use_cache_for_url, true, "Use cache in schema inference while using url table function", 0) \ M(Bool, schema_inference_cache_require_modification_time_for_url, true, "Use schema from cache for URL with last modification time validation (for urls with Last-Modified header)", 0) \ diff --git a/src/Core/SettingsChangesHistory.h b/src/Core/SettingsChangesHistory.h index c0f10b13282..9fd45ac16d6 100644 --- a/src/Core/SettingsChangesHistory.h +++ b/src/Core/SettingsChangesHistory.h @@ -85,6 +85,7 @@ static std::map sett {"use_with_fill_by_sorting_prefix", false, true, "Columns preceding WITH FILL columns in ORDER BY clause form sorting prefix. 
Rows with different values in sorting prefix are filled independently"}, {"output_format_parquet_compliant_nested_types", false, true, "Change an internal field name in output Parquet file schema."}}}, {"23.4", {{"allow_suspicious_indices", true, false, "If true, index can defined with identical expressions"}, + {"allow_nonconst_timezone_arguments", true, false, "Allow non-const timezone arguments in certain time-related functions like toTimeZone(), fromUnixTimestamp*(), snowflakeToDateTime*()."}, {"connect_timeout_with_failover_ms", 50, 1000, "Increase default connect timeout because of async connect"}, {"connect_timeout_with_failover_secure_ms", 100, 1000, "Increase default secure connect timeout because of async connect"}, {"hedged_connection_timeout_ms", 100, 50, "Start new connection in hedged requests after 50 ms instead of 100 to correspond with previous connect timeout"}}}, diff --git a/src/Dictionaries/MongoDBDictionarySource.cpp b/src/Dictionaries/MongoDBDictionarySource.cpp index b7e342f3c80..46910fa9f6a 100644 --- a/src/Dictionaries/MongoDBDictionarySource.cpp +++ b/src/Dictionaries/MongoDBDictionarySource.cpp @@ -170,7 +170,7 @@ MongoDBDictionarySource::~MongoDBDictionarySource() = default; QueryPipeline MongoDBDictionarySource::loadAll() { - return QueryPipeline(std::make_shared(connection, createCursor(db, collection, sample_block), sample_block, max_block_size)); + return QueryPipeline(std::make_shared(connection, db, collection, Poco::MongoDB::Document{}, sample_block, max_block_size)); } QueryPipeline MongoDBDictionarySource::loadIds(const std::vector & ids) @@ -178,7 +178,7 @@ QueryPipeline MongoDBDictionarySource::loadIds(const std::vector & ids) if (!dict_struct.id) throw Exception(ErrorCodes::UNSUPPORTED_METHOD, "'id' is required for selective loading"); - auto cursor = createCursor(db, collection, sample_block); + Poco::MongoDB::Document query; /** NOTE: While building array, Poco::MongoDB requires passing of different unused element names, along with values. * In general, Poco::MongoDB is quite inefficient and bulky. 
@@ -188,9 +188,9 @@ QueryPipeline MongoDBDictionarySource::loadIds(const std::vector & ids) for (const UInt64 id : ids) ids_array->add(DB::toString(id), static_cast(id)); - cursor->query().selector().addNewDocument(dict_struct.id->name).add("$in", ids_array); + query.addNewDocument(dict_struct.id->name).add("$in", ids_array); - return QueryPipeline(std::make_shared(connection, std::move(cursor), sample_block, max_block_size)); + return QueryPipeline(std::make_shared(connection, db, collection, query, sample_block, max_block_size)); } @@ -199,8 +199,7 @@ QueryPipeline MongoDBDictionarySource::loadKeys(const Columns & key_columns, con if (!dict_struct.key) throw Exception(ErrorCodes::UNSUPPORTED_METHOD, "'key' is required for selective loading"); - auto cursor = createCursor(db, collection, sample_block); - + Poco::MongoDB::Document query; Poco::MongoDB::Array::Ptr keys_array(new Poco::MongoDB::Array); for (const auto row_idx : requested_rows) @@ -254,9 +253,9 @@ QueryPipeline MongoDBDictionarySource::loadKeys(const Columns & key_columns, con } /// If more than one key we should use $or - cursor->query().selector().add("$or", keys_array); + query.add("$or", keys_array); - return QueryPipeline(std::make_shared(connection, std::move(cursor), sample_block, max_block_size)); + return QueryPipeline(std::make_shared(connection, db, collection, query, sample_block, max_block_size)); } std::string MongoDBDictionarySource::toString() const diff --git a/src/Dictionaries/MongoDBDictionarySource.h b/src/Dictionaries/MongoDBDictionarySource.h index fefcb1bff9f..6d93bc6c090 100644 --- a/src/Dictionaries/MongoDBDictionarySource.h +++ b/src/Dictionaries/MongoDBDictionarySource.h @@ -16,7 +16,6 @@ namespace Util namespace MongoDB { class Connection; - class Cursor; } } diff --git a/src/Disks/IO/ReadBufferFromAzureBlobStorage.cpp b/src/Disks/IO/ReadBufferFromAzureBlobStorage.cpp index 0f197c2ff06..129bb97be09 100644 --- a/src/Disks/IO/ReadBufferFromAzureBlobStorage.cpp +++ b/src/Disks/IO/ReadBufferFromAzureBlobStorage.cpp @@ -35,6 +35,7 @@ ReadBufferFromAzureBlobStorage::ReadBufferFromAzureBlobStorage( size_t max_single_read_retries_, size_t max_single_download_retries_, bool use_external_buffer_, + bool restricted_seek_, size_t read_until_position_) : ReadBufferFromFileBase(use_external_buffer_ ? 
0 : read_settings_.remote_fs_buffer_size, nullptr, 0) , blob_container_client(blob_container_client_) @@ -44,6 +45,7 @@ ReadBufferFromAzureBlobStorage::ReadBufferFromAzureBlobStorage( , read_settings(read_settings_) , tmp_buffer_size(read_settings.remote_fs_buffer_size) , use_external_buffer(use_external_buffer_) + , restricted_seek(restricted_seek_) , read_until_position(read_until_position_) { if (!use_external_buffer) @@ -54,6 +56,22 @@ ReadBufferFromAzureBlobStorage::ReadBufferFromAzureBlobStorage( } } + +void ReadBufferFromAzureBlobStorage::setReadUntilEnd() +{ + if (read_until_position) + { + read_until_position = 0; + if (initialized) + { + offset = getPosition(); + resetWorkingBuffer(); + initialized = false; + } + } + +} + void ReadBufferFromAzureBlobStorage::setReadUntilPosition(size_t position) { read_until_position = position; @@ -118,8 +136,17 @@ bool ReadBufferFromAzureBlobStorage::nextImpl() off_t ReadBufferFromAzureBlobStorage::seek(off_t offset_, int whence) { - if (initialized) - throw Exception(ErrorCodes::CANNOT_SEEK_THROUGH_FILE, "Seek is allowed only before first read attempt from the buffer."); + if (offset_ == getPosition() && whence == SEEK_SET) + return offset_; + + if (initialized && restricted_seek) + { + throw Exception( + ErrorCodes::CANNOT_SEEK_THROUGH_FILE, + "Seek is allowed only before first read attempt from the buffer (current offset: " + "{}, new offset: {}, reading until position: {}, available: {})", + getPosition(), offset_, read_until_position, available()); + } if (whence != SEEK_SET) throw Exception(ErrorCodes::CANNOT_SEEK_THROUGH_FILE, "Only SEEK_SET mode is allowed."); @@ -127,8 +154,36 @@ off_t ReadBufferFromAzureBlobStorage::seek(off_t offset_, int whence) if (offset_ < 0) throw Exception(ErrorCodes::SEEK_POSITION_OUT_OF_BOUND, "Seek position is out of bounds. 
Offset: {}", offset_); - offset = offset_; + if (!restricted_seek) + { + if (!working_buffer.empty() + && static_cast(offset_) >= offset - working_buffer.size() + && offset_ < offset) + { + pos = working_buffer.end() - (offset - offset_); + assert(pos >= working_buffer.begin()); + assert(pos < working_buffer.end()); + return getPosition(); + } + + off_t position = getPosition(); + if (initialized && offset_ > position) + { + size_t diff = offset_ - position; + if (diff < read_settings.remote_read_min_bytes_for_seek) + { + ignore(diff); + return offset_; + } + } + + resetWorkingBuffer(); + if (initialized) + initialized = false; + } + + offset = offset_; return offset; } @@ -152,7 +207,8 @@ void ReadBufferFromAzureBlobStorage::initialize() download_options.Range = {static_cast(offset), length}; - blob_client = std::make_unique(blob_container_client->GetBlobClient(path)); + if (!blob_client) + blob_client = std::make_unique(blob_container_client->GetBlobClient(path)); size_t sleep_time_with_backoff_milliseconds = 100; for (size_t i = 0; i < max_single_download_retries; ++i) @@ -182,6 +238,18 @@ void ReadBufferFromAzureBlobStorage::initialize() initialized = true; } +size_t ReadBufferFromAzureBlobStorage::getFileSize() +{ + if (!blob_client) + blob_client = std::make_unique(blob_container_client->GetBlobClient(path)); + + if (file_size.has_value()) + return *file_size; + + file_size = blob_client->GetProperties().Value.BlobSize; + return *file_size; +} + } #endif diff --git a/src/Disks/IO/ReadBufferFromAzureBlobStorage.h b/src/Disks/IO/ReadBufferFromAzureBlobStorage.h index 6164a005773..4e21f543653 100644 --- a/src/Disks/IO/ReadBufferFromAzureBlobStorage.h +++ b/src/Disks/IO/ReadBufferFromAzureBlobStorage.h @@ -24,6 +24,7 @@ public: size_t max_single_read_retries_, size_t max_single_download_retries_, bool use_external_buffer_ = false, + bool restricted_seek_ = false, size_t read_until_position_ = 0); off_t seek(off_t off, int whence) override; @@ -37,9 +38,12 @@ public: String getFileName() const override { return path; } void setReadUntilPosition(size_t position) override; + void setReadUntilEnd() override; bool supportsRightBoundedReads() const override { return true; } + size_t getFileSize() override; + private: void initialize(); @@ -55,6 +59,12 @@ private: std::vector tmp_buffer; size_t tmp_buffer_size; bool use_external_buffer; + + /// There is different seek policy for disk seek and for non-disk seek + /// (non-disk seek is applied for seekable input formats: orc, arrow, parquet). + bool restricted_seek; + + off_t read_until_position = 0; off_t offset = 0; diff --git a/src/Disks/IO/ReadBufferFromRemoteFSGather.cpp b/src/Disks/IO/ReadBufferFromRemoteFSGather.cpp index 04030fe5f8f..eb9c509e459 100644 --- a/src/Disks/IO/ReadBufferFromRemoteFSGather.cpp +++ b/src/Disks/IO/ReadBufferFromRemoteFSGather.cpp @@ -22,13 +22,15 @@ ReadBufferFromRemoteFSGather::ReadBufferFromRemoteFSGather( ReadBufferCreator && read_buffer_creator_, const StoredObjects & blobs_to_read_, const ReadSettings & settings_, - std::shared_ptr cache_log_) - : ReadBufferFromFileBase(0, nullptr, 0) + std::shared_ptr cache_log_, + bool use_external_buffer_) + : ReadBufferFromFileBase(use_external_buffer_ ? 0 : settings_.remote_fs_buffer_size, nullptr, 0) , settings(settings_) , blobs_to_read(blobs_to_read_) , read_buffer_creator(std::move(read_buffer_creator_)) , cache_log(settings.enable_filesystem_cache_log ? 
cache_log_ : nullptr) , query_id(CurrentThread::isInitialized() && CurrentThread::get().getQueryContext() != nullptr ? CurrentThread::getQueryId() : "") + , use_external_buffer(use_external_buffer_) , log(&Poco::Logger::get("ReadBufferFromRemoteFSGather")) { if (!blobs_to_read.empty()) @@ -36,7 +38,9 @@ ReadBufferFromRemoteFSGather::ReadBufferFromRemoteFSGather( with_cache = settings.remote_fs_cache && settings.enable_filesystem_cache - && (!query_id.empty() || settings.read_from_filesystem_cache_if_exists_otherwise_bypass_cache || !settings.avoid_readthrough_cache_outside_query_context); + && (!query_id.empty() + || settings.read_from_filesystem_cache_if_exists_otherwise_bypass_cache + || !settings.avoid_readthrough_cache_outside_query_context); } SeekableReadBufferPtr ReadBufferFromRemoteFSGather::createImplementationBuffer(const StoredObject & object) @@ -235,22 +239,49 @@ void ReadBufferFromRemoteFSGather::reset() off_t ReadBufferFromRemoteFSGather::seek(off_t offset, int whence) { - if (whence != SEEK_SET) - throw Exception(ErrorCodes::CANNOT_SEEK_THROUGH_FILE, "Only seeking with SEEK_SET is allowed"); + if (offset == getPosition() && whence == SEEK_SET) + return offset; + + if (whence != SEEK_SET) + throw Exception(ErrorCodes::CANNOT_SEEK_THROUGH_FILE, "Only SEEK_SET mode is allowed."); + + if (use_external_buffer) + { + /// In case use_external_buffer == true, the buffer manages seeks itself. + reset(); + } + else + { + if (!working_buffer.empty() + && static_cast(offset) >= file_offset_of_buffer_end - working_buffer.size() + && static_cast(offset) < file_offset_of_buffer_end) + { + pos = working_buffer.end() - (file_offset_of_buffer_end - offset); + assert(pos >= working_buffer.begin()); + assert(pos < working_buffer.end()); + + return getPosition(); + } + + off_t position = getPosition(); + if (current_buf && offset > position) + { + size_t diff = offset - position; + if (diff < settings.remote_read_min_bytes_for_seek) + { + ignore(diff); + return offset; + } + } + + resetWorkingBuffer(); + reset(); + } - reset(); file_offset_of_buffer_end = offset; return file_offset_of_buffer_end; } -size_t ReadBufferFromRemoteFSGather::getImplementationBufferOffset() const -{ - if (!current_buf) - return file_offset_of_buffer_end; - - return current_buf->getFileOffsetOfBufferEnd(); -} - ReadBufferFromRemoteFSGather::~ReadBufferFromRemoteFSGather() { if (!with_cache) diff --git a/src/Disks/IO/ReadBufferFromRemoteFSGather.h b/src/Disks/IO/ReadBufferFromRemoteFSGather.h index 39b81d6f9ac..cb98ac6d9f3 100644 --- a/src/Disks/IO/ReadBufferFromRemoteFSGather.h +++ b/src/Disks/IO/ReadBufferFromRemoteFSGather.h @@ -27,7 +27,8 @@ public: ReadBufferCreator && read_buffer_creator_, const StoredObjects & blobs_to_read_, const ReadSettings & settings_, - std::shared_ptr cache_log_); + std::shared_ptr cache_log_, + bool use_external_buffer_); ~ReadBufferFromRemoteFSGather() override; @@ -37,16 +38,14 @@ public: void setReadUntilPosition(size_t position) override; + void setReadUntilEnd() override { return setReadUntilPosition(getFileSize()); } + IAsynchronousReader::Result readInto(char * data, size_t size, size_t offset, size_t ignore) override; size_t getFileSize() override { return getTotalSize(blobs_to_read); } size_t getFileOffsetOfBufferEnd() const override { return file_offset_of_buffer_end; } - bool initialized() const { return current_buf != nullptr; } - - size_t getImplementationBufferOffset() const; - off_t seek(off_t offset, int whence) override; off_t getPosition() override { return 
file_offset_of_buffer_end - available() + bytes_to_ignore; } @@ -71,6 +70,7 @@ private: const ReadBufferCreator read_buffer_creator; const std::shared_ptr cache_log; const String query_id; + const bool use_external_buffer; bool with_cache; size_t read_until_position = 0; diff --git a/src/Disks/IO/ReadBufferFromWebServer.h b/src/Disks/IO/ReadBufferFromWebServer.h index dd9cf63224f..fa899cf2c5e 100644 --- a/src/Disks/IO/ReadBufferFromWebServer.h +++ b/src/Disks/IO/ReadBufferFromWebServer.h @@ -12,8 +12,6 @@ namespace DB /* Read buffer, which reads via http, but is used as ReadBufferFromFileBase. * Used to read files, hosted on a web server with static files. - * - * Usage: ReadIndirectBufferFromRemoteFS -> SeekAvoidingReadBuffer -> ReadBufferFromWebServer -> ReadWriteBufferFromHTTP. */ class ReadBufferFromWebServer : public ReadBufferFromFileBase { diff --git a/src/Disks/IO/ReadIndirectBufferFromRemoteFS.cpp b/src/Disks/IO/ReadIndirectBufferFromRemoteFS.cpp deleted file mode 100644 index a559b47f2cc..00000000000 --- a/src/Disks/IO/ReadIndirectBufferFromRemoteFS.cpp +++ /dev/null @@ -1,118 +0,0 @@ -#include "ReadIndirectBufferFromRemoteFS.h" - -#include - - -namespace DB -{ - -namespace ErrorCodes -{ - extern const int CANNOT_SEEK_THROUGH_FILE; -} - - -ReadIndirectBufferFromRemoteFS::ReadIndirectBufferFromRemoteFS( - std::shared_ptr impl_, const ReadSettings & settings) - : ReadBufferFromFileBase(settings.remote_fs_buffer_size, nullptr, 0) - , impl(impl_) - , read_settings(settings) -{ -} - -size_t ReadIndirectBufferFromRemoteFS::getFileSize() -{ - return impl->getFileSize(); -} - -off_t ReadIndirectBufferFromRemoteFS::getPosition() -{ - return impl->file_offset_of_buffer_end - available(); -} - - -String ReadIndirectBufferFromRemoteFS::getFileName() const -{ - return impl->getFileName(); -} - - -void ReadIndirectBufferFromRemoteFS::setReadUntilPosition(size_t position) -{ - impl->setReadUntilPosition(position); -} - - -void ReadIndirectBufferFromRemoteFS::setReadUntilEnd() -{ - impl->setReadUntilPosition(impl->getFileSize()); -} - - -off_t ReadIndirectBufferFromRemoteFS::seek(off_t offset_, int whence) -{ - if (whence == SEEK_CUR) - { - /// If position within current working buffer - shift pos. - if (!working_buffer.empty() && size_t(getPosition() + offset_) < impl->file_offset_of_buffer_end) - { - pos += offset_; - return getPosition(); - } - else - { - impl->file_offset_of_buffer_end += offset_; - } - } - else if (whence == SEEK_SET) - { - /// If position within current working buffer - shift pos. 
- if (!working_buffer.empty() - && size_t(offset_) >= impl->file_offset_of_buffer_end - working_buffer.size() - && size_t(offset_) < impl->file_offset_of_buffer_end) - { - pos = working_buffer.end() - (impl->file_offset_of_buffer_end - offset_); - return getPosition(); - } - else - { - impl->file_offset_of_buffer_end = offset_; - } - } - else - throw Exception(ErrorCodes::CANNOT_SEEK_THROUGH_FILE, "Only SEEK_SET or SEEK_CUR modes are allowed."); - - impl->seek(impl->file_offset_of_buffer_end, SEEK_SET); - resetWorkingBuffer(); - - file_offset_of_buffer_end = impl->file_offset_of_buffer_end; - return impl->file_offset_of_buffer_end; -} - - -bool ReadIndirectBufferFromRemoteFS::nextImpl() -{ - chassert(internal_buffer.size() == read_settings.remote_fs_buffer_size); - chassert(file_offset_of_buffer_end <= impl->getFileSize()); - - auto [size, offset, _] = impl->readInto(internal_buffer.begin(), internal_buffer.size(), file_offset_of_buffer_end, /* ignore */0); - - chassert(offset <= size); - chassert(size <= internal_buffer.size()); - - size_t bytes_read = size - offset; - if (bytes_read) - working_buffer = Buffer(internal_buffer.begin() + offset, internal_buffer.begin() + size); - - file_offset_of_buffer_end = impl->getFileOffsetOfBufferEnd(); - - /// In case of multiple files for the same file in clickhouse (i.e. log family) - /// file_offset_of_buffer_end will not match getImplementationBufferOffset() - /// so we use [impl->getImplementationBufferOffset(), impl->getFileSize()] - chassert(file_offset_of_buffer_end >= impl->getImplementationBufferOffset()); - chassert(file_offset_of_buffer_end <= impl->getFileSize()); - - return bytes_read; -} - -} diff --git a/src/Disks/IO/ReadIndirectBufferFromRemoteFS.h b/src/Disks/IO/ReadIndirectBufferFromRemoteFS.h deleted file mode 100644 index 19647b1fa39..00000000000 --- a/src/Disks/IO/ReadIndirectBufferFromRemoteFS.h +++ /dev/null @@ -1,46 +0,0 @@ -#pragma once - -#include "config.h" -#include -#include -#include - - -namespace DB -{ - -class ReadBufferFromRemoteFSGather; - -/** -* Reads data from S3/HDFS/Web using stored paths in metadata. -* There is asynchronous version of this class -- AsynchronousReadIndirectBufferFromRemoteFS. 
-*/ -class ReadIndirectBufferFromRemoteFS : public ReadBufferFromFileBase -{ - -public: - explicit ReadIndirectBufferFromRemoteFS(std::shared_ptr impl_, const ReadSettings & settings); - - off_t seek(off_t offset_, int whence) override; - - off_t getPosition() override; - - String getFileName() const override; - - void setReadUntilPosition(size_t position) override; - - void setReadUntilEnd() override; - - size_t getFileSize() override; - -private: - bool nextImpl() override; - - std::shared_ptr impl; - - ReadSettings read_settings; - - size_t file_offset_of_buffer_end = 0; -}; - -} diff --git a/src/Disks/ObjectStorages/AzureBlobStorage/AzureBlobStorageAuth.cpp b/src/Disks/ObjectStorages/AzureBlobStorage/AzureBlobStorageAuth.cpp index 1e06490b5bc..1b62b5fdb05 100644 --- a/src/Disks/ObjectStorages/AzureBlobStorage/AzureBlobStorageAuth.cpp +++ b/src/Disks/ObjectStorages/AzureBlobStorage/AzureBlobStorageAuth.cpp @@ -57,14 +57,22 @@ void validateContainerName(const String & container_name) AzureBlobStorageEndpoint processAzureBlobStorageEndpoint(const Poco::Util::AbstractConfiguration & config, const String & config_prefix) { - String storage_account_url = config.getString(config_prefix + ".storage_account_url"); - validateStorageAccountUrl(storage_account_url); + std::string storage_url; + if (config.has(config_prefix + ".storage_account_url")) + { + storage_url = config.getString(config_prefix + ".storage_account_url"); + validateStorageAccountUrl(storage_url); + } + else + { + storage_url = config.getString(config_prefix + ".connection_string"); + } String container_name = config.getString(config_prefix + ".container_name", "default-container"); validateContainerName(container_name); std::optional container_already_exists {}; if (config.has(config_prefix + ".container_already_exists")) container_already_exists = {config.getBool(config_prefix + ".container_already_exists")}; - return {storage_account_url, container_name, container_already_exists}; + return {storage_url, container_name, container_already_exists}; } @@ -136,10 +144,7 @@ std::unique_ptr getAzureBlobContainerClient( /// If container_already_exists is not set (in config), ignore already exists error. 
/// (Conflict - The specified container already exists) if (!endpoint.container_already_exists.has_value() && e.StatusCode == Azure::Core::Http::HttpStatusCode::Conflict) - { - tryLogCurrentException("Container already exists, returning the existing container"); return getAzureBlobStorageClientWithAuth(final_url, container_name, config, config_prefix); - } throw; } } diff --git a/src/Disks/ObjectStorages/AzureBlobStorage/AzureObjectStorage.cpp b/src/Disks/ObjectStorages/AzureBlobStorage/AzureObjectStorage.cpp index 23a0da39dd3..dbb41851053 100644 --- a/src/Disks/ObjectStorages/AzureBlobStorage/AzureObjectStorage.cpp +++ b/src/Disks/ObjectStorages/AzureBlobStorage/AzureObjectStorage.cpp @@ -6,7 +6,6 @@ #include #include #include -#include #include #include @@ -57,6 +56,7 @@ public: private: bool getBatchAndCheckNext(RelativePathsWithMetadata & batch) override { + batch.clear(); auto outcome = client->ListBlobs(options); auto blob_list_response = client->ListBlobs(options); auto blobs_list = blob_list_response.Blobs; @@ -73,11 +73,11 @@ private: {}}); } - options.ContinuationToken = blob_list_response.NextPageToken; - if (blob_list_response.HasPage()) - return true; + if (!blob_list_response.NextPageToken.HasValue() || blob_list_response.NextPageToken.Value().empty()) + return false; - return false; + options.ContinuationToken = blob_list_response.NextPageToken; + return true; } std::shared_ptr client; @@ -215,27 +215,37 @@ std::unique_ptr AzureObjectStorage::readObjects( /// NOL settings_ptr->max_single_read_retries, settings_ptr->max_single_download_retries, /* use_external_buffer */true, + /* restricted_seek */true, read_until_position); }; - auto reader_impl = std::make_unique( - std::move(read_buffer_creator), - objects, - disk_read_settings, - global_context->getFilesystemCacheLog()); + switch (read_settings.remote_fs_method) + { + case RemoteFSReadMethod::read: + { + return std::make_unique( + std::move(read_buffer_creator), + objects, + disk_read_settings, + global_context->getFilesystemCacheLog(), + /* use_external_buffer */false); - if (disk_read_settings.remote_fs_method == RemoteFSReadMethod::threadpool) - { - auto & reader = global_context->getThreadPoolReader(FilesystemReaderType::ASYNCHRONOUS_REMOTE_FS_READER); - return std::make_unique( - std::move(reader_impl), reader, disk_read_settings, - global_context->getAsyncReadCounters(), - global_context->getFilesystemReadPrefetchesLog()); - } - else - { - auto buf = std::make_unique(std::move(reader_impl), disk_read_settings); - return std::make_unique(std::move(buf), settings_ptr->min_bytes_for_seek); + } + case RemoteFSReadMethod::threadpool: + { + auto impl = std::make_unique( + std::move(read_buffer_creator), + objects, + disk_read_settings, + global_context->getFilesystemCacheLog(), + /* use_external_buffer */true); + + auto & reader = global_context->getThreadPoolReader(FilesystemReaderType::ASYNCHRONOUS_REMOTE_FS_READER); + return std::make_unique( + std::move(impl), reader, disk_read_settings, + global_context->getAsyncReadCounters(), + global_context->getFilesystemReadPrefetchesLog()); + } } } diff --git a/src/Disks/ObjectStorages/AzureBlobStorage/AzureObjectStorage.h b/src/Disks/ObjectStorages/AzureBlobStorage/AzureObjectStorage.h index 5b08ceb80e3..b5f81cef235 100644 --- a/src/Disks/ObjectStorages/AzureBlobStorage/AzureObjectStorage.h +++ b/src/Disks/ObjectStorages/AzureBlobStorage/AzureObjectStorage.h @@ -5,13 +5,9 @@ #include #include -#include #include #include - -#if USE_AZURE_BLOB_STORAGE #include -#endif namespace 
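The listing fix above drives the Azure iterator off the `NextPageToken` continuation token: the output batch is cleared on every call, one page is fetched, and the call reports whether another page remains. A self-contained sketch of that contract against a fake in-memory paged source (real code calls `ListBlobs` and reads `NextPageToken`):

```cpp
// Continuation-token paging: fill `batch` with one page, return true if more pages remain.
#include <iostream>
#include <optional>
#include <string>
#include <vector>

struct Page
{
    std::vector<std::string> items;
    std::optional<std::string> next_token;  // absent/empty means "no more pages"
};

struct FakeService
{
    std::vector<Page> pages;
    size_t cursor = 0;
    Page list(const std::optional<std::string> & /*token*/) { return pages.at(cursor++); }
};

struct PagedIterator
{
    FakeService & service;
    std::optional<std::string> continuation_token;

    // Mirrors getBatchAndCheckNext: fills `batch`, returns true if another page is expected.
    bool getBatchAndCheckNext(std::vector<std::string> & batch)
    {
        batch.clear();  // important: do not accumulate results across calls
        Page page = service.list(continuation_token);
        batch = page.items;

        if (!page.next_token || page.next_token->empty())
            return false;                      // last page

        continuation_token = page.next_token;  // remember where to resume
        return true;
    }
};

int main()
{
    FakeService service{{{{"a", "b"}, std::string("t1")}, {{"c"}, std::nullopt}}};
    PagedIterator it{service};
    std::vector<std::string> batch;
    bool has_next = true;
    while (has_next)
    {
        has_next = it.getBatchAndCheckNext(batch);
        for (const auto & item : batch)
            std::cout << item << ' ';
    }
    std::cout << '\n';  // prints: a b c
    return 0;
}
```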
Poco { @@ -37,11 +33,13 @@ struct AzureObjectStorageSettings { } - size_t max_single_part_upload_size; /// NOTE: on 32-bit machines it will be at most 4GB, but size_t is also used in BufferBase for offset - uint64_t min_bytes_for_seek; - size_t max_single_read_retries; - size_t max_single_download_retries; - int list_object_keys_size; + AzureObjectStorageSettings() = default; + + size_t max_single_part_upload_size = 100 * 1024 * 1024; /// NOTE: on 32-bit machines it will be at most 4GB, but size_t is also used in BufferBase for offset + uint64_t min_bytes_for_seek = 1024 * 1024; + size_t max_single_read_retries = 3; + size_t max_single_download_retries = 3; + int list_object_keys_size = 1000; }; using AzureClient = Azure::Storage::Blobs::BlobContainerClient; diff --git a/src/Disks/ObjectStorages/HDFS/HDFSObjectStorage.cpp b/src/Disks/ObjectStorages/HDFS/HDFSObjectStorage.cpp index a3092bc6f12..60230ce2fb0 100644 --- a/src/Disks/ObjectStorages/HDFS/HDFSObjectStorage.cpp +++ b/src/Disks/ObjectStorages/HDFS/HDFSObjectStorage.cpp @@ -1,13 +1,10 @@ #include -#include #include - #include #include #include -#include #include #include @@ -72,9 +69,8 @@ std::unique_ptr HDFSObjectStorage::readObjects( /// NOLI hdfs_uri, hdfs_path, config, disk_read_settings, /* read_until_position */0, /* use_external_buffer */true); }; - auto hdfs_impl = std::make_unique(std::move(read_buffer_creator), objects, disk_read_settings, nullptr); - auto buf = std::make_unique(std::move(hdfs_impl), read_settings); - return std::make_unique(std::move(buf), settings->min_bytes_for_seek); + return std::make_unique( + std::move(read_buffer_creator), objects, disk_read_settings, nullptr, /* use_external_buffer */false); } std::unique_ptr HDFSObjectStorage::writeObject( /// NOLINT diff --git a/src/Disks/ObjectStorages/Local/LocalObjectStorage.cpp b/src/Disks/ObjectStorages/Local/LocalObjectStorage.cpp index 05c0c8f3961..69ccf309096 100644 --- a/src/Disks/ObjectStorages/Local/LocalObjectStorage.cpp +++ b/src/Disks/ObjectStorages/Local/LocalObjectStorage.cpp @@ -4,11 +4,9 @@ #include #include #include -#include #include #include #include -#include #include #include #include @@ -59,25 +57,26 @@ std::unique_ptr LocalObjectStorage::readObjects( /// NOL return createReadBufferFromFileBase(file_path, modified_settings, read_hint, file_size); }; - auto impl = std::make_unique( - std::move(read_buffer_creator), objects, modified_settings, - global_context->getFilesystemCacheLog()); + switch (read_settings.remote_fs_method) + { + case RemoteFSReadMethod::read: + { + return std::make_unique( + std::move(read_buffer_creator), objects, modified_settings, + global_context->getFilesystemCacheLog(), /* use_external_buffer */false); + } + case RemoteFSReadMethod::threadpool: + { + auto impl = std::make_unique( + std::move(read_buffer_creator), objects, modified_settings, + global_context->getFilesystemCacheLog(), /* use_external_buffer */true); - /// We use `remove_fs_method` (not `local_fs_method`) because we are about to use - /// AsynchronousBoundedReadBuffer which works by the remote_fs_* settings. 
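`AzureObjectStorageSettings` above gains in-class default member initializers, so a default-constructed settings object is usable even when a configuration section omits values. The defaults in this sketch mirror the ones visible in the diff; the surrounding code is illustrative:

```cpp
// Settings struct with sane in-class defaults: no config section is required for a
// working default-constructed object.
#include <cstddef>
#include <cstdint>
#include <iostream>

struct ObjectStorageSettings
{
    ObjectStorageSettings() = default;

    size_t max_single_part_upload_size = 100 * 1024 * 1024;  // 100 MiB
    uint64_t min_bytes_for_seek = 1024 * 1024;                // 1 MiB
    size_t max_single_read_retries = 3;
    size_t max_single_download_retries = 3;
    int list_object_keys_size = 1000;
};

int main()
{
    ObjectStorageSettings settings;
    std::cout << settings.list_object_keys_size << '\n';  // 1000
    return 0;
}
```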
- if (modified_settings.remote_fs_method == RemoteFSReadMethod::threadpool) - { - auto & reader = global_context->getThreadPoolReader(FilesystemReaderType::ASYNCHRONOUS_REMOTE_FS_READER); - return std::make_unique( - std::move(impl), reader, modified_settings, - global_context->getAsyncReadCounters(), - global_context->getFilesystemReadPrefetchesLog()); - } - else - { - auto buf = std::make_unique(std::move(impl), modified_settings); - return std::make_unique( - std::move(buf), modified_settings.remote_read_min_bytes_for_seek); + auto & reader = global_context->getThreadPoolReader(FilesystemReaderType::ASYNCHRONOUS_REMOTE_FS_READER); + return std::make_unique( + std::move(impl), reader, read_settings, + global_context->getAsyncReadCounters(), + global_context->getFilesystemReadPrefetchesLog()); + } } } diff --git a/src/Disks/ObjectStorages/ObjectStorageIterator.cpp b/src/Disks/ObjectStorages/ObjectStorageIterator.cpp index 188b743958c..72ec6e0e500 100644 --- a/src/Disks/ObjectStorages/ObjectStorageIterator.cpp +++ b/src/Disks/ObjectStorages/ObjectStorageIterator.cpp @@ -9,7 +9,7 @@ namespace ErrorCodes extern const int LOGICAL_ERROR; } -RelativePathWithMetadata ObjectStorageIteratorFromList::current() const +RelativePathWithMetadata ObjectStorageIteratorFromList::current() { if (!isValid()) throw Exception(ErrorCodes::LOGICAL_ERROR, "Trying to access invalid iterator"); diff --git a/src/Disks/ObjectStorages/ObjectStorageIterator.h b/src/Disks/ObjectStorages/ObjectStorageIterator.h index c3afd395a74..2ff5ce60acc 100644 --- a/src/Disks/ObjectStorages/ObjectStorageIterator.h +++ b/src/Disks/ObjectStorages/ObjectStorageIterator.h @@ -10,8 +10,10 @@ class IObjectStorageIterator { public: virtual void next() = 0; - virtual bool isValid() const = 0; - virtual RelativePathWithMetadata current() const = 0; + virtual void nextBatch() = 0; + virtual bool isValid() = 0; + virtual RelativePathWithMetadata current() = 0; + virtual RelativePathsWithMetadata currentBatch() = 0; virtual size_t getAccumulatedSize() const = 0; virtual ~IObjectStorageIterator() = default; @@ -34,12 +36,22 @@ public: ++batch_iterator; } - bool isValid() const override + void nextBatch() override + { + batch_iterator = batch.end(); + } + + bool isValid() override { return batch_iterator != batch.end(); } - RelativePathWithMetadata current() const override; + RelativePathWithMetadata current() override; + + RelativePathsWithMetadata currentBatch() override + { + return batch; + } size_t getAccumulatedSize() const override { diff --git a/src/Disks/ObjectStorages/ObjectStorageIteratorAsync.cpp b/src/Disks/ObjectStorages/ObjectStorageIteratorAsync.cpp index 766071cf815..f91c19f2fb9 100644 --- a/src/Disks/ObjectStorages/ObjectStorageIteratorAsync.cpp +++ b/src/Disks/ObjectStorages/ObjectStorageIteratorAsync.cpp @@ -1,5 +1,7 @@ #include +#include + namespace DB { @@ -8,6 +10,33 @@ namespace ErrorCodes extern const int LOGICAL_ERROR; } +void IObjectStorageIteratorAsync::nextBatch() +{ + std::lock_guard lock(mutex); + if (!is_finished) + { + if (!is_initialized) + { + outcome_future = scheduleBatch(); + is_initialized = true; + } + + BatchAndHasNext next_batch = outcome_future.get(); + current_batch = std::move(next_batch.batch); + accumulated_size.fetch_add(current_batch.size(), std::memory_order_relaxed); + current_batch_iterator = current_batch.begin(); + if (next_batch.has_next) + outcome_future = scheduleBatch(); + else + is_finished = true; + } + else + { + current_batch.clear(); + current_batch_iterator = current_batch.begin(); 
+ } +} + void IObjectStorageIteratorAsync::next() { std::lock_guard lock(mutex); @@ -43,19 +72,34 @@ std::future IObjectStorageIterator } -bool IObjectStorageIteratorAsync::isValid() const +bool IObjectStorageIteratorAsync::isValid() { + if (!is_initialized) + nextBatch(); + + std::lock_guard lock(mutex); return current_batch_iterator != current_batch.end(); } -RelativePathWithMetadata IObjectStorageIteratorAsync::current() const +RelativePathWithMetadata IObjectStorageIteratorAsync::current() { if (!isValid()) throw Exception(ErrorCodes::LOGICAL_ERROR, "Trying to access invalid iterator"); + std::lock_guard lock(mutex); return *current_batch_iterator; } + +RelativePathsWithMetadata IObjectStorageIteratorAsync::currentBatch() +{ + if (!isValid()) + throw Exception(ErrorCodes::LOGICAL_ERROR, "Trying to access invalid iterator"); + + std::lock_guard lock(mutex); + return current_batch; +} + size_t IObjectStorageIteratorAsync::getAccumulatedSize() const { return accumulated_size.load(std::memory_order_relaxed); diff --git a/src/Disks/ObjectStorages/ObjectStorageIteratorAsync.h b/src/Disks/ObjectStorages/ObjectStorageIteratorAsync.h index 81ba9bce137..a2b06da9a91 100644 --- a/src/Disks/ObjectStorages/ObjectStorageIteratorAsync.h +++ b/src/Disks/ObjectStorages/ObjectStorageIteratorAsync.h @@ -22,8 +22,10 @@ public: } void next() override; - bool isValid() const override; - RelativePathWithMetadata current() const override; + void nextBatch() override; + bool isValid() override; + RelativePathWithMetadata current() override; + RelativePathsWithMetadata currentBatch() override; size_t getAccumulatedSize() const override; ~IObjectStorageIteratorAsync() override @@ -43,9 +45,10 @@ protected: std::future scheduleBatch(); + bool is_initialized{false}; bool is_finished{false}; - std::mutex mutex; + mutable std::mutex mutex; ThreadPool list_objects_pool; ThreadPoolCallbackRunner list_objects_scheduler; std::future outcome_future; diff --git a/src/Disks/ObjectStorages/S3/S3ObjectStorage.cpp b/src/Disks/ObjectStorages/S3/S3ObjectStorage.cpp index d19be20f920..e48924326e1 100644 --- a/src/Disks/ObjectStorages/S3/S3ObjectStorage.cpp +++ b/src/Disks/ObjectStorages/S3/S3ObjectStorage.cpp @@ -8,11 +8,9 @@ #include #include #include -#include #include #include #include -#include #include #include #include @@ -182,24 +180,33 @@ std::unique_ptr S3ObjectStorage::readObjects( /// NOLINT /* restricted_seek */true); }; - auto s3_impl = std::make_unique( - std::move(read_buffer_creator), - objects, - disk_read_settings, - global_context->getFilesystemCacheLog()); + switch (read_settings.remote_fs_method) + { + case RemoteFSReadMethod::read: + { + return std::make_unique( + std::move(read_buffer_creator), + objects, + disk_read_settings, + global_context->getFilesystemCacheLog(), + /* use_external_buffer */false); - if (read_settings.remote_fs_method == RemoteFSReadMethod::threadpool) - { - auto & reader = global_context->getThreadPoolReader(FilesystemReaderType::ASYNCHRONOUS_REMOTE_FS_READER); - return std::make_unique( - std::move(s3_impl), reader, disk_read_settings, - global_context->getAsyncReadCounters(), - global_context->getFilesystemReadPrefetchesLog()); - } - else - { - auto buf = std::make_unique(std::move(s3_impl), disk_read_settings); - return std::make_unique(std::move(buf), settings_ptr->min_bytes_for_seek); + } + case RemoteFSReadMethod::threadpool: + { + auto impl = std::make_unique( + std::move(read_buffer_creator), + objects, + disk_read_settings, + global_context->getFilesystemCacheLog(), + /* 
use_external_buffer */true); + + auto & reader = global_context->getThreadPoolReader(FilesystemReaderType::ASYNCHRONOUS_REMOTE_FS_READER); + return std::make_unique( + std::move(impl), reader, disk_read_settings, + global_context->getAsyncReadCounters(), + global_context->getFilesystemReadPrefetchesLog()); + } } } diff --git a/src/Disks/ObjectStorages/Web/WebObjectStorage.cpp b/src/Disks/ObjectStorages/Web/WebObjectStorage.cpp index 4f34f3eed9c..690a0d3372c 100644 --- a/src/Disks/ObjectStorages/Web/WebObjectStorage.cpp +++ b/src/Disks/ObjectStorages/Web/WebObjectStorage.cpp @@ -4,11 +4,9 @@ #include #include -#include #include #include -#include #include #include #include @@ -181,24 +179,33 @@ std::unique_ptr WebObjectStorage::readObject( /// NOLINT }; auto global_context = Context::getGlobalContextInstance(); - auto web_impl = std::make_unique( - std::move(read_buffer_creator), - StoredObjects{object}, - read_settings, - global_context->getFilesystemCacheLog()); - if (read_settings.remote_fs_method == RemoteFSReadMethod::threadpool) + switch (read_settings.remote_fs_method) { - auto & reader = global_context->getThreadPoolReader(FilesystemReaderType::ASYNCHRONOUS_REMOTE_FS_READER); - return std::make_unique( - std::move(web_impl), reader, read_settings, - global_context->getAsyncReadCounters(), - global_context->getFilesystemReadPrefetchesLog()); - } - else - { - auto buf = std::make_unique(std::move(web_impl), read_settings); - return std::make_unique(std::move(buf), min_bytes_for_seek); + case RemoteFSReadMethod::read: + { + return std::make_unique( + std::move(read_buffer_creator), + StoredObjects{object}, + read_settings, + global_context->getFilesystemCacheLog(), + /* use_external_buffer */false); + } + case RemoteFSReadMethod::threadpool: + { + auto impl = std::make_unique( + std::move(read_buffer_creator), + StoredObjects{object}, + read_settings, + global_context->getFilesystemCacheLog(), + /* use_external_buffer */true); + + auto & reader = global_context->getThreadPoolReader(FilesystemReaderType::ASYNCHRONOUS_REMOTE_FS_READER); + return std::make_unique( + std::move(impl), reader, read_settings, + global_context->getAsyncReadCounters(), + global_context->getFilesystemReadPrefetchesLog()); + } } } diff --git a/src/Functions/FunctionDateOrDateTimeAddInterval.h b/src/Functions/FunctionDateOrDateTimeAddInterval.h index 507dc37e266..1546c24d30c 100644 --- a/src/Functions/FunctionDateOrDateTimeAddInterval.h +++ b/src/Functions/FunctionDateOrDateTimeAddInterval.h @@ -679,7 +679,7 @@ public: } else if constexpr (std::is_same_v) { - return std::make_shared(extractTimeZoneNameFromFunctionArguments(arguments, 2, 0)); + return std::make_shared(extractTimeZoneNameFromFunctionArguments(arguments, 2, 0, false)); } else if constexpr (std::is_same_v) { @@ -696,7 +696,7 @@ public: return {}; }); - auto timezone = extractTimeZoneNameFromFunctionArguments(arguments, 2, 0); + auto timezone = extractTimeZoneNameFromFunctionArguments(arguments, 2, 0, false); if (const auto* datetime64_type = typeid_cast(arguments[0].type.get())) { const auto from_scale = datetime64_type->getScale(); diff --git a/src/Functions/FunctionDateOrDateTimeToDateOrDate32.h b/src/Functions/FunctionDateOrDateTimeToDateOrDate32.h index 8e006b93b98..6eb3e534b62 100644 --- a/src/Functions/FunctionDateOrDateTimeToDateOrDate32.h +++ b/src/Functions/FunctionDateOrDateTimeToDateOrDate32.h @@ -36,7 +36,7 @@ public: /// If the time zone is specified but empty, throw an exception. 
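Across the Azure, Local, S3, and Web read paths above, the `if`/`else` on the read method is replaced by an exhaustive `switch` over `RemoteFSReadMethod` with no `default` branch, so adding a new enum value forces every such call site to be revisited (the compiler flags the missing case with `-Wswitch`). A sketch of that dispatch style with illustrative names:

```cpp
// Exhaustive switch over an enum, no default: missing cases become compiler warnings.
#include <iostream>
#include <memory>
#include <string>

enum class ReadMethod { read, threadpool };

struct Reader { std::string kind; };

std::unique_ptr<Reader> makeReader(ReadMethod method)
{
    switch (method)  // no default branch on purpose
    {
        case ReadMethod::read:
            return std::make_unique<Reader>(Reader{"synchronous"});
        case ReadMethod::threadpool:
            return std::make_unique<Reader>(Reader{"asynchronous (thread pool)"});
    }
    return nullptr;  // unreachable for valid enum values
}

int main()
{
    std::cout << makeReader(ReadMethod::threadpool)->kind << '\n';
    return 0;
}
```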
/// only validate the time_zone part if the number of arguments is 2. if ((which.isDateTime() || which.isDateTime64()) && arguments.size() == 2 - && extractTimeZoneNameFromFunctionArguments(arguments, 1, 0).empty()) + && extractTimeZoneNameFromFunctionArguments(arguments, 1, 0, false).empty()) throw Exception(ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT, "Function {} supports a 2nd argument (optional) that must be a valid time zone", this->getName()); diff --git a/src/Functions/FunctionDateOrDateTimeToDateTimeOrDateTime64.h b/src/Functions/FunctionDateOrDateTimeToDateTimeOrDateTime64.h index 3d1f0f192cf..9f1066fd687 100644 --- a/src/Functions/FunctionDateOrDateTimeToDateTimeOrDateTime64.h +++ b/src/Functions/FunctionDateOrDateTimeToDateTimeOrDateTime64.h @@ -34,7 +34,7 @@ public: WhichDataType which(from_type); - std::string time_zone = extractTimeZoneNameFromFunctionArguments(arguments, 1, 0); + std::string time_zone = extractTimeZoneNameFromFunctionArguments(arguments, 1, 0, false); /// If the time zone is specified but empty, throw an exception. /// only validate the time_zone part if the number of arguments is 2. diff --git a/src/Functions/FunctionDateOrDateTimeToSomething.h b/src/Functions/FunctionDateOrDateTimeToSomething.h index 47433d13e0b..82818cc3d2b 100644 --- a/src/Functions/FunctionDateOrDateTimeToSomething.h +++ b/src/Functions/FunctionDateOrDateTimeToSomething.h @@ -24,7 +24,7 @@ public: /// If the time zone is specified but empty, throw an exception. if constexpr (std::is_same_v) { - std::string time_zone = extractTimeZoneNameFromFunctionArguments(arguments, 1, 0); + std::string time_zone = extractTimeZoneNameFromFunctionArguments(arguments, 1, 0, false); /// only validate the time_zone part if the number of arguments is 2. This is mainly /// to accommodate functions like toStartOfDay(today()), toStartOfDay(yesterday()) etc. 
if (arguments.size() == 2 && time_zone.empty()) @@ -53,7 +53,7 @@ public: scale = std::max(source_scale, static_cast(9)); } - return std::make_shared(scale, extractTimeZoneNameFromFunctionArguments(arguments, 1, 0)); + return std::make_shared(scale, extractTimeZoneNameFromFunctionArguments(arguments, 1, 0, false)); } else return std::make_shared(); diff --git a/src/Functions/FunctionSnowflake.h b/src/Functions/FunctionSnowflake.h index 998db98890a..ce3a48269b4 100644 --- a/src/Functions/FunctionSnowflake.h +++ b/src/Functions/FunctionSnowflake.h @@ -6,6 +6,7 @@ #include #include #include +#include #include @@ -72,9 +73,13 @@ class FunctionSnowflakeToDateTime : public IFunction { private: const char * name; + const bool allow_nonconst_timezone_arguments; public: - explicit FunctionSnowflakeToDateTime(const char * name_) : name(name_) { } + explicit FunctionSnowflakeToDateTime(const char * name_, ContextPtr context) + : name(name_) + , allow_nonconst_timezone_arguments(context->getSettings().allow_nonconst_timezone_arguments) + {} String getName() const override { return name; } size_t getNumberOfArguments() const override { return 0; } @@ -92,7 +97,7 @@ public: std::string timezone; if (arguments.size() == 2) - timezone = extractTimeZoneNameFromFunctionArguments(arguments, 1, 0); + timezone = extractTimeZoneNameFromFunctionArguments(arguments, 1, 0, allow_nonconst_timezone_arguments); return std::make_shared(timezone); } @@ -162,9 +167,13 @@ class FunctionSnowflakeToDateTime64 : public IFunction { private: const char * name; + const bool allow_nonconst_timezone_arguments; public: - explicit FunctionSnowflakeToDateTime64(const char * name_) : name(name_) { } + explicit FunctionSnowflakeToDateTime64(const char * name_, ContextPtr context) + : name(name_) + , allow_nonconst_timezone_arguments(context->getSettings().allow_nonconst_timezone_arguments) + {} String getName() const override { return name; } size_t getNumberOfArguments() const override { return 0; } @@ -182,7 +191,7 @@ public: std::string timezone; if (arguments.size() == 2) - timezone = extractTimeZoneNameFromFunctionArguments(arguments, 1, 0); + timezone = extractTimeZoneNameFromFunctionArguments(arguments, 1, 0, allow_nonconst_timezone_arguments); return std::make_shared(3, timezone); } diff --git a/src/Functions/FunctionUnixTimestamp64.h b/src/Functions/FunctionUnixTimestamp64.h index 7519e46f9dc..58a23f7266e 100644 --- a/src/Functions/FunctionUnixTimestamp64.h +++ b/src/Functions/FunctionUnixTimestamp64.h @@ -6,6 +6,7 @@ #include #include #include +#include #include @@ -99,11 +100,13 @@ class FunctionFromUnixTimestamp64 : public IFunction private: size_t target_scale; const char * name; + const bool allow_nonconst_timezone_arguments; public: - FunctionFromUnixTimestamp64(size_t target_scale_, const char * name_) - : target_scale(target_scale_), name(name_) - { - } + FunctionFromUnixTimestamp64(size_t target_scale_, const char * name_, ContextPtr context) + : target_scale(target_scale_) + , name(name_) + , allow_nonconst_timezone_arguments(context->getSettings().allow_nonconst_timezone_arguments) + {} String getName() const override { return name; } size_t getNumberOfArguments() const override { return 0; } @@ -121,7 +124,7 @@ public: std::string timezone; if (arguments.size() == 2) - timezone = extractTimeZoneNameFromFunctionArguments(arguments, 1, 0); + timezone = extractTimeZoneNameFromFunctionArguments(arguments, 1, 0, allow_nonconst_timezone_arguments); return std::make_shared(target_scale, timezone); } diff --git 
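Several function classes above (`FunctionSnowflakeToDateTime`, `FunctionFromUnixTimestamp64`, and later `now`/`now64`/`nowInBlock`/`toTimeZone`) start capturing the `allow_nonconst_timezone_arguments` setting from the `Context` at construction time and pass it down to `extractTimeZoneNameFromFunctionArguments`. A minimal stand-in for that capture-a-setting-at-creation pattern; the types here are simplified placeholders, not the real `Context`/`Settings`:

```cpp
// Capture a per-query setting once, when the function object is created, and let it
// steer later behavior (here: whether a non-const timezone argument is tolerated).
#include <iostream>
#include <memory>

struct Settings { bool allow_nonconst_timezone_arguments = false; };
struct Context  { Settings settings; };
using ContextPtr = std::shared_ptr<const Context>;

class SnowflakeToDateTimeLike
{
public:
    explicit SnowflakeToDateTimeLike(ContextPtr context)
        : allow_nonconst_timezone_arguments(context->settings.allow_nonconst_timezone_arguments)
    {}

    bool timezoneArgMayBeNonConst() const { return allow_nonconst_timezone_arguments; }

private:
    const bool allow_nonconst_timezone_arguments;  // captured once at creation
};

int main()
{
    auto ctx = std::make_shared<const Context>(Context{Settings{true}});
    SnowflakeToDateTimeLike fn(ctx);
    std::cout << fn.timezoneArgMayBeNonConst() << '\n';  // 1
    return 0;
}
```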
a/src/Functions/FunctionsConversion.h b/src/Functions/FunctionsConversion.h index 940585d6d57..87229b8ad04 100644 --- a/src/Functions/FunctionsConversion.h +++ b/src/Functions/FunctionsConversion.h @@ -1796,13 +1796,13 @@ public: if (to_datetime64 || scale != 0) /// toDateTime('xxxx-xx-xx xx:xx:xx', 0) return DateTime return std::make_shared(scale, - extractTimeZoneNameFromFunctionArguments(arguments, timezone_arg_position, 0)); + extractTimeZoneNameFromFunctionArguments(arguments, timezone_arg_position, 0, false)); - return std::make_shared(extractTimeZoneNameFromFunctionArguments(arguments, timezone_arg_position, 0)); + return std::make_shared(extractTimeZoneNameFromFunctionArguments(arguments, timezone_arg_position, 0, false)); } if constexpr (std::is_same_v) - return std::make_shared(extractTimeZoneNameFromFunctionArguments(arguments, timezone_arg_position, 0)); + return std::make_shared(extractTimeZoneNameFromFunctionArguments(arguments, timezone_arg_position, 0, false)); else if constexpr (std::is_same_v) throw Exception(ErrorCodes::LOGICAL_ERROR, "Unexpected branch in code of conversion function: it is a bug."); else @@ -2067,7 +2067,7 @@ public: UInt64 scale = to_datetime64 ? DataTypeDateTime64::default_scale : 0; if (arguments.size() > 1) scale = extractToDecimalScale(arguments[1]); - const auto timezone = extractTimeZoneNameFromFunctionArguments(arguments, 2, 0); + const auto timezone = extractTimeZoneNameFromFunctionArguments(arguments, 2, 0, false); res = scale == 0 ? res = std::make_shared(timezone) : std::make_shared(scale, timezone); } @@ -2117,7 +2117,7 @@ public: } if constexpr (std::is_same_v) - res = std::make_shared(extractTimeZoneNameFromFunctionArguments(arguments, 1, 0)); + res = std::make_shared(extractTimeZoneNameFromFunctionArguments(arguments, 1, 0, false)); else if constexpr (std::is_same_v) throw Exception(ErrorCodes::LOGICAL_ERROR, "LOGICAL ERROR: It is a bug."); else if constexpr (to_decimal) diff --git a/src/Functions/FunctionsTimeWindow.cpp b/src/Functions/FunctionsTimeWindow.cpp index 8a57a4da692..231e8b6fa77 100644 --- a/src/Functions/FunctionsTimeWindow.cpp +++ b/src/Functions/FunctionsTimeWindow.cpp @@ -138,7 +138,7 @@ struct TimeWindowImpl if (result_type_is_date) data_type = std::make_shared(); else - data_type = std::make_shared(extractTimeZoneNameFromFunctionArguments(arguments, 2, 0)); + data_type = std::make_shared(extractTimeZoneNameFromFunctionArguments(arguments, 2, 0, false)); return std::make_shared(DataTypes{data_type, data_type}); } @@ -322,7 +322,7 @@ struct TimeWindowImpl if (result_type_is_date) data_type = std::make_shared(); else - data_type = std::make_shared(extractTimeZoneNameFromFunctionArguments(arguments, 3, 0)); + data_type = std::make_shared(extractTimeZoneNameFromFunctionArguments(arguments, 3, 0, false)); return std::make_shared(DataTypes{data_type, data_type}); } diff --git a/src/Functions/date_trunc.cpp b/src/Functions/date_trunc.cpp index 016b8f4da5e..414512fc4f8 100644 --- a/src/Functions/date_trunc.cpp +++ b/src/Functions/date_trunc.cpp @@ -107,7 +107,7 @@ public: if (result_type_is_date) return std::make_shared(); else - return std::make_shared(extractTimeZoneNameFromFunctionArguments(arguments, 2, 1)); + return std::make_shared(extractTimeZoneNameFromFunctionArguments(arguments, 2, 1, false)); } bool useDefaultImplementationForConstants() const override { return true; } diff --git a/src/Functions/extractTimeZoneFromFunctionArguments.cpp b/src/Functions/extractTimeZoneFromFunctionArguments.cpp index 
7ed240fdbcf..7168c68c9c9 100644 --- a/src/Functions/extractTimeZoneFromFunctionArguments.cpp +++ b/src/Functions/extractTimeZoneFromFunctionArguments.cpp @@ -30,10 +30,11 @@ std::string extractTimeZoneNameFromColumn(const IColumn * column, const String & } -std::string extractTimeZoneNameFromFunctionArguments(const ColumnsWithTypeAndName & arguments, size_t time_zone_arg_num, size_t datetime_arg_num) +std::string extractTimeZoneNameFromFunctionArguments(const ColumnsWithTypeAndName & arguments, size_t time_zone_arg_num, size_t datetime_arg_num, bool allow_nonconst_timezone_arguments) { /// Explicit time zone may be passed in last argument. - if (arguments.size() == time_zone_arg_num + 1) + if ((arguments.size() == time_zone_arg_num + 1) + && (!allow_nonconst_timezone_arguments || arguments[time_zone_arg_num].column)) { return extractTimeZoneNameFromColumn(arguments[time_zone_arg_num].column.get(), arguments[time_zone_arg_num].name); } diff --git a/src/Functions/extractTimeZoneFromFunctionArguments.h b/src/Functions/extractTimeZoneFromFunctionArguments.h index 858be40def7..3c012c64c49 100644 --- a/src/Functions/extractTimeZoneFromFunctionArguments.h +++ b/src/Functions/extractTimeZoneFromFunctionArguments.h @@ -16,8 +16,16 @@ std::string extractTimeZoneNameFromColumn(const IColumn * column, const String & /// Determine working timezone either from optional argument with time zone name or from time zone in DateTime type of argument. /// Returns empty string if default time zone should be used. +/// +/// Parameter allow_nonconst_timezone_arguments toggles if non-const timezone function arguments are accepted (legacy behavior) or not. The +/// problem with the old behavior is that the timezone is part of the type, and not part of the value. This lead to confusion and unexpected +/// results. +/// - For new functions, set allow_nonconst_timezone_arguments = false. +/// - For existing functions +/// - which disallow non-const timezone arguments anyways (e.g. getArgumentsThatAreAlwaysConstant()), set allow_nonconst_timezone_arguments = false, +/// - which allow non-const timezone arguments, set allow_nonconst_timezone_arguments according to the corresponding setting. 
std::string extractTimeZoneNameFromFunctionArguments( - const ColumnsWithTypeAndName & arguments, size_t time_zone_arg_num, size_t datetime_arg_num); + const ColumnsWithTypeAndName & arguments, size_t time_zone_arg_num, size_t datetime_arg_num, bool allow_nonconst_timezone_arguments); const DateLUTImpl & extractTimeZoneFromFunctionArguments( const ColumnsWithTypeAndName & arguments, size_t time_zone_arg_num, size_t datetime_arg_num); diff --git a/src/Functions/fromUnixTimestamp64Micro.cpp b/src/Functions/fromUnixTimestamp64Micro.cpp index 70dcbcd1d4b..191e2137a0d 100644 --- a/src/Functions/fromUnixTimestamp64Micro.cpp +++ b/src/Functions/fromUnixTimestamp64Micro.cpp @@ -7,8 +7,8 @@ namespace DB REGISTER_FUNCTION(FromUnixTimestamp64Micro) { factory.registerFunction("fromUnixTimestamp64Micro", - [](ContextPtr){ return std::make_unique( - std::make_shared(6, "fromUnixTimestamp64Micro")); }); + [](ContextPtr context){ return std::make_unique( + std::make_shared(6, "fromUnixTimestamp64Micro", context)); }); } } diff --git a/src/Functions/fromUnixTimestamp64Milli.cpp b/src/Functions/fromUnixTimestamp64Milli.cpp index 532013dfe5f..c6d4fcd30a2 100644 --- a/src/Functions/fromUnixTimestamp64Milli.cpp +++ b/src/Functions/fromUnixTimestamp64Milli.cpp @@ -7,8 +7,8 @@ namespace DB REGISTER_FUNCTION(FromUnixTimestamp64Milli) { factory.registerFunction("fromUnixTimestamp64Milli", - [](ContextPtr){ return std::make_unique( - std::make_shared(3, "fromUnixTimestamp64Milli")); }); + [](ContextPtr context){ return std::make_unique( + std::make_shared(3, "fromUnixTimestamp64Milli", context)); }); } } diff --git a/src/Functions/fromUnixTimestamp64Nano.cpp b/src/Functions/fromUnixTimestamp64Nano.cpp index 96afdda0fa8..2b5a7addbfc 100644 --- a/src/Functions/fromUnixTimestamp64Nano.cpp +++ b/src/Functions/fromUnixTimestamp64Nano.cpp @@ -7,8 +7,8 @@ namespace DB REGISTER_FUNCTION(FromUnixTimestamp64Nano) { factory.registerFunction("fromUnixTimestamp64Nano", - [](ContextPtr){ return std::make_unique( - std::make_shared(9, "fromUnixTimestamp64Nano")); }); + [](ContextPtr context){ return std::make_unique( + std::make_shared(9, "fromUnixTimestamp64Nano", context)); }); } } diff --git a/src/Functions/now.cpp b/src/Functions/now.cpp index 3c3bff1524f..d3a94379a61 100644 --- a/src/Functions/now.cpp +++ b/src/Functions/now.cpp @@ -5,6 +5,7 @@ #include #include #include +#include namespace DB { @@ -87,7 +88,10 @@ public: bool isVariadic() const override { return true; } size_t getNumberOfArguments() const override { return 0; } - static FunctionOverloadResolverPtr create(ContextPtr) { return std::make_unique(); } + static FunctionOverloadResolverPtr create(ContextPtr context) { return std::make_unique(context); } + explicit NowOverloadResolver(ContextPtr context) + : allow_nonconst_timezone_arguments(context->getSettings().allow_nonconst_timezone_arguments) + {} DataTypePtr getReturnTypeImpl(const ColumnsWithTypeAndName & arguments) const override { @@ -102,7 +106,7 @@ public: } if (arguments.size() == 1) { - return std::make_shared(extractTimeZoneNameFromFunctionArguments(arguments, 0, 0)); + return std::make_shared(extractTimeZoneNameFromFunctionArguments(arguments, 0, 0, allow_nonconst_timezone_arguments)); } return std::make_shared(); } @@ -121,10 +125,12 @@ public: if (arguments.size() == 1) return std::make_unique( time(nullptr), DataTypes{arguments.front().type}, - std::make_shared(extractTimeZoneNameFromFunctionArguments(arguments, 0, 0))); + std::make_shared(extractTimeZoneNameFromFunctionArguments(arguments, 0, 0, 
allow_nonconst_timezone_arguments))); return std::make_unique(time(nullptr), DataTypes(), std::make_shared()); } +private: + const bool allow_nonconst_timezone_arguments; }; } diff --git a/src/Functions/now64.cpp b/src/Functions/now64.cpp index f29b73061d9..349b8c71145 100644 --- a/src/Functions/now64.cpp +++ b/src/Functions/now64.cpp @@ -5,6 +5,7 @@ #include #include #include +#include #include @@ -115,7 +116,10 @@ public: bool isVariadic() const override { return true; } size_t getNumberOfArguments() const override { return 0; } - static FunctionOverloadResolverPtr create(ContextPtr) { return std::make_unique(); } + static FunctionOverloadResolverPtr create(ContextPtr context) { return std::make_unique(context); } + explicit Now64OverloadResolver(ContextPtr context) + : allow_nonconst_timezone_arguments(context->getSettings().allow_nonconst_timezone_arguments) + {} DataTypePtr getReturnTypeImpl(const ColumnsWithTypeAndName & arguments) const override { @@ -137,7 +141,7 @@ public: } if (arguments.size() == 2) { - timezone_name = extractTimeZoneNameFromFunctionArguments(arguments, 1, 0); + timezone_name = extractTimeZoneNameFromFunctionArguments(arguments, 1, 0, allow_nonconst_timezone_arguments); } return std::make_shared(scale, timezone_name); @@ -157,6 +161,8 @@ public: return std::make_unique(nowSubsecond(scale), std::move(arg_types), result_type); } +private: + const bool allow_nonconst_timezone_arguments; }; } diff --git a/src/Functions/nowInBlock.cpp b/src/Functions/nowInBlock.cpp index dfb3ed7c34a..0d5f9c45780 100644 --- a/src/Functions/nowInBlock.cpp +++ b/src/Functions/nowInBlock.cpp @@ -4,6 +4,7 @@ #include #include #include +#include namespace DB @@ -25,10 +26,13 @@ class FunctionNowInBlock : public IFunction { public: static constexpr auto name = "nowInBlock"; - static FunctionPtr create(ContextPtr) + static FunctionPtr create(ContextPtr context) { - return std::make_shared(); + return std::make_shared(context); } + explicit FunctionNowInBlock(ContextPtr context) + : allow_nonconst_timezone_arguments(context->getSettings().allow_nonconst_timezone_arguments) + {} String getName() const override { @@ -68,7 +72,7 @@ public: } if (arguments.size() == 1) { - return std::make_shared(extractTimeZoneNameFromFunctionArguments(arguments, 0, 0)); + return std::make_shared(extractTimeZoneNameFromFunctionArguments(arguments, 0, 0, allow_nonconst_timezone_arguments)); } return std::make_shared(); } @@ -77,6 +81,8 @@ public: { return ColumnDateTime::create(input_rows_count, static_cast(time(nullptr))); } +private: + const bool allow_nonconst_timezone_arguments; }; } diff --git a/src/Functions/snowflake.cpp b/src/Functions/snowflake.cpp index 4849d6512ca..c18f1c03332 100644 --- a/src/Functions/snowflake.cpp +++ b/src/Functions/snowflake.cpp @@ -21,14 +21,14 @@ REGISTER_FUNCTION(DateTime64ToSnowflake) REGISTER_FUNCTION(SnowflakeToDateTime) { factory.registerFunction("snowflakeToDateTime", - [](ContextPtr){ return std::make_unique( - std::make_shared("snowflakeToDateTime")); }); + [](ContextPtr context){ return std::make_unique( + std::make_shared("snowflakeToDateTime", context)); }); } REGISTER_FUNCTION(SnowflakeToDateTime64) { factory.registerFunction("snowflakeToDateTime64", - [](ContextPtr){ return std::make_unique( - std::make_shared("snowflakeToDateTime64")); }); + [](ContextPtr context){ return std::make_unique( + std::make_shared("snowflakeToDateTime64", context)); }); } } diff --git a/src/Functions/timeSlots.cpp b/src/Functions/timeSlots.cpp index 568ab5e5a47..040495ab023 100644 --- 
a/src/Functions/timeSlots.cpp +++ b/src/Functions/timeSlots.cpp @@ -270,14 +270,14 @@ public: /// Note that there is no explicit time zone argument for this function (we specify 2 as an argument number with explicit time zone). if (WhichDataType(arguments[0].type).isDateTime()) { - return std::make_shared(std::make_shared(extractTimeZoneNameFromFunctionArguments(arguments, 3, 0))); + return std::make_shared(std::make_shared(extractTimeZoneNameFromFunctionArguments(arguments, 3, 0, false))); } else { auto start_time_scale = assert_cast(*arguments[0].type).getScale(); auto duration_scale = assert_cast(*arguments[1].type).getScale(); return std::make_shared( - std::make_shared(std::max(start_time_scale, duration_scale), extractTimeZoneNameFromFunctionArguments(arguments, 3, 0))); + std::make_shared(std::max(start_time_scale, duration_scale), extractTimeZoneNameFromFunctionArguments(arguments, 3, 0, false))); } } diff --git a/src/Functions/toStartOfInterval.cpp b/src/Functions/toStartOfInterval.cpp index c0220f1aed2..649242d0d86 100644 --- a/src/Functions/toStartOfInterval.cpp +++ b/src/Functions/toStartOfInterval.cpp @@ -384,7 +384,7 @@ public: if (result_type_is_date) return std::make_shared(); else if (result_type_is_datetime) - return std::make_shared(extractTimeZoneNameFromFunctionArguments(arguments, 2, 0)); + return std::make_shared(extractTimeZoneNameFromFunctionArguments(arguments, 2, 0, false)); else { auto scale = 0; @@ -396,7 +396,7 @@ public: else if (interval_type->getKind() == IntervalKind::Millisecond) scale = 3; - return std::make_shared(scale, extractTimeZoneNameFromFunctionArguments(arguments, 2, 0)); + return std::make_shared(scale, extractTimeZoneNameFromFunctionArguments(arguments, 2, 0, false)); } } diff --git a/src/Functions/toTimezone.cpp b/src/Functions/toTimezone.cpp index 0a54e5a86b7..a0d90351898 100644 --- a/src/Functions/toTimezone.cpp +++ b/src/Functions/toTimezone.cpp @@ -5,6 +5,8 @@ #include #include +#include + #include #include @@ -84,7 +86,10 @@ public: String getName() const override { return name; } size_t getNumberOfArguments() const override { return 2; } - static FunctionOverloadResolverPtr create(ContextPtr) { return std::make_unique(); } + static FunctionOverloadResolverPtr create(ContextPtr context) { return std::make_unique(context); } + explicit ToTimeZoneOverloadResolver(ContextPtr context) + : allow_nonconst_timezone_arguments(context->getSettings().allow_nonconst_timezone_arguments) + {} DataTypePtr getReturnTypeImpl(const ColumnsWithTypeAndName & arguments) const override { @@ -98,7 +103,7 @@ public: throw Exception(ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT, "Illegal type {} of argument of function {}. " "Should be DateTime or DateTime64", arguments[0].type->getName(), getName()); - String time_zone_name = extractTimeZoneNameFromFunctionArguments(arguments, 1, 0); + String time_zone_name = extractTimeZoneNameFromFunctionArguments(arguments, 1, 0, allow_nonconst_timezone_arguments); if (which_type.isDateTime()) return std::make_shared(time_zone_name); @@ -119,6 +124,8 @@ public: return std::make_unique(is_constant_timezone, data_types, result_type); } +private: + const bool allow_nonconst_timezone_arguments; }; } diff --git a/src/IO/HTTPCommon.cpp b/src/IO/HTTPCommon.cpp index 4bea646a42b..3ec9b3d0a83 100644 --- a/src/IO/HTTPCommon.cpp +++ b/src/IO/HTTPCommon.cpp @@ -68,7 +68,8 @@ namespace if (https) { #if USE_SSL - String resolved_host = resolve_host ? 
DNSResolver::instance().resolveHost(host).toString() : host; + /// Cannot resolve host in advance, otherwise SNI won't work in Poco. + /// For more information about SNI, see the https://en.wikipedia.org/wiki/Server_Name_Indication auto https_session = std::make_shared(host, port); if (resolve_host) https_session->setResolvedHost(DNSResolver::instance().resolveHost(host).toString()); @@ -184,6 +185,24 @@ namespace std::mutex mutex; std::unordered_map endpoints_pool; + void updateHostIfIpChanged(Entry & session, const String & new_ip) + { + const auto old_ip = session->getResolvedHost().empty() ? session->getHost() : session->getResolvedHost(); + + if (new_ip != old_ip) + { + session->reset(); + if (session->getResolvedHost().empty()) + { + session->setHost(new_ip); + } + else + { + session->setResolvedHost(new_ip); + } + } + } + protected: HTTPSessionPool() = default; @@ -238,13 +257,7 @@ namespace if (resolve_host) { - /// Host can change IP - const auto ip = DNSResolver::instance().resolveHost(host).toString(); - if (ip != session->getHost()) - { - session->reset(); - session->setHost(ip); - } + updateHostIfIpChanged(session, DNSResolver::instance().resolveHost(host).toString()); } } /// Reset the message, once it has been printed, diff --git a/src/IO/SeekAvoidingReadBuffer.cpp b/src/IO/SeekAvoidingReadBuffer.cpp deleted file mode 100644 index 4d6406d8ddf..00000000000 --- a/src/IO/SeekAvoidingReadBuffer.cpp +++ /dev/null @@ -1,35 +0,0 @@ -#include - - -namespace DB -{ - -SeekAvoidingReadBuffer::SeekAvoidingReadBuffer(std::unique_ptr impl_, UInt64 min_bytes_for_seek_) - : ReadBufferFromFileDecorator(std::move(impl_)) - , min_bytes_for_seek(min_bytes_for_seek_) -{ -} - - -off_t SeekAvoidingReadBuffer::seek(off_t off, int whence) -{ - off_t position = getPosition(); - - if (whence == SEEK_CUR) - { - off += position; - whence = SEEK_SET; - } - - if (whence == SEEK_SET && off >= position && off < position + static_cast(min_bytes_for_seek)) - { - swap(*impl); - impl->ignore(off - position); - swap(*impl); - return off; - } - - return ReadBufferFromFileDecorator::seek(off, whence); -} - -} diff --git a/src/IO/SeekAvoidingReadBuffer.h b/src/IO/SeekAvoidingReadBuffer.h deleted file mode 100644 index 716d7c5046c..00000000000 --- a/src/IO/SeekAvoidingReadBuffer.h +++ /dev/null @@ -1,26 +0,0 @@ -#pragma once - -#include - - -namespace DB -{ - -/// `SeekAvoidingReadBuffer` prefers sequential reads over seeks within specified window. -/// It is useful in network and spinning disk storage media when seek is relatively expensive -/// operation. -/// See also: `merge_tree_min_rows_for_seek`. -class SeekAvoidingReadBuffer : public ReadBufferFromFileDecorator -{ -public: - SeekAvoidingReadBuffer(std::unique_ptr impl_, UInt64 min_bytes_for_seek_); - - off_t seek(off_t off, int whence) override; - - void prefetch(Priority priority) override { impl->prefetch(priority); } - -private: - UInt64 min_bytes_for_seek; /// Minimum positive seek offset which shall be executed using seek operation. 
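`updateHostIfIpChanged()` above keeps the original hostname (needed for TLS SNI) separate from the resolved IP and, when DNS starts returning a different address, resets the pooled session and replaces only whichever of the two is actually in use. A self-contained rendering of that logic with a toy session type:

```cpp
// Replace only the field the session actually connects with; keep the hostname for SNI.
#include <iostream>
#include <string>

struct Session
{
    std::string host;           // hostname as given by the user (used for SNI)
    std::string resolved_host;  // resolved IP, empty if the session connects by hostname
    bool was_reset = false;

    void reset() { was_reset = true; }  // stand-in for dropping the pooled connection
};

void updateHostIfIpChanged(Session & session, const std::string & new_ip)
{
    const std::string old_ip = session.resolved_host.empty() ? session.host : session.resolved_host;

    if (new_ip != old_ip)
    {
        session.reset();                     // the connection points at a stale address
        if (session.resolved_host.empty())
            session.host = new_ip;           // plain session: swap the host itself
        else
            session.resolved_host = new_ip;  // SNI session: keep hostname, swap the resolved IP
    }
}

int main()
{
    Session s{"example.com", "203.0.113.1"};
    updateHostIfIpChanged(s, "203.0.113.2");
    std::cout << s.host << ' ' << s.resolved_host << ' ' << s.was_reset << '\n';
    // example.com 203.0.113.2 1
    return 0;
}
```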
-}; - -} diff --git a/src/Interpreters/ActionsDAG.cpp b/src/Interpreters/ActionsDAG.cpp index cbf6cc1cbe3..94bdca60e69 100644 --- a/src/Interpreters/ActionsDAG.cpp +++ b/src/Interpreters/ActionsDAG.cpp @@ -598,6 +598,8 @@ Block ActionsDAG::updateHeader(Block header) const } ColumnsWithTypeAndName result_columns; + + result_columns.reserve(outputs.size()); struct Frame diff --git a/src/Interpreters/Cache/FileCache.cpp b/src/Interpreters/Cache/FileCache.cpp index 50e4f6e7580..e6bc8b1f79b 100644 --- a/src/Interpreters/Cache/FileCache.cpp +++ b/src/Interpreters/Cache/FileCache.cpp @@ -988,17 +988,18 @@ void FileCache::cleanup() void FileCache::cleanupThreadFunc() { -#ifndef NDEBUG - assertCacheCorrectness(); -#endif - try { +#ifdef ABORT_ON_LOGICAL_ERROR + assertCacheCorrectness(); +#endif + cleanup(); } catch (...) { tryLogCurrentException(__PRETTY_FUNCTION__); + chassert(false); } cleanup_task->scheduleAfter(delayed_cleanup_interval_ms); diff --git a/src/Interpreters/Cache/FileCache_fwd.h b/src/Interpreters/Cache/FileCache_fwd.h index dbb85fa0e7a..01f518d0c4e 100644 --- a/src/Interpreters/Cache/FileCache_fwd.h +++ b/src/Interpreters/Cache/FileCache_fwd.h @@ -5,7 +5,7 @@ namespace DB { static constexpr int FILECACHE_DEFAULT_MAX_FILE_SEGMENT_SIZE = 8 * 1024 * 1024; -static constexpr int FILECACHE_DEFAULT_MAX_ELEMENTS = 1024 * 1024; +static constexpr int FILECACHE_DEFAULT_MAX_ELEMENTS = 10000000; static constexpr int FILECACHE_DEFAULT_HITS_THRESHOLD = 0; static constexpr size_t FILECACHE_BYPASS_THRESHOLD = 256 * 1024 * 1024; static constexpr size_t FILECACHE_DELAYED_CLEANUP_INTERVAL_MS = 1000 * 60; /// 1 min diff --git a/src/Interpreters/Cache/Metadata.cpp b/src/Interpreters/Cache/Metadata.cpp index 740d3be72b8..c9a23d1c785 100644 --- a/src/Interpreters/Cache/Metadata.cpp +++ b/src/Interpreters/Cache/Metadata.cpp @@ -275,7 +275,7 @@ void CacheMetadata::doCleanup() try { if (fs::exists(key_prefix_directory) && fs::is_empty(key_prefix_directory)) - fs::remove_all(key_prefix_directory); + fs::remove(key_prefix_directory); } catch (const fs::filesystem_error & e) { diff --git a/src/Interpreters/Context.cpp b/src/Interpreters/Context.cpp index 8fb06e21d22..995e78d8f0b 100644 --- a/src/Interpreters/Context.cpp +++ b/src/Interpreters/Context.cpp @@ -377,7 +377,7 @@ struct ContextSharedPart : boost::noncopyable { try { - LOG_DEBUG(log, "Desctructing remote fs threadpool reader"); + LOG_DEBUG(log, "Destructing remote fs threadpool reader"); asynchronous_remote_fs_reader->wait(); asynchronous_remote_fs_reader.reset(); } @@ -391,7 +391,7 @@ struct ContextSharedPart : boost::noncopyable { try { - LOG_DEBUG(log, "Desctructing local fs threadpool reader"); + LOG_DEBUG(log, "Destructing local fs threadpool reader"); asynchronous_local_fs_reader->wait(); asynchronous_local_fs_reader.reset(); } @@ -405,7 +405,7 @@ struct ContextSharedPart : boost::noncopyable { try { - LOG_DEBUG(log, "Desctructing local fs threadpool reader"); + LOG_DEBUG(log, "Destructing local fs threadpool reader"); synchronous_local_fs_reader->wait(); synchronous_local_fs_reader.reset(); } @@ -419,7 +419,7 @@ struct ContextSharedPart : boost::noncopyable { try { - LOG_DEBUG(log, "Desctructing threadpool writer"); + LOG_DEBUG(log, "Destructing threadpool writer"); threadpool_writer->wait(); threadpool_writer.reset(); } @@ -433,7 +433,7 @@ struct ContextSharedPart : boost::noncopyable { try { - LOG_DEBUG(log, "Desctructing marks loader"); + LOG_DEBUG(log, "Destructing marks loader"); load_marks_threadpool->wait(); 
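The cache-metadata cleanup above switches `fs::remove_all` to `fs::remove` for a directory that has just been checked to be empty. The difference matters: `remove()` deletes a single file or an empty directory and fails on a non-empty one, while `remove_all()` recursively deletes everything underneath, so the narrower call likely avoids wiping entries that appear between the emptiness check and the removal. A short illustration using only the standard library (temporary paths):

```cpp
// fs::remove refuses to delete a non-empty directory; fs::remove_all deletes recursively.
#include <filesystem>
#include <fstream>
#include <iostream>

namespace fs = std::filesystem;

int main()
{
    fs::path dir = fs::temp_directory_path() / "cleanup_demo";
    fs::create_directory(dir);
    std::ofstream(dir / "data.bin") << "payload";

    std::error_code ec;
    fs::remove(dir, ec);  // fails: directory is not empty
    std::cout << "remove(): " << (ec ? "failed (not empty)" : "ok") << '\n';

    std::cout << "remove_all(): " << fs::remove_all(dir) << " entries deleted\n";  // 2
    return 0;
}
```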
load_marks_threadpool.reset(); } @@ -447,7 +447,7 @@ struct ContextSharedPart : boost::noncopyable { try { - LOG_DEBUG(log, "Desctructing prefetch threadpool"); + LOG_DEBUG(log, "Destructing prefetch threadpool"); prefetch_threadpool->wait(); prefetch_threadpool.reset(); } diff --git a/src/Interpreters/ExpressionAnalyzer.cpp b/src/Interpreters/ExpressionAnalyzer.cpp index c7c66f6f414..307b46b3a0b 100644 --- a/src/Interpreters/ExpressionAnalyzer.cpp +++ b/src/Interpreters/ExpressionAnalyzer.cpp @@ -444,7 +444,7 @@ void ExpressionAnalyzer::initGlobalSubqueriesAndExternalTables(bool do_global, b if (do_global) { GlobalSubqueriesVisitor::Data subqueries_data( - getContext(), subquery_depth, isRemoteStorage(), is_explain, external_tables, prepared_sets, has_global_subqueries); + getContext(), subquery_depth, isRemoteStorage(), is_explain, external_tables, prepared_sets, has_global_subqueries, syntax->analyzed_join.get()); GlobalSubqueriesVisitor(subqueries_data).visit(query); } } @@ -1056,13 +1056,6 @@ JoinPtr SelectQueryExpressionAnalyzer::appendJoin( return join; } -static ActionsDAGPtr createJoinedBlockActions(ContextPtr context, const TableJoin & analyzed_join) -{ - ASTPtr expression_list = analyzed_join.rightKeysList(); - auto syntax_result = TreeRewriter(context).analyze(expression_list, analyzed_join.columnsFromJoinedTable()); - return ExpressionAnalyzer(expression_list, syntax_result, context).getActionsDAG(true, false); -} - std::shared_ptr tryKeyValueJoin(std::shared_ptr analyzed_join, const Block & right_sample_block); @@ -1144,7 +1137,7 @@ static std::unique_ptr buildJoinedPlan( SelectQueryOptions query_options) { /// Actions which need to be calculated on joined block. - auto joined_block_actions = createJoinedBlockActions(context, analyzed_join); + auto joined_block_actions = analyzed_join.createJoinedBlockActions(context); NamesWithAliases required_columns_with_aliases = analyzed_join.getRequiredColumns( Block(joined_block_actions->getResultColumns()), joined_block_actions->getRequiredColumns().getNames()); diff --git a/src/Interpreters/GlobalSubqueriesVisitor.h b/src/Interpreters/GlobalSubqueriesVisitor.h index 2901f2e23d0..f5b837fc7f7 100644 --- a/src/Interpreters/GlobalSubqueriesVisitor.h +++ b/src/Interpreters/GlobalSubqueriesVisitor.h @@ -9,6 +9,7 @@ #include #include #include +#include #include #include #include @@ -43,6 +44,7 @@ public: TemporaryTablesMapping & external_tables; PreparedSetsPtr prepared_sets; bool & has_global_subqueries; + TableJoin * table_join; Data( ContextPtr context_, @@ -51,7 +53,8 @@ public: bool is_explain_, TemporaryTablesMapping & tables, PreparedSetsPtr prepared_sets_, - bool & has_global_subqueries_) + bool & has_global_subqueries_, + TableJoin * table_join_) : WithContext(context_) , subquery_depth(subquery_depth_) , is_remote(is_remote_) @@ -59,10 +62,11 @@ public: , external_tables(tables) , prepared_sets(prepared_sets_) , has_global_subqueries(has_global_subqueries_) + , table_join(table_join_) { } - void addExternalStorage(ASTPtr & ast, bool set_alias = false) + void addExternalStorage(ASTPtr & ast, const Names & required_columns, bool set_alias = false) { /// With nondistributed queries, creating temporary tables does not make sense. 
if (!is_remote) @@ -145,7 +149,7 @@ public: if (external_tables.contains(external_table_name)) return; - auto interpreter = interpretSubquery(subquery_or_table_name, getContext(), subquery_depth, {}); + auto interpreter = interpretSubquery(subquery_or_table_name, getContext(), subquery_depth, required_columns); Block sample = interpreter->getSampleBlock(); NamesAndTypesList columns = sample.getNamesAndTypesList(); @@ -238,7 +242,7 @@ private: return; } - data.addExternalStorage(ast); + data.addExternalStorage(ast, {}); data.has_global_subqueries = true; } } @@ -249,7 +253,21 @@ private: if (table_elem.table_join && (table_elem.table_join->as().locality == JoinLocality::Global || shouldBeExecutedGlobally(data))) { - data.addExternalStorage(table_elem.table_expression, true); + Names required_columns; + + /// Fill required columns for GLOBAL JOIN. + /// This code is partial copy-paste from ExpressionAnalyzer. + if (data.table_join) + { + auto joined_block_actions = data.table_join->createJoinedBlockActions(data.getContext()); + NamesWithAliases required_columns_with_aliases = data.table_join->getRequiredColumns( + Block(joined_block_actions->getResultColumns()), joined_block_actions->getRequiredColumns().getNames()); + + for (auto & pr : required_columns_with_aliases) + required_columns.push_back(pr.first); + } + + data.addExternalStorage(table_elem.table_expression, required_columns, true); data.has_global_subqueries = true; } } diff --git a/src/Interpreters/GraceHashJoin.cpp b/src/Interpreters/GraceHashJoin.cpp index 0dd61ff2793..197b64865e1 100644 --- a/src/Interpreters/GraceHashJoin.cpp +++ b/src/Interpreters/GraceHashJoin.cpp @@ -304,10 +304,8 @@ void GraceHashJoin::initBuckets() bool GraceHashJoin::isSupported(const std::shared_ptr & table_join) { - bool is_asof = (table_join->strictness() == JoinStrictness::Asof); - auto kind = table_join->kind(); - return !is_asof && (isInner(kind) || isLeft(kind) || isRight(kind) || isFull(kind)) && table_join->oneDisjunct(); + return !is_asof && isInnerOrLeft(table_join->kind()) && table_join->oneDisjunct(); } GraceHashJoin::~GraceHashJoin() = default; @@ -327,6 +325,7 @@ bool GraceHashJoin::hasMemoryOverflow(size_t total_rows, size_t total_bytes) con /// One row can't be split, avoid loop if (total_rows < 2) return false; + bool has_overflow = !table_join->sizeLimits().softCheck(total_rows, total_bytes); if (has_overflow) @@ -471,30 +470,18 @@ bool GraceHashJoin::alwaysReturnsEmptySet() const return hash_join_is_empty; } -/// Each bucket are handled by the following steps -/// 1. build hash_join by the right side blocks. -/// 2. join left side with the hash_join, -/// 3. read right non-joined blocks from hash_join. -/// buckets are handled one by one, each hash_join will not be release before the right non-joined blocks are emitted. -/// -/// There is a finished counter in JoiningTransform/DelayedJoinedBlocksWorkerTransform, -/// only one processor could take the non-joined blocks from right stream, and ensure all rows from -/// left stream have been emitted before this. -IBlocksStreamPtr -GraceHashJoin::getNonJoinedBlocks(const Block & left_sample_block_, const Block & result_sample_block_, UInt64 max_block_size_) const + +IBlocksStreamPtr GraceHashJoin::getNonJoinedBlocks(const Block &, const Block &, UInt64) const { - return hash_join->getNonJoinedBlocks(left_sample_block_, result_sample_block_, max_block_size_); + /// We do no support returning non joined blocks here. 
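The `GlobalSubqueriesVisitor` change above computes the columns actually required by a GLOBAL JOIN (via `createJoinedBlockActions`/`getRequiredColumns`) and passes them to `interpretSubquery`, so the temporary table shipped to remote servers carries only those columns. A tiny sketch of the pruning step that keeps just the source names from the (name, alias) pairs, with simplified types in place of `NamesWithAliases`:

```cpp
// Keep only the source column names the join needs; those drive the temporary table.
#include <iostream>
#include <string>
#include <utility>
#include <vector>

using NamesWithAliases = std::vector<std::pair<std::string, std::string>>;

std::vector<std::string> requiredSourceNames(const NamesWithAliases & columns_with_aliases)
{
    std::vector<std::string> required;
    required.reserve(columns_with_aliases.size());
    for (const auto & [name, alias] : columns_with_aliases)
        required.push_back(name);  // the shipped table is built from the original names
    return required;
}

int main()
{
    NamesWithAliases cols = {{"user_id", "uid"}, {"event_date", "event_date"}};
    for (const auto & name : requiredSourceNames(cols))
        std::cout << name << '\n';
    return 0;
}
```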
+ /// TODO: They _should_ be reported by getDelayedBlocks instead + return nullptr; } class GraceHashJoin::DelayedBlocks : public IBlocksStream { public: - explicit DelayedBlocks( - size_t current_bucket_, - Buckets buckets_, - InMemoryJoinPtr hash_join_, - const Names & left_key_names_, - const Names & right_key_names_) + explicit DelayedBlocks(size_t current_bucket_, Buckets buckets_, InMemoryJoinPtr hash_join_, const Names & left_key_names_, const Names & right_key_names_) : current_bucket(current_bucket_) , buckets(std::move(buckets_)) , hash_join(std::move(hash_join_)) @@ -512,15 +499,12 @@ public: do { - // One DelayedBlocks is shared among multiple DelayedJoinedBlocksWorkerTransform. - // There is a lock inside left_reader.read(). block = left_reader.read(); if (!block) { return {}; } - // block comes from left_reader, need to join with right table to get the result. Blocks blocks = JoinCommon::scatterBlockByHash(left_key_names, block, num_buckets); block = std::move(blocks[current_idx]); @@ -571,14 +555,18 @@ IBlocksStreamPtr GraceHashJoin::getDelayedBlocks() size_t bucket_idx = current_bucket->idx; - size_t prev_keys_num = 0; - // If there is only one bucket, don't take this check. - if (hash_join && buckets.size() > 1) + if (hash_join) { - // Use previous hash_join's keys number to estimate next hash_join's size is reasonable. - prev_keys_num = hash_join->getTotalRowCount(); + auto right_blocks = hash_join->releaseJoinedBlocks(/* restructure */ false); + for (auto & block : right_blocks) + { + Blocks blocks = JoinCommon::scatterBlockByHash(right_key_names, block, buckets.size()); + flushBlocksToBuckets(blocks, buckets, bucket_idx); + } } + hash_join = makeInMemoryJoin(); + for (bucket_idx = bucket_idx + 1; bucket_idx < buckets.size(); ++bucket_idx) { current_bucket = buckets[bucket_idx].get(); @@ -591,7 +579,6 @@ IBlocksStreamPtr GraceHashJoin::getDelayedBlocks() continue; } - hash_join = makeInMemoryJoin(prev_keys_num); auto right_reader = current_bucket->startJoining(); size_t num_rows = 0; /// count rows that were written and rehashed while (Block block = right_reader.read()) @@ -602,6 +589,7 @@ IBlocksStreamPtr GraceHashJoin::getDelayedBlocks() LOG_TRACE(log, "Loaded bucket {} with {}(/{}) rows", bucket_idx, hash_join->getTotalRowCount(), num_rows); + return std::make_unique(current_bucket->idx, buckets, hash_join, left_key_names, right_key_names); } @@ -611,9 +599,9 @@ IBlocksStreamPtr GraceHashJoin::getDelayedBlocks() return nullptr; } -GraceHashJoin::InMemoryJoinPtr GraceHashJoin::makeInMemoryJoin(size_t reserve_num) +GraceHashJoin::InMemoryJoinPtr GraceHashJoin::makeInMemoryJoin() { - return std::make_unique(table_join, right_sample_block, any_take_last_row, reserve_num); + return std::make_unique(table_join, right_sample_block, any_take_last_row); } Block GraceHashJoin::prepareRightBlock(const Block & block) @@ -642,19 +630,6 @@ void GraceHashJoin::addJoinedBlockImpl(Block block) if (!hash_join) hash_join = makeInMemoryJoin(); - // buckets size has been changed in other threads. Need to scatter current_block again. - // rehash could only happen under hash_join_mutex's scope. - auto current_buckets = getCurrentBuckets(); - if (buckets_snapshot.size() != current_buckets.size()) - { - LOG_TRACE(log, "mismatch buckets size. 
previous:{}, current:{}", buckets_snapshot.size(), getCurrentBuckets().size()); - Blocks blocks = JoinCommon::scatterBlockByHash(right_key_names, current_block, current_buckets.size()); - flushBlocksToBuckets(blocks, current_buckets, bucket_index); - current_block = std::move(blocks[bucket_index]); - if (!current_block.rows()) - return; - } - auto prev_keys_num = hash_join->getTotalRowCount(); hash_join->addJoinedBlock(current_block, /* check_limits = */ false); if (!hasMemoryOverflow(hash_join)) @@ -663,6 +638,7 @@ void GraceHashJoin::addJoinedBlockImpl(Block block) current_block = {}; auto right_blocks = hash_join->releaseJoinedBlocks(/* restructure */ false); + hash_join = nullptr; buckets_snapshot = rehashBuckets(buckets_snapshot.size() * 2); @@ -682,7 +658,7 @@ void GraceHashJoin::addJoinedBlockImpl(Block block) current_block = concatenateBlocks(current_blocks); } - hash_join = makeInMemoryJoin(prev_keys_num); + hash_join = makeInMemoryJoin(); if (current_block.rows() > 0) hash_join->addJoinedBlock(current_block, /* check_limits = */ false); diff --git a/src/Interpreters/GraceHashJoin.h b/src/Interpreters/GraceHashJoin.h index ec611f373ed..b8d83f4cad0 100644 --- a/src/Interpreters/GraceHashJoin.h +++ b/src/Interpreters/GraceHashJoin.h @@ -13,6 +13,7 @@ namespace DB { + class TableJoin; class HashJoin; @@ -78,7 +79,7 @@ public: bool supportTotals() const override { return false; } IBlocksStreamPtr - getNonJoinedBlocks(const Block & left_sample_block_, const Block & result_sample_block_, UInt64 max_block_size) const override; + getNonJoinedBlocks(const Block & left_sample_block, const Block & result_sample_block, UInt64 max_block_size) const override; /// Open iterator over joined blocks. /// Must be called after all @joinBlock calls. @@ -90,8 +91,7 @@ public: private: void initBuckets(); /// Create empty join for in-memory processing. - /// reserve_num for reserving space in hash table. - InMemoryJoinPtr makeInMemoryJoin(size_t reserve_num = 0); + InMemoryJoinPtr makeInMemoryJoin(); /// Add right table block to the @join. Calls @rehash on overflow. 
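`GraceHashJoin` above repeatedly relies on `JoinCommon::scatterBlockByHash` to split blocks into per-bucket pieces by the hash of the join keys; because both sides use the same hash function and bucket count, rows that can join always land in the same bucket, which is what allows each bucket to be joined in memory one at a time. A self-contained sketch of that partitioning step on plain integer keys (the real code hashes whole key columns of a block):

```cpp
// Hash-partition keys into N buckets; matching keys from either side map to the same bucket.
#include <cstddef>
#include <functional>
#include <iostream>
#include <vector>

std::vector<std::vector<int>> scatterByHash(const std::vector<int> & keys, size_t num_buckets)
{
    std::vector<std::vector<int>> buckets(num_buckets);
    for (int key : keys)
    {
        size_t bucket = std::hash<int>{}(key) % num_buckets;
        buckets[bucket].push_back(key);
    }
    return buckets;
}

int main()
{
    auto buckets = scatterByHash({1, 2, 3, 4, 5, 6, 7, 8}, 4);
    for (size_t i = 0; i < buckets.size(); ++i)
    {
        std::cout << "bucket " << i << ':';
        for (int key : buckets[i])
            std::cout << ' ' << key;
        std::cout << '\n';
    }
    return 0;
}
```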
void addJoinedBlockImpl(Block block); diff --git a/src/Interpreters/HashJoin.cpp b/src/Interpreters/HashJoin.cpp index 9306c9b99eb..6fe2b8464f5 100644 --- a/src/Interpreters/HashJoin.cpp +++ b/src/Interpreters/HashJoin.cpp @@ -217,7 +217,7 @@ static void correctNullabilityInplace(ColumnWithTypeAndName & column, bool nulla JoinCommon::removeColumnNullability(column); } -HashJoin::HashJoin(std::shared_ptr table_join_, const Block & right_sample_block_, bool any_take_last_row_, size_t reserve_num) +HashJoin::HashJoin(std::shared_ptr table_join_, const Block & right_sample_block_, bool any_take_last_row_) : table_join(table_join_) , kind(table_join->kind()) , strictness(table_join->strictness()) @@ -302,7 +302,7 @@ HashJoin::HashJoin(std::shared_ptr table_join_, const Block & right_s } for (auto & maps : data->maps) - dataMapInit(maps, reserve_num); + dataMapInit(maps); } HashJoin::Type HashJoin::chooseMethod(JoinKind kind, const ColumnRawPtrs & key_columns, Sizes & key_sizes) @@ -454,15 +454,13 @@ struct KeyGetterForType using Type = typename KeyGetterForTypeImpl::Type; }; -void HashJoin::dataMapInit(MapsVariant & map, size_t reserve_num) +void HashJoin::dataMapInit(MapsVariant & map) { if (kind == JoinKind::Cross) return; joinDispatchInit(kind, strictness, map); joinDispatch(kind, strictness, map, [&](auto, auto, auto & map_) { map_.create(data->type); }); - if (reserve_num) - joinDispatch(kind, strictness, map, [&](auto, auto, auto & map_) { map_.reserve(data->type, reserve_num); }); } bool HashJoin::empty() const @@ -707,8 +705,9 @@ Block HashJoin::prepareRightBlock(const Block & block, const Block & saved_block for (const auto & sample_column : saved_block_sample_.getColumnsWithTypeAndName()) { ColumnWithTypeAndName column = block.getByName(sample_column.name); - if (sample_column.column->isNullable()) - JoinCommon::convertColumnToNullable(column); + + /// There's no optimization for right side const columns. Remove constness if any. + column.column = recursiveRemoveSparse(column.column->convertToFullColumnIfConst()); if (column.column->lowCardinality() && !sample_column.column->lowCardinality()) { @@ -716,8 +715,9 @@ Block HashJoin::prepareRightBlock(const Block & block, const Block & saved_block column.type = removeLowCardinality(column.type); } - /// There's no optimization for right side const columns. Remove constness if any. - column.column = recursiveRemoveSparse(column.column->convertToFullColumnIfConst()); + if (sample_column.column->isNullable()) + JoinCommon::convertColumnToNullable(column); + structured_block.insert(std::move(column)); } diff --git a/src/Interpreters/HashJoin.h b/src/Interpreters/HashJoin.h index 58e47432d41..50eda4482bd 100644 --- a/src/Interpreters/HashJoin.h +++ b/src/Interpreters/HashJoin.h @@ -146,7 +146,7 @@ public: class HashJoin : public IJoin { public: - HashJoin(std::shared_ptr table_join_, const Block & right_sample_block, bool any_take_last_row_ = false, size_t reserve_num = 0); + HashJoin(std::shared_ptr table_join_, const Block & right_sample_block, bool any_take_last_row_ = false); ~HashJoin() override; @@ -217,16 +217,6 @@ public: M(keys256) \ M(hashed) - /// Only for maps using hash table. 
- #define APPLY_FOR_HASH_JOIN_VARIANTS(M) \ - M(key32) \ - M(key64) \ - M(key_string) \ - M(key_fixed_string) \ - M(keys128) \ - M(keys256) \ - M(hashed) - /// Used for reading from StorageJoin and applying joinGet function #define APPLY_FOR_JOIN_VARIANTS_LIMITED(M) \ @@ -276,22 +266,6 @@ public: } } - void reserve(Type which, size_t num) - { - switch (which) - { - case Type::EMPTY: break; - case Type::CROSS: break; - case Type::key8: break; - case Type::key16: break; - - #define M(NAME) \ - case Type::NAME: NAME->reserve(num); break; - APPLY_FOR_HASH_JOIN_VARIANTS(M) - #undef M - } - } - size_t getTotalRowCount(Type which) const { switch (which) @@ -435,7 +409,7 @@ private: /// If set HashJoin instance is not available for modification (addJoinedBlock) TableLockHolder storage_join_lock = nullptr; - void dataMapInit(MapsVariant &, size_t); + void dataMapInit(MapsVariant &); void initRightBlockStructure(Block & saved_block_sample); diff --git a/src/Interpreters/InterpreterOptimizeQuery.cpp b/src/Interpreters/InterpreterOptimizeQuery.cpp index a4507391c4e..ae456e8b31d 100644 --- a/src/Interpreters/InterpreterOptimizeQuery.cpp +++ b/src/Interpreters/InterpreterOptimizeQuery.cpp @@ -34,7 +34,7 @@ BlockIO InterpreterOptimizeQuery::execute() getContext()->checkAccess(getRequiredAccess()); - auto table_id = getContext()->resolveStorageID(ast, Context::ResolveOrdinary); + auto table_id = getContext()->resolveStorageID(ast); StoragePtr table = DatabaseCatalog::instance().getTable(table_id, getContext()); checkStorageSupportsTransactionsIfNeeded(table, getContext()); auto metadata_snapshot = table->getInMemoryMetadataPtr(); diff --git a/src/Interpreters/JoinUtils.cpp b/src/Interpreters/JoinUtils.cpp index a05b58e14a1..67ee2a64264 100644 --- a/src/Interpreters/JoinUtils.cpp +++ b/src/Interpreters/JoinUtils.cpp @@ -160,16 +160,14 @@ static ColumnPtr tryConvertColumnToNullable(ColumnPtr col) if (col->lowCardinality()) { - auto mut_col = IColumn::mutate(std::move(col)); - ColumnLowCardinality * col_lc = assert_cast(mut_col.get()); - if (col_lc->nestedIsNullable()) + const ColumnLowCardinality & col_lc = assert_cast(*col); + if (col_lc.nestedIsNullable()) { - return mut_col; + return col; } - else if (col_lc->nestedCanBeInsideNullable()) + else if (col_lc.nestedCanBeInsideNullable()) { - col_lc->nestedToNullable(); - return mut_col; + return col_lc.cloneNullable(); } } else if (const ColumnConst * col_const = checkAndGetColumn(*col)) @@ -232,11 +230,7 @@ void removeColumnNullability(ColumnWithTypeAndName & column) if (column.column && column.column->lowCardinality()) { - auto mut_col = IColumn::mutate(std::move(column.column)); - ColumnLowCardinality * col_as_lc = typeid_cast(mut_col.get()); - if (col_as_lc && col_as_lc->nestedIsNullable()) - col_as_lc->nestedRemoveNullable(); - column.column = std::move(mut_col); + column.column = assert_cast(column.column.get())->cloneWithDefaultOnNull(); } } else diff --git a/src/Interpreters/TableJoin.cpp b/src/Interpreters/TableJoin.cpp index 5a23fbd00ff..cabd0be1aa3 100644 --- a/src/Interpreters/TableJoin.cpp +++ b/src/Interpreters/TableJoin.cpp @@ -14,6 +14,8 @@ #include #include +#include +#include #include #include @@ -760,4 +762,11 @@ bool TableJoin::allowParallelHashJoin() const return true; } +ActionsDAGPtr TableJoin::createJoinedBlockActions(ContextPtr context) const +{ + ASTPtr expression_list = rightKeysList(); + auto syntax_result = TreeRewriter(context).analyze(expression_list, columnsFromJoinedTable()); + return ExpressionAnalyzer(expression_list, 
syntax_result, context).getActionsDAG(true, false); +} + } diff --git a/src/Interpreters/TableJoin.h b/src/Interpreters/TableJoin.h index 0e0c905e30c..ba3befab59b 100644 --- a/src/Interpreters/TableJoin.h +++ b/src/Interpreters/TableJoin.h @@ -217,6 +217,8 @@ public: const SizeLimits & sizeLimits() const { return size_limits; } VolumePtr getGlobalTemporaryVolume() { return tmp_volume; } + ActionsDAGPtr createJoinedBlockActions(ContextPtr context) const; + bool isEnabledAlgorithm(JoinAlgorithm val) const { /// When join_algorithm = 'default' (not specified by user) we use hash or direct algorithm. @@ -232,8 +234,17 @@ public: bool allowParallelHashJoin() const; bool joinUseNulls() const { return join_use_nulls; } - bool forceNullableRight() const { return join_use_nulls && isLeftOrFull(kind()); } - bool forceNullableLeft() const { return join_use_nulls && isRightOrFull(kind()); } + + bool forceNullableRight() const + { + return join_use_nulls && isLeftOrFull(kind()); + } + + bool forceNullableLeft() const + { + return join_use_nulls && isRightOrFull(kind()); + } + size_t defaultMaxBytes() const { return default_max_bytes; } size_t maxJoinedBlockRows() const { return max_joined_block_rows; } size_t maxRowsInRightBlock() const { return partial_merge_join_rows_in_right_blocks; } diff --git a/src/Interpreters/tests/gtest_lru_file_cache.cpp b/src/Interpreters/tests/gtest_lru_file_cache.cpp index fe9e3a18024..8c8e715ce92 100644 --- a/src/Interpreters/tests/gtest_lru_file_cache.cpp +++ b/src/Interpreters/tests/gtest_lru_file_cache.cpp @@ -467,6 +467,7 @@ TEST_F(FileCacheTest, get) cv.notify_one(); file_segment2.wait(file_segment2.range().left); + file_segment2.complete(); ASSERT_TRUE(file_segment2.state() == State::DOWNLOADED); }); diff --git a/src/Parsers/MySQL/ASTDropQuery.cpp b/src/Parsers/MySQL/ASTDropQuery.cpp index fb76d93363a..890451e3e55 100644 --- a/src/Parsers/MySQL/ASTDropQuery.cpp +++ b/src/Parsers/MySQL/ASTDropQuery.cpp @@ -44,8 +44,9 @@ bool ParserDropQuery::parseImpl(IParser::Pos & pos, ASTPtr & node, Expected & ex bool if_exists = false; bool is_truncate = false; - if (s_truncate.ignore(pos, expected) && s_table.ignore(pos, expected)) + if (s_truncate.ignore(pos, expected)) { + s_table.ignore(pos, expected); is_truncate = true; query->kind = ASTDropQuery::Kind::Table; ASTDropQuery::QualifiedName name; diff --git a/src/Parsers/ParserShowIndexesQuery.cpp b/src/Parsers/ParserShowIndexesQuery.cpp index fe08b463069..7852ed9806d 100644 --- a/src/Parsers/ParserShowIndexesQuery.cpp +++ b/src/Parsers/ParserShowIndexesQuery.cpp @@ -28,7 +28,7 @@ bool ParserShowIndexesQuery::parseImpl(Pos & pos, ASTPtr & node, Expected & expe if (ParserKeyword("EXTENDED").ignore(pos, expected)) query->extended = true; - if (!(ParserKeyword("INDEX").ignore(pos, expected) || ParserKeyword("INDEXES").ignore(pos, expected) || ParserKeyword("KEYS").ignore(pos, expected))) + if (!(ParserKeyword("INDEX").ignore(pos, expected) || ParserKeyword("INDEXES").ignore(pos, expected) || ParserKeyword("INDICES").ignore(pos, expected) || ParserKeyword("KEYS").ignore(pos, expected))) return false; if (ParserKeyword("FROM").ignore(pos, expected) || ParserKeyword("IN").ignore(pos, expected)) diff --git a/src/Planner/PlannerJoinTree.cpp b/src/Planner/PlannerJoinTree.cpp index b6e0785b5d6..5d8f8ca8741 100644 --- a/src/Planner/PlannerJoinTree.cpp +++ b/src/Planner/PlannerJoinTree.cpp @@ -877,10 +877,11 @@ JoinTreeQueryPlan buildQueryPlanForJoinNode(const QueryTreeNodePtr & join_table_ JoinClausesAndActions join_clauses_and_actions; 
JoinKind join_kind = join_node.getKind(); + JoinStrictness join_strictness = join_node.getStrictness(); std::optional join_constant; - if (join_node.getStrictness() == JoinStrictness::All) + if (join_strictness == JoinStrictness::All) join_constant = tryExtractConstantFromJoinNode(join_table_expression); if (join_constant) diff --git a/src/Processors/Formats/ISchemaReader.cpp b/src/Processors/Formats/ISchemaReader.cpp index c96cb373a2d..0cb6499f423 100644 --- a/src/Processors/Formats/ISchemaReader.cpp +++ b/src/Processors/Formats/ISchemaReader.cpp @@ -1,6 +1,7 @@ #include #include #include +#include #include #include #include @@ -54,6 +55,11 @@ void checkFinalInferredType( if (settings.schema_inference_make_columns_nullable) type = makeNullableRecursively(type); + /// In case when data for some column could contain nulls and regular values, + /// resulting inferred type is Nullable. + /// If input_format_null_as_default is enabled, we should remove Nullable type. + else if (settings.null_as_default) + type = removeNullable(type); } IIRowSchemaReader::IIRowSchemaReader(ReadBuffer & in_, const FormatSettings & format_settings_, DataTypePtr default_type_) diff --git a/src/Processors/Formats/Impl/ArrowBlockOutputFormat.cpp b/src/Processors/Formats/Impl/ArrowBlockOutputFormat.cpp index 1e72c949b09..8bd1cf3897d 100644 --- a/src/Processors/Formats/Impl/ArrowBlockOutputFormat.cpp +++ b/src/Processors/Formats/Impl/ArrowBlockOutputFormat.cpp @@ -58,7 +58,9 @@ void ArrowBlockOutputFormat::consume(Chunk chunk) format_settings.arrow.output_fixed_string_as_fixed_byte_array); } - ch_column_to_arrow_column->chChunkToArrowTable(arrow_table, chunk, columns_num); + auto chunks = std::vector(); + chunks.push_back(std::move(chunk)); + ch_column_to_arrow_column->chChunkToArrowTable(arrow_table, chunks, columns_num); if (!writer) prepareWriter(arrow_table->schema()); diff --git a/src/Processors/Formats/Impl/AvroRowInputFormat.cpp b/src/Processors/Formats/Impl/AvroRowInputFormat.cpp index 60e541a0109..1ec7491658e 100644 --- a/src/Processors/Formats/Impl/AvroRowInputFormat.cpp +++ b/src/Processors/Formats/Impl/AvroRowInputFormat.cpp @@ -184,8 +184,14 @@ static AvroDeserializer::DeserializeFn createDecimalDeserializeFn(const avro::No field_type_size, tmp.size()); else if (tmp.size() != field_type_size) - /// Add padding with 0-bytes. - tmp = std::string(field_type_size - tmp.size(), '\0') + tmp; + { + /// Extent value to required size by adding padding. + /// Check if value is negative or positive. 
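+ /// The value is stored as two's complement, so a negative value has to be sign-extended with
+ /// 0xff bytes rather than zeros: e.g. the 2-byte value 0xff 0x85 (-123) extended to 4 bytes must
+ /// become 0xff 0xff 0xff 0x85, while zero-padding would turn it into a large positive number.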
+ if (tmp[0] & 128) + tmp = std::string(field_type_size - tmp.size(), 0xff) + tmp; + else + tmp = std::string(field_type_size - tmp.size(), 0) + tmp; + } typename DecimalType::FieldType field; ReadBufferFromString buf(tmp); diff --git a/src/Processors/Formats/Impl/CHColumnToArrowColumn.cpp b/src/Processors/Formats/Impl/CHColumnToArrowColumn.cpp index 19b2dcccf64..c3685e813d3 100644 --- a/src/Processors/Formats/Impl/CHColumnToArrowColumn.cpp +++ b/src/Processors/Formats/Impl/CHColumnToArrowColumn.cpp @@ -976,56 +976,75 @@ namespace DB void CHColumnToArrowColumn::chChunkToArrowTable( std::shared_ptr & res, - const Chunk & chunk, + const std::vector & chunks, size_t columns_num) { - /// For arrow::Schema and arrow::Table creation - std::vector> arrow_arrays; - arrow_arrays.reserve(columns_num); - for (size_t column_i = 0; column_i < columns_num; ++column_i) + std::shared_ptr arrow_schema; + std::vector table_data(columns_num); + + for (const auto & chunk : chunks) { - const ColumnWithTypeAndName & header_column = header_columns[column_i]; - auto column = chunk.getColumns()[column_i]; - - if (!low_cardinality_as_dictionary) - column = recursiveRemoveLowCardinality(column); - - if (!is_arrow_fields_initialized) + /// For arrow::Schema and arrow::Table creation + for (size_t column_i = 0; column_i < columns_num; ++column_i) { - bool is_column_nullable = false; - auto arrow_type = getArrowType(header_column.type, column, header_column.name, format_name, output_string_as_string, output_fixed_string_as_fixed_byte_array, &is_column_nullable); - arrow_fields.emplace_back(std::make_shared(header_column.name, arrow_type, is_column_nullable)); + const ColumnWithTypeAndName & header_column = header_columns[column_i]; + auto column = chunk.getColumns()[column_i]; + + if (!low_cardinality_as_dictionary) + column = recursiveRemoveLowCardinality(column); + + if (!is_arrow_fields_initialized) + { + bool is_column_nullable = false; + auto arrow_type = getArrowType( + header_column.type, + column, + header_column.name, + format_name, + output_string_as_string, + output_fixed_string_as_fixed_byte_array, + &is_column_nullable); + arrow_fields.emplace_back(std::make_shared(header_column.name, arrow_type, is_column_nullable)); + } + + arrow::MemoryPool * pool = arrow::default_memory_pool(); + std::unique_ptr array_builder; + arrow::Status status = MakeBuilder(pool, arrow_fields[column_i]->type(), &array_builder); + checkStatus(status, column->getName(), format_name); + + fillArrowArray( + header_column.name, + column, + header_column.type, + nullptr, + array_builder.get(), + format_name, + 0, + column->size(), + output_string_as_string, + output_fixed_string_as_fixed_byte_array, + dictionary_values); + + std::shared_ptr arrow_array; + status = array_builder->Finish(&arrow_array); + checkStatus(status, column->getName(), format_name); + + table_data.at(column_i).emplace_back(std::move(arrow_array)); } - arrow::MemoryPool* pool = arrow::default_memory_pool(); - std::unique_ptr array_builder; - arrow::Status status = MakeBuilder(pool, arrow_fields[column_i]->type(), &array_builder); - checkStatus(status, column->getName(), format_name); - - fillArrowArray( - header_column.name, - column, - header_column.type, - nullptr, - array_builder.get(), - format_name, - 0, - column->size(), - output_string_as_string, - output_fixed_string_as_fixed_byte_array, - dictionary_values); - - std::shared_ptr arrow_array; - status = array_builder->Finish(&arrow_array); - checkStatus(status, column->getName(), format_name); - 
arrow_arrays.emplace_back(std::move(arrow_array)); + is_arrow_fields_initialized = true; + if (!arrow_schema) + arrow_schema = std::make_shared(arrow_fields); } - std::shared_ptr arrow_schema = std::make_shared(arrow_fields); + std::vector> columns; + columns.reserve(columns_num); + for (size_t column_i = 0; column_i < columns_num; ++column_i) + columns.emplace_back(std::make_shared(table_data.at(column_i))); - res = arrow::Table::Make(arrow_schema, arrow_arrays); - is_arrow_fields_initialized = true; + res = arrow::Table::Make(arrow_schema, columns); } + } #endif diff --git a/src/Processors/Formats/Impl/CHColumnToArrowColumn.h b/src/Processors/Formats/Impl/CHColumnToArrowColumn.h index 3649d0eed9b..02766e299a0 100644 --- a/src/Processors/Formats/Impl/CHColumnToArrowColumn.h +++ b/src/Processors/Formats/Impl/CHColumnToArrowColumn.h @@ -16,7 +16,7 @@ class CHColumnToArrowColumn public: CHColumnToArrowColumn(const Block & header, const std::string & format_name_, bool low_cardinality_as_dictionary_, bool output_string_as_string_, bool output_fixed_string_as_fixed_byte_array_); - void chChunkToArrowTable(std::shared_ptr & res, const Chunk & chunk, size_t columns_num); + void chChunkToArrowTable(std::shared_ptr & res, const std::vector & chunk, size_t columns_num); private: ColumnsWithTypeAndName header_columns; diff --git a/src/Processors/Formats/Impl/ParquetBlockOutputFormat.cpp b/src/Processors/Formats/Impl/ParquetBlockOutputFormat.cpp index 031e499b545..91840cd2c50 100644 --- a/src/Processors/Formats/Impl/ParquetBlockOutputFormat.cpp +++ b/src/Processors/Formats/Impl/ParquetBlockOutputFormat.cpp @@ -20,47 +20,47 @@ namespace ErrorCodes namespace { -parquet::ParquetVersion::type getParquetVersion(const FormatSettings & settings) -{ - switch (settings.parquet.output_version) + parquet::ParquetVersion::type getParquetVersion(const FormatSettings & settings) { - case FormatSettings::ParquetVersion::V1_0: - return parquet::ParquetVersion::PARQUET_1_0; - case FormatSettings::ParquetVersion::V2_4: - return parquet::ParquetVersion::PARQUET_2_4; - case FormatSettings::ParquetVersion::V2_6: - return parquet::ParquetVersion::PARQUET_2_6; - case FormatSettings::ParquetVersion::V2_LATEST: - return parquet::ParquetVersion::PARQUET_2_LATEST; + switch (settings.parquet.output_version) + { + case FormatSettings::ParquetVersion::V1_0: + return parquet::ParquetVersion::PARQUET_1_0; + case FormatSettings::ParquetVersion::V2_4: + return parquet::ParquetVersion::PARQUET_2_4; + case FormatSettings::ParquetVersion::V2_6: + return parquet::ParquetVersion::PARQUET_2_6; + case FormatSettings::ParquetVersion::V2_LATEST: + return parquet::ParquetVersion::PARQUET_2_LATEST; + } } -} -parquet::Compression::type getParquetCompression(FormatSettings::ParquetCompression method) -{ - if (method == FormatSettings::ParquetCompression::NONE) - return parquet::Compression::type::UNCOMPRESSED; + parquet::Compression::type getParquetCompression(FormatSettings::ParquetCompression method) + { + if (method == FormatSettings::ParquetCompression::NONE) + return parquet::Compression::type::UNCOMPRESSED; #if USE_SNAPPY - if (method == FormatSettings::ParquetCompression::SNAPPY) - return parquet::Compression::type::SNAPPY; + if (method == FormatSettings::ParquetCompression::SNAPPY) + return parquet::Compression::type::SNAPPY; #endif #if USE_BROTLI - if (method == FormatSettings::ParquetCompression::BROTLI) - return parquet::Compression::type::BROTLI; + if (method == FormatSettings::ParquetCompression::BROTLI) + return 
parquet::Compression::type::BROTLI; #endif - if (method == FormatSettings::ParquetCompression::ZSTD) - return parquet::Compression::type::ZSTD; + if (method == FormatSettings::ParquetCompression::ZSTD) + return parquet::Compression::type::ZSTD; - if (method == FormatSettings::ParquetCompression::LZ4) - return parquet::Compression::type::LZ4; + if (method == FormatSettings::ParquetCompression::LZ4) + return parquet::Compression::type::LZ4; - if (method == FormatSettings::ParquetCompression::GZIP) - return parquet::Compression::type::GZIP; + if (method == FormatSettings::ParquetCompression::GZIP) + return parquet::Compression::type::GZIP; - throw Exception(ErrorCodes::NOT_IMPLEMENTED, "Unsupported compression method"); -} + throw Exception(ErrorCodes::NOT_IMPLEMENTED, "Unsupported compression method"); + } } @@ -69,70 +69,9 @@ ParquetBlockOutputFormat::ParquetBlockOutputFormat(WriteBuffer & out_, const Blo { } -void ParquetBlockOutputFormat::consume(Chunk chunk) +void ParquetBlockOutputFormat::consumeStaged() { - /// Do something like SquashingTransform to produce big enough row groups. - /// Because the real SquashingTransform is only used for INSERT, not for SELECT ... INTO OUTFILE. - /// The latter doesn't even have a pipeline where a transform could be inserted, so it's more - /// convenient to do the squashing here. - - appendToAccumulatedChunk(std::move(chunk)); - - if (!accumulated_chunk) - return; - - const size_t target_rows = std::max(static_cast(1), format_settings.parquet.row_group_rows); - - if (accumulated_chunk.getNumRows() < target_rows && - accumulated_chunk.bytes() < format_settings.parquet.row_group_bytes) - return; - - /// Increase row group size slightly (by < 2x) to avoid adding a small row groups for the - /// remainder of the new chunk. - /// E.g. suppose input chunks are 70K rows each, and max_rows = 1M. Then we'll have - /// getNumRows() = 1.05M. We want to write all 1.05M as one row group instead of 1M and 0.05M. 
- size_t num_row_groups = std::max(static_cast(1), accumulated_chunk.getNumRows() / target_rows); - size_t row_group_size = (accumulated_chunk.getNumRows() - 1) / num_row_groups + 1; // round up - - write(std::move(accumulated_chunk), row_group_size); - accumulated_chunk.clear(); -} - -void ParquetBlockOutputFormat::finalizeImpl() -{ - if (accumulated_chunk) - write(std::move(accumulated_chunk), format_settings.parquet.row_group_rows); - - if (!file_writer) - { - Block header = materializeBlock(getPort(PortKind::Main).getHeader()); - write(Chunk(header.getColumns(), 0), 1); - } - - auto status = file_writer->Close(); - if (!status.ok()) - throw Exception(ErrorCodes::UNKNOWN_EXCEPTION, "Error while closing a table: {}", status.ToString()); -} - -void ParquetBlockOutputFormat::resetFormatterImpl() -{ - file_writer.reset(); -} - -void ParquetBlockOutputFormat::appendToAccumulatedChunk(Chunk chunk) -{ - if (!accumulated_chunk) - { - accumulated_chunk = std::move(chunk); - return; - } - chassert(accumulated_chunk.getNumColumns() == chunk.getNumColumns()); - accumulated_chunk.append(chunk); -} - -void ParquetBlockOutputFormat::write(Chunk chunk, size_t row_group_size) -{ - const size_t columns_num = chunk.getNumColumns(); + const size_t columns_num = staging_chunks.at(0).getNumColumns(); std::shared_ptr arrow_table; if (!ch_column_to_arrow_column) @@ -146,7 +85,7 @@ void ParquetBlockOutputFormat::write(Chunk chunk, size_t row_group_size) format_settings.parquet.output_fixed_string_as_fixed_byte_array); } - ch_column_to_arrow_column->chChunkToArrowTable(arrow_table, chunk, columns_num); + ch_column_to_arrow_column->chChunkToArrowTable(arrow_table, staging_chunks, columns_num); if (!file_writer) { @@ -173,12 +112,66 @@ void ParquetBlockOutputFormat::write(Chunk chunk, size_t row_group_size) file_writer = std::move(result.ValueOrDie()); } - auto status = file_writer->WriteTable(*arrow_table, row_group_size); + // TODO: calculate row_group_size depending on a number of rows and table size + + // allow slightly bigger than row_group_size to avoid a very small tail row group + auto status = file_writer->WriteTable(*arrow_table, std::max(format_settings.parquet.row_group_rows, staging_rows)); if (!status.ok()) throw Exception(ErrorCodes::UNKNOWN_EXCEPTION, "Error while writing a table: {}", status.ToString()); } +void ParquetBlockOutputFormat::consume(Chunk chunk) +{ + /// Do something like SquashingTransform to produce big enough row groups. + /// Because the real SquashingTransform is only used for INSERT, not for SELECT ... INTO OUTFILE. + /// The latter doesn't even have a pipeline where a transform could be inserted, so it's more + /// convenient to do the squashing here. 
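+ /// Chunks are accumulated in staging_chunks until either parquet.row_group_rows or
+ /// parquet.row_group_bytes is reached; consumeStaged() then writes them out as a single row group.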
+ staging_rows += chunk.getNumRows(); + staging_bytes += chunk.bytes(); + staging_chunks.push_back(std::move(chunk)); + chassert(staging_chunks.back().getNumColumns() == staging_chunks.front().getNumColumns()); + if (staging_rows < format_settings.parquet.row_group_rows && + staging_bytes < format_settings.parquet.row_group_bytes) + { + return; + } + else + { + consumeStaged(); + staging_chunks.clear(); + staging_rows = 0; + staging_bytes = 0; + } +} + +void ParquetBlockOutputFormat::finalizeImpl() +{ + if (!file_writer && staging_chunks.empty()) + { + Block header = materializeBlock(getPort(PortKind::Main).getHeader()); + + consume(Chunk(header.getColumns(), 0)); // this will make staging_chunks non-empty + } + + if (!staging_chunks.empty()) + { + consumeStaged(); + staging_chunks.clear(); + staging_rows = 0; + staging_bytes = 0; + } + + auto status = file_writer->Close(); + if (!status.ok()) + throw Exception(ErrorCodes::UNKNOWN_EXCEPTION, "Error while closing a table: {}", status.ToString()); +} + +void ParquetBlockOutputFormat::resetFormatterImpl() +{ + file_writer.reset(); +} + void registerOutputFormatParquet(FormatFactory & factory) { factory.registerOutputFormat( diff --git a/src/Processors/Formats/Impl/ParquetBlockOutputFormat.h b/src/Processors/Formats/Impl/ParquetBlockOutputFormat.h index 0fb7013e6d6..482c778bc52 100644 --- a/src/Processors/Formats/Impl/ParquetBlockOutputFormat.h +++ b/src/Processors/Formats/Impl/ParquetBlockOutputFormat.h @@ -34,18 +34,19 @@ public: String getContentType() const override { return "application/octet-stream"; } private: + void consumeStaged(); void consume(Chunk) override; - void appendToAccumulatedChunk(Chunk chunk); - void write(Chunk chunk, size_t row_group_size); void finalizeImpl() override; void resetFormatterImpl() override; + std::vector staging_chunks; + size_t staging_rows = 0; + size_t staging_bytes = 0; + const FormatSettings format_settings; std::unique_ptr file_writer; std::unique_ptr ch_column_to_arrow_column; - - Chunk accumulated_chunk; }; } diff --git a/src/Processors/QueryPlan/Optimizations/optimizeTree.cpp b/src/Processors/QueryPlan/Optimizations/optimizeTree.cpp index dd17c1b5a30..73632d34671 100644 --- a/src/Processors/QueryPlan/Optimizations/optimizeTree.cpp +++ b/src/Processors/QueryPlan/Optimizations/optimizeTree.cpp @@ -114,32 +114,35 @@ void optimizeTreeSecondPass(const QueryPlanOptimizationSettings & optimization_s while (!stack.empty()) { - auto & frame = stack.back(); - - if (frame.next_child == 0) { - has_reading_from_mt |= typeid_cast(frame.node->step.get()) != nullptr; + /// NOTE: frame cannot be safely used after stack was modified. + auto & frame = stack.back(); - if (optimization_settings.read_in_order) - optimizeReadInOrder(*frame.node, nodes); + if (frame.next_child == 0) + { + has_reading_from_mt |= typeid_cast(frame.node->step.get()) != nullptr; - if (optimization_settings.optimize_projection) - num_applied_projection += optimizeUseAggregateProjections(*frame.node, nodes); + if (optimization_settings.read_in_order) + optimizeReadInOrder(*frame.node, nodes); - if (optimization_settings.aggregation_in_order) - optimizeAggregationInOrder(*frame.node, nodes); + if (optimization_settings.optimize_projection) + num_applied_projection += optimizeUseAggregateProjections(*frame.node, nodes); - if (optimization_settings.distinct_in_order) - tryDistinctReadInOrder(frame.node); - } + if (optimization_settings.aggregation_in_order) + optimizeAggregationInOrder(*frame.node, nodes); - /// Traverse all children first. 
- if (frame.next_child < frame.node->children.size()) - { - auto next_frame = Frame{.node = frame.node->children[frame.next_child]}; - ++frame.next_child; - stack.push_back(next_frame); - continue; + if (optimization_settings.distinct_in_order) + tryDistinctReadInOrder(frame.node); + } + + /// Traverse all children first. + if (frame.next_child < frame.node->children.size()) + { + auto next_frame = Frame{.node = frame.node->children[frame.next_child]}; + ++frame.next_child; + stack.push_back(next_frame); + continue; + } } if (optimization_settings.optimize_projection) @@ -160,9 +163,10 @@ void optimizeTreeSecondPass(const QueryPlanOptimizationSettings & optimization_s } } + /// NOTE: optimizePrewhere can modify the stack. optimizePrewhere(stack, nodes); optimizePrimaryKeyCondition(stack); - enableMemoryBoundMerging(*frame.node, nodes); + enableMemoryBoundMerging(*stack.back().node, nodes); stack.pop_back(); } diff --git a/src/Processors/Sources/MongoDBSource.cpp b/src/Processors/Sources/MongoDBSource.cpp index 279a842143f..cd4db416a29 100644 --- a/src/Processors/Sources/MongoDBSource.cpp +++ b/src/Processors/Sources/MongoDBSource.cpp @@ -3,10 +3,12 @@ #include #include +#include +#include #include #include +#include #include -#include #include #include @@ -365,27 +367,79 @@ namespace } -std::unique_ptr createCursor(const std::string & database, const std::string & collection, const Block & sample_block_to_select) +bool isMongoDBWireProtocolOld(Poco::MongoDB::Connection & connection_) { - auto cursor = std::make_unique(database, collection); + Poco::MongoDB::Database db("config"); + Poco::MongoDB::Document::Ptr doc = db.queryServerHello(connection_); + auto wire_version = doc->getInteger("maxWireVersion"); + return wire_version < Poco::MongoDB::Database::WireVersion::VER_36; +} + + +MongoDBCursor::MongoDBCursor( + const std::string & database, + const std::string & collection, + const Block & sample_block_to_select, + const Poco::MongoDB::Document & query, + Poco::MongoDB::Connection & connection) + : is_wire_protocol_old(isMongoDBWireProtocolOld(connection)) +{ + Poco::MongoDB::Document projection; /// Looks like selecting _id column is implicit by default. 
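+ /// In a MongoDB projection document a value of 0 excludes a field and 1 includes it,
+ /// so "_id" is excluded here unless it was explicitly selected.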
if (!sample_block_to_select.has("_id")) - cursor->query().returnFieldSelector().add("_id", 0); + projection.add("_id", 0); for (const auto & column : sample_block_to_select) - cursor->query().returnFieldSelector().add(column.name, 1); - return cursor; + projection.add(column.name, 1); + + if (is_wire_protocol_old) + { + old_cursor = std::make_unique(database, collection); + old_cursor->query().selector() = query; + old_cursor->query().returnFieldSelector() = projection; + } + else + { + new_cursor = std::make_unique(database, collection); + new_cursor->query().setCommandName(Poco::MongoDB::OpMsgMessage::CMD_FIND); + new_cursor->query().body().addNewDocument("filter") = query; + new_cursor->query().body().addNewDocument("projection") = projection; + } } +Poco::MongoDB::Document::Vector MongoDBCursor::nextDocuments(Poco::MongoDB::Connection & connection) +{ + if (is_wire_protocol_old) + { + auto response = old_cursor->next(connection); + cursor_id = response.cursorID(); + return std::move(response.documents()); + } + else + { + auto response = new_cursor->next(connection); + cursor_id = new_cursor->cursorID(); + return std::move(response.documents()); + } +} + +Int64 MongoDBCursor::cursorID() const +{ + return cursor_id; +} + + MongoDBSource::MongoDBSource( std::shared_ptr & connection_, - std::unique_ptr cursor_, + const String & database_name_, + const String & collection_name_, + const Poco::MongoDB::Document & query_, const Block & sample_block, UInt64 max_block_size_) : ISource(sample_block.cloneEmpty()) , connection(connection_) - , cursor{std::move(cursor_)} + , cursor(database_name_, collection_name_, sample_block, query_, *connection_) , max_block_size{max_block_size_} { description.init(sample_block); @@ -412,9 +466,9 @@ Chunk MongoDBSource::generate() size_t num_rows = 0; while (num_rows < max_block_size) { - Poco::MongoDB::ResponseMessage & response = cursor->next(*connection); + auto documents = cursor.nextDocuments(*connection); - for (auto & document : response.documents()) + for (auto & document : documents) { if (document->exists("ok") && document->exists("$err") && document->exists("code") && document->getInteger("ok") == 0) @@ -458,7 +512,7 @@ Chunk MongoDBSource::generate() } } - if (response.cursorID() == 0) + if (cursor.cursorID() == 0) { all_read = true; break; diff --git a/src/Processors/Sources/MongoDBSource.h b/src/Processors/Sources/MongoDBSource.h index d4681d2c05f..0e95d42c028 100644 --- a/src/Processors/Sources/MongoDBSource.h +++ b/src/Processors/Sources/MongoDBSource.h @@ -1,6 +1,7 @@ #pragma once #include +#include #include #include @@ -14,7 +15,9 @@ namespace Poco namespace MongoDB { class Connection; + class Document; class Cursor; + class OpMsgCursor; } } @@ -30,7 +33,28 @@ struct MongoDBArrayInfo void authenticate(Poco::MongoDB::Connection & connection, const std::string & database, const std::string & user, const std::string & password); -std::unique_ptr createCursor(const std::string & database, const std::string & collection, const Block & sample_block_to_select); +bool isMongoDBWireProtocolOld(Poco::MongoDB::Connection & connection_); + +class MongoDBCursor +{ +public: + MongoDBCursor( + const std::string & database, + const std::string & collection, + const Block & sample_block_to_select, + const Poco::MongoDB::Document & query, + Poco::MongoDB::Connection & connection); + + Poco::MongoDB::Document::Vector nextDocuments(Poco::MongoDB::Connection & connection); + + Int64 cursorID() const; + +private: + const bool is_wire_protocol_old; + 
std::unique_ptr old_cursor; + std::unique_ptr new_cursor; + Int64 cursor_id = 0; +}; /// Converts MongoDB Cursor to a stream of Blocks class MongoDBSource final : public ISource @@ -38,7 +62,9 @@ class MongoDBSource final : public ISource public: MongoDBSource( std::shared_ptr & connection_, - std::unique_ptr cursor_, + const String & database_name_, + const String & collection_name_, + const Poco::MongoDB::Document & query_, const Block & sample_block, UInt64 max_block_size_); @@ -50,7 +76,7 @@ private: Chunk generate() override; std::shared_ptr connection; - std::unique_ptr cursor; + MongoDBCursor cursor; const UInt64 max_block_size; ExternalResultDescription description; bool all_read = false; diff --git a/src/Processors/Transforms/JoiningTransform.cpp b/src/Processors/Transforms/JoiningTransform.cpp index 256ef66a27d..bba8ec6fa16 100644 --- a/src/Processors/Transforms/JoiningTransform.cpp +++ b/src/Processors/Transforms/JoiningTransform.cpp @@ -189,6 +189,7 @@ void JoiningTransform::transform(Chunk & chunk) } else block = readExecute(chunk); + auto num_rows = block.rows(); chunk.setColumns(block.getColumns(), num_rows); } @@ -310,16 +311,8 @@ void FillingRightJoinSideTransform::work() } -DelayedJoinedBlocksWorkerTransform::DelayedJoinedBlocksWorkerTransform( - Block left_header_, - Block output_header_, - size_t max_block_size_, - JoinPtr join_) - : IProcessor(InputPorts{Block()}, OutputPorts{output_header_}) - , left_header(left_header_) - , output_header(output_header_) - , max_block_size(max_block_size_) - , join(join_) +DelayedJoinedBlocksWorkerTransform::DelayedJoinedBlocksWorkerTransform(Block output_header) + : IProcessor(InputPorts{Block()}, OutputPorts{output_header}) { } @@ -373,7 +366,6 @@ IProcessor::Status DelayedJoinedBlocksWorkerTransform::prepare() if (!data.chunk.hasChunkInfo()) throw Exception(ErrorCodes::LOGICAL_ERROR, "DelayedJoinedBlocksWorkerTransform must have chunk info"); task = std::dynamic_pointer_cast(data.chunk.getChunkInfo()); - } else { @@ -394,24 +386,12 @@ void DelayedJoinedBlocksWorkerTransform::work() { if (!task) return; - Block block; - if (!left_delayed_stream_finished) - { - block = task->delayed_blocks->next(); - if (!block) - { - left_delayed_stream_finished = true; - block = nextNonJoinedBlock(); - } - } - else - { - block = nextNonJoinedBlock(); - } + Block block = task->delayed_blocks->next(); + if (!block) { - resetTask(); + task.reset(); return; } @@ -420,38 +400,6 @@ void DelayedJoinedBlocksWorkerTransform::work() output_chunk.setColumns(block.getColumns(), rows); } -void DelayedJoinedBlocksWorkerTransform::resetTask() -{ - task.reset(); - left_delayed_stream_finished = false; - setup_non_joined_stream = false; - non_joined_delayed_stream = nullptr; -} - -Block DelayedJoinedBlocksWorkerTransform::nextNonJoinedBlock() -{ - if (!setup_non_joined_stream) - { - setup_non_joined_stream = true; - // Before read from non-joined stream, all blocks in left file reader must have been joined. - // For example, in HashJoin, it may return invalid mismatch rows from non-joined stream before - // the all blocks in left file reader have been finished, since the used flags are incomplete. - // To make only one processor could read from non-joined stream seems be a easy way. 
- if (task && task->left_delayed_stream_finish_counter->isLast()) - { - if (!non_joined_delayed_stream) - { - non_joined_delayed_stream = join->getNonJoinedBlocks(left_header, output_header, max_block_size); - } - } - } - if (non_joined_delayed_stream) - { - return non_joined_delayed_stream->next(); - } - return {}; -} - DelayedJoinedBlocksTransform::DelayedJoinedBlocksTransform(size_t num_streams, JoinPtr join_) : IProcessor(InputPorts{}, OutputPorts(num_streams, Block())) , join(std::move(join_)) @@ -485,9 +433,6 @@ IProcessor::Status DelayedJoinedBlocksTransform::prepare() if (finished) { - // Since have memory limit, cannot handle all buckets parallelly by different - // DelayedJoinedBlocksWorkerTransform. So send the same task to all outputs. - // Wait for all DelayedJoinedBlocksWorkerTransform be idle before getting next bucket. for (auto & output : outputs) { if (output.isFinished()) @@ -503,14 +448,10 @@ IProcessor::Status DelayedJoinedBlocksTransform::prepare() if (delayed_blocks) { - // This counter is used to ensure that only the last DelayedJoinedBlocksWorkerTransform - // could read right non-joined blocks from the join. - auto left_delayed_stream_finished_counter = std::make_shared(outputs.size()); for (auto & output : outputs) { Chunk chunk; - auto task = std::make_shared(delayed_blocks, left_delayed_stream_finished_counter); - chunk.setChunkInfo(task); + chunk.setChunkInfo(std::make_shared(delayed_blocks)); output.push(std::move(chunk)); } delayed_blocks = nullptr; diff --git a/src/Processors/Transforms/JoiningTransform.h b/src/Processors/Transforms/JoiningTransform.h index 3577906b26a..e7edff40c56 100644 --- a/src/Processors/Transforms/JoiningTransform.h +++ b/src/Processors/Transforms/JoiningTransform.h @@ -116,14 +116,9 @@ class DelayedBlocksTask : public ChunkInfo public: explicit DelayedBlocksTask() : finished(true) {} - explicit DelayedBlocksTask(IBlocksStreamPtr delayed_blocks_, JoiningTransform::FinishCounterPtr left_delayed_stream_finish_counter_) - : delayed_blocks(std::move(delayed_blocks_)) - , left_delayed_stream_finish_counter(left_delayed_stream_finish_counter_) - { - } + explicit DelayedBlocksTask(IBlocksStreamPtr delayed_blocks_) : delayed_blocks(std::move(delayed_blocks_)) {} IBlocksStreamPtr delayed_blocks = nullptr; - JoiningTransform::FinishCounterPtr left_delayed_stream_finish_counter = nullptr; bool finished = false; }; @@ -152,11 +147,7 @@ private: class DelayedJoinedBlocksWorkerTransform : public IProcessor { public: - explicit DelayedJoinedBlocksWorkerTransform( - Block left_header_, - Block output_header_, - size_t max_block_size_, - JoinPtr join_); + explicit DelayedJoinedBlocksWorkerTransform(Block output_header); String getName() const override { return "DelayedJoinedBlocksWorkerTransform"; } @@ -164,20 +155,10 @@ public: void work() override; private: - Block left_header; - Block output_header; - size_t max_block_size; - JoinPtr join; DelayedBlocksTaskPtr task; Chunk output_chunk; - /// All joined and non-joined rows from left stream are emitted, only right non-joined rows are left - bool left_delayed_stream_finished = false; - bool setup_non_joined_stream = false; - IBlocksStreamPtr non_joined_delayed_stream = nullptr; - - void resetTask(); - Block nextNonJoinedBlock(); + bool finished = false; }; } diff --git a/src/Processors/Transforms/ReverseTransform.cpp b/src/Processors/Transforms/ReverseTransform.cpp index 98f2bf54aa5..66b774ab50d 100644 --- a/src/Processors/Transforms/ReverseTransform.cpp +++ 
b/src/Processors/Transforms/ReverseTransform.cpp @@ -6,11 +6,11 @@ namespace DB void ReverseTransform::transform(Chunk & chunk) { - IColumn::Permutation permutation; - size_t num_rows = chunk.getNumRows(); + IColumn::Permutation permutation(num_rows); + for (size_t i = 0; i < num_rows; ++i) - permutation.emplace_back(num_rows - 1 - i); + permutation[i] = num_rows - 1 - i; auto columns = chunk.detachColumns(); diff --git a/src/QueryPipeline/QueryPipelineBuilder.cpp b/src/QueryPipeline/QueryPipelineBuilder.cpp index 764997e7b7e..a4edf107b2f 100644 --- a/src/QueryPipeline/QueryPipelineBuilder.cpp +++ b/src/QueryPipeline/QueryPipelineBuilder.cpp @@ -491,7 +491,7 @@ std::unique_ptr QueryPipelineBuilder::joinPipelinesRightLe if (delayed_root) { // Process delayed joined blocks when all JoiningTransform are finished. - auto delayed = std::make_shared(left_header, joined_header, max_block_size, join); + auto delayed = std::make_shared(joined_header); if (delayed->getInputs().size() != 1 || delayed->getOutputs().size() != 1) throw Exception(ErrorCodes::LOGICAL_ERROR, "DelayedJoinedBlocksWorkerTransform should have one input and one output"); diff --git a/src/Storages/MergeTree/IExecutableTask.h b/src/Storages/MergeTree/IExecutableTask.h index 9617960c182..d0c2d4a840e 100644 --- a/src/Storages/MergeTree/IExecutableTask.h +++ b/src/Storages/MergeTree/IExecutableTask.h @@ -5,6 +5,7 @@ #include #include +#include namespace DB { @@ -32,7 +33,7 @@ public: virtual bool executeStep() = 0; virtual void onCompleted() = 0; virtual StorageID getStorageID() = 0; - virtual UInt64 getPriority() = 0; + virtual Priority getPriority() = 0; virtual ~IExecutableTask() = default; }; @@ -63,7 +64,7 @@ public: void onCompleted() override { job_result_callback(!res); } StorageID getStorageID() override { return id; } - UInt64 getPriority() override + Priority getPriority() override { throw Exception(ErrorCodes::LOGICAL_ERROR, "getPriority() method is not supported by LambdaAdapter"); } diff --git a/src/Storages/MergeTree/MergeFromLogEntryTask.cpp b/src/Storages/MergeTree/MergeFromLogEntryTask.cpp index a5b4a29cb18..5cee402f88c 100644 --- a/src/Storages/MergeTree/MergeFromLogEntryTask.cpp +++ b/src/Storages/MergeTree/MergeFromLogEntryTask.cpp @@ -291,7 +291,7 @@ ReplicatedMergeMutateTaskBase::PrepareResult MergeFromLogEntryTask::prepare() /// Adjust priority for (auto & item : future_merged_part->parts) - priority += item->getBytesOnDisk(); + priority.value += item->getBytesOnDisk(); return {true, true, [this, stopwatch = *stopwatch_ptr] (const ExecutionStatus & execution_status) { diff --git a/src/Storages/MergeTree/MergeFromLogEntryTask.h b/src/Storages/MergeTree/MergeFromLogEntryTask.h index 9d870cd9a4b..2c559c06d7e 100644 --- a/src/Storages/MergeTree/MergeFromLogEntryTask.h +++ b/src/Storages/MergeTree/MergeFromLogEntryTask.h @@ -22,7 +22,7 @@ public: StorageReplicatedMergeTree & storage_, IExecutableTask::TaskResultCallback & task_result_callback_); - UInt64 getPriority() override { return priority; } + Priority getPriority() override { return priority; } protected: /// Both return false if we can't execute merge. 
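// Several task classes in this patch move from a raw UInt64 priority to the Priority type from
// Common/Priority.h. The snippet below is only a hypothetical, simplified stand-in for that header
// (the real definition may differ); it illustrates the idea of a distinct type that cannot be mixed
// up with unrelated integers while still supporting `priority.value += part->getBytesOnDisk()`.
#include <cstdint>

struct Priority
{
    int64_t value = 0;   // assumed convention: a lower value means the task is scheduled sooner
    bool operator<(const Priority & rhs) const { return value < rhs.value; }
};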
@@ -44,7 +44,7 @@ private: StopwatchUniquePtr stopwatch_ptr{nullptr}; MergeTreeData::MutableDataPartPtr part; - UInt64 priority{0}; + Priority priority; MergeTaskPtr merge_task; }; diff --git a/src/Storages/MergeTree/MergePlainMergeTreeTask.h b/src/Storages/MergeTree/MergePlainMergeTreeTask.h index 369b4390da7..95df8c90c9b 100644 --- a/src/Storages/MergeTree/MergePlainMergeTreeTask.h +++ b/src/Storages/MergeTree/MergePlainMergeTreeTask.h @@ -34,13 +34,13 @@ public: , task_result_callback(task_result_callback_) { for (auto & item : merge_mutate_entry->future_part->parts) - priority += item->getBytesOnDisk(); + priority.value += item->getBytesOnDisk(); } bool executeStep() override; void onCompleted() override; StorageID getStorageID() override; - UInt64 getPriority() override { return priority; } + Priority getPriority() override { return priority; } void setCurrentTransaction(MergeTreeTransactionHolder && txn_holder_, MergeTreeTransactionPtr && txn_) { @@ -77,7 +77,7 @@ private: using MergeListEntryPtr = std::unique_ptr; MergeListEntryPtr merge_list_entry; - UInt64 priority{0}; + Priority priority; std::function write_part_log; std::function transfer_profile_counters_to_initial_query; diff --git a/src/Storages/MergeTree/MergeTreeBackgroundExecutor.h b/src/Storages/MergeTree/MergeTreeBackgroundExecutor.h index 8142e383d0c..552ed32e2d2 100644 --- a/src/Storages/MergeTree/MergeTreeBackgroundExecutor.h +++ b/src/Storages/MergeTree/MergeTreeBackgroundExecutor.h @@ -63,7 +63,7 @@ struct TaskRuntimeData /// This scenario in not possible in reality. Poco::Event is_done{/*autoreset=*/false}; /// This is equal to task->getPriority() not to do useless virtual calls in comparator - UInt64 priority{0}; + Priority priority; /// By default priority queue will have max element at top static bool comparePtrByPriority(const TaskRuntimeDataPtr & lhs, const TaskRuntimeDataPtr & rhs) diff --git a/src/Storages/MergeTree/MergeTreeData.cpp b/src/Storages/MergeTree/MergeTreeData.cpp index 2f6870f8b41..9cca471fddb 100644 --- a/src/Storages/MergeTree/MergeTreeData.cpp +++ b/src/Storages/MergeTree/MergeTreeData.cpp @@ -4789,7 +4789,7 @@ void MergeTreeData::checkAlterPartitionIsPossible( if (partition_ast && partition_ast->all) { if (command.type != PartitionCommand::DROP_PARTITION) - throw DB::Exception(ErrorCodes::SUPPORT_IS_DISABLED, "Only support DETACH PARTITION ALL currently"); + throw DB::Exception(ErrorCodes::SUPPORT_IS_DISABLED, "Only support DROP/DETACH PARTITION ALL currently"); } else getPartitionIDFromQuery(command.partition, getContext()); diff --git a/src/Storages/MergeTree/MutateFromLogEntryTask.cpp b/src/Storages/MergeTree/MutateFromLogEntryTask.cpp index d0b85ee65b8..42dccef7e6f 100644 --- a/src/Storages/MergeTree/MutateFromLogEntryTask.cpp +++ b/src/Storages/MergeTree/MutateFromLogEntryTask.cpp @@ -184,7 +184,7 @@ ReplicatedMergeMutateTaskBase::PrepareResult MutateFromLogEntryTask::prepare() /// Adjust priority for (auto & item : future_mutated_part->parts) - priority += item->getBytesOnDisk(); + priority.value += item->getBytesOnDisk(); return {true, true, [this] (const ExecutionStatus & execution_status) { diff --git a/src/Storages/MergeTree/MutateFromLogEntryTask.h b/src/Storages/MergeTree/MutateFromLogEntryTask.h index 2a2cc308f85..42b4debcbf2 100644 --- a/src/Storages/MergeTree/MutateFromLogEntryTask.h +++ b/src/Storages/MergeTree/MutateFromLogEntryTask.h @@ -27,7 +27,7 @@ public: {} - UInt64 getPriority() override { return priority; } + Priority getPriority() override { return priority; } 
private: @@ -40,7 +40,7 @@ private: return mutate_task->execute(); } - UInt64 priority{0}; + Priority priority; TableLockHolder table_lock_holder{nullptr}; ReservationSharedPtr reserved_space{nullptr}; diff --git a/src/Storages/MergeTree/MutatePlainMergeTreeTask.h b/src/Storages/MergeTree/MutatePlainMergeTreeTask.h index 823ea6d7a0f..bd03c276256 100644 --- a/src/Storages/MergeTree/MutatePlainMergeTreeTask.h +++ b/src/Storages/MergeTree/MutatePlainMergeTreeTask.h @@ -36,13 +36,13 @@ public: , task_result_callback(task_result_callback_) { for (auto & part : merge_mutate_entry->future_part->parts) - priority += part->getBytesOnDisk(); + priority.value += part->getBytesOnDisk(); } bool executeStep() override; void onCompleted() override; StorageID getStorageID() override; - UInt64 getPriority() override { return priority; } + Priority getPriority() override { return priority; } private: @@ -66,7 +66,7 @@ private: std::unique_ptr stopwatch; MergeTreeData::MutableDataPartPtr new_part; - UInt64 priority{0}; + Priority priority; using MergeListEntryPtr = std::unique_ptr; MergeListEntryPtr merge_list_entry; diff --git a/src/Storages/MergeTree/MutateTask.cpp b/src/Storages/MergeTree/MutateTask.cpp index 76096d00641..31b3c249177 100644 --- a/src/Storages/MergeTree/MutateTask.cpp +++ b/src/Storages/MergeTree/MutateTask.cpp @@ -1,5 +1,6 @@ #include +#include "Common/Priority.h" #include #include #include @@ -961,7 +962,7 @@ public: void onCompleted() override { throw Exception(ErrorCodes::LOGICAL_ERROR, "Not implemented"); } StorageID getStorageID() override { throw Exception(ErrorCodes::LOGICAL_ERROR, "Not implemented"); } - UInt64 getPriority() override { throw Exception(ErrorCodes::LOGICAL_ERROR, "Not implemented"); } + Priority getPriority() override { throw Exception(ErrorCodes::LOGICAL_ERROR, "Not implemented"); } bool executeStep() override { @@ -1283,7 +1284,7 @@ public: void onCompleted() override { throw Exception(ErrorCodes::LOGICAL_ERROR, "Not implemented"); } StorageID getStorageID() override { throw Exception(ErrorCodes::LOGICAL_ERROR, "Not implemented"); } - UInt64 getPriority() override { throw Exception(ErrorCodes::LOGICAL_ERROR, "Not implemented"); } + Priority getPriority() override { throw Exception(ErrorCodes::LOGICAL_ERROR, "Not implemented"); } bool executeStep() override { @@ -1412,7 +1413,7 @@ public: void onCompleted() override { throw Exception(ErrorCodes::LOGICAL_ERROR, "Not implemented"); } StorageID getStorageID() override { throw Exception(ErrorCodes::LOGICAL_ERROR, "Not implemented"); } - UInt64 getPriority() override { throw Exception(ErrorCodes::LOGICAL_ERROR, "Not implemented"); } + Priority getPriority() override { throw Exception(ErrorCodes::LOGICAL_ERROR, "Not implemented"); } bool executeStep() override { diff --git a/src/Storages/MergeTree/tests/gtest_executor.cpp b/src/Storages/MergeTree/tests/gtest_executor.cpp index 3a4f147b456..5815b74284a 100644 --- a/src/Storages/MergeTree/tests/gtest_executor.cpp +++ b/src/Storages/MergeTree/tests/gtest_executor.cpp @@ -51,7 +51,7 @@ public: throw std::runtime_error("Unlucky..."); } - UInt64 getPriority() override { return 0; } + Priority getPriority() override { return {}; } private: std::mt19937 generator; @@ -65,11 +65,11 @@ using StepFunc = std::function; class LambdaExecutableTask : public IExecutableTask { public: - explicit LambdaExecutableTask(const String & name_, size_t step_count_, StepFunc step_func_ = {}, UInt64 priority_ = 0) + explicit LambdaExecutableTask(const String & name_, size_t step_count_, StepFunc 
step_func_ = {}, Int64 priority_value = 0) : name(name_) , step_count(step_count_) , step_func(step_func_) - , priority(priority_) + , priority{priority_value} {} bool executeStep() override @@ -86,13 +86,13 @@ public: void onCompleted() override {} - UInt64 getPriority() override { return priority; } + Priority getPriority() override { return priority; } private: String name; size_t step_count; StepFunc step_func; - UInt64 priority; + Priority priority; }; diff --git a/src/Storages/StorageAzureBlob.cpp b/src/Storages/StorageAzureBlob.cpp new file mode 100644 index 00000000000..3ee176a68b7 --- /dev/null +++ b/src/Storages/StorageAzureBlob.cpp @@ -0,0 +1,1305 @@ +#include + + +#if USE_AZURE_BLOB_STORAGE +#include +#include +#include +#include +#include + +#include +#include + +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include + +#include +#include + + +using namespace Azure::Storage::Blobs; + +namespace CurrentMetrics +{ + extern const Metric ObjectStorageAzureThreads; + extern const Metric ObjectStorageAzureThreadsActive; +} + +namespace DB +{ + +namespace ErrorCodes +{ + extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH; + extern const int BAD_ARGUMENTS; + extern const int DATABASE_ACCESS_DENIED; + extern const int CANNOT_COMPILE_REGEXP; + extern const int CANNOT_EXTRACT_TABLE_STRUCTURE; + extern const int LOGICAL_ERROR; + extern const int NOT_IMPLEMENTED; + +} + +namespace +{ + +const std::unordered_set required_configuration_keys = { + "blob_path", + "container", +}; + +const std::unordered_set optional_configuration_keys = { + "format", + "compression", + "structure", + "compression_method", + "account_name", + "account_key", + "connection_string", + "storage_account_url", +}; + +bool isConnectionString(const std::string & candidate) +{ + return candidate.starts_with("DefaultEndpointsProtocol"); +} + +} + +void StorageAzureBlob::processNamedCollectionResult(StorageAzureBlob::Configuration & configuration, const NamedCollection & collection) +{ + validateNamedCollection(collection, required_configuration_keys, optional_configuration_keys); + + if (collection.has("connection_string")) + { + configuration.connection_url = collection.get("connection_string"); + configuration.is_connection_string = true; + } + + if (collection.has("storage_account_url")) + { + configuration.connection_url = collection.get("storage_account_url"); + configuration.is_connection_string = false; + } + + configuration.container = collection.get("container"); + configuration.blob_path = collection.get("blob_path"); + + if (collection.has("account_name")) + configuration.account_name = collection.get("account_name"); + + if (collection.has("account_key")) + configuration.account_key = collection.get("account_key"); + + configuration.structure = collection.getOrDefault("structure", "auto"); + configuration.format = collection.getOrDefault("format", configuration.format); + configuration.compression_method = collection.getOrDefault("compression_method", collection.getOrDefault("compression", "auto")); +} + + +StorageAzureBlob::Configuration StorageAzureBlob::getConfiguration(ASTs & engine_args, ContextPtr local_context) +{ + StorageAzureBlob::Configuration configuration; + + /// Supported signatures: + /// + /// AzureBlobStorage(connection_string|storage_account_url, container_name, blobpath, [account_name, account_key, format, 
compression]) + /// + + if (auto named_collection = tryGetNamedCollectionWithOverrides(engine_args, local_context)) + { + processNamedCollectionResult(configuration, *named_collection); + + configuration.blobs_paths = {configuration.blob_path}; + + if (configuration.format == "auto") + configuration.format = FormatFactory::instance().getFormatFromFileName(configuration.blob_path, true); + + return configuration; + } + + if (engine_args.size() < 3 || engine_args.size() > 7) + throw Exception(ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH, + "Storage AzureBlobStorage requires 3 to 7 arguments: " + "AzureBlobStorage(connection_string|storage_account_url, container_name, blobpath, [account_name, account_key, format, compression])"); + + for (auto & engine_arg : engine_args) + engine_arg = evaluateConstantExpressionOrIdentifierAsLiteral(engine_arg, local_context); + + std::unordered_map engine_args_to_idx; + + configuration.connection_url = checkAndGetLiteralArgument(engine_args[0], "connection_string/storage_account_url"); + configuration.is_connection_string = isConnectionString(configuration.connection_url); + + configuration.container = checkAndGetLiteralArgument(engine_args[1], "container"); + configuration.blob_path = checkAndGetLiteralArgument(engine_args[2], "blobpath"); + + auto is_format_arg = [] (const std::string & s) -> bool + { + return s == "auto" || FormatFactory::instance().getAllFormats().contains(s); + }; + + if (engine_args.size() == 4) + { + //'c1 UInt64, c2 UInt64 + auto fourth_arg = checkAndGetLiteralArgument(engine_args[3], "format/account_name"); + if (is_format_arg(fourth_arg)) + { + configuration.format = fourth_arg; + } + else + { + throw Exception(ErrorCodes::BAD_ARGUMENTS, "Unknown format or account name specified without account key"); + } + } + else if (engine_args.size() == 5) + { + auto fourth_arg = checkAndGetLiteralArgument(engine_args[3], "format/account_name"); + if (is_format_arg(fourth_arg)) + { + configuration.format = fourth_arg; + configuration.compression_method = checkAndGetLiteralArgument(engine_args[4], "compression"); + } + else + { + configuration.account_name = fourth_arg; + configuration.account_key = checkAndGetLiteralArgument(engine_args[4], "account_key"); + } + } + else if (engine_args.size() == 6) + { + auto fourth_arg = checkAndGetLiteralArgument(engine_args[3], "format/account_name"); + if (fourth_arg == "auto" || FormatFactory::instance().getAllFormats().contains(fourth_arg)) + { + throw Exception(ErrorCodes::BAD_ARGUMENTS, "Format and compression must be last arguments"); + } + else + { + configuration.account_name = fourth_arg; + + configuration.account_key = checkAndGetLiteralArgument(engine_args[4], "account_key"); + auto sixth_arg = checkAndGetLiteralArgument(engine_args[5], "format/account_name"); + if (!is_format_arg(sixth_arg)) + throw Exception(ErrorCodes::BAD_ARGUMENTS, "Unknown format {}", sixth_arg); + configuration.format = sixth_arg; + } + } + else if (engine_args.size() == 7) + { + auto fourth_arg = checkAndGetLiteralArgument(engine_args[3], "format/account_name"); + if (fourth_arg == "auto" || FormatFactory::instance().getAllFormats().contains(fourth_arg)) + { + throw Exception(ErrorCodes::BAD_ARGUMENTS, "Format and compression must be last arguments"); + } + else + { + configuration.account_name = fourth_arg; + configuration.account_key = checkAndGetLiteralArgument(engine_args[4], "account_key"); + auto sixth_arg = checkAndGetLiteralArgument(engine_args[5], "format/account_name"); + if (!is_format_arg(sixth_arg)) + throw 
Exception(ErrorCodes::BAD_ARGUMENTS, "Unknown format {}", sixth_arg); + configuration.format = sixth_arg; + configuration.compression_method = checkAndGetLiteralArgument(engine_args[6], "compression"); + } + } + + configuration.blobs_paths = {configuration.blob_path}; + + if (configuration.format == "auto") + configuration.format = FormatFactory::instance().getFormatFromFileName(configuration.blob_path, true); + + return configuration; +} + + +AzureObjectStorage::SettingsPtr StorageAzureBlob::createSettings(ContextPtr local_context) +{ + const auto & context_settings = local_context->getSettingsRef(); + auto settings_ptr = std::make_unique(); + settings_ptr->max_single_part_upload_size = context_settings.azure_max_single_part_upload_size; + settings_ptr->max_single_read_retries = context_settings.azure_max_single_read_retries; + settings_ptr->list_object_keys_size = static_cast(context_settings.azure_list_object_keys_size); + + return settings_ptr; +} + +void registerStorageAzureBlob(StorageFactory & factory) +{ + factory.registerStorage("AzureBlobStorage", [](const StorageFactory::Arguments & args) + { + auto & engine_args = args.engine_args; + if (engine_args.empty()) + throw Exception(ErrorCodes::BAD_ARGUMENTS, "External data source must have arguments"); + + auto configuration = StorageAzureBlob::getConfiguration(engine_args, args.getLocalContext()); + auto client = StorageAzureBlob::createClient(configuration); + // Use format settings from global server context + settings from + // the SETTINGS clause of the create query. Settings from current + // session and user are ignored. + std::optional format_settings; + if (args.storage_def->settings) + { + FormatFactorySettings user_format_settings; + + // Apply changed settings from global context, but ignore the + // unknown ones, because we only have the format settings here. + const auto & changes = args.getContext()->getSettingsRef().changes(); + for (const auto & change : changes) + { + if (user_format_settings.has(change.name)) + user_format_settings.set(change.name, change.value); + } + + // Apply changes from SETTINGS clause, with validation. 
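+            // Unlike the loop above, which silently skips unknown names, applyChanges() throws if a setting from the SETTINGS clause is not a known format setting.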
+ user_format_settings.applyChanges(args.storage_def->settings->changes); + format_settings = getFormatSettings(args.getContext(), user_format_settings); + } + else + { + format_settings = getFormatSettings(args.getContext()); + } + + ASTPtr partition_by; + if (args.storage_def->partition_by) + partition_by = args.storage_def->partition_by->clone(); + + auto settings = StorageAzureBlob::createSettings(args.getContext()); + + return std::make_shared( + std::move(configuration), + std::make_unique("AzureBlobStorage", std::move(client), std::move(settings)), + args.getContext(), + args.table_id, + args.columns, + args.constraints, + args.comment, + format_settings, + partition_by); + }, + { + .supports_settings = true, + .supports_sort_order = true, // for partition by + .supports_schema_inference = true, + .source_access_type = AccessType::AZURE, + }); +} + +AzureClientPtr StorageAzureBlob::createClient(StorageAzureBlob::Configuration configuration) +{ + AzureClientPtr result; + + if (configuration.is_connection_string) + { + result = std::make_unique(BlobContainerClient::CreateFromConnectionString(configuration.connection_url, configuration.container)); + result->CreateIfNotExists(); + } + else + { + if (configuration.account_name.has_value() && configuration.account_key.has_value()) + { + auto storage_shared_key_credential = std::make_shared(*configuration.account_name, *configuration.account_key); + auto blob_service_client = std::make_unique(configuration.connection_url, storage_shared_key_credential); + try + { + result = std::make_unique(blob_service_client->CreateBlobContainer(configuration.container).Value); + } + catch (const Azure::Storage::StorageException & e) + { + if (e.StatusCode == Azure::Core::Http::HttpStatusCode::Conflict) + { + auto final_url = configuration.connection_url + + (configuration.connection_url.back() == '/' ? "" : "/") + + configuration.container; + + result = std::make_unique(final_url, storage_shared_key_credential); + } + else + { + throw; + } + } + } + else + { + auto managed_identity_credential = std::make_shared(); + auto blob_service_client = std::make_unique(configuration.connection_url, managed_identity_credential); + try + { + result = std::make_unique(blob_service_client->CreateBlobContainer(configuration.container).Value); + } + catch (const Azure::Storage::StorageException & e) + { + if (e.StatusCode == Azure::Core::Http::HttpStatusCode::Conflict) + { + auto final_url = configuration.connection_url + + (configuration.connection_url.back() == '/' ? 
"" : "/") + + configuration.container; + + result = std::make_unique(final_url, managed_identity_credential); + } + else + { + throw; + } + } + } + } + + return result; +} + +Poco::URI StorageAzureBlob::Configuration::getConnectionURL() const +{ + if (!is_connection_string) + return Poco::URI(connection_url); + + auto parsed_connection_string = Azure::Storage::_internal::ParseConnectionString(connection_url); + return Poco::URI(parsed_connection_string.BlobServiceUrl.GetAbsoluteUrl()); +} + + +StorageAzureBlob::StorageAzureBlob( + const Configuration & configuration_, + std::unique_ptr && object_storage_, + ContextPtr context, + const StorageID & table_id_, + const ColumnsDescription & columns_, + const ConstraintsDescription & constraints_, + const String & comment, + std::optional format_settings_, + ASTPtr partition_by_) + : IStorage(table_id_) + , name("AzureBlobStorage") + , configuration(configuration_) + , object_storage(std::move(object_storage_)) + , distributed_processing(false) + , format_settings(format_settings_) + , partition_by(partition_by_) +{ + FormatFactory::instance().checkFormatName(configuration.format); + context->getGlobalContext()->getRemoteHostFilter().checkURL(configuration.getConnectionURL()); + + StorageInMemoryMetadata storage_metadata; + if (columns_.empty()) + { + auto columns = getTableStructureFromData(object_storage.get(), configuration, format_settings, context); + storage_metadata.setColumns(columns); + } + else + storage_metadata.setColumns(columns_); + + storage_metadata.setConstraints(constraints_); + storage_metadata.setComment(comment); + setInMemoryMetadata(storage_metadata); + + StoredObjects objects; + for (const auto & key : configuration.blobs_paths) + objects.emplace_back(key); + + auto default_virtuals = NamesAndTypesList{ + {"_path", std::make_shared(std::make_shared())}, + {"_file", std::make_shared(std::make_shared())}}; + + auto columns = storage_metadata.getSampleBlock().getNamesAndTypesList(); + + virtual_columns = getVirtualsForStorage(columns, default_virtuals); + for (const auto & column : virtual_columns) + virtual_block.insert({column.type->createColumn(), column.type, column.name}); +} + +void StorageAzureBlob::truncate(const ASTPtr &, const StorageMetadataPtr &, ContextPtr, TableExclusiveLockHolder &) +{ + if (configuration.withGlobs()) + { + throw Exception( + ErrorCodes::DATABASE_ACCESS_DENIED, + "S3 key '{}' contains globs, so the table is in readonly mode", + configuration.blob_path); + } + + StoredObjects objects; + for (const auto & key : configuration.blobs_paths) + objects.emplace_back(key); + + object_storage->removeObjectsIfExist(objects); +} + +namespace +{ + +class StorageAzureBlobSink : public SinkToStorage +{ +public: + StorageAzureBlobSink( + const String & format, + const Block & sample_block_, + ContextPtr context, + std::optional format_settings_, + const CompressionMethod compression_method, + AzureObjectStorage * object_storage, + const String & blob_path) + : SinkToStorage(sample_block_) + , sample_block(sample_block_) + , format_settings(format_settings_) + { + StoredObject object(blob_path); + write_buf = wrapWriteBufferWithCompressionMethod(object_storage->writeObject(object, WriteMode::Rewrite), compression_method, 3); + writer = FormatFactory::instance().getOutputFormatParallelIfPossible(format, *write_buf, sample_block, context, format_settings); + } + + String getName() const override { return "StorageAzureBlobSink"; } + + void consume(Chunk chunk) override + { + std::lock_guard lock(cancel_mutex); + 
if (cancelled) + return; + writer->write(getHeader().cloneWithColumns(chunk.detachColumns())); + } + + void onCancel() override + { + std::lock_guard lock(cancel_mutex); + finalize(); + cancelled = true; + } + + void onException() override + { + std::lock_guard lock(cancel_mutex); + finalize(); + } + + void onFinish() override + { + std::lock_guard lock(cancel_mutex); + finalize(); + } + +private: + void finalize() + { + if (!writer) + return; + + try + { + writer->finalize(); + writer->flush(); + write_buf->finalize(); + } + catch (...) + { + /// Stop ParallelFormattingOutputFormat correctly. + writer.reset(); + write_buf->finalize(); + throw; + } + } + + Block sample_block; + std::optional format_settings; + std::unique_ptr write_buf; + OutputFormatPtr writer; + bool cancelled = false; + std::mutex cancel_mutex; +}; + +class PartitionedStorageAzureBlobSink : public PartitionedSink +{ +public: + PartitionedStorageAzureBlobSink( + const ASTPtr & partition_by, + const String & format_, + const Block & sample_block_, + ContextPtr context_, + std::optional format_settings_, + const CompressionMethod compression_method_, + AzureObjectStorage * object_storage_, + const String & blob_) + : PartitionedSink(partition_by, context_, sample_block_) + , format(format_) + , sample_block(sample_block_) + , context(context_) + , compression_method(compression_method_) + , object_storage(object_storage_) + , blob(blob_) + , format_settings(format_settings_) + { + } + + SinkPtr createSinkForPartition(const String & partition_id) override + { + auto partition_key = replaceWildcards(blob, partition_id); + validateKey(partition_key); + + return std::make_shared( + format, + sample_block, + context, + format_settings, + compression_method, + object_storage, + partition_key + ); + } + +private: + const String format; + const Block sample_block; + const ContextPtr context; + const CompressionMethod compression_method; + AzureObjectStorage * object_storage; + const String blob; + const std::optional format_settings; + + ExpressionActionsPtr partition_by_expr; + + static void validateKey(const String & str) + { + validatePartitionKey(str, true); + } +}; + +} + +Pipe StorageAzureBlob::read( + const Names & column_names, + const StorageSnapshotPtr & storage_snapshot, + SelectQueryInfo & query_info, + ContextPtr local_context, + QueryProcessingStage::Enum /*processed_stage*/, + size_t max_block_size, + size_t num_streams) +{ + if (partition_by && configuration.withWildcard()) + throw Exception(ErrorCodes::NOT_IMPLEMENTED, "Reading from a partitioned Azure storage is not implemented yet"); + + Pipes pipes; + + std::unordered_set column_names_set(column_names.begin(), column_names.end()); + std::vector requested_virtual_columns; + + for (const auto & virtual_column : getVirtuals()) + { + if (column_names_set.contains(virtual_column.name)) + requested_virtual_columns.push_back(virtual_column); + } + + std::shared_ptr iterator_wrapper; + if (configuration.withGlobs()) + { + /// Iterate through disclosed globs and make a source for each file + iterator_wrapper = std::make_shared( + object_storage.get(), configuration.container, std::nullopt, + configuration.blob_path, query_info.query, virtual_block, local_context, nullptr); + } + else + { + iterator_wrapper = std::make_shared( + object_storage.get(), configuration.container, configuration.blobs_paths, + std::nullopt, query_info.query, virtual_block, local_context, nullptr); + } + + ColumnsDescription columns_description; + Block block_for_format; + if 
(supportsSubsetOfColumns()) + { + auto fetch_columns = column_names; + const auto & virtuals = getVirtuals(); + std::erase_if( + fetch_columns, + [&](const String & col) + { return std::any_of(virtuals.begin(), virtuals.end(), [&](const NameAndTypePair & virtual_col){ return col == virtual_col.name; }); }); + + if (fetch_columns.empty()) + fetch_columns.push_back(ExpressionActions::getSmallestColumn(storage_snapshot->metadata->getColumns().getAllPhysical()).name); + + columns_description = storage_snapshot->getDescriptionForColumns(fetch_columns); + block_for_format = storage_snapshot->getSampleBlockForColumns(columns_description.getNamesOfPhysical()); + } + else + { + columns_description = storage_snapshot->metadata->getColumns(); + block_for_format = storage_snapshot->metadata->getSampleBlock(); + } + + for (size_t i = 0; i < num_streams; ++i) + { + pipes.emplace_back(std::make_shared( + requested_virtual_columns, + configuration.format, + getName(), + block_for_format, + local_context, + format_settings, + columns_description, + max_block_size, + configuration.compression_method, + object_storage.get(), + configuration.container, + iterator_wrapper)); + } + + return Pipe::unitePipes(std::move(pipes)); +} + +SinkToStoragePtr StorageAzureBlob::write(const ASTPtr & query, const StorageMetadataPtr & metadata_snapshot, ContextPtr local_context, bool /*async_insert*/) +{ + auto sample_block = metadata_snapshot->getSampleBlock(); + auto chosen_compression_method = chooseCompressionMethod(configuration.blobs_paths.back(), configuration.compression_method); + auto insert_query = std::dynamic_pointer_cast(query); + + auto partition_by_ast = insert_query ? (insert_query->partition_by ? insert_query->partition_by : partition_by) : nullptr; + bool is_partitioned_implementation = partition_by_ast && configuration.withWildcard(); + + if (is_partitioned_implementation) + { + return std::make_shared( + partition_by_ast, + configuration.format, + sample_block, + local_context, + format_settings, + chosen_compression_method, + object_storage.get(), + configuration.blobs_paths.back()); + } + else + { + if (configuration.withGlobs()) + throw Exception(ErrorCodes::DATABASE_ACCESS_DENIED, + "AzureBlobStorage key '{}' contains globs, so the table is in readonly mode", configuration.blob_path); + + bool truncate_in_insert = local_context->getSettingsRef().azure_truncate_on_insert; + + if (!truncate_in_insert && object_storage->exists(StoredObject(configuration.blob_path))) + { + + if (local_context->getSettingsRef().azure_create_new_file_on_insert) + { + size_t index = configuration.blobs_paths.size(); + const auto & first_key = configuration.blobs_paths[0]; + auto pos = first_key.find_first_of('.'); + String new_key; + + do + { + new_key = first_key.substr(0, pos) + "." + std::to_string(index) + (pos == std::string::npos ? "" : first_key.substr(pos)); + ++index; + } + while (object_storage->exists(StoredObject(new_key))); + + configuration.blobs_paths.push_back(new_key); + } + else + { + throw Exception( + ErrorCodes::BAD_ARGUMENTS, + "Object in bucket {} with key {} already exists. 
" + "If you want to overwrite it, enable setting azure_truncate_on_insert, if you " + "want to create a new file on each insert, enable setting azure_create_new_file_on_insert", + configuration.container, configuration.blobs_paths.back()); + } + } + + return std::make_shared( + configuration.format, + sample_block, + local_context, + format_settings, + chosen_compression_method, + object_storage.get(), + configuration.blobs_paths.back()); + } +} + +NamesAndTypesList StorageAzureBlob::getVirtuals() const +{ + return virtual_columns; +} + +bool StorageAzureBlob::supportsPartitionBy() const +{ + return true; +} + +bool StorageAzureBlob::supportsSubcolumns() const +{ + return FormatFactory::instance().checkIfFormatSupportsSubcolumns(configuration.format); +} + +bool StorageAzureBlob::supportsSubsetOfColumns() const +{ + return FormatFactory::instance().checkIfFormatSupportsSubsetOfColumns(configuration.format); +} + +bool StorageAzureBlob::prefersLargeBlocks() const +{ + return FormatFactory::instance().checkIfOutputFormatPrefersLargeBlocks(configuration.format); +} + +bool StorageAzureBlob::parallelizeOutputAfterReading(ContextPtr context) const +{ + return FormatFactory::instance().checkParallelizeOutputAfterReading(configuration.format, context); +} + +static void addPathToVirtualColumns(Block & block, const String & path, size_t idx) +{ + if (block.has("_path")) + block.getByName("_path").column->assumeMutableRef().insert(path); + + if (block.has("_file")) + { + auto pos = path.find_last_of('/'); + assert(pos != std::string::npos); + + auto file = path.substr(pos + 1); + block.getByName("_file").column->assumeMutableRef().insert(file); + } + + block.getByName("_idx").column->assumeMutableRef().insert(idx); +} + +StorageAzureBlobSource::Iterator::Iterator( + AzureObjectStorage * object_storage_, + const std::string & container_, + std::optional keys_, + std::optional blob_path_with_globs_, + ASTPtr query_, + const Block & virtual_header_, + ContextPtr context_, + RelativePathsWithMetadata * outer_blobs_) + : WithContext(context_) + , object_storage(object_storage_) + , container(container_) + , keys(keys_) + , blob_path_with_globs(blob_path_with_globs_) + , query(query_) + , virtual_header(virtual_header_) + , outer_blobs(outer_blobs_) +{ + if (keys.has_value() && blob_path_with_globs.has_value()) + throw Exception(ErrorCodes::LOGICAL_ERROR, "Cannot specify keys and glob simultaneously it's a bug"); + + if (!keys.has_value() && !blob_path_with_globs.has_value()) + throw Exception(ErrorCodes::LOGICAL_ERROR, "Both keys and glob mask are not specified"); + + if (keys) + { + Strings all_keys = *keys; + + blobs_with_metadata.emplace(); + /// Create a virtual block with one row to construct filter + if (query && virtual_header && !all_keys.empty()) + { + /// Append "idx" column as the filter result + virtual_header.insert({ColumnUInt64::create(), std::make_shared(), "_idx"}); + + auto block = virtual_header.cloneEmpty(); + addPathToVirtualColumns(block, fs::path(container) / all_keys.front(), 0); + + VirtualColumnUtils::prepareFilterBlockWithQuery(query, getContext(), block, filter_ast); + + if (filter_ast) + { + block = virtual_header.cloneEmpty(); + for (size_t i = 0; i < all_keys.size(); ++i) + addPathToVirtualColumns(block, fs::path(container) / all_keys[i], i); + + VirtualColumnUtils::filterBlockWithQuery(query, block, getContext(), filter_ast); + const auto & idxs = typeid_cast(*block.getByName("_idx").column); + + Strings filtered_keys; + filtered_keys.reserve(block.rows()); + for (UInt64 
idx : idxs.getData()) + filtered_keys.emplace_back(std::move(all_keys[idx])); + + all_keys = std::move(filtered_keys); + } + } + + for (auto && key : all_keys) + { + ObjectMetadata object_metadata = object_storage->getObjectMetadata(key); + total_size += object_metadata.size_bytes; + blobs_with_metadata->emplace_back(RelativePathWithMetadata{key, object_metadata}); + if (outer_blobs) + outer_blobs->emplace_back(blobs_with_metadata->back()); + } + } + else + { + const String key_prefix = blob_path_with_globs->substr(0, blob_path_with_globs->find_first_of("*?{")); + + /// We don't have to list bucket, because there is no asterisks. + if (key_prefix.size() == blob_path_with_globs->size()) + { + ObjectMetadata object_metadata = object_storage->getObjectMetadata(*blob_path_with_globs); + blobs_with_metadata->emplace_back(*blob_path_with_globs, object_metadata); + if (outer_blobs) + outer_blobs->emplace_back(blobs_with_metadata->back()); + return; + } + + object_storage_iterator = object_storage->iterate(key_prefix); + + matcher = std::make_unique(makeRegexpPatternFromGlobs(*blob_path_with_globs)); + + if (!matcher->ok()) + throw Exception(ErrorCodes::CANNOT_COMPILE_REGEXP, + "Cannot compile regex from glob ({}): {}", *blob_path_with_globs, matcher->error()); + + recursive = *blob_path_with_globs == "/**" ? true : false; + } + +} + +RelativePathWithMetadata StorageAzureBlobSource::Iterator::next() +{ + if (is_finished) + return {}; + + if (keys) + { + size_t current_index = index.fetch_add(1, std::memory_order_relaxed); + if (current_index >= blobs_with_metadata->size()) + { + is_finished = true; + return {}; + } + + return (*blobs_with_metadata)[current_index]; + } + else + { + bool need_new_batch = false; + { + std::lock_guard lock(next_mutex); + need_new_batch = !blobs_with_metadata || index >= blobs_with_metadata->size(); + } + + if (need_new_batch) + { + RelativePathsWithMetadata new_batch; + while (new_batch.empty()) + { + if (object_storage_iterator->isValid()) + { + new_batch = object_storage_iterator->currentBatch(); + object_storage_iterator->nextBatch(); + } + else + { + is_finished = true; + return {}; + } + + for (auto it = new_batch.begin(); it != new_batch.end();) + { + if (!recursive && !re2::RE2::FullMatch(it->relative_path, *matcher)) + it = new_batch.erase(it); + else + ++it; + } + } + + index.store(0, std::memory_order_relaxed); + if (!is_initialized) + { + createFilterAST(new_batch.front().relative_path); + is_initialized = true; + } + + if (filter_ast) + { + auto block = virtual_header.cloneEmpty(); + for (size_t i = 0; i < new_batch.size(); ++i) + addPathToVirtualColumns(block, fs::path(container) / new_batch[i].relative_path, i); + + VirtualColumnUtils::filterBlockWithQuery(query, block, getContext(), filter_ast); + const auto & idxs = typeid_cast(*block.getByName("_idx").column); + + std::lock_guard lock(next_mutex); + blob_path_with_globs.reset(); + blob_path_with_globs.emplace(); + for (UInt64 idx : idxs.getData()) + { + total_size.fetch_add(new_batch[idx].metadata.size_bytes, std::memory_order_relaxed); + blobs_with_metadata->emplace_back(std::move(new_batch[idx])); + if (outer_blobs) + outer_blobs->emplace_back(blobs_with_metadata->back()); + } + } + else + { + if (outer_blobs) + outer_blobs->insert(outer_blobs->end(), new_batch.begin(), new_batch.end()); + + std::lock_guard lock(next_mutex); + blobs_with_metadata = std::move(new_batch); + for (const auto & [_, info] : *blobs_with_metadata) + total_size.fetch_add(info.size_bytes, std::memory_order_relaxed); + } + } + 
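+        /// If a new batch was installed above, the index was reset to 0, so fetch_add() below hands out the refreshed list from the beginning.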
+        size_t current_index = index.fetch_add(1, std::memory_order_relaxed);
+
+        std::lock_guard lock(next_mutex);
+        return (*blobs_with_metadata)[current_index];
+    }
+}
+
+size_t StorageAzureBlobSource::Iterator::getTotalSize() const
+{
+    return total_size.load(std::memory_order_relaxed);
+}
+
+
+void StorageAzureBlobSource::Iterator::createFilterAST(const String & any_key)
+{
+    if (!query || !virtual_header)
+        return;
+
+    /// Create a virtual block with one row to construct the filter.
+    /// Append the "_idx" column as the filter result.
+    virtual_header.insert({ColumnUInt64::create(), std::make_shared<DataTypeUInt64>(), "_idx"});
+
+    auto block = virtual_header.cloneEmpty();
+    addPathToVirtualColumns(block, fs::path(container) / any_key, 0);
+    VirtualColumnUtils::prepareFilterBlockWithQuery(query, getContext(), block, filter_ast);
+}
+
+
+Chunk StorageAzureBlobSource::generate()
+{
+    while (true)
+    {
+        if (isCancelled() || !reader)
+        {
+            if (reader)
+                reader->cancel();
+            break;
+        }
+
+        Chunk chunk;
+        if (reader->pull(chunk))
+        {
+            UInt64 num_rows = chunk.getNumRows();
+
+            const auto & file_path = reader.getPath();
+            size_t total_size = file_iterator->getTotalSize();
+            if (num_rows && total_size)
+            {
+                updateRowsProgressApprox(
+                    *this, chunk, total_size, total_rows_approx_accumulated, total_rows_count_times, total_rows_approx_max);
+            }
+
+            for (const auto & virtual_column : requested_virtual_columns)
+            {
+                if (virtual_column.name == "_path")
+                {
+                    chunk.addColumn(virtual_column.type->createColumnConst(num_rows, file_path)->convertToFullColumnIfConst());
+                }
+                else if (virtual_column.name == "_file")
+                {
+                    size_t last_slash_pos = file_path.find_last_of('/');
+                    auto column = virtual_column.type->createColumnConst(num_rows, file_path.substr(last_slash_pos + 1));
+                    chunk.addColumn(column->convertToFullColumnIfConst());
+                }
+            }
+
+            return chunk;
+        }
+
+
+        assert(reader_future.valid());
+        reader = reader_future.get();
+
+        if (!reader)
+            break;
+
+        /// Even if the task has finished, its thread may not have been freed in the pool yet,
+        /// so wait until it is freed before scheduling a new task.
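+        /// (create_reader_pool has a single thread, so wait() returns only after the previous createReaderAsync() task has fully released it.)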
+        create_reader_pool.wait();
+        reader_future = createReaderAsync();
+    }
+
+    return {};
+}
+
+Block StorageAzureBlobSource::getHeader(Block sample_block, const std::vector<NameAndTypePair> & requested_virtual_columns)
+{
+    for (const auto & virtual_column : requested_virtual_columns)
+        sample_block.insert({virtual_column.type->createColumn(), virtual_column.type, virtual_column.name});
+
+    return sample_block;
+}
+
+StorageAzureBlobSource::StorageAzureBlobSource(
+    const std::vector<NameAndTypePair> & requested_virtual_columns_,
+    const String & format_,
+    String name_,
+    const Block & sample_block_,
+    ContextPtr context_,
+    std::optional<FormatSettings> format_settings_,
+    const ColumnsDescription & columns_,
+    UInt64 max_block_size_,
+    String compression_hint_,
+    AzureObjectStorage * object_storage_,
+    const String & container_,
+    std::shared_ptr<Iterator> file_iterator_)
+    : ISource(getHeader(sample_block_, requested_virtual_columns_))
+    , WithContext(context_)
+    , requested_virtual_columns(requested_virtual_columns_)
+    , format(format_)
+    , name(std::move(name_))
+    , sample_block(sample_block_)
+    , format_settings(format_settings_)
+    , columns_desc(columns_)
+    , max_block_size(max_block_size_)
+    , compression_hint(compression_hint_)
+    , object_storage(std::move(object_storage_))
+    , container(container_)
+    , file_iterator(file_iterator_)
+    , create_reader_pool(CurrentMetrics::ObjectStorageAzureThreads, CurrentMetrics::ObjectStorageAzureThreadsActive, 1)
+    , create_reader_scheduler(threadPoolCallbackRunner<ReaderHolder>(create_reader_pool, "AzureReader"))
+{
+    reader = createReader();
+    if (reader)
+        reader_future = createReaderAsync();
+}
+
+
+StorageAzureBlobSource::~StorageAzureBlobSource()
+{
+    create_reader_pool.wait();
+}
+
+String StorageAzureBlobSource::getName() const
+{
+    return name;
+}
+
+StorageAzureBlobSource::ReaderHolder StorageAzureBlobSource::createReader()
+{
+    auto [current_key, info] = file_iterator->next();
+    if (current_key.empty())
+        return {};
+
+    size_t object_size = info.size_bytes != 0 ?
info.size_bytes : object_storage->getObjectMetadata(current_key).size_bytes; + auto compression_method = chooseCompressionMethod(current_key, compression_hint); + + auto read_buf = createAzureReadBuffer(current_key, object_size); + auto input_format = FormatFactory::instance().getInput( + format, *read_buf, sample_block, getContext(), max_block_size, + format_settings, std::nullopt, std::nullopt, + /* is_remote_fs */ true, compression_method); + + QueryPipelineBuilder builder; + builder.init(Pipe(input_format)); + + if (columns_desc.hasDefaults()) + { + builder.addSimpleTransform( + [&](const Block & header) + { return std::make_shared(header, columns_desc, *input_format, getContext()); }); + } + + auto pipeline = std::make_unique(QueryPipelineBuilder::getPipeline(std::move(builder))); + auto current_reader = std::make_unique(*pipeline); + + return ReaderHolder{fs::path(container) / current_key, std::move(read_buf), std::move(pipeline), std::move(current_reader)}; +} + +std::future StorageAzureBlobSource::createReaderAsync() +{ + return create_reader_scheduler([this] { return createReader(); }, Priority{}); +} + +std::unique_ptr StorageAzureBlobSource::createAzureReadBuffer(const String & key, size_t object_size) +{ + auto read_settings = getContext()->getReadSettings().adjustBufferSize(object_size); + read_settings.enable_filesystem_cache = false; + auto download_buffer_size = getContext()->getSettings().max_download_buffer_size; + const bool object_too_small = object_size <= 2 * download_buffer_size; + + // Create a read buffer that will prefetch the first ~1 MB of the file. + // When reading lots of tiny files, this prefetching almost doubles the throughput. + // For bigger files, parallel reading is more useful. + if (object_too_small && read_settings.remote_fs_method == RemoteFSReadMethod::threadpool) + { + LOG_TRACE(log, "Downloading object of size {} from Azure with initial prefetch", object_size); + return createAsyncAzureReadBuffer(key, read_settings, object_size); + } + + return object_storage->readObject(StoredObject(key), read_settings, {}, object_size); +} + +ColumnsDescription StorageAzureBlob::getTableStructureFromData( + AzureObjectStorage * object_storage, + const Configuration & configuration, + const std::optional & format_settings, + ContextPtr ctx) +{ + RelativePathsWithMetadata read_keys; + std::shared_ptr file_iterator; + if (configuration.withGlobs()) + { + file_iterator = std::make_shared( + object_storage, configuration.container, std::nullopt, + configuration.blob_path, nullptr, Block{}, ctx, &read_keys); + } + else + { + file_iterator = std::make_shared( + object_storage, configuration.container, configuration.blobs_paths, + std::nullopt, nullptr, Block{}, ctx, &read_keys); + } + + std::optional columns_from_cache; + size_t prev_read_keys_size = read_keys.size(); + if (ctx->getSettingsRef().schema_inference_use_cache_for_azure) + columns_from_cache = tryGetColumnsFromCache(read_keys.begin(), read_keys.end(), configuration, format_settings, ctx); + + ReadBufferIterator read_buffer_iterator = [&, first = true](ColumnsDescription & cached_columns) mutable -> std::unique_ptr + { + auto [key, metadata] = file_iterator->next(); + + if (key.empty()) + { + if (first) + throw Exception( + ErrorCodes::CANNOT_EXTRACT_TABLE_STRUCTURE, + "Cannot extract table structure from {} format file, because there are no files with provided path " + "in AzureBlobStorage. 
You must specify table structure manually", configuration.format); + + return nullptr; + } + + /// S3 file iterator could get new keys after new iteration, check them in schema cache. + if (ctx->getSettingsRef().schema_inference_use_cache_for_azure && read_keys.size() > prev_read_keys_size) + { + columns_from_cache = tryGetColumnsFromCache(read_keys.begin() + prev_read_keys_size, read_keys.end(), configuration, format_settings, ctx); + prev_read_keys_size = read_keys.size(); + if (columns_from_cache) + { + cached_columns = *columns_from_cache; + return nullptr; + } + } + + first = false; + int zstd_window_log_max = static_cast(ctx->getSettingsRef().zstd_window_log_max); + return wrapReadBufferWithCompressionMethod( + object_storage->readObject(StoredObject(key), ctx->getReadSettings(), {}, metadata.size_bytes), + chooseCompressionMethod(key, configuration.compression_method), + zstd_window_log_max); + }; + + ColumnsDescription columns; + if (columns_from_cache) + columns = *columns_from_cache; + else + columns = readSchemaFromFormat(configuration.format, format_settings, read_buffer_iterator, configuration.withGlobs(), ctx); + + if (ctx->getSettingsRef().schema_inference_use_cache_for_azure) + addColumnsToCache(read_keys, columns, configuration, format_settings, configuration.format, ctx); + + return columns; + +} + +std::optional StorageAzureBlob::tryGetColumnsFromCache( + const RelativePathsWithMetadata::const_iterator & begin, + const RelativePathsWithMetadata::const_iterator & end, + const StorageAzureBlob::Configuration & configuration, + const std::optional & format_settings, + const ContextPtr & ctx) +{ + auto & schema_cache = getSchemaCache(ctx); + for (auto it = begin; it < end; ++it) + { + auto get_last_mod_time = [&] -> time_t + { + return it->metadata.last_modified->epochTime(); + }; + + auto host_and_bucket = configuration.connection_url + '/' + configuration.container; + String source = host_and_bucket + '/' + it->relative_path; + auto cache_key = getKeyForSchemaCache(source, configuration.format, format_settings, ctx); + auto columns = schema_cache.tryGet(cache_key, get_last_mod_time); + if (columns) + return columns; + } + + return std::nullopt; + +} + +void StorageAzureBlob::addColumnsToCache( + const RelativePathsWithMetadata & keys, + const ColumnsDescription & columns, + const StorageAzureBlob::Configuration & configuration, + const std::optional & format_settings, + const String & format_name, + const ContextPtr & ctx) +{ + auto host_and_bucket = configuration.connection_url + '/' + configuration.container; + Strings sources; + sources.reserve(keys.size()); + std::transform(keys.begin(), keys.end(), std::back_inserter(sources), [&](const auto & elem){ return host_and_bucket + '/' + elem.relative_path; }); + auto cache_keys = getKeysForSchemaCache(sources, format_name, format_settings, ctx); + auto & schema_cache = getSchemaCache(ctx); + schema_cache.addMany(cache_keys, columns); +} + +SchemaCache & StorageAzureBlob::getSchemaCache(const ContextPtr & ctx) +{ + static SchemaCache schema_cache(ctx->getConfigRef().getUInt("schema_inference_cache_max_elements_for_azure", DEFAULT_SCHEMA_CACHE_ELEMENTS)); + return schema_cache; +} + + +std::unique_ptr StorageAzureBlobSource::createAsyncAzureReadBuffer( + const String & key, const ReadSettings & read_settings, size_t object_size) +{ + auto modified_settings{read_settings}; + modified_settings.remote_read_min_bytes_for_seek = modified_settings.remote_fs_buffer_size; + auto async_reader = 
object_storage->readObjects(StoredObjects{StoredObject{key, object_size}}, modified_settings); + + async_reader->setReadUntilEnd(); + if (read_settings.remote_fs_prefetch) + async_reader->prefetch(DEFAULT_PREFETCH_PRIORITY); + + return async_reader; +} + +} + +#endif diff --git a/src/Storages/StorageAzureBlob.h b/src/Storages/StorageAzureBlob.h new file mode 100644 index 00000000000..e2001fa24ae --- /dev/null +++ b/src/Storages/StorageAzureBlob.h @@ -0,0 +1,292 @@ +#pragma once + +#include "config.h" + +#if USE_AZURE_BLOB_STORAGE + +#include +#include +#include +#include +#include +#include +#include + +namespace DB +{ + +class StorageAzureBlob : public IStorage +{ +public: + + using AzureClient = Azure::Storage::Blobs::BlobContainerClient; + using AzureClientPtr = std::unique_ptr; + + struct Configuration : public StatelessTableEngineConfiguration + { + Configuration() = default; + + String getPath() const { return blob_path; } + + bool update(ContextPtr context); + + void connect(ContextPtr context); + + bool withGlobs() const { return blob_path.find_first_of("*?{") != std::string::npos; } + + bool withWildcard() const + { + static const String PARTITION_ID_WILDCARD = "{_partition_id}"; + return blobs_paths.back().find(PARTITION_ID_WILDCARD) != String::npos; + } + + Poco::URI getConnectionURL() const; + + std::string connection_url; + bool is_connection_string; + + std::optional account_name; + std::optional account_key; + + std::string container; + std::string blob_path; + std::vector blobs_paths; + }; + + StorageAzureBlob( + const Configuration & configuration_, + std::unique_ptr && object_storage_, + ContextPtr context_, + const StorageID & table_id_, + const ColumnsDescription & columns_, + const ConstraintsDescription & constraints_, + const String & comment, + std::optional format_settings_, + ASTPtr partition_by_); + + static StorageAzureBlob::Configuration getConfiguration(ASTs & engine_args, ContextPtr local_context); + static AzureClientPtr createClient(StorageAzureBlob::Configuration configuration); + + static AzureObjectStorage::SettingsPtr createSettings(ContextPtr local_context); + + static void processNamedCollectionResult(StorageAzureBlob::Configuration & configuration, const NamedCollection & collection); + + String getName() const override + { + return name; + } + + Pipe read( + const Names &, + const StorageSnapshotPtr &, + SelectQueryInfo &, + ContextPtr, + QueryProcessingStage::Enum, + size_t, + size_t) override; + + SinkToStoragePtr write(const ASTPtr & query, const StorageMetadataPtr & /* metadata_snapshot */, ContextPtr context, bool /*async_insert*/) override; + + void truncate(const ASTPtr & query, const StorageMetadataPtr & metadata_snapshot, ContextPtr local_context, TableExclusiveLockHolder &) override; + + NamesAndTypesList getVirtuals() const override; + + bool supportsPartitionBy() const override; + + bool supportsSubcolumns() const override; + + bool supportsSubsetOfColumns() const override; + + bool prefersLargeBlocks() const override; + + bool parallelizeOutputAfterReading(ContextPtr context) const override; + + static SchemaCache & getSchemaCache(const ContextPtr & ctx); + + static ColumnsDescription getTableStructureFromData( + AzureObjectStorage * object_storage, + const Configuration & configuration, + const std::optional & format_settings, + ContextPtr ctx); + +private: + std::string name; + Configuration configuration; + std::unique_ptr object_storage; + NamesAndTypesList virtual_columns; + Block virtual_block; + + const bool 
distributed_processing; + std::optional format_settings; + ASTPtr partition_by; + + + static std::optional tryGetColumnsFromCache( + const RelativePathsWithMetadata::const_iterator & begin, + const RelativePathsWithMetadata::const_iterator & end, + const StorageAzureBlob::Configuration & configuration, + const std::optional & format_settings, + const ContextPtr & ctx); + + static void addColumnsToCache( + const RelativePathsWithMetadata & keys, + const ColumnsDescription & columns, + const Configuration & configuration, + const std::optional & format_settings, + const String & format_name, + const ContextPtr & ctx); + + +}; + +class StorageAzureBlobSource : public ISource, WithContext +{ +public: + class Iterator : WithContext + { + public: + Iterator( + AzureObjectStorage * object_storage_, + const std::string & container_, + std::optional keys_, + std::optional blob_path_with_globs_, + ASTPtr query_, + const Block & virtual_header_, + ContextPtr context_, + RelativePathsWithMetadata * outer_blobs_); + + RelativePathWithMetadata next(); + size_t getTotalSize() const; + ~Iterator() = default; + + private: + AzureObjectStorage * object_storage; + std::string container; + std::optional keys; + std::optional blob_path_with_globs; + ASTPtr query; + ASTPtr filter_ast; + Block virtual_header; + + std::atomic index = 0; + std::atomic total_size = 0; + + std::optional blobs_with_metadata; + RelativePathsWithMetadata * outer_blobs; + ObjectStorageIteratorPtr object_storage_iterator; + bool recursive{false}; + + std::unique_ptr matcher; + + void createFilterAST(const String & any_key); + std::atomic is_finished = false; + std::atomic is_initialized = false; + std::mutex next_mutex; + }; + + StorageAzureBlobSource( + const std::vector & requested_virtual_columns_, + const String & format_, + String name_, + const Block & sample_block_, + ContextPtr context_, + std::optional format_settings_, + const ColumnsDescription & columns_, + UInt64 max_block_size_, + String compression_hint_, + AzureObjectStorage * object_storage_, + const String & container_, + std::shared_ptr file_iterator_); + + ~StorageAzureBlobSource() override; + + Chunk generate() override; + + String getName() const override; + + static Block getHeader(Block sample_block, const std::vector & requested_virtual_columns); + +private: + std::vector requested_virtual_columns; + String format; + String name; + Block sample_block; + std::optional format_settings; + ColumnsDescription columns_desc; + UInt64 max_block_size; + String compression_hint; + AzureObjectStorage * object_storage; + String container; + std::shared_ptr file_iterator; + + struct ReaderHolder + { + public: + ReaderHolder( + String path_, + std::unique_ptr read_buf_, + std::unique_ptr pipeline_, + std::unique_ptr reader_) + : path(std::move(path_)) + , read_buf(std::move(read_buf_)) + , pipeline(std::move(pipeline_)) + , reader(std::move(reader_)) + { + } + + ReaderHolder() = default; + ReaderHolder(const ReaderHolder & other) = delete; + ReaderHolder & operator=(const ReaderHolder & other) = delete; + + ReaderHolder(ReaderHolder && other) noexcept + { + *this = std::move(other); + } + + ReaderHolder & operator=(ReaderHolder && other) noexcept + { + /// The order of destruction is important. + /// reader uses pipeline, pipeline uses read_buf. 
+ reader = std::move(other.reader); + pipeline = std::move(other.pipeline); + read_buf = std::move(other.read_buf); + path = std::move(other.path); + return *this; + } + + explicit operator bool() const { return reader != nullptr; } + PullingPipelineExecutor * operator->() { return reader.get(); } + const PullingPipelineExecutor * operator->() const { return reader.get(); } + const String & getPath() const { return path; } + + private: + String path; + std::unique_ptr read_buf; + std::unique_ptr pipeline; + std::unique_ptr reader; + }; + + ReaderHolder reader; + + Poco::Logger * log = &Poco::Logger::get("StorageAzureBlobSource"); + + ThreadPool create_reader_pool; + ThreadPoolCallbackRunner create_reader_scheduler; + std::future reader_future; + + UInt64 total_rows_approx_max = 0; + size_t total_rows_count_times = 0; + UInt64 total_rows_approx_accumulated = 0; + + /// Recreate ReadBuffer and Pipeline for each file. + ReaderHolder createReader(); + std::future createReaderAsync(); + + std::unique_ptr createAzureReadBuffer(const String & key, size_t object_size); + std::unique_ptr createAsyncAzureReadBuffer( + const String & key, const ReadSettings & read_settings, size_t object_size); +}; + +} + +#endif diff --git a/src/Storages/StorageLog.cpp b/src/Storages/StorageLog.cpp index ac68de43332..d8065b8bb3c 100644 --- a/src/Storages/StorageLog.cpp +++ b/src/Storages/StorageLog.cpp @@ -775,6 +775,8 @@ void StorageLog::truncate(const ASTPtr &, const StorageMetadataPtr &, ContextPtr marks_loaded = true; num_marks_saved = 0; + total_rows = 0; + total_bytes = 0; getContext()->dropMMappedFileCache(); } diff --git a/src/Storages/StorageMongoDB.cpp b/src/Storages/StorageMongoDB.cpp index ab52fde3e3d..3287e3272e3 100644 --- a/src/Storages/StorageMongoDB.cpp +++ b/src/Storages/StorageMongoDB.cpp @@ -99,6 +99,7 @@ public: , db_name(db_name_) , metadata_snapshot{metadata_snapshot_} , connection(connection_) + , is_wire_protocol_old(isMongoDBWireProtocolOld(*connection_)) { } @@ -107,7 +108,7 @@ public: void consume(Chunk chunk) override { Poco::MongoDB::Database db(db_name); - Poco::MongoDB::Document::Ptr index = new Poco::MongoDB::Document(); + Poco::MongoDB::Document::Vector documents; auto block = getHeader().cloneWithColumns(chunk.detachColumns()); @@ -118,20 +119,35 @@ public: const auto data_types = block.getDataTypes(); const auto data_names = block.getNames(); - std::vector row(num_cols); + documents.reserve(num_rows); + for (const auto i : collections::range(0, num_rows)) { + Poco::MongoDB::Document::Ptr document = new Poco::MongoDB::Document(); + for (const auto j : collections::range(0, num_cols)) { WriteBufferFromOwnString ostr; data_types[j]->getDefaultSerialization()->serializeText(*columns[j], i, ostr, FormatSettings{}); - row[j] = ostr.str(); - index->add(data_names[j], row[j]); + document->add(data_names[j], ostr.str()); } + + documents.push_back(std::move(document)); + } + + if (is_wire_protocol_old) + { + Poco::SharedPtr insert_request = db.createInsertRequest(collection_name); + insert_request->documents() = std::move(documents); + connection->sendRequest(*insert_request); + } + else + { + Poco::SharedPtr insert_request = db.createOpMsgMessage(collection_name); + insert_request->setCommandName(Poco::MongoDB::OpMsgMessage::CMD_INSERT); + insert_request->documents() = std::move(documents); + connection->sendRequest(*insert_request); } - Poco::SharedPtr insert_request = db.createInsertRequest(collection_name); - insert_request->documents().push_back(index); - 
connection->sendRequest(*insert_request); } private: @@ -139,6 +155,8 @@ private: String db_name; StorageMetadataPtr metadata_snapshot; std::shared_ptr connection; + + const bool is_wire_protocol_old; }; @@ -162,7 +180,7 @@ Pipe StorageMongoDB::read( sample_block.insert({ column_data.type, column_data.name }); } - return Pipe(std::make_shared(connection, createCursor(database_name, collection_name, sample_block), sample_block, max_block_size)); + return Pipe(std::make_shared(connection, database_name, collection_name, Poco::MongoDB::Document{}, sample_block, max_block_size)); } SinkToStoragePtr StorageMongoDB::write(const ASTPtr & /* query */, const StorageMetadataPtr & metadata_snapshot, ContextPtr /* context */, bool /*async_insert*/) diff --git a/src/Storages/StorageS3.cpp b/src/Storages/StorageS3.cpp index c3ed0f1af16..8bab596901c 100644 --- a/src/Storages/StorageS3.cpp +++ b/src/Storages/StorageS3.cpp @@ -657,7 +657,7 @@ std::unique_ptr StorageS3Source::createAsyncS3ReadBuffer( std::move(read_buffer_creator), StoredObjects{StoredObject{key, object_size}}, read_settings, - /* cache_log */nullptr); + /* cache_log */nullptr, /* use_external_buffer */true); auto modified_settings{read_settings}; /// FIXME: Changing this setting to default value breaks something around parquet reading diff --git a/src/Storages/StorageStripeLog.cpp b/src/Storages/StorageStripeLog.cpp index 5c704d877d1..d8bbd523cbf 100644 --- a/src/Storages/StorageStripeLog.cpp +++ b/src/Storages/StorageStripeLog.cpp @@ -424,6 +424,8 @@ void StorageStripeLog::truncate(const ASTPtr &, const StorageMetadataPtr &, Cont indices_loaded = true; num_indices_saved = 0; + total_rows = 0; + total_bytes = 0; getContext()->dropMMappedFileCache(); } diff --git a/src/Storages/System/StorageSystemContributors.generated.cpp b/src/Storages/System/StorageSystemContributors.generated.cpp index 0f307650c9c..f83ee3197fe 100644 --- a/src/Storages/System/StorageSystemContributors.generated.cpp +++ b/src/Storages/System/StorageSystemContributors.generated.cpp @@ -83,6 +83,8 @@ const char * auto_contributors[] { "Alexey Boykov", "Alexey Dushechkin", "Alexey Elymanov", + "Alexey Gerasimchuck", + "Alexey Gerasimchuk", "Alexey Gusev", "Alexey Ilyukhov", "Alexey Ivanov", @@ -143,6 +145,7 @@ const char * auto_contributors[] { "Anmol Arora", "Anna", "Anna Shakhova", + "AnneClickHouse", "Anselmo D. Adams", "Anthony N. Simon", "Anton Ivashkin", @@ -305,6 +308,7 @@ const char * auto_contributors[] { "Dr. 
Strange Looker", "Duc Canh Le", "DuckSoft", + "Duyet Le", "Egor O'Sten", "Egor Savin", "Eirik", @@ -321,6 +325,7 @@ const char * auto_contributors[] { "Eric", "Eric Daniel", "Eric Thomas", + "Eridanus", "Erixonich", "Ernest Poletaev", "Eugene Galkin", @@ -347,6 +352,7 @@ const char * auto_contributors[] { "Federico Ceratto", "Federico Rodriguez", "FeehanG", + "Feng Kaiyu", "FgoDt", "Filatenkov Artur", "Filipe Caixeta", @@ -444,6 +450,7 @@ const char * auto_contributors[] { "Ivan Milov", "Ivan Remen", "Ivan Starkov", + "Ivan Takarlikov", "Ivan Zhukov", "Jachen Duschletta", "Jack Song", @@ -477,6 +484,7 @@ const char * auto_contributors[] { "John Skopis", "Jonatas Freitas", "Jonathan-Ackerman", + "Jordi", "Jordi Villar", "Joris Giovannangeli", "Jose", @@ -489,6 +497,7 @@ const char * auto_contributors[] { "Jus", "Justin Hilliard", "Justin de Guzman", + "János Benjamin Antal", "Kang Liu", "Karl Pietrzak", "Keiji Yoshida", @@ -523,6 +532,7 @@ const char * auto_contributors[] { "Kruglov Pavel", "Krzysztof Góralski", "Kseniia Sumarokova", + "Kuba Kaflik", "Kunal Gurnani", "Kuz Le", "Ky Li", @@ -540,6 +550,7 @@ const char * auto_contributors[] { "Leopold Schabel", "Lev Borodin", "Lewinma", + "Li Shuai", "Li Yin", "Liu Cong", "LiuCong", @@ -549,12 +560,14 @@ const char * auto_contributors[] { "Lopatin Konstantin", "Lorenzo Mangani", "Loud_Scream", + "Lucas Chang", "Lucid Dreams", "Luck-Chang", "Luis Bosque", "Lv Feng", "Léo Ercolanelli", "M0r64n", + "M1eyu2018", "MEX7", "MaceWindu", "MagiaGroz", @@ -564,8 +577,10 @@ const char * auto_contributors[] { "Maksim Fedotov", "Maksim Kita", "Maksym Sobolyev", + "Mal Curtis", "Mallik Hassan", "Malte", + "Manas Alekar", "Manuel de la Peña", "Marat IDRISOV", "Marcelo Rodriguez", @@ -650,7 +665,9 @@ const char * auto_contributors[] { "Milad Arabi", "Mingliang Pan", "Misko Lee", + "Misz606", "Mohamad Fadhil", + "Mohammad Arab Anvari", "Mohammad Hossein Sekhavat", "Mojtaba Yaghoobzadeh", "Mostafa Dahab", @@ -764,6 +781,7 @@ const char * auto_contributors[] { "Rajkumar Varada", "Ramazan Polat", "Rami Dridi", + "Raqbit", "Ravengg", "Raúl Marín", "Realist007", @@ -791,6 +809,7 @@ const char * auto_contributors[] { "Roman Peshkurov", "Roman Tsisyk", "Roman Vasin", + "Roman Vlasenko", "Roman Zhukov", "Roy Bellingan", "Ruslan", @@ -825,6 +844,8 @@ const char * auto_contributors[] { "Sergey Demurin", "Sergey Elantsev", "Sergey Fedorov", + "Sergey Kazmin", + "Sergey Kislov", "Sergey Kononenko", "Sergey Lazarev", "Sergey Magidovich", @@ -840,6 +861,7 @@ const char * auto_contributors[] { "Sergio Tulentsev", "SevaCode", "Seyed Mehrshad Hosseini", + "Shane Andrade", "Sherry Wang", "Shoh Jahon", "Sichen Zhao", @@ -857,6 +879,8 @@ const char * auto_contributors[] { "SmitaRKulkarni", "Snow", "Sofia Antipushina", + "Sorck", + "Stanislav Dobrovolschii", "Stanislav Pavlovichev", "Stas Kelvich", "Stas Pavlovichev", @@ -922,6 +946,8 @@ const char * auto_contributors[] { "Vadym Chekan", "Vage Ogannisian", "Val", + "Val Doroshchuk", + "Valentin Alexeev", "Valera Ryaboshapko", "Varinara", "Vasily Kozhukhovskiy", @@ -935,8 +961,10 @@ const char * auto_contributors[] { "Veselkov Konstantin", "Viachaslau Boben", "Victor", + "Victor Krasnov", "Victor Tarnavsky", "Viktor Taranenko", + "Vincent", "Vincent Bernat", "Vitalii S", "Vitaliy", @@ -1018,12 +1046,14 @@ const char * auto_contributors[] { "Yuriy Korzhenevskiy", "Yury Karpovich", "Yury Stankevich", + "Yusuke Tanaka", "ZhiYong Wang", "Zhichang Yu", "Zhichun Wu", "Zhiguo Zhou", "Zhipeng", "Zijie Lu", + "Ziy1-Tan", "Zoran Pandovski", 
"[데이터플랫폼팀] 이호선", "a.palagashvili", @@ -1039,6 +1069,7 @@ const char * auto_contributors[] { "akazz", "akonyaev", "akuzm", + "alekar", "alekseik1", "alekseygolub", "alesapin", @@ -1072,6 +1103,7 @@ const char * auto_contributors[] { "asiana21", "atereh", "attack204", + "auxten", "avasiliev", "avogar", "avoiderboi", @@ -1094,6 +1126,7 @@ const char * auto_contributors[] { "caipengxiang", "candiduslynx", "canenoneko", + "cangyin", "caspian", "cekc", "centos7", @@ -1131,6 +1164,7 @@ const char * auto_contributors[] { "damozhaeva", "dankondr", "daoready", + "darkkeks", "dasmfm", "davydovska", "decaseal", @@ -1189,6 +1223,7 @@ const char * auto_contributors[] { "franklee", "fredchenbj", "freedomDR", + "frinkr", "fuqi", "fuwhu", "fuzhe1989", @@ -1236,6 +1271,7 @@ const char * auto_contributors[] { "ikopylov", "imgbot[bot]", "ip", + "ismailakpolat", "it1804", "ivan-klass", "ivan-kush", @@ -1255,6 +1291,7 @@ const char * auto_contributors[] { "jianmei zhang", "jinjunzh", "jkuklis", + "johanngan", "jthmath", "jun won", "jus1096", @@ -1280,6 +1317,7 @@ const char * auto_contributors[] { "kst-morozov", "l", "l1tsolaiki", + "laimuxi", "lalex", "lanfz", "larryluogit", @@ -1298,8 +1336,10 @@ const char * auto_contributors[] { "liang.huang", "liangqian", "libenwang", + "libin", "lichengxiang", "liding1992", + "lihaibo42", "linceyou", "lincion", "lingo-xp", @@ -1341,6 +1381,7 @@ const char * auto_contributors[] { "mastertheknife", "mateng0915", "mateng915", + "mauidude", "maxim", "maxim-babenko", "maxkuzn", @@ -1352,6 +1393,7 @@ const char * auto_contributors[] { "meo", "meoww-bot", "mergify[bot]", + "merlllle", "metahys", "mf5137", "mfridental", @@ -1398,6 +1440,7 @@ const char * auto_contributors[] { "olevino", "olevino999", "olgarev", + "ongkong", "orantius", "p0ny", "palasonicq", @@ -1473,6 +1516,7 @@ const char * auto_contributors[] { "spongedc", "spume", "spyros87", + "sslouis", "stan", "stavrolia", "stepenhu", @@ -1518,6 +1562,7 @@ const char * auto_contributors[] { "vicdashkov", "vicgao", "vinity", + "vitac", "vitstn", "vivarum", "vladimir golovchenko", @@ -1527,6 +1572,7 @@ const char * auto_contributors[] { "vzakaznikov", "wangchao", "wangdh15", + "wangxiaobo", "weeds085490", "whysage", "wineternity", @@ -1537,6 +1583,7 @@ const char * auto_contributors[] { "xieyichen", "xinhuitian", "xlwh", + "xmy", "yakkomajuri", "yakov-olkhovskiy", "yandd", @@ -1589,6 +1636,7 @@ const char * auto_contributors[] { "zvvr", "zxc111", "zxealous", + "zy-kkk", "zzsmdfj", "Šimon Podlipský", "Александр", @@ -1605,6 +1653,7 @@ const char * auto_contributors[] { "Сундуков Алексей", "万康", "何李夫", + "你不要过来啊", "凌涛", "刘浩林", "刘陶峰", diff --git a/src/Storages/System/StorageSystemQueryCache.cpp b/src/Storages/System/StorageSystemQueryCache.cpp index 245f4b7fd26..3dfc5cf298a 100644 --- a/src/Storages/System/StorageSystemQueryCache.cpp +++ b/src/Storages/System/StorageSystemQueryCache.cpp @@ -47,7 +47,7 @@ void StorageSystemQueryCache::fillData(MutableColumns & res_columns, ContextPtr res_columns[0]->insert(key.queryStringFromAst()); /// approximates the original query string res_columns[1]->insert(QueryCache::QueryCacheEntryWeight()(*query_result)); res_columns[2]->insert(key.expires_at < std::chrono::system_clock::now()); - res_columns[3]->insert(!key.is_shared); + res_columns[3]->insert(key.is_shared); res_columns[4]->insert(key.is_compressed); res_columns[5]->insert(std::chrono::system_clock::to_time_t(key.expires_at)); res_columns[6]->insert(key.ast->getTreeHash().first); diff --git a/src/Storages/registerStorages.cpp 
b/src/Storages/registerStorages.cpp index 8be176a5375..5606e6728d4 100644 --- a/src/Storages/registerStorages.cpp +++ b/src/Storages/registerStorages.cpp @@ -96,6 +96,10 @@ void registerStorageSQLite(StorageFactory & factory); void registerStorageKeeperMap(StorageFactory & factory); +#if USE_AZURE_BLOB_STORAGE +void registerStorageAzureBlob(StorageFactory & factory); +#endif + void registerStorages() { auto & factory = StorageFactory::instance(); @@ -191,6 +195,10 @@ void registerStorages() #endif registerStorageKeeperMap(factory); + + #if USE_AZURE_BLOB_STORAGE + registerStorageAzureBlob(factory); + #endif } } diff --git a/src/TableFunctions/CMakeLists.txt b/src/TableFunctions/CMakeLists.txt index b1fa61a72ee..3544c5bf8b4 100644 --- a/src/TableFunctions/CMakeLists.txt +++ b/src/TableFunctions/CMakeLists.txt @@ -17,5 +17,5 @@ add_library(clickhouse_table_functions ${clickhouse_table_functions_sources}) target_link_libraries(clickhouse_table_functions PRIVATE clickhouse_parsers clickhouse_storages_system dbms) if (TARGET ch_contrib::hivemetastore) - target_link_libraries(clickhouse_table_functions PRIVATE ch_contrib::hivemetastore ch_contrib::hdfs ch_contrib::parquet) + target_link_libraries(clickhouse_table_functions PRIVATE ch_contrib::hivemetastore ch_contrib::hdfs ch_contrib::parquet ch_contrib::azure_sdk) endif () diff --git a/src/TableFunctions/ITableFunctionCluster.h b/src/TableFunctions/ITableFunctionCluster.h index ad88d7b54f0..a8329684ee6 100644 --- a/src/TableFunctions/ITableFunctionCluster.h +++ b/src/TableFunctions/ITableFunctionCluster.h @@ -2,12 +2,13 @@ #include "config.h" -#include -#include -#include -#include #include +#include +#include #include +#include +#include +#include namespace DB diff --git a/src/TableFunctions/TableFunctionAzureBlobStorage.cpp b/src/TableFunctions/TableFunctionAzureBlobStorage.cpp new file mode 100644 index 00000000000..d2a96173491 --- /dev/null +++ b/src/TableFunctions/TableFunctionAzureBlobStorage.cpp @@ -0,0 +1,255 @@ +#include "config.h" + +#if USE_AZURE_BLOB_STORAGE + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "registerTableFunctions.h" +#include +#include +#include + +#include + + +namespace DB +{ + +namespace ErrorCodes +{ + extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH; + extern const int BAD_ARGUMENTS; +} + +namespace +{ + +bool isConnectionString(const std::string & candidate) +{ + return candidate.starts_with("DefaultEndpointsProtocol"); +} + +} + +StorageAzureBlob::Configuration TableFunctionAzureBlobStorage::parseArgumentsImpl(ASTs & engine_args, const ContextPtr & local_context, bool get_format_from_file) +{ + StorageAzureBlob::Configuration configuration; + + /// Supported signatures: + /// + /// AzureBlobStorage(connection_string|storage_account_url, container_name, blobpath, [account_name, account_key, format, compression, structure]) + /// + + if (auto named_collection = tryGetNamedCollectionWithOverrides(engine_args, local_context)) + { + StorageAzureBlob::processNamedCollectionResult(configuration, *named_collection); + + configuration.blobs_paths = {configuration.blob_path}; + + if (configuration.format == "auto" && get_format_from_file) + configuration.format = FormatFactory::instance().getFormatFromFileName(configuration.blob_path, true); + + return configuration; + } + + if (engine_args.size() < 3 || engine_args.size() > 8) + throw Exception(ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH, + "Storage Azure 
requires 3 to 7 arguments: " + "AzureBlobStorage(connection_string|storage_account_url, container_name, blobpath, [account_name, account_key, format, compression, structure])"); + + for (auto & engine_arg : engine_args) + engine_arg = evaluateConstantExpressionOrIdentifierAsLiteral(engine_arg, local_context); + + std::unordered_map engine_args_to_idx; + + configuration.connection_url = checkAndGetLiteralArgument(engine_args[0], "connection_string/storage_account_url"); + configuration.is_connection_string = isConnectionString(configuration.connection_url); + + configuration.container = checkAndGetLiteralArgument(engine_args[1], "container"); + configuration.blob_path = checkAndGetLiteralArgument(engine_args[2], "blobpath"); + + auto is_format_arg = [] (const std::string & s) -> bool + { + return s == "auto" || FormatFactory::instance().getAllFormats().contains(s); + }; + + if (engine_args.size() == 4) + { + auto fourth_arg = checkAndGetLiteralArgument(engine_args[3], "format/account_name/structure"); + if (is_format_arg(fourth_arg)) + { + configuration.format = fourth_arg; + } + else + { + configuration.structure = fourth_arg; + } + } + else if (engine_args.size() == 5) + { + auto fourth_arg = checkAndGetLiteralArgument(engine_args[3], "format/account_name"); + if (is_format_arg(fourth_arg)) + { + configuration.format = fourth_arg; + configuration.compression_method = checkAndGetLiteralArgument(engine_args[4], "compression"); + } + else + { + configuration.account_name = fourth_arg; + configuration.account_key = checkAndGetLiteralArgument(engine_args[4], "account_key"); + } + } + else if (engine_args.size() == 6) + { + auto fourth_arg = checkAndGetLiteralArgument(engine_args[3], "format/account_name"); + if (is_format_arg(fourth_arg)) + { + configuration.format = fourth_arg; + configuration.compression_method = checkAndGetLiteralArgument(engine_args[4], "compression"); + configuration.structure = checkAndGetLiteralArgument(engine_args[5], "structure"); + } + else + { + configuration.account_name = fourth_arg; + configuration.account_key = checkAndGetLiteralArgument(engine_args[4], "account_key"); + auto sixth_arg = checkAndGetLiteralArgument(engine_args[5], "format/account_name"); + if (!is_format_arg(sixth_arg)) + throw Exception(ErrorCodes::BAD_ARGUMENTS, "Unknown format {}", sixth_arg); + configuration.format = sixth_arg; + } + } + else if (engine_args.size() == 7) + { + auto fourth_arg = checkAndGetLiteralArgument(engine_args[3], "format/account_name"); + if (is_format_arg(fourth_arg)) + { + throw Exception(ErrorCodes::BAD_ARGUMENTS, "Format, compression and structure must be last arguments"); + } + else + { + configuration.account_name = fourth_arg; + configuration.account_key = checkAndGetLiteralArgument(engine_args[4], "account_key"); + auto sixth_arg = checkAndGetLiteralArgument(engine_args[5], "format/account_name"); + if (!is_format_arg(sixth_arg)) + throw Exception(ErrorCodes::BAD_ARGUMENTS, "Unknown format {}", sixth_arg); + configuration.format = sixth_arg; + configuration.compression_method = checkAndGetLiteralArgument(engine_args[6], "compression"); + } + } + else if (engine_args.size() == 8) + { + + auto fourth_arg = checkAndGetLiteralArgument(engine_args[3], "format/account_name"); + if (is_format_arg(fourth_arg)) + { + throw Exception(ErrorCodes::BAD_ARGUMENTS, "Format and compression must be last arguments"); + } + else + { + configuration.account_name = fourth_arg; + configuration.account_key = checkAndGetLiteralArgument(engine_args[4], "account_key"); + auto sixth_arg 
+            auto sixth_arg = checkAndGetLiteralArgument<String>(engine_args[5], "format/account_name");
+            if (!is_format_arg(sixth_arg))
+                throw Exception(ErrorCodes::BAD_ARGUMENTS, "Unknown format {}", sixth_arg);
+            configuration.format = sixth_arg;
+            configuration.compression_method = checkAndGetLiteralArgument<String>(engine_args[6], "compression");
+            configuration.structure = checkAndGetLiteralArgument<String>(engine_args[7], "structure");
+        }
+    }
+
+    configuration.blobs_paths = {configuration.blob_path};
+
+    if (configuration.format == "auto" && get_format_from_file)
+        configuration.format = FormatFactory::instance().getFormatFromFileName(configuration.blob_path, true);
+
+    return configuration;
+}
+
+void TableFunctionAzureBlobStorage::parseArguments(const ASTPtr & ast_function, ContextPtr context)
+{
+    /// Clone ast function, because we can modify its arguments like removing headers.
+    auto ast_copy = ast_function->clone();
+
+    ASTs & args_func = ast_function->children;
+
+    if (args_func.size() != 1)
+        throw Exception(ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH, "Table function '{}' must have arguments.", getName());
+
+    auto & args = args_func.at(0)->children;
+
+    configuration = parseArgumentsImpl(args, context);
+}
+
+ColumnsDescription TableFunctionAzureBlobStorage::getActualTableStructure(ContextPtr context) const
+{
+    if (configuration.structure == "auto")
+    {
+        context->checkAccess(getSourceAccessType());
+        auto client = StorageAzureBlob::createClient(configuration);
+        auto settings = StorageAzureBlob::createSettings(context);
+
+        auto object_storage = std::make_unique<AzureObjectStorage>("AzureBlobStorageTableFunction", std::move(client), std::move(settings));
+        return StorageAzureBlob::getTableStructureFromData(object_storage.get(), configuration, std::nullopt, context);
+    }
+
+    return parseColumnsListFromString(configuration.structure, context);
+}
+
+bool TableFunctionAzureBlobStorage::supportsReadingSubsetOfColumns()
+{
+    return FormatFactory::instance().checkIfFormatSupportsSubsetOfColumns(configuration.format);
+}
+
+StoragePtr TableFunctionAzureBlobStorage::executeImpl(const ASTPtr & /*ast_function*/, ContextPtr context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const
+{
+    auto client = StorageAzureBlob::createClient(configuration);
+    auto settings = StorageAzureBlob::createSettings(context);
+
+    ColumnsDescription columns;
+    if (configuration.structure != "auto")
+        columns = parseColumnsListFromString(configuration.structure, context);
+    else if (!structure_hint.empty())
+        columns = structure_hint;
+
+    StoragePtr storage = std::make_shared<StorageAzureBlob>(
+        configuration,
+        std::make_unique<AzureObjectStorage>(table_name, std::move(client), std::move(settings)),
+        context,
+        StorageID(getDatabaseName(), table_name),
+        columns,
+        ConstraintsDescription{},
+        String{},
+        /// No format_settings for table function Azure
+        std::nullopt,
+        nullptr);
+
+    storage->startup();
+
+    return storage;
+}
+
+void registerTableFunctionAzureBlobStorage(TableFunctionFactory & factory)
+{
+    factory.registerFunction<TableFunctionAzureBlobStorage>(
+        {.documentation
+             = {.description=R"(The table function can be used to read the data stored on Azure Blob Storage.)",
+                .examples{{"azureBlobStorage", "SELECT * FROM azureBlobStorage(connection_string|storage_account_url, container_name, blobpath, [account_name, account_key, format, compression, structure])", ""}}},
+         .allow_readonly = false});
+}
+
+}
+
+#endif
diff --git a/src/TableFunctions/TableFunctionAzureBlobStorage.h b/src/TableFunctions/TableFunctionAzureBlobStorage.h
new file mode 100644
index 00000000000..0ac3f9771c7
--- /dev/null
+++ b/src/TableFunctions/TableFunctionAzureBlobStorage.h
@@ -0,0 +1,70 @@
+#pragma once
+
+#include "config.h"
+
+#if USE_AZURE_BLOB_STORAGE
+
+#include <TableFunctions/ITableFunction.h>
+#include <Storages/StorageAzureBlob.h>
+
+
+namespace DB
+{
+
+class Context;
+
+/* azureBlobStorage(connection_string|storage_account_url, container_name, blobpath, [account_name, account_key, format, compression, structure]) - creates a temporary storage for a file in Azure Blob Storage.
+ */
+class TableFunctionAzureBlobStorage : public ITableFunction
+{
+public:
+    static constexpr auto name = "azureBlobStorage";
+    static constexpr auto signature = "- connection_string|storage_account_url, container_name, blobpath, [account_name, account_key, format, compression, structure]\n";
+
+    static size_t getMaxNumberOfArguments() { return 8; }
+
+    String getName() const override
+    {
+        return name;
+    }
+
+    virtual String getSignature() const
+    {
+        return signature;
+    }
+
+    bool hasStaticStructure() const override { return configuration.structure != "auto"; }
+
+    bool needStructureHint() const override { return configuration.structure == "auto"; }
+
+    void setStructureHint(const ColumnsDescription & structure_hint_) override { structure_hint = structure_hint_; }
+
+    bool supportsReadingSubsetOfColumns() override;
+
+    std::unordered_set<String> getVirtualsToCheckBeforeUsingStructureHint() const override
+    {
+        return {"_path", "_file"};
+    }
+
+    static StorageAzureBlob::Configuration parseArgumentsImpl(ASTs & args, const ContextPtr & context, bool get_format_from_file = true);
+
+protected:
+
+    StoragePtr executeImpl(
+        const ASTPtr & ast_function,
+        ContextPtr context,
+        const std::string & table_name,
+        ColumnsDescription cached_columns) const override;
+
+    const char * getStorageTypeName() const override { return "Azure"; }
+
+    ColumnsDescription getActualTableStructure(ContextPtr context) const override;
+    void parseArguments(const ASTPtr & ast_function, ContextPtr context) override;
+
+    mutable StorageAzureBlob::Configuration configuration;
+    ColumnsDescription structure_hint;
+};
+
+}
+
+#endif
diff --git a/src/TableFunctions/registerTableFunctions.cpp b/src/TableFunctions/registerTableFunctions.cpp
index 4f3411df4c5..0499524a912 100644
--- a/src/TableFunctions/registerTableFunctions.cpp
+++ b/src/TableFunctions/registerTableFunctions.cpp
@@ -71,6 +71,12 @@ void registerTableFunctions()
     registerTableFunctionFormat(factory);
     registerTableFunctionExplain(factory);
+
+#if USE_AZURE_BLOB_STORAGE
+    registerTableFunctionAzureBlobStorage(factory);
+#endif
+
+
 }
 }
diff --git a/src/TableFunctions/registerTableFunctions.h b/src/TableFunctions/registerTableFunctions.h
index c51522a5e99..393bc080a3d 100644
--- a/src/TableFunctions/registerTableFunctions.h
+++ b/src/TableFunctions/registerTableFunctions.h
@@ -69,6 +69,10 @@ void registerTableFunctionFormat(TableFunctionFactory & factory);
 void registerTableFunctionExplain(TableFunctionFactory & factory);
+#if USE_AZURE_BLOB_STORAGE
+void registerTableFunctionAzureBlobStorage(TableFunctionFactory & factory);
+#endif
+
 void registerTableFunctions();
 }
diff --git a/tests/ci/build_check.py b/tests/ci/build_check.py
index 4bc61c79fc0..35b98a7c3bb 100644
--- a/tests/ci/build_check.py
+++ b/tests/ci/build_check.py
@@ -343,6 +343,15 @@ def main():
         f"sudo chown -R ubuntu:ubuntu {build_output_path}", shell=True
     )
     logging.info("Build finished with %s, log path %s", success, log_path)
+    if not success:
+        # We check if docker works, because if it's down, it's infrastructure
+        try:
+            subprocess.check_call("docker info", shell=True)
+        except subprocess.CalledProcessError:
+            logging.error(
+                "The dockerd looks down, won't 
upload anything and generate report" + ) + sys.exit(1) # FIXME performance performance_urls = [] diff --git a/tests/ci/stress.py b/tests/ci/stress.py index b9044874071..e370ddbdd21 100755 --- a/tests/ci/stress.py +++ b/tests/ci/stress.py @@ -37,9 +37,6 @@ def get_options(i, upgrade_check): client_options.append("join_algorithm='partial_merge'") if join_alg_num % 5 == 2: client_options.append("join_algorithm='full_sorting_merge'") - if join_alg_num % 5 == 3 and not upgrade_check: - # Some crashes are not fixed in 23.2 yet, so ignore the setting in Upgrade check - client_options.append("join_algorithm='grace_hash'") if join_alg_num % 5 == 4: client_options.append("join_algorithm='auto'") client_options.append("max_rows_in_join=1000") diff --git a/tests/instructions/easy_tasks_sorted_ru.md b/tests/instructions/easy_tasks_sorted_ru.md index a98a5766ffe..09ea48d0bd9 100644 --- a/tests/instructions/easy_tasks_sorted_ru.md +++ b/tests/instructions/easy_tasks_sorted_ru.md @@ -6,7 +6,7 @@ Делаем `chmod 000 /etc/clickhouse-client/config.xml` и смотрим, что получится. -## Уменьшать max_memory_usage, если на сервере мало оперативки. +## + Уменьшать max_memory_usage, если на сервере мало оперативки. Смотрим, сколько на сервере оперативки. Если `max_memory_usage`, `max_memory_usage_for_all_queries` ограничены, но больше 90% (настройка) от имеющейся оперативки, то уменьшать их и выводить предупреждение в лог.. @@ -42,10 +42,12 @@ void memoryBitAnd(const char * a, const char * b, char * result, size_t size); В ClickHouse есть возможность указать collation для сортировки строк. Это не работает для `Nullable(String)`. -## Запретить чтение значений типа AggregateFunction по-умолчанию и добавить настройку. +## + Запретить чтение значений типа AggregateFunction по-умолчанию и добавить настройку. Состояния агрегатных функций могут быть записаны в дамп и считаны из него. Но десериализация состояний агрегатных функций небезопасна. Аккуратно выбранные пользовательские данные могут привести к segfault или порче памяти. Поэтому нужно просто сделать настройку, которая запрещает читать AggregateFunction из пользовательских данных. +Upd: сделали по-другому: теперь всё безопасно. + ## + В статистику jemalloc добавить информацию по arenas. В `system.asynchronous_metrics` - суммарный размер арен. @@ -56,9 +58,9 @@ void memoryBitAnd(const char * a, const char * b, char * result, size_t size); Как cache, но без кэша — всегда прямой запрос в источник. -## Функции randomFixedString, randomBinaryString, fuzzBits, fuzzBytes. +## + Функции randomFixedString, randomBinaryString, fuzzBits, fuzzBytes. -## Агрегатные функции для статистических тестов (e.g. тест нормальности распределения) и статистик. +## + Агрегатные функции для статистических тестов (e.g. тест нормальности распределения) и статистик. ## + Функции создания и обновления состояния агрегатной функции по одному кортежу аргументов. @@ -119,11 +121,11 @@ position с конца строки. Добавляем счётчики всех ошибок (ErrorCodes) по аналогии с ProfileEvents. Кроме количества запоминаем также время последней ошибки, стек трейс, сообщение. Добавляем системную таблицу system.errors. Отправка в Graphite. -## Добавить Lizard, LZSSE и density в качестве вариантов алгоритмов сжатия. +## + Добавить Lizard, LZSSE и density в качестве вариантов алгоритмов сжатия. Экспериментальные алгоритмы сжатия. Сейчас ClickHouse поддерживает только lz4 и zstd. -## Запрос CREATE OR REPLACE TABLE +## + Запрос CREATE OR REPLACE TABLE Атомарно удаляет таблицу перед созданием новой, если такая была. 
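The completed item above refers to `CREATE OR REPLACE TABLE`, which atomically swaps in a new table where a plain `DROP TABLE` followed by `CREATE TABLE` would leave a window with no table at all (this relies on the Atomic database engine, the default). A minimal sketch of how the statement is typically used; the table and column names below are illustrative and not taken from this changeset:

```sql
-- First run: creates the table, since it does not exist yet.
CREATE OR REPLACE TABLE events
(
    id UInt64,
    payload String
)
ENGINE = MergeTree
ORDER BY id;

-- Second run: atomically drops the old table and creates an empty one,
-- here with an additional column.
CREATE OR REPLACE TABLE events
(
    id UInt64,
    payload String,
    created_at DateTime
)
ENGINE = MergeTree
ORDER BY id;
```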
@@ -149,12 +151,16 @@ https://clickhouse.com/docs/en/query_language/create/#create-table Запретить модификацию данных в партиции. На партицию ставится флаг, что она заблокирована. В неё нельзя делать INSERT и ALTER. С файлов снимается доступ на запись. +Upd: не нужно. + ## Настройка join_use_nulls: поддержка для LEFT ARRAY JOIN. -## Внешние словари из Aerospike/Couchbase/Cassandra (на выбор). +## + Внешние словари из Aerospike/Couchbase/Cassandra (на выбор). Подключить одну из key-value БД как источник. +Upd: сделали Redis, Cassandra, MongoDB. + ## + Движок таблиц Mongo, табличная функция mongo. Возможность легко импортировать данные из MongoDB. @@ -181,7 +187,7 @@ https://clickhouse.com/docs/en/operations/table_engines/external_data/ Не работает, если открыть clickhouse-client в интерактивном режиме и делать несколько запросов. -## Настройка для возможности получить частичный результат при cancel-е. +## + Настройка для возможности получить частичный результат при cancel-е. Хотим по Ctrl+C получить те данные, которые успели обработаться. diff --git a/tests/integration/test_https_replication/configs/listen_host.xml b/tests/integration/test_https_replication/configs/listen_host.xml new file mode 100644 index 00000000000..f94e5c88568 --- /dev/null +++ b/tests/integration/test_https_replication/configs/listen_host.xml @@ -0,0 +1,3 @@ + + :: + diff --git a/tests/integration/test_https_replication/test_change_ip.py b/tests/integration/test_https_replication/test_change_ip.py new file mode 100644 index 00000000000..14fe5351c8d --- /dev/null +++ b/tests/integration/test_https_replication/test_change_ip.py @@ -0,0 +1,96 @@ +import pytest +from helpers.cluster import ClickHouseCluster +from helpers.test_tools import assert_eq_with_retry + +""" +Both ssl_conf.xml and no_ssl_conf.xml have the same port +""" + + +def _fill_nodes(nodes, shard): + for node in nodes: + node.query( + """ + CREATE DATABASE test; + + CREATE TABLE test_table(date Date, id UInt32) + ENGINE = ReplicatedMergeTree('/clickhouse/tables/test{shard}/replicated', '{replica}') PARTITION BY toYYYYMM(date) ORDER BY id; + """.format( + shard=shard, replica=node.name + ) + ) + + +cluster = ClickHouseCluster(__file__) +node1 = cluster.add_instance( + "node1", + main_configs=[ + "configs/remote_servers.xml", + "configs/listen_host.xml", + "configs/ssl_conf.xml", + "configs/server.crt", + "configs/server.key", + "configs/dhparam.pem", + ], + with_zookeeper=True, + ipv6_address="2001:3984:3989::1:1111", +) +node2 = cluster.add_instance( + "node2", + main_configs=[ + "configs/remote_servers.xml", + "configs/listen_host.xml", + "configs/ssl_conf.xml", + "configs/server.crt", + "configs/server.key", + "configs/dhparam.pem", + ], + with_zookeeper=True, + ipv6_address="2001:3984:3989::1:1112", +) + + +@pytest.fixture(scope="module") +def both_https_cluster(): + try: + cluster.start() + + _fill_nodes([node1, node2], 1) + + yield cluster + + finally: + cluster.shutdown() + + +def test_replication_when_node_ip_changed(both_https_cluster): + """ + Test for a bug when replication over HTTPS stops working when the IP of the source replica was changed. 
+
+    node1 is a source node
+    node2 fetches data from node1
+    """
+    node1.query("truncate table test_table")
+    node2.query("truncate table test_table")
+
+    # First we check that normal replication works
+    node1.query(
+        "INSERT INTO test_table VALUES ('2022-10-01', 1), ('2022-10-02', 2), ('2022-10-03', 3)"
+    )
+    assert node1.query("SELECT count(*) from test_table") == "3\n"
+    assert_eq_with_retry(node2, "SELECT count(*) from test_table", "3")
+
+    # We change source node ip
+    cluster.restart_instance_with_ip_change(node1, "2001:3984:3989::1:7777")
+
+    # Put some data to source node1
+    node1.query(
+        "INSERT INTO test_table VALUES ('2018-10-01', 4), ('2018-10-02', 4), ('2018-10-03', 6)"
+    )
+    # Check that data is placed on node1
+    assert node1.query("SELECT count(*) from test_table") == "6\n"
+
+    # drop DNS cache
+    node2.query("SYSTEM DROP DNS CACHE")
+    # Data is fetched
+    assert_eq_with_retry(node2, "SELECT count(*) from test_table", "6")
diff --git a/tests/integration/test_materialized_mysql_database/materialize_with_ddl.py b/tests/integration/test_materialized_mysql_database/materialize_with_ddl.py
index 97e2de49ceb..2bbbe9a3f13 100644
--- a/tests/integration/test_materialized_mysql_database/materialize_with_ddl.py
+++ b/tests/integration/test_materialized_mysql_database/materialize_with_ddl.py
@@ -379,6 +379,46 @@ def drop_table_with_materialized_mysql_database(
         "",
     )
+    mysql_node.query(
+        "CREATE TABLE test_database_drop.test_table_3 (id INT NOT NULL PRIMARY KEY) ENGINE = InnoDB"
+    )
+    mysql_node.query("INSERT INTO test_database_drop.test_table_3 VALUES(1), (2)")
+    check_query(
+        clickhouse_node,
+        "SHOW TABLES FROM test_database_drop FORMAT TSV",
+        "test_table_2\ntest_table_3\n",
+    )
+    check_query(
+        clickhouse_node,
+        "SELECT * FROM test_database_drop.test_table_3 ORDER BY id FORMAT TSV",
+        "1\n2\n",
+    )
+    mysql_node.query("TRUNCATE test_database_drop.test_table_3")
+    check_query(
+        clickhouse_node,
+        "SELECT * FROM test_database_drop.test_table_3 ORDER BY id FORMAT TSV",
+        "",
+    )
+
+    mysql_node.query(
+        "CREATE TABLE test_database_drop.test_table_4 (id INT NOT NULL PRIMARY KEY) ENGINE = InnoDB"
+    )
+    mysql_node.query("INSERT INTO test_database_drop.test_table_4 VALUES(1), (2)")
+    check_query(
+        clickhouse_node,
+        "SELECT * FROM test_database_drop.test_table_4 ORDER BY id FORMAT TSV",
+        "1\n2\n",
+    )
+    with mysql_node.alloc_connection() as mysql:
+        mysql.query("USE test_database_drop")
+        mysql.query("TRUNCATE test_table_4")
+
+    check_query(
+        clickhouse_node,
+        "SELECT * FROM test_database_drop.test_table_4 ORDER BY id FORMAT TSV",
+        "",
+    )
+
     clickhouse_node.query("DROP DATABASE test_database_drop")
     mysql_node.query("DROP DATABASE test_database_drop")
diff --git a/tests/integration/test_storage_azure_blob_storage/__init__.py b/tests/integration/test_storage_azure_blob_storage/__init__.py
new file mode 100644
index 00000000000..e5a0d9b4834
--- /dev/null
+++ b/tests/integration/test_storage_azure_blob_storage/__init__.py
@@ -0,0 +1 @@
+#!/usr/bin/env python3
diff --git a/tests/integration/test_storage_azure_blob_storage/configs/disable_profilers.xml b/tests/integration/test_storage_azure_blob_storage/configs/disable_profilers.xml
new file mode 100644
index 00000000000..a39badbf8ec
--- /dev/null
+++ b/tests/integration/test_storage_azure_blob_storage/configs/disable_profilers.xml
@@ -0,0 +1,9 @@
+<clickhouse>
+    <profiles>
+        <default>
+            <query_profiler_real_time_period_ns>0</query_profiler_real_time_period_ns>
+            <query_profiler_cpu_time_period_ns>0</query_profiler_cpu_time_period_ns>
+        </default>
+    </profiles>
+</clickhouse>
diff --git a/tests/integration/test_storage_azure_blob_storage/configs/named_collections.xml b/tests/integration/test_storage_azure_blob_storage/configs/named_collections.xml
new file mode 100644
index 00000000000..e0c18d11940
--- /dev/null
+++ b/tests/integration/test_storage_azure_blob_storage/configs/named_collections.xml
@@ -0,0 +1,16 @@
+<clickhouse>
+    <named_collections>
+        <azure_conf1>
+            <connection_string>DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://azurite1:10000/devstoreaccount1;</connection_string>
+            <container>cont</container>
+            <blob_path>test_simple_write_named.csv</blob_path>
+            <structure>key UInt64, data String</structure>
+            <format>CSV</format>
+        </azure_conf1>
+        <azure_conf2>
+            <storage_account_url>http://azurite1:10000/devstoreaccount1</storage_account_url>
+            <account_name>devstoreaccount1</account_name>
+            <account_key>Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==</account_key>
+        </azure_conf2>
+    </named_collections>
+</clickhouse>
diff --git a/tests/integration/test_storage_azure_blob_storage/test.py b/tests/integration/test_storage_azure_blob_storage/test.py
new file mode 100644
index 00000000000..f9d337b6d86
--- /dev/null
+++ b/tests/integration/test_storage_azure_blob_storage/test.py
@@ -0,0 +1,594 @@
+#!/usr/bin/env python3
+
+import gzip
+import json
+import logging
+import os
+import io
+import random
+import threading
+import time
+
+from azure.storage.blob import BlobServiceClient
+import helpers.client
+import pytest
+from helpers.cluster import ClickHouseCluster, ClickHouseInstance
+from helpers.network import PartitionManager
+from helpers.mock_servers import start_mock_servers
+from helpers.test_tools import exec_query_with_retry
+
+
+@pytest.fixture(scope="module")
+def cluster():
+    try:
+        cluster = ClickHouseCluster(__file__)
+        cluster.add_instance(
+            "node",
+            main_configs=["configs/named_collections.xml"],
+            user_configs=["configs/disable_profilers.xml"],
+            with_azurite=True,
+        )
+        cluster.start()
+
+        yield cluster
+    finally:
+        cluster.shutdown()
+
+
+def azure_query(node, query, try_num=10, settings={}):
+    for i in range(try_num):
+        try:
+            return node.query(query, settings=settings)
+        except Exception as ex:
+            retriable_errors = [
+                "DB::Exception: Azure::Core::Http::TransportException: Connection was closed by the server while trying to read a response",
+                "DB::Exception: Azure::Core::Http::TransportException: Connection closed before getting full response or response is less than expected",
+            ]
+            retry = False
+            for error in retriable_errors:
+                if error in str(ex):
+                    retry = True
+                    print(f"Try num: {i}. 
Having retriable error: {ex}") + time.sleep(i) + break + if not retry or i == try_num - 1: + raise Exception(ex) + continue + + +def get_azure_file_content(filename): + container_name = "cont" + connection_string = "DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;" + blob_service_client = BlobServiceClient.from_connection_string(connection_string) + container_client = blob_service_client.get_container_client(container_name) + blob_client = container_client.get_blob_client(filename) + download_stream = blob_client.download_blob() + return download_stream.readall().decode("utf-8") + + +def put_azure_file_content(filename, data): + container_name = "cont" + connection_string = "DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;" + blob_service_client = BlobServiceClient.from_connection_string(connection_string) + try: + container_client = blob_service_client.create_container(container_name) + except: + container_client = blob_service_client.get_container_client(container_name) + + blob_client = container_client.get_blob_client(filename) + buf = io.BytesIO(data) + blob_client.upload_blob(buf) + + +def test_create_table_connection_string(cluster): + node = cluster.instances["node"] + azure_query( + node, + "CREATE TABLE test_create_table_conn_string (key UInt64, data String) Engine = AzureBlobStorage('DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://azurite1:10000/devstoreaccount1/;', 'cont', 'test_create_connection_string', 'CSV')", + ) + + +def test_create_table_account_string(cluster): + node = cluster.instances["node"] + azure_query( + node, + "CREATE TABLE test_create_table_account_url (key UInt64, data String) Engine = AzureBlobStorage('http://azurite1:10000/devstoreaccount1', 'cont', 'test_create_connection_string', 'devstoreaccount1', 'Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==', 'CSV')", + ) + + +def test_simple_write_account_string(cluster): + node = cluster.instances["node"] + azure_query( + node, + "CREATE TABLE test_simple_write (key UInt64, data String) Engine = AzureBlobStorage('http://azurite1:10000/devstoreaccount1', 'cont', 'test_simple_write.csv', 'devstoreaccount1', 'Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==', 'CSV')", + ) + azure_query(node, "INSERT INTO test_simple_write VALUES (1, 'a')") + print(get_azure_file_content("test_simple_write.csv")) + assert get_azure_file_content("test_simple_write.csv") == '1,"a"\n' + + +def test_simple_write_connection_string(cluster): + node = cluster.instances["node"] + azure_query( + node, + "CREATE TABLE test_simple_write_connection_string (key UInt64, data String) Engine = AzureBlobStorage('DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://azurite1:10000/devstoreaccount1;', 'cont', 'test_simple_write_c.csv', 'CSV')", + ) + azure_query(node, "INSERT INTO test_simple_write_connection_string VALUES (1, 'a')") + print(get_azure_file_content("test_simple_write_c.csv")) + 
assert get_azure_file_content("test_simple_write_c.csv") == '1,"a"\n' + + +def test_simple_write_named_collection_1(cluster): + node = cluster.instances["node"] + azure_query( + node, + "CREATE TABLE test_simple_write_named_collection_1 (key UInt64, data String) Engine = AzureBlobStorage(azure_conf1)", + ) + azure_query( + node, "INSERT INTO test_simple_write_named_collection_1 VALUES (1, 'a')" + ) + print(get_azure_file_content("test_simple_write_named.csv")) + assert get_azure_file_content("test_simple_write_named.csv") == '1,"a"\n' + azure_query(node, "TRUNCATE TABLE test_simple_write_named_collection_1") + + +def test_simple_write_named_collection_2(cluster): + node = cluster.instances["node"] + azure_query( + node, + "CREATE TABLE test_simple_write_named_collection_2 (key UInt64, data String) Engine = AzureBlobStorage(azure_conf2, container='cont', blob_path='test_simple_write_named_2.csv', format='CSV')", + ) + azure_query( + node, "INSERT INTO test_simple_write_named_collection_2 VALUES (1, 'a')" + ) + print(get_azure_file_content("test_simple_write_named_2.csv")) + assert get_azure_file_content("test_simple_write_named_2.csv") == '1,"a"\n' + + +def test_partition_by(cluster): + node = cluster.instances["node"] + table_format = "column1 UInt32, column2 UInt32, column3 UInt32" + partition_by = "column3" + values = "(1, 2, 3), (3, 2, 1), (78, 43, 45)" + filename = "test_{_partition_id}.csv" + + azure_query( + node, + f"CREATE TABLE test_partitioned_write ({table_format}) Engine = AzureBlobStorage('http://azurite1:10000/devstoreaccount1', 'cont', '{filename}', 'devstoreaccount1', 'Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==', 'CSV') PARTITION BY {partition_by}", + ) + azure_query(node, f"INSERT INTO test_partitioned_write VALUES {values}") + + assert "1,2,3\n" == get_azure_file_content("test_3.csv") + assert "3,2,1\n" == get_azure_file_content("test_1.csv") + assert "78,43,45\n" == get_azure_file_content("test_45.csv") + + +def test_partition_by_string_column(cluster): + node = cluster.instances["node"] + table_format = "col_num UInt32, col_str String" + partition_by = "col_str" + values = "(1, 'foo/bar'), (3, 'йцук'), (78, '你好')" + filename = "test_{_partition_id}.csv" + azure_query( + node, + f"CREATE TABLE test_partitioned_string_write ({table_format}) Engine = AzureBlobStorage('http://azurite1:10000/devstoreaccount1', 'cont', '{filename}', 'devstoreaccount1', 'Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==', 'CSV') PARTITION BY {partition_by}", + ) + azure_query(node, f"INSERT INTO test_partitioned_string_write VALUES {values}") + + assert '1,"foo/bar"\n' == get_azure_file_content("test_foo/bar.csv") + assert '3,"йцук"\n' == get_azure_file_content("test_йцук.csv") + assert '78,"你好"\n' == get_azure_file_content("test_你好.csv") + + +def test_partition_by_const_column(cluster): + node = cluster.instances["node"] + table_format = "column1 UInt32, column2 UInt32, column3 UInt32" + values = "(1, 2, 3), (3, 2, 1), (78, 43, 45)" + partition_by = "'88'" + values_csv = "1,2,3\n3,2,1\n78,43,45\n" + filename = "test_{_partition_id}.csv" + azure_query( + node, + f"CREATE TABLE test_partitioned_const_write ({table_format}) Engine = AzureBlobStorage('http://azurite1:10000/devstoreaccount1', 'cont', '{filename}', 'devstoreaccount1', 'Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==', 'CSV') PARTITION BY {partition_by}", + ) + azure_query(node, f"INSERT INTO 
test_partitioned_const_write VALUES {values}") + assert values_csv == get_azure_file_content("test_88.csv") + + +def test_truncate(cluster): + node = cluster.instances["node"] + azure_query( + node, + "CREATE TABLE test_truncate (key UInt64, data String) Engine = AzureBlobStorage(azure_conf2, container='cont', blob_path='test_truncate.csv', format='CSV')", + ) + azure_query(node, "INSERT INTO test_truncate VALUES (1, 'a')") + assert get_azure_file_content("test_truncate.csv") == '1,"a"\n' + azure_query(node, "TRUNCATE TABLE test_truncate") + with pytest.raises(Exception): + print(get_azure_file_content("test_truncate.csv")) + + +def test_simple_read_write(cluster): + node = cluster.instances["node"] + azure_query( + node, + "CREATE TABLE test_simple_read_write (key UInt64, data String) Engine = AzureBlobStorage(azure_conf2, container='cont', blob_path='test_simple_read_write.csv', format='CSV')", + ) + + azure_query(node, "INSERT INTO test_simple_read_write VALUES (1, 'a')") + assert get_azure_file_content("test_simple_read_write.csv") == '1,"a"\n' + print(azure_query(node, "SELECT * FROM test_simple_read_write")) + assert azure_query(node, "SELECT * FROM test_simple_read_write") == "1\ta\n" + + +def test_create_new_files_on_insert(cluster): + node = cluster.instances["node"] + + azure_query( + node, + f"create table test_multiple_inserts(a Int32, b String) ENGINE = AzureBlobStorage(azure_conf2, container='cont', blob_path='test_parquet', format='Parquet')", + ) + azure_query(node, "truncate table test_multiple_inserts") + azure_query( + node, + f"insert into test_multiple_inserts select number, randomString(100) from numbers(10) settings azure_truncate_on_insert=1", + ) + azure_query( + node, + f"insert into test_multiple_inserts select number, randomString(100) from numbers(20) settings azure_create_new_file_on_insert=1", + ) + azure_query( + node, + f"insert into test_multiple_inserts select number, randomString(100) from numbers(30) settings azure_create_new_file_on_insert=1", + ) + + result = azure_query(node, f"select count() from test_multiple_inserts") + assert int(result) == 60 + + azure_query(node, f"drop table test_multiple_inserts") + + +def test_overwrite(cluster): + node = cluster.instances["node"] + + azure_query( + node, + f"create table test_overwrite(a Int32, b String) ENGINE = AzureBlobStorage(azure_conf2, container='cont', blob_path='test_parquet_overwrite', format='Parquet')", + ) + azure_query(node, "truncate table test_overwrite") + + azure_query( + node, + f"insert into test_overwrite select number, randomString(100) from numbers(50) settings azure_truncate_on_insert=1", + ) + node.query_and_get_error( + f"insert into test_overwrite select number, randomString(100) from numbers(100)" + ) + azure_query( + node, + f"insert into test_overwrite select number, randomString(100) from numbers(200) settings azure_truncate_on_insert=1", + ) + + result = azure_query(node, f"select count() from test_overwrite") + assert int(result) == 200 + + +def test_insert_with_path_with_globs(cluster): + node = cluster.instances["node"] + azure_query( + node, + f"create table test_insert_globs(a Int32, b String) ENGINE = AzureBlobStorage(azure_conf2, container='cont', blob_path='test_insert_with_globs*', format='Parquet')", + ) + node.query_and_get_error( + f"insert into table function test_insert_globs SELECT number, randomString(100) FROM numbers(500)" + ) + + +def test_put_get_with_globs(cluster): + # type: (ClickHouseCluster) -> None + unique_prefix = random.randint(1, 10000) + node = 
cluster.instances["node"] # type: ClickHouseInstance + table_format = "column1 UInt32, column2 UInt32, column3 UInt32" + max_path = "" + for i in range(10): + for j in range(10): + path = "{}/{}_{}/{}.csv".format( + unique_prefix, i, random.choice(["a", "b", "c", "d"]), j + ) + max_path = max(path, max_path) + values = f"({i},{j},{i + j})" + + azure_query( + node, + f"CREATE TABLE test_{i}_{j} ({table_format}) Engine = AzureBlobStorage(azure_conf2, container='cont', blob_path='{path}', format='CSV')", + ) + + query = f"insert into test_{i}_{j} VALUES {values}" + azure_query(node, query) + + azure_query( + node, + f"CREATE TABLE test_glob_select ({table_format}) Engine = AzureBlobStorage(azure_conf2, container='cont', blob_path='{unique_prefix}/*_{{a,b,c,d}}/?.csv', format='CSV')", + ) + query = "select sum(column1), sum(column2), sum(column3), min(_file), max(_path) from test_glob_select" + assert azure_query(node, query).splitlines() == [ + "450\t450\t900\t0.csv\t{bucket}/{max_path}".format( + bucket="cont", max_path=max_path + ) + ] + + +def test_azure_glob_scheherazade(cluster): + node = cluster.instances["node"] # type: ClickHouseInstance + table_format = "column1 UInt32, column2 UInt32, column3 UInt32" + values = "(1, 1, 1)" + nights_per_job = 1001 // 30 + jobs = [] + for night in range(0, 1001, nights_per_job): + + def add_tales(start, end): + for i in range(start, end): + path = "night_{}/tale.csv".format(i) + unique_num = random.randint(1, 10000) + azure_query( + node, + f"CREATE TABLE test_{i}_{unique_num} ({table_format}) Engine = AzureBlobStorage(azure_conf2, container='cont', blob_path='{path}', format='CSV')", + ) + query = f"insert into test_{i}_{unique_num} VALUES {values}" + azure_query(node, query) + + jobs.append( + threading.Thread( + target=add_tales, args=(night, min(night + nights_per_job, 1001)) + ) + ) + jobs[-1].start() + + for job in jobs: + job.join() + + azure_query( + node, + f"CREATE TABLE test_glob_select_scheherazade ({table_format}) Engine = AzureBlobStorage(azure_conf2, container='cont', blob_path='night_*/tale.csv', format='CSV')", + ) + query = "select count(), sum(column1), sum(column2), sum(column3) from test_glob_select_scheherazade" + assert azure_query(node, query).splitlines() == ["1001\t1001\t1001\t1001"] + + +@pytest.mark.parametrize( + "extension,method", + [pytest.param("bin", "gzip", id="bin"), pytest.param("gz", "auto", id="gz")], +) +def test_storage_azure_get_gzip(cluster, extension, method): + node = cluster.instances["node"] + filename = f"test_get_gzip.{extension}" + name = f"test_get_gzip_{extension}" + data = [ + "Sophia Intrieri,55", + "Jack Taylor,71", + "Christopher Silva,66", + "Clifton Purser,35", + "Richard Aceuedo,43", + "Lisa Hensley,31", + "Alice Wehrley,1", + "Mary Farmer,47", + "Samara Ramirez,19", + "Shirley Lloyd,51", + "Santos Cowger,0", + "Richard Mundt,88", + "Jerry Gonzalez,15", + "Angela James,10", + "Norman Ortega,33", + "", + ] + azure_query(node, f"DROP TABLE IF EXISTS {name}") + + buf = io.BytesIO() + compressed = gzip.GzipFile(fileobj=buf, mode="wb") + compressed.write(("\n".join(data)).encode()) + compressed.close() + put_azure_file_content(filename, buf.getvalue()) + + azure_query( + node, + f"""CREATE TABLE {name} (name String, id UInt32) ENGINE = AzureBlobStorage( + azure_conf2, container='cont', blob_path ='{filename}', + format='CSV', + compression='{method}')""", + ) + + assert azure_query(node, f"SELECT sum(id) FROM {name}").splitlines() == ["565"] + azure_query(node, f"DROP TABLE {name}") + + +def 
test_schema_inference_no_globs(cluster): + node = cluster.instances["node"] # type: ClickHouseInstance + table_format = "column1 UInt32, column2 String, column3 UInt32" + azure_query( + node, + f"CREATE TABLE test_schema_inference_src ({table_format}) Engine = AzureBlobStorage(azure_conf2, container='cont', blob_path='test_schema_inference_no_globs.csv', format='CSVWithNames')", + ) + + query = f"insert into test_schema_inference_src SELECT number, toString(number), number * number FROM numbers(1000)" + azure_query(node, query) + + azure_query( + node, + f"CREATE TABLE test_select_inference Engine = AzureBlobStorage(azure_conf2, container='cont', blob_path='test_schema_inference_no_globs.csv')", + ) + + print(node.query("SHOW CREATE TABLE test_select_inference")) + + query = "select sum(column1), sum(length(column2)), sum(column3), min(_file), max(_path) from test_select_inference" + assert azure_query(node, query).splitlines() == [ + "499500\t2890\t332833500\ttest_schema_inference_no_globs.csv\tcont/test_schema_inference_no_globs.csv" + ] + + +def test_schema_inference_from_globs(cluster): + node = cluster.instances["node"] + unique_prefix = random.randint(1, 10000) + node = cluster.instances["node"] # type: ClickHouseInstance + table_format = "column1 UInt32, column2 UInt32, column3 UInt32" + max_path = "" + for i in range(10): + for j in range(10): + path = "{}/{}_{}/{}.csv".format( + unique_prefix, i, random.choice(["a", "b", "c", "d"]), j + ) + max_path = max(path, max_path) + values = f"({i},{j},{i + j})" + + azure_query( + node, + f"CREATE TABLE test_schema_{i}_{j} ({table_format}) Engine = AzureBlobStorage(azure_conf2, container='cont', blob_path='{path}', format='CSVWithNames')", + ) + + query = f"insert into test_schema_{i}_{j} VALUES {values}" + azure_query(node, query) + + azure_query( + node, + f"CREATE TABLE test_glob_select_inference Engine = AzureBlobStorage(azure_conf2, container='cont', blob_path='{unique_prefix}/*_{{a,b,c,d}}/?.csv')", + ) + + print(node.query("SHOW CREATE TABLE test_glob_select_inference")) + + query = "select sum(column1), sum(column2), sum(column3), min(_file), max(_path) from test_glob_select_inference" + assert azure_query(node, query).splitlines() == [ + "450\t450\t900\t0.csv\t{bucket}/{max_path}".format( + bucket="cont", max_path=max_path + ) + ] + + +def test_simple_write_account_string_table_function(cluster): + node = cluster.instances["node"] + azure_query( + node, + "INSERT INTO TABLE FUNCTION azureBlobStorage('http://azurite1:10000/devstoreaccount1', 'cont', 'test_simple_write_tf.csv', 'devstoreaccount1', 'Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==', 'CSV', 'auto', 'key UInt64, data String') VALUES (1, 'a')", + ) + print(get_azure_file_content("test_simple_write_tf.csv")) + assert get_azure_file_content("test_simple_write_tf.csv") == '1,"a"\n' + + +def test_simple_write_connection_string_table_function(cluster): + node = cluster.instances["node"] + azure_query( + node, + "INSERT INTO TABLE FUNCTION azureBlobStorage('DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://azurite1:10000/devstoreaccount1;', 'cont', 'test_simple_write_connection_tf.csv', 'CSV', 'auto', 'key UInt64, data String') VALUES (1, 'a')", + ) + print(get_azure_file_content("test_simple_write_connection_tf.csv")) + assert get_azure_file_content("test_simple_write_connection_tf.csv") == '1,"a"\n' + + +def 
test_simple_write_named_collection_1_table_function(cluster): + node = cluster.instances["node"] + azure_query( + node, + "INSERT INTO TABLE FUNCTION azureBlobStorage(azure_conf1) VALUES (1, 'a')", + ) + print(get_azure_file_content("test_simple_write_named.csv")) + assert get_azure_file_content("test_simple_write_named.csv") == '1,"a"\n' + + azure_query( + node, + "CREATE TABLE drop_table (key UInt64, data String) Engine = AzureBlobStorage(azure_conf1)", + ) + + azure_query( + node, + "TRUNCATE TABLE drop_table", + ) + + +def test_simple_write_named_collection_2_table_function(cluster): + node = cluster.instances["node"] + + azure_query( + node, + "INSERT INTO TABLE FUNCTION azureBlobStorage(azure_conf2, container='cont', blob_path='test_simple_write_named_2_tf.csv', format='CSV', structure='key UInt64, data String') VALUES (1, 'a')", + ) + print(get_azure_file_content("test_simple_write_named_2_tf.csv")) + assert get_azure_file_content("test_simple_write_named_2_tf.csv") == '1,"a"\n' + + +def test_put_get_with_globs_tf(cluster): + # type: (ClickHouseCluster) -> None + unique_prefix = random.randint(1, 10000) + node = cluster.instances["node"] # type: ClickHouseInstance + table_format = "column1 UInt32, column2 UInt32, column3 UInt32" + max_path = "" + for i in range(10): + for j in range(10): + path = "{}/{}_{}/{}.csv".format( + unique_prefix, i, random.choice(["a", "b", "c", "d"]), j + ) + max_path = max(path, max_path) + values = f"({i},{j},{i + j})" + + azure_query( + node, + f"INSERT INTO TABLE FUNCTION azureBlobStorage(azure_conf2, container='cont', blob_path='{path}', format='CSV', compression='auto', structure='{table_format}') VALUES {values}", + ) + query = f"select sum(column1), sum(column2), sum(column3), min(_file), max(_path) from azureBlobStorage(azure_conf2, container='cont', blob_path='{unique_prefix}/*_{{a,b,c,d}}/?.csv', format='CSV', structure='{table_format}')" + assert azure_query(node, query).splitlines() == [ + "450\t450\t900\t0.csv\t{bucket}/{max_path}".format( + bucket="cont", max_path=max_path + ) + ] + + +def test_schema_inference_no_globs_tf(cluster): + node = cluster.instances["node"] # type: ClickHouseInstance + table_format = "column1 UInt32, column2 String, column3 UInt32" + + query = f"insert into table function azureBlobStorage(azure_conf2, container='cont', blob_path='test_schema_inference_no_globs_tf.csv', format='CSVWithNames', structure='{table_format}') SELECT number, toString(number), number * number FROM numbers(1000)" + azure_query(node, query) + + query = "select sum(column1), sum(length(column2)), sum(column3), min(_file), max(_path) from azureBlobStorage(azure_conf2, container='cont', blob_path='test_schema_inference_no_globs_tf.csv')" + assert azure_query(node, query).splitlines() == [ + "499500\t2890\t332833500\ttest_schema_inference_no_globs_tf.csv\tcont/test_schema_inference_no_globs_tf.csv" + ] + + +def test_schema_inference_from_globs_tf(cluster): + node = cluster.instances["node"] + unique_prefix = random.randint(1, 10000) + node = cluster.instances["node"] # type: ClickHouseInstance + table_format = "column1 UInt32, column2 UInt32, column3 UInt32" + max_path = "" + for i in range(10): + for j in range(10): + path = "{}/{}_{}/{}.csv".format( + unique_prefix, i, random.choice(["a", "b", "c", "d"]), j + ) + max_path = max(path, max_path) + values = f"({i},{j},{i + j})" + + query = f"insert into table function azureBlobStorage(azure_conf2, container='cont', blob_path='{path}', format='CSVWithNames', structure='{table_format}') VALUES 
{values}" + azure_query(node, query) + + query = f"select sum(column1), sum(column2), sum(column3), min(_file), max(_path) from azureBlobStorage(azure_conf2, container='cont', blob_path='{unique_prefix}/*_{{a,b,c,d}}/?.csv')" + assert azure_query(node, query).splitlines() == [ + "450\t450\t900\t0.csv\t{bucket}/{max_path}".format( + bucket="cont", max_path=max_path + ) + ] + + +def test_partition_by_tf(cluster): + node = cluster.instances["node"] + table_format = "column1 UInt32, column2 UInt32, column3 UInt32" + partition_by = "column3" + values = "(1, 2, 3), (3, 2, 1), (78, 43, 45)" + filename = "test_tf_{_partition_id}.csv" + + azure_query( + node, + f"INSERT INTO TABLE FUNCTION azureBlobStorage('http://azurite1:10000/devstoreaccount1', 'cont', '{filename}', 'devstoreaccount1', 'Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==', 'CSV', 'auto', '{table_format}') PARTITION BY {partition_by} VALUES {values}", + ) + + assert "1,2,3\n" == get_azure_file_content("test_tf_3.csv") + assert "3,2,1\n" == get_azure_file_content("test_tf_1.csv") + assert "78,43,45\n" == get_azure_file_content("test_tf_45.csv") diff --git a/tests/integration/test_storage_mongodb/test.py b/tests/integration/test_storage_mongodb/test.py index 6ba5520704d..6ce71fb91fa 100644 --- a/tests/integration/test_storage_mongodb/test.py +++ b/tests/integration/test_storage_mongodb/test.py @@ -71,6 +71,40 @@ def test_simple_select(started_cluster): simple_mongo_table.drop() +@pytest.mark.parametrize("started_cluster", [False], indirect=["started_cluster"]) +def test_simple_select_from_view(started_cluster): + mongo_connection = get_mongo_connection(started_cluster) + db = mongo_connection["test"] + db.add_user("root", "clickhouse") + simple_mongo_table = db["simple_table"] + data = [] + for i in range(0, 100): + data.append({"key": i, "data": hex(i * i)}) + simple_mongo_table.insert_many(data) + simple_mongo_table_view = db.create_collection( + "simple_table_view", viewOn="simple_table" + ) + + node = started_cluster.instances["node"] + node.query( + "CREATE TABLE simple_mongo_table(key UInt64, data String) ENGINE = MongoDB('mongo1:27017', 'test', 'simple_table_view', 'root', 'clickhouse')" + ) + + assert node.query("SELECT COUNT() FROM simple_mongo_table") == "100\n" + assert ( + node.query("SELECT sum(key) FROM simple_mongo_table") + == str(sum(range(0, 100))) + "\n" + ) + + assert ( + node.query("SELECT data from simple_mongo_table where key = 42") + == hex(42 * 42) + "\n" + ) + node.query("DROP TABLE simple_mongo_table") + simple_mongo_table_view.drop() + simple_mongo_table.drop() + + @pytest.mark.parametrize("started_cluster", [False], indirect=["started_cluster"]) def test_arrays(started_cluster): mongo_connection = get_mongo_connection(started_cluster) @@ -411,13 +445,16 @@ def test_simple_insert_select(started_cluster): node.query( "CREATE TABLE simple_mongo_table(key UInt64, data String) ENGINE = MongoDB('mongo1:27017', 'test', 'simple_table', 'root', 'clickhouse')" ) - node.query("INSERT INTO simple_mongo_table SELECT 1, 'kek'") + node.query( + "INSERT INTO simple_mongo_table SELECT number, 'kek' || toString(number) FROM numbers(10)" + ) assert ( - node.query("SELECT data from simple_mongo_table where key = 1").strip() == "kek" + node.query("SELECT data from simple_mongo_table where key = 7").strip() + == "kek7" ) node.query("INSERT INTO simple_mongo_table(key) SELECT 12") - assert int(node.query("SELECT count() from simple_mongo_table")) == 2 + assert int(node.query("SELECT 
count() from simple_mongo_table")) == 11 assert ( node.query("SELECT data from simple_mongo_table where key = 12").strip() == "" ) diff --git a/tests/queries/0_stateless/00515_enhanced_time_zones.reference b/tests/queries/0_stateless/00515_enhanced_time_zones.reference index ad0dae35c45..2555c885558 100644 --- a/tests/queries/0_stateless/00515_enhanced_time_zones.reference +++ b/tests/queries/0_stateless/00515_enhanced_time_zones.reference @@ -16,3 +16,22 @@ 2017-11-05 08:07:47 2017-11-05 10:37:47 2017-11-05 10:37:47 +-- Test const timezone arguments -- +42 +43 +42 +43 +42 +43 +42 +43 +42 +43 +42 +43 +42 +43 +42 +43 +42 +43 diff --git a/tests/queries/0_stateless/00515_enhanced_time_zones.sql b/tests/queries/0_stateless/00515_enhanced_time_zones.sql index f719ff70d7a..5f40cfb53c1 100644 --- a/tests/queries/0_stateless/00515_enhanced_time_zones.sql +++ b/tests/queries/0_stateless/00515_enhanced_time_zones.sql @@ -21,16 +21,51 @@ SELECT toString(toDateTime('2017-11-05 08:07:47', 'Asia/Istanbul')); SELECT toString(toTimeZone(toDateTime('2017-11-05 08:07:47', 'Asia/Istanbul'), 'Asia/Kolkata')); SELECT toString(toDateTime('2017-11-05 08:07:47', 'Asia/Istanbul'), 'Asia/Kolkata'); -SELECT toTimeZone(dt, tz) FROM ( - SELECT toDateTime('2017-11-05 08:07:47', 'Asia/Istanbul') AS dt, arrayJoin(['Asia/Kolkata', 'UTC']) AS tz -); -- { serverError ILLEGAL_COLUMN } -SELECT materialize('Asia/Kolkata') t, toTimeZone(toDateTime('2017-11-05 08:07:47', 'Asia/Istanbul'), t); -- { serverError ILLEGAL_COLUMN } +SELECT '-- Test const timezone arguments --'; -CREATE TEMPORARY TABLE tmp AS SELECT arrayJoin(['Europe/Istanbul', 'Asia/Istanbul']); -SELECT toTimeZone(now(), (*,).1) FROM tmp; -- { serverError ILLEGAL_COLUMN } -SELECT now((*,).1) FROM tmp; -- { serverError ILLEGAL_COLUMN } -SELECT now64(1, (*,).1) FROM tmp; -- { serverError ILLEGAL_COLUMN } -SELECT toStartOfInterval(now(), INTERVAL 3 HOUR, (*,).1) FROM tmp; -- { serverError ILLEGAL_COLUMN } -SELECT snowflakeToDateTime(toInt64(123), (*,).1) FROM tmp; -- { serverError ILLEGAL_COLUMN } -SELECT toUnixTimestamp(now(), (*,).1) FROM tmp; -- { serverError ILLEGAL_COLUMN } -SELECT toDateTimeOrDefault('2023-04-12 16:43:32', (*,).1, now()) FROM tmp; -- { serverError ILLEGAL_COLUMN } +DROP TABLE IF EXISTS tab; + +CREATE TABLE tab (val Int64, tz String) engine=Log; +INSERT INTO tab VALUES (42, 'Asia/Singapore') (43, 'Asia/Tokyo'); + +SELECT val FROM tab WHERE now(tz) != toDateTime('2000-01-01 00:00:00') ORDER BY val SETTINGS allow_nonconst_timezone_arguments = 0; -- { serverError ILLEGAL_COLUMN } +SELECT val FROM tab WHERE now(tz) != toDateTime('2000-01-01 00:00:00') ORDER BY val SETTINGS allow_nonconst_timezone_arguments = 1; + +SELECT val FROM tab WHERE now64(9, tz) != toDateTime64('2000-01-01 00:00:00', 6) ORDER BY val SETTINGS allow_nonconst_timezone_arguments = 0; -- { serverError ILLEGAL_COLUMN } +SELECT val FROM tab WHERE now64(9, tz) != toDateTime64('2000-01-01 00:00:00', 6) ORDER BY val SETTINGS allow_nonconst_timezone_arguments = 1; + +SELECT val FROM tab WHERE nowInBlock(tz) != toDateTime('2000-01-01 00:00:00') ORDER BY val SETTINGS allow_nonconst_timezone_arguments = 0; -- { serverError ILLEGAL_COLUMN } +SELECT val FROM tab WHERE nowInBlock(tz) != toDateTime('2000-01-01 00:00:00') ORDER BY val SETTINGS allow_nonconst_timezone_arguments = 1; + +SELECT val FROM tab WHERE toTimeZone(toDateTime(val), tz) != toDateTime('2023-06-11 14:14:14') ORDER BY val SETTINGS allow_nonconst_timezone_arguments = 0; -- { serverError ILLEGAL_COLUMN } +SELECT val FROM tab WHERE 
toTimeZone(toDateTime(val), tz) != toDateTime('2023-06-11 14:14:14') ORDER BY val SETTINGS allow_nonconst_timezone_arguments = 1; + +SELECT val FROM tab WHERE fromUnixTimestamp64Milli(val, tz) != toDateTime64('2023-06-11 14:14:14', 6) ORDER BY val SETTINGS allow_nonconst_timezone_arguments = 0; -- { serverError ILLEGAL_COLUMN } +SELECT val FROM tab WHERE fromUnixTimestamp64Milli(val, tz) != toDateTime64('2023-06-11 14:14:14', 6) ORDER BY val SETTINGS allow_nonconst_timezone_arguments = 1; + +SELECT val FROM tab WHERE fromUnixTimestamp64Micro(val, tz) != toDateTime64('2023-06-11 14:14:14', 6) ORDER BY val SETTINGS allow_nonconst_timezone_arguments = 0; -- { serverError ILLEGAL_COLUMN } +SELECT val FROM tab WHERE fromUnixTimestamp64Micro(val, tz) != toDateTime64('2023-06-11 14:14:14', 6) ORDER BY val SETTINGS allow_nonconst_timezone_arguments = 1; + +SELECT val FROM tab WHERE fromUnixTimestamp64Nano(val, tz) != toDateTime64('2023-06-11 14:14:14', 6) ORDER BY val SETTINGS allow_nonconst_timezone_arguments = 0; -- { serverError ILLEGAL_COLUMN } +SELECT val FROM tab WHERE fromUnixTimestamp64Nano(val, tz) != toDateTime64('2023-06-11 14:14:14', 6) ORDER BY val SETTINGS allow_nonconst_timezone_arguments = 1; + +SELECT val FROM tab WHERE snowflakeToDateTime(val, tz) != toDateTime('2023-06-11 14:14:14') ORDER BY val SETTINGS allow_nonconst_timezone_arguments = 0; -- { serverError ILLEGAL_COLUMN } +SELECT val FROM tab WHERE snowflakeToDateTime(val, tz) != toDateTime('2023-06-11 14:14:14') ORDER BY val SETTINGS allow_nonconst_timezone_arguments = 1; + +SELECT val FROM tab WHERE snowflakeToDateTime64(val, tz) != toDateTime64('2023-06-11 14:14:14', 6) ORDER BY val SETTINGS allow_nonconst_timezone_arguments = 0; -- { serverError ILLEGAL_COLUMN } +SELECT val FROM tab WHERE snowflakeToDateTime64(val, tz) != toDateTime64('2023-06-11 14:14:14', 6) ORDER BY val SETTINGS allow_nonconst_timezone_arguments = 1; + +-- test for a related bug: + +DROP TABLE tab; + +SET allow_nonconst_timezone_arguments = 1; + +CREATE TABLE tab (`country` LowCardinality(FixedString(7)) DEFAULT 'unknown', `city` LowCardinality(String) DEFAULT 'unknown', `region` LowCardinality(String) DEFAULT 'unknown', `continent` LowCardinality(FixedString(7)) DEFAULT 'unknown', `is_eu_country` Bool, `date` DateTime CODEC(DoubleDelta, LZ4), `viewer_date` DateTime ALIAS toTimezone(date, timezone), `device_browser` LowCardinality(String) DEFAULT 'unknown', `metro_code` LowCardinality(String) DEFAULT 'unknown', `domain` String DEFAULT 'unknown', `device_platform` LowCardinality(String) DEFAULT 'unknown', `device_type` LowCardinality(String) DEFAULT 'unknown', `device_vendor` LowCardinality(String) DEFAULT 'unknown', `ip` FixedString(39) DEFAULT 'unknown', `lat` Decimal(8, 6) CODEC(T64), `lng` Decimal(9, 6) CODEC(T64), `asset_id` String DEFAULT 'unknown', `is_personalized` Bool, `metric` String, `origin` String DEFAULT 'unknown', `product_id` UInt64 CODEC(T64), `referer` String DEFAULT 'unknown', `server_side` Int8 CODEC(T64), `third_party_id` String DEFAULT 'unknown', `partner_slug` LowCardinality(FixedString(10)) DEFAULT 'unknown', `user_agent` String DEFAULT 'unknown', `user_id` UUID, `zip` FixedString(10) DEFAULT 'unknown', `timezone` LowCardinality(String), `as_organization` LowCardinality(String) DEFAULT 'unknown', `content_cat` Array(String), `playback_method` LowCardinality(String) DEFAULT 'unknown', `store_id` LowCardinality(String) DEFAULT 'unknown', `store_url` String DEFAULT 'unknown', `timestamp` Nullable(DateTime), `ad_count` Int8 
CODEC(T64), `ad_type` LowCardinality(FixedString(10)) DEFAULT 'unknown', `ad_categories` Array(FixedString(8)), `blocked_ad_categories` Array(FixedString(8)), `break_max_ad_length` Int8 CODEC(T64), `break_max_ads` Int8 CODEC(T64), `break_max_duration` Int8 CODEC(T64), `break_min_ad_length` Int8 CODEC(T64), `break_position` LowCardinality(FixedString(18)) DEFAULT 'unknown', `media_playhead` String DEFAULT 'unknown', `placement_type` Int8 CODEC(T64), `transaction_id` String, `universal_ad_id` Array(String), `client_ua` LowCardinality(String) DEFAULT 'unknown', `device_ip` FixedString(39) DEFAULT 'unknown', `device_ua` LowCardinality(String) DEFAULT 'unknown', `ifa` String, `ifa_type` LowCardinality(String) DEFAULT 'unknown', `vast_lat` Decimal(8, 6) CODEC(T64), `vast_long` Decimal(9, 6) CODEC(T64), `server_ua` String DEFAULT 'unknown', `app_bundle` String DEFAULT 'unknown', `page_url` String DEFAULT 'unknown', `api_framework` Array(UInt8), `click_type` LowCardinality(String), `extensions` Array(String), `media_mime` Array(String), `om_id_partner` LowCardinality(String) DEFAULT 'unknown', `player_capabilities` Array(FixedString(12)), `vast_versions` Array(UInt8), `verification_vendors` Array(String), `ad_play_head` String DEFAULT 'unknown', `ad_serving_id` String DEFAULT 'unknown', `asset_uri` String DEFAULT 'unknown', `content_id` String DEFAULT 'unknown', `content_uri` String DEFAULT 'unknown', `inventory_state` Array(FixedString(14)), `player_size` Array(UInt8), `player_state` Array(FixedString(12)), `pod_sequence` Int8 CODEC(T64), `click_position` Array(UInt32), `error_code` Int16 CODEC(T64), `error_reason` Int8 CODEC(T64), `gdpr_consent` String DEFAULT 'unknown', `limited_tracking` Bool, `regulations` String DEFAULT 'unknown', `content_category` Array(String), PROJECTION projection_TPAG_VAST_date (SELECT * ORDER BY toYYYYMMDD(date), metric, product_id, asset_id)) ENGINE = MergeTree ORDER BY (product_id, metric, asset_id, toYYYYMMDD(date)); + +DETACH TABLE tab; + +ATTACH TABLE tab SETTINGS allow_nonconst_timezone_arguments = 0; -- { serverError ILLEGAL_COLUMN } +ATTACH TABLE tab SETTINGS allow_nonconst_timezone_arguments = 1; + +DROP TABLE tab; diff --git a/tests/queries/0_stateless/01271_show_privileges.reference b/tests/queries/0_stateless/01271_show_privileges.reference index ec245d8b9e0..5d30da5d2ea 100644 --- a/tests/queries/0_stateless/01271_show_privileges.reference +++ b/tests/queries/0_stateless/01271_show_privileges.reference @@ -158,6 +158,7 @@ JDBC [] GLOBAL SOURCES HDFS [] GLOBAL SOURCES S3 [] GLOBAL SOURCES HIVE [] GLOBAL SOURCES +AZURE [] GLOBAL SOURCES SOURCES [] \N ALL CLUSTER [] GLOBAL ALL ALL ['ALL PRIVILEGES'] \N \N diff --git a/tests/queries/0_stateless/01721_join_implicit_cast_long.reference.j2 b/tests/queries/0_stateless/01721_join_implicit_cast_long.reference.j2 index ae43aa7195c..e9f32087439 100644 --- a/tests/queries/0_stateless/01721_join_implicit_cast_long.reference.j2 +++ b/tests/queries/0_stateless/01721_join_implicit_cast_long.reference.j2 @@ -1,6 +1,7 @@ {% for join_algorithm in ['hash', 'partial_merge', 'auto', 'full_sorting_merge', 'grace_hash'] -%} === {{ join_algorithm }} === = full = +{% if join_algorithm not in ['grace_hash'] -%} -4 0 196 -3 0 197 -2 0 198 @@ -16,6 +17,7 @@ 8 108 \N 9 109 \N 10 110 \N +{% endif -%} = left = 1 101 201 2 102 202 @@ -28,6 +30,7 @@ 9 109 \N 10 110 \N = right = +{% if join_algorithm not in ['grace_hash'] -%} -4 0 196 -3 0 197 -2 0 198 @@ -38,6 +41,7 @@ 3 103 203 4 104 204 5 105 205 +{% endif -%} = inner = 1 101 201 2 102 
202 @@ -45,6 +49,7 @@ 4 104 204 5 105 205 = full = +{% if join_algorithm not in ['grace_hash'] -%} 0 0 -4 0 0 -3 0 0 -2 @@ -60,6 +65,7 @@ 8 8 0 9 9 0 10 10 0 +{% endif -%} = left = 1 1 1 2 2 2 @@ -72,6 +78,7 @@ 9 9 0 10 10 0 = right = +{% if join_algorithm not in ['grace_hash'] -%} 0 0 -4 0 0 -3 0 0 -2 @@ -82,6 +89,7 @@ 3 3 3 4 4 4 5 5 5 +{% endif -%} = inner = 1 1 1 2 2 2 @@ -90,6 +98,7 @@ 5 5 5 = join on = = full = +{% if join_algorithm not in ['grace_hash'] -%} 0 0 -4 196 0 0 -3 197 0 0 -2 198 @@ -105,6 +114,7 @@ 8 108 0 \N 9 109 0 \N 10 110 0 \N +{% endif -%} = left = 1 101 1 201 2 102 2 202 @@ -117,6 +127,7 @@ 9 109 0 \N 10 110 0 \N = right = +{% if join_algorithm not in ['grace_hash'] -%} 0 0 -4 196 0 0 -3 197 0 0 -2 198 @@ -127,6 +138,7 @@ 3 103 3 203 4 104 4 204 5 105 5 205 +{% endif -%} = inner = 1 101 1 201 2 102 2 202 @@ -134,6 +146,7 @@ 4 104 4 204 5 105 5 205 = full = +{% if join_algorithm not in ['grace_hash'] -%} 0 0 -4 196 0 0 -3 197 0 0 -2 198 @@ -149,6 +162,7 @@ 8 108 0 \N 9 109 0 \N 10 110 0 \N +{% endif -%} = left = 1 101 1 201 2 102 2 202 @@ -161,6 +175,7 @@ 9 109 0 \N 10 110 0 \N = right = +{% if join_algorithm not in ['grace_hash'] -%} 0 0 -4 196 0 0 -3 197 0 0 -2 198 @@ -171,6 +186,7 @@ 3 103 3 203 4 104 4 204 5 105 5 205 +{% endif -%} = inner = 1 101 1 201 2 102 2 202 @@ -180,6 +196,7 @@ = agg = 1 1 +{% if join_algorithm not in ['grace_hash'] -%} 1 1 1 @@ -188,11 +205,13 @@ 1 55 1055 0 0 -10 0 990 1 55 15 1055 1015 +{% endif -%} = types = 1 1 1 1 +{% if join_algorithm not in ['grace_hash'] -%} 1 1 1 @@ -200,9 +219,11 @@ 1 1 1 +{% endif -%} {% if join_algorithm not in ['full_sorting_merge'] -%} === join use nulls === = full = +{% if join_algorithm not in ['grace_hash'] -%} -4 \N 196 -3 \N 197 -2 \N 198 @@ -218,6 +239,7 @@ 8 108 \N 9 109 \N 10 110 \N +{% endif -%} = left = 1 101 201 2 102 202 @@ -230,6 +252,7 @@ 9 109 \N 10 110 \N = right = +{% if join_algorithm not in ['grace_hash'] -%} -4 \N 196 -3 \N 197 -2 \N 198 @@ -240,6 +263,7 @@ 3 103 203 4 104 204 5 105 205 +{% endif -%} = inner = 1 101 201 2 102 202 @@ -247,6 +271,7 @@ 4 104 204 5 105 205 = full = +{% if join_algorithm not in ['grace_hash'] -%} 1 1 1 2 2 2 3 3 3 @@ -262,6 +287,7 @@ \N \N -2 \N \N -1 \N \N 0 +{% endif -%} = left = 1 1 1 2 2 2 @@ -274,6 +300,7 @@ 9 9 \N 10 10 \N = right = +{% if join_algorithm not in ['grace_hash'] -%} 1 1 1 2 2 2 3 3 3 @@ -284,6 +311,7 @@ \N \N -2 \N \N -1 \N \N 0 +{% endif -%} = inner = 1 1 1 2 2 2 @@ -292,6 +320,7 @@ 5 5 5 = join on = = full = +{% if join_algorithm not in ['grace_hash'] -%} 1 101 1 201 2 102 2 202 3 103 3 203 @@ -307,6 +336,7 @@ \N \N -2 198 \N \N -1 199 \N \N 0 200 +{% endif -%} = left = 1 101 1 201 2 102 2 202 @@ -319,6 +349,7 @@ 9 109 \N \N 10 110 \N \N = right = +{% if join_algorithm not in ['grace_hash'] -%} 1 101 1 201 2 102 2 202 3 103 3 203 @@ -329,6 +360,7 @@ \N \N -2 198 \N \N -1 199 \N \N 0 200 +{% endif -%} = inner = 1 101 1 201 2 102 2 202 @@ -336,6 +368,7 @@ 4 104 4 204 5 105 5 205 = full = +{% if join_algorithm not in ['grace_hash'] -%} 1 101 1 201 2 102 2 202 3 103 3 203 @@ -351,6 +384,7 @@ \N \N -2 198 \N \N -1 199 \N \N 0 200 +{% endif -%} = left = 1 101 1 201 2 102 2 202 @@ -363,6 +397,7 @@ 9 109 \N \N 10 110 \N \N = right = +{% if join_algorithm not in ['grace_hash'] -%} 1 101 1 201 2 102 2 202 3 103 3 203 @@ -373,6 +408,7 @@ \N \N -2 198 \N \N -1 199 \N \N 0 200 +{% endif -%} = inner = 1 101 1 201 2 102 2 202 @@ -382,6 +418,7 @@ = agg = 1 1 +{% if join_algorithm not in ['grace_hash'] -%} 1 1 1 @@ -390,11 +427,13 @@ 1 55 1055 1 55 15 
1055 1015 \N \N -10 \N 990 +{% endif -%} = types = 1 1 1 1 +{% if join_algorithm not in ['grace_hash'] -%} 1 1 1 @@ -403,4 +442,5 @@ 1 1 {% endif -%} +{% endif -%} {% endfor -%} diff --git a/tests/queries/0_stateless/01721_join_implicit_cast_long.sql.j2 b/tests/queries/0_stateless/01721_join_implicit_cast_long.sql.j2 index 38f71f4c5ec..f5321939f28 100644 --- a/tests/queries/0_stateless/01721_join_implicit_cast_long.sql.j2 +++ b/tests/queries/0_stateless/01721_join_implicit_cast_long.sql.j2 @@ -10,6 +10,7 @@ INSERT INTO t1 SELECT number as a, 100 + number as b FROM system.numbers LIMIT 1 INSERT INTO t2 SELECT number - 5 as a, 200 + number - 5 as b FROM system.numbers LIMIT 1, 10; {% macro is_implemented(join_algorithm) -%} +{% if join_algorithm == 'grace_hash' %} -- { serverError NOT_IMPLEMENTED } {% endif %} {% endmacro -%} {% for join_algorithm in ['hash', 'partial_merge', 'auto', 'full_sorting_merge', 'grace_hash'] -%} diff --git a/tests/queries/0_stateless/02117_show_create_table_system.reference b/tests/queries/0_stateless/02117_show_create_table_system.reference index 09cc62dac00..e864ba85018 100644 --- a/tests/queries/0_stateless/02117_show_create_table_system.reference +++ b/tests/queries/0_stateless/02117_show_create_table_system.reference @@ -297,7 +297,7 @@ CREATE TABLE system.grants ( `user_name` Nullable(String), `role_name` Nullable(String), - `access_type` Enum16('SHOW DATABASES' = 0, 'SHOW TABLES' = 1, 'SHOW COLUMNS' = 2, 'SHOW DICTIONARIES' = 3, 'SHOW' = 4, 'SHOW FILESYSTEM CACHES' = 5, 'SELECT' = 6, 'INSERT' = 7, 'ALTER UPDATE' = 8, 'ALTER DELETE' = 9, 'ALTER ADD COLUMN' = 10, 'ALTER MODIFY COLUMN' = 11, 'ALTER DROP COLUMN' = 12, 'ALTER COMMENT COLUMN' = 13, 'ALTER CLEAR COLUMN' = 14, 'ALTER RENAME COLUMN' = 15, 'ALTER MATERIALIZE COLUMN' = 16, 'ALTER COLUMN' = 17, 'ALTER MODIFY COMMENT' = 18, 'ALTER ORDER BY' = 19, 'ALTER SAMPLE BY' = 20, 'ALTER ADD INDEX' = 21, 'ALTER DROP INDEX' = 22, 'ALTER MATERIALIZE INDEX' = 23, 'ALTER CLEAR INDEX' = 24, 'ALTER INDEX' = 25, 'ALTER ADD PROJECTION' = 26, 'ALTER DROP PROJECTION' = 27, 'ALTER MATERIALIZE PROJECTION' = 28, 'ALTER CLEAR PROJECTION' = 29, 'ALTER PROJECTION' = 30, 'ALTER ADD CONSTRAINT' = 31, 'ALTER DROP CONSTRAINT' = 32, 'ALTER CONSTRAINT' = 33, 'ALTER TTL' = 34, 'ALTER MATERIALIZE TTL' = 35, 'ALTER SETTINGS' = 36, 'ALTER MOVE PARTITION' = 37, 'ALTER FETCH PARTITION' = 38, 'ALTER FREEZE PARTITION' = 39, 'ALTER DATABASE SETTINGS' = 40, 'ALTER NAMED COLLECTION' = 41, 'ALTER TABLE' = 42, 'ALTER DATABASE' = 43, 'ALTER VIEW REFRESH' = 44, 'ALTER VIEW MODIFY QUERY' = 45, 'ALTER VIEW' = 46, 'ALTER' = 47, 'CREATE DATABASE' = 48, 'CREATE TABLE' = 49, 'CREATE VIEW' = 50, 'CREATE DICTIONARY' = 51, 'CREATE TEMPORARY TABLE' = 52, 'CREATE ARBITRARY TEMPORARY TABLE' = 53, 'CREATE FUNCTION' = 54, 'CREATE NAMED COLLECTION' = 55, 'CREATE' = 56, 'DROP DATABASE' = 57, 'DROP TABLE' = 58, 'DROP VIEW' = 59, 'DROP DICTIONARY' = 60, 'DROP FUNCTION' = 61, 'DROP NAMED COLLECTION' = 62, 'DROP' = 63, 'UNDROP TABLE' = 64, 'TRUNCATE' = 65, 'OPTIMIZE' = 66, 'BACKUP' = 67, 'KILL QUERY' = 68, 'KILL TRANSACTION' = 69, 'MOVE PARTITION BETWEEN SHARDS' = 70, 'CREATE USER' = 71, 'ALTER USER' = 72, 'DROP USER' = 73, 'CREATE ROLE' = 74, 'ALTER ROLE' = 75, 'DROP ROLE' = 76, 'ROLE ADMIN' = 77, 'CREATE ROW POLICY' = 78, 'ALTER ROW POLICY' = 79, 'DROP ROW POLICY' = 80, 'CREATE QUOTA' = 81, 'ALTER QUOTA' = 82, 'DROP QUOTA' = 83, 'CREATE SETTINGS PROFILE' = 84, 'ALTER SETTINGS PROFILE' = 85, 'DROP SETTINGS PROFILE' = 86, 'SHOW USERS' = 87, 'SHOW ROLES' = 88, 'SHOW ROW 
POLICIES' = 89, 'SHOW QUOTAS' = 90, 'SHOW SETTINGS PROFILES' = 91, 'SHOW ACCESS' = 92, 'ACCESS MANAGEMENT' = 93, 'SHOW NAMED COLLECTIONS' = 94, 'SHOW NAMED COLLECTIONS SECRETS' = 95, 'NAMED COLLECTION CONTROL' = 96, 'SYSTEM SHUTDOWN' = 97, 'SYSTEM DROP DNS CACHE' = 98, 'SYSTEM DROP MARK CACHE' = 99, 'SYSTEM DROP UNCOMPRESSED CACHE' = 100, 'SYSTEM DROP MMAP CACHE' = 101, 'SYSTEM DROP QUERY CACHE' = 102, 'SYSTEM DROP COMPILED EXPRESSION CACHE' = 103, 'SYSTEM DROP FILESYSTEM CACHE' = 104, 'SYSTEM DROP SCHEMA CACHE' = 105, 'SYSTEM DROP S3 CLIENT CACHE' = 106, 'SYSTEM DROP CACHE' = 107, 'SYSTEM RELOAD CONFIG' = 108, 'SYSTEM RELOAD USERS' = 109, 'SYSTEM RELOAD SYMBOLS' = 110, 'SYSTEM RELOAD DICTIONARY' = 111, 'SYSTEM RELOAD MODEL' = 112, 'SYSTEM RELOAD FUNCTION' = 113, 'SYSTEM RELOAD EMBEDDED DICTIONARIES' = 114, 'SYSTEM RELOAD' = 115, 'SYSTEM RESTART DISK' = 116, 'SYSTEM MERGES' = 117, 'SYSTEM TTL MERGES' = 118, 'SYSTEM FETCHES' = 119, 'SYSTEM MOVES' = 120, 'SYSTEM DISTRIBUTED SENDS' = 121, 'SYSTEM REPLICATED SENDS' = 122, 'SYSTEM SENDS' = 123, 'SYSTEM REPLICATION QUEUES' = 124, 'SYSTEM DROP REPLICA' = 125, 'SYSTEM SYNC REPLICA' = 126, 'SYSTEM RESTART REPLICA' = 127, 'SYSTEM RESTORE REPLICA' = 128, 'SYSTEM WAIT LOADING PARTS' = 129, 'SYSTEM SYNC DATABASE REPLICA' = 130, 'SYSTEM SYNC TRANSACTION LOG' = 131, 'SYSTEM SYNC FILE CACHE' = 132, 'SYSTEM FLUSH DISTRIBUTED' = 133, 'SYSTEM FLUSH LOGS' = 134, 'SYSTEM FLUSH' = 135, 'SYSTEM THREAD FUZZER' = 136, 'SYSTEM UNFREEZE' = 137, 'SYSTEM FAILPOINT' = 138, 'SYSTEM' = 139, 'dictGet' = 140, 'displaySecretsInShowAndSelect' = 141, 'addressToLine' = 142, 'addressToLineWithInlines' = 143, 'addressToSymbol' = 144, 'demangle' = 145, 'INTROSPECTION' = 146, 'FILE' = 147, 'URL' = 148, 'REMOTE' = 149, 'MONGO' = 150, 'MEILISEARCH' = 151, 'MYSQL' = 152, 'POSTGRES' = 153, 'SQLITE' = 154, 'ODBC' = 155, 'JDBC' = 156, 'HDFS' = 157, 'S3' = 158, 'HIVE' = 159, 'SOURCES' = 160, 'CLUSTER' = 161, 'ALL' = 162, 'NONE' = 163), + `access_type` Enum16('SHOW DATABASES' = 0, 'SHOW TABLES' = 1, 'SHOW COLUMNS' = 2, 'SHOW DICTIONARIES' = 3, 'SHOW' = 4, 'SHOW FILESYSTEM CACHES' = 5, 'SELECT' = 6, 'INSERT' = 7, 'ALTER UPDATE' = 8, 'ALTER DELETE' = 9, 'ALTER ADD COLUMN' = 10, 'ALTER MODIFY COLUMN' = 11, 'ALTER DROP COLUMN' = 12, 'ALTER COMMENT COLUMN' = 13, 'ALTER CLEAR COLUMN' = 14, 'ALTER RENAME COLUMN' = 15, 'ALTER MATERIALIZE COLUMN' = 16, 'ALTER COLUMN' = 17, 'ALTER MODIFY COMMENT' = 18, 'ALTER ORDER BY' = 19, 'ALTER SAMPLE BY' = 20, 'ALTER ADD INDEX' = 21, 'ALTER DROP INDEX' = 22, 'ALTER MATERIALIZE INDEX' = 23, 'ALTER CLEAR INDEX' = 24, 'ALTER INDEX' = 25, 'ALTER ADD PROJECTION' = 26, 'ALTER DROP PROJECTION' = 27, 'ALTER MATERIALIZE PROJECTION' = 28, 'ALTER CLEAR PROJECTION' = 29, 'ALTER PROJECTION' = 30, 'ALTER ADD CONSTRAINT' = 31, 'ALTER DROP CONSTRAINT' = 32, 'ALTER CONSTRAINT' = 33, 'ALTER TTL' = 34, 'ALTER MATERIALIZE TTL' = 35, 'ALTER SETTINGS' = 36, 'ALTER MOVE PARTITION' = 37, 'ALTER FETCH PARTITION' = 38, 'ALTER FREEZE PARTITION' = 39, 'ALTER DATABASE SETTINGS' = 40, 'ALTER NAMED COLLECTION' = 41, 'ALTER TABLE' = 42, 'ALTER DATABASE' = 43, 'ALTER VIEW REFRESH' = 44, 'ALTER VIEW MODIFY QUERY' = 45, 'ALTER VIEW' = 46, 'ALTER' = 47, 'CREATE DATABASE' = 48, 'CREATE TABLE' = 49, 'CREATE VIEW' = 50, 'CREATE DICTIONARY' = 51, 'CREATE TEMPORARY TABLE' = 52, 'CREATE ARBITRARY TEMPORARY TABLE' = 53, 'CREATE FUNCTION' = 54, 'CREATE NAMED COLLECTION' = 55, 'CREATE' = 56, 'DROP DATABASE' = 57, 'DROP TABLE' = 58, 'DROP VIEW' = 59, 'DROP DICTIONARY' = 60, 'DROP FUNCTION' = 61, 'DROP 
NAMED COLLECTION' = 62, 'DROP' = 63, 'UNDROP TABLE' = 64, 'TRUNCATE' = 65, 'OPTIMIZE' = 66, 'BACKUP' = 67, 'KILL QUERY' = 68, 'KILL TRANSACTION' = 69, 'MOVE PARTITION BETWEEN SHARDS' = 70, 'CREATE USER' = 71, 'ALTER USER' = 72, 'DROP USER' = 73, 'CREATE ROLE' = 74, 'ALTER ROLE' = 75, 'DROP ROLE' = 76, 'ROLE ADMIN' = 77, 'CREATE ROW POLICY' = 78, 'ALTER ROW POLICY' = 79, 'DROP ROW POLICY' = 80, 'CREATE QUOTA' = 81, 'ALTER QUOTA' = 82, 'DROP QUOTA' = 83, 'CREATE SETTINGS PROFILE' = 84, 'ALTER SETTINGS PROFILE' = 85, 'DROP SETTINGS PROFILE' = 86, 'SHOW USERS' = 87, 'SHOW ROLES' = 88, 'SHOW ROW POLICIES' = 89, 'SHOW QUOTAS' = 90, 'SHOW SETTINGS PROFILES' = 91, 'SHOW ACCESS' = 92, 'ACCESS MANAGEMENT' = 93, 'SHOW NAMED COLLECTIONS' = 94, 'SHOW NAMED COLLECTIONS SECRETS' = 95, 'NAMED COLLECTION CONTROL' = 96, 'SYSTEM SHUTDOWN' = 97, 'SYSTEM DROP DNS CACHE' = 98, 'SYSTEM DROP MARK CACHE' = 99, 'SYSTEM DROP UNCOMPRESSED CACHE' = 100, 'SYSTEM DROP MMAP CACHE' = 101, 'SYSTEM DROP QUERY CACHE' = 102, 'SYSTEM DROP COMPILED EXPRESSION CACHE' = 103, 'SYSTEM DROP FILESYSTEM CACHE' = 104, 'SYSTEM DROP SCHEMA CACHE' = 105, 'SYSTEM DROP S3 CLIENT CACHE' = 106, 'SYSTEM DROP CACHE' = 107, 'SYSTEM RELOAD CONFIG' = 108, 'SYSTEM RELOAD USERS' = 109, 'SYSTEM RELOAD SYMBOLS' = 110, 'SYSTEM RELOAD DICTIONARY' = 111, 'SYSTEM RELOAD MODEL' = 112, 'SYSTEM RELOAD FUNCTION' = 113, 'SYSTEM RELOAD EMBEDDED DICTIONARIES' = 114, 'SYSTEM RELOAD' = 115, 'SYSTEM RESTART DISK' = 116, 'SYSTEM MERGES' = 117, 'SYSTEM TTL MERGES' = 118, 'SYSTEM FETCHES' = 119, 'SYSTEM MOVES' = 120, 'SYSTEM DISTRIBUTED SENDS' = 121, 'SYSTEM REPLICATED SENDS' = 122, 'SYSTEM SENDS' = 123, 'SYSTEM REPLICATION QUEUES' = 124, 'SYSTEM DROP REPLICA' = 125, 'SYSTEM SYNC REPLICA' = 126, 'SYSTEM RESTART REPLICA' = 127, 'SYSTEM RESTORE REPLICA' = 128, 'SYSTEM WAIT LOADING PARTS' = 129, 'SYSTEM SYNC DATABASE REPLICA' = 130, 'SYSTEM SYNC TRANSACTION LOG' = 131, 'SYSTEM SYNC FILE CACHE' = 132, 'SYSTEM FLUSH DISTRIBUTED' = 133, 'SYSTEM FLUSH LOGS' = 134, 'SYSTEM FLUSH' = 135, 'SYSTEM THREAD FUZZER' = 136, 'SYSTEM UNFREEZE' = 137, 'SYSTEM FAILPOINT' = 138, 'SYSTEM' = 139, 'dictGet' = 140, 'displaySecretsInShowAndSelect' = 141, 'addressToLine' = 142, 'addressToLineWithInlines' = 143, 'addressToSymbol' = 144, 'demangle' = 145, 'INTROSPECTION' = 146, 'FILE' = 147, 'URL' = 148, 'REMOTE' = 149, 'MONGO' = 150, 'MEILISEARCH' = 151, 'MYSQL' = 152, 'POSTGRES' = 153, 'SQLITE' = 154, 'ODBC' = 155, 'JDBC' = 156, 'HDFS' = 157, 'S3' = 158, 'HIVE' = 159, 'AZURE' = 160, 'SOURCES' = 161, 'CLUSTER' = 162, 'ALL' = 163, 'NONE' = 164), `database` Nullable(String), `table` Nullable(String), `column` Nullable(String), @@ -581,10 +581,10 @@ ENGINE = SystemPartsColumns COMMENT 'SYSTEM TABLE is built on the fly.' 
CREATE TABLE system.privileges ( - `privilege` Enum16('SHOW DATABASES' = 0, 'SHOW TABLES' = 1, 'SHOW COLUMNS' = 2, 'SHOW DICTIONARIES' = 3, 'SHOW' = 4, 'SHOW FILESYSTEM CACHES' = 5, 'SELECT' = 6, 'INSERT' = 7, 'ALTER UPDATE' = 8, 'ALTER DELETE' = 9, 'ALTER ADD COLUMN' = 10, 'ALTER MODIFY COLUMN' = 11, 'ALTER DROP COLUMN' = 12, 'ALTER COMMENT COLUMN' = 13, 'ALTER CLEAR COLUMN' = 14, 'ALTER RENAME COLUMN' = 15, 'ALTER MATERIALIZE COLUMN' = 16, 'ALTER COLUMN' = 17, 'ALTER MODIFY COMMENT' = 18, 'ALTER ORDER BY' = 19, 'ALTER SAMPLE BY' = 20, 'ALTER ADD INDEX' = 21, 'ALTER DROP INDEX' = 22, 'ALTER MATERIALIZE INDEX' = 23, 'ALTER CLEAR INDEX' = 24, 'ALTER INDEX' = 25, 'ALTER ADD PROJECTION' = 26, 'ALTER DROP PROJECTION' = 27, 'ALTER MATERIALIZE PROJECTION' = 28, 'ALTER CLEAR PROJECTION' = 29, 'ALTER PROJECTION' = 30, 'ALTER ADD CONSTRAINT' = 31, 'ALTER DROP CONSTRAINT' = 32, 'ALTER CONSTRAINT' = 33, 'ALTER TTL' = 34, 'ALTER MATERIALIZE TTL' = 35, 'ALTER SETTINGS' = 36, 'ALTER MOVE PARTITION' = 37, 'ALTER FETCH PARTITION' = 38, 'ALTER FREEZE PARTITION' = 39, 'ALTER DATABASE SETTINGS' = 40, 'ALTER NAMED COLLECTION' = 41, 'ALTER TABLE' = 42, 'ALTER DATABASE' = 43, 'ALTER VIEW REFRESH' = 44, 'ALTER VIEW MODIFY QUERY' = 45, 'ALTER VIEW' = 46, 'ALTER' = 47, 'CREATE DATABASE' = 48, 'CREATE TABLE' = 49, 'CREATE VIEW' = 50, 'CREATE DICTIONARY' = 51, 'CREATE TEMPORARY TABLE' = 52, 'CREATE ARBITRARY TEMPORARY TABLE' = 53, 'CREATE FUNCTION' = 54, 'CREATE NAMED COLLECTION' = 55, 'CREATE' = 56, 'DROP DATABASE' = 57, 'DROP TABLE' = 58, 'DROP VIEW' = 59, 'DROP DICTIONARY' = 60, 'DROP FUNCTION' = 61, 'DROP NAMED COLLECTION' = 62, 'DROP' = 63, 'UNDROP TABLE' = 64, 'TRUNCATE' = 65, 'OPTIMIZE' = 66, 'BACKUP' = 67, 'KILL QUERY' = 68, 'KILL TRANSACTION' = 69, 'MOVE PARTITION BETWEEN SHARDS' = 70, 'CREATE USER' = 71, 'ALTER USER' = 72, 'DROP USER' = 73, 'CREATE ROLE' = 74, 'ALTER ROLE' = 75, 'DROP ROLE' = 76, 'ROLE ADMIN' = 77, 'CREATE ROW POLICY' = 78, 'ALTER ROW POLICY' = 79, 'DROP ROW POLICY' = 80, 'CREATE QUOTA' = 81, 'ALTER QUOTA' = 82, 'DROP QUOTA' = 83, 'CREATE SETTINGS PROFILE' = 84, 'ALTER SETTINGS PROFILE' = 85, 'DROP SETTINGS PROFILE' = 86, 'SHOW USERS' = 87, 'SHOW ROLES' = 88, 'SHOW ROW POLICIES' = 89, 'SHOW QUOTAS' = 90, 'SHOW SETTINGS PROFILES' = 91, 'SHOW ACCESS' = 92, 'ACCESS MANAGEMENT' = 93, 'SHOW NAMED COLLECTIONS' = 94, 'SHOW NAMED COLLECTIONS SECRETS' = 95, 'NAMED COLLECTION CONTROL' = 96, 'SYSTEM SHUTDOWN' = 97, 'SYSTEM DROP DNS CACHE' = 98, 'SYSTEM DROP MARK CACHE' = 99, 'SYSTEM DROP UNCOMPRESSED CACHE' = 100, 'SYSTEM DROP MMAP CACHE' = 101, 'SYSTEM DROP QUERY CACHE' = 102, 'SYSTEM DROP COMPILED EXPRESSION CACHE' = 103, 'SYSTEM DROP FILESYSTEM CACHE' = 104, 'SYSTEM DROP SCHEMA CACHE' = 105, 'SYSTEM DROP S3 CLIENT CACHE' = 106, 'SYSTEM DROP CACHE' = 107, 'SYSTEM RELOAD CONFIG' = 108, 'SYSTEM RELOAD USERS' = 109, 'SYSTEM RELOAD SYMBOLS' = 110, 'SYSTEM RELOAD DICTIONARY' = 111, 'SYSTEM RELOAD MODEL' = 112, 'SYSTEM RELOAD FUNCTION' = 113, 'SYSTEM RELOAD EMBEDDED DICTIONARIES' = 114, 'SYSTEM RELOAD' = 115, 'SYSTEM RESTART DISK' = 116, 'SYSTEM MERGES' = 117, 'SYSTEM TTL MERGES' = 118, 'SYSTEM FETCHES' = 119, 'SYSTEM MOVES' = 120, 'SYSTEM DISTRIBUTED SENDS' = 121, 'SYSTEM REPLICATED SENDS' = 122, 'SYSTEM SENDS' = 123, 'SYSTEM REPLICATION QUEUES' = 124, 'SYSTEM DROP REPLICA' = 125, 'SYSTEM SYNC REPLICA' = 126, 'SYSTEM RESTART REPLICA' = 127, 'SYSTEM RESTORE REPLICA' = 128, 'SYSTEM WAIT LOADING PARTS' = 129, 'SYSTEM SYNC DATABASE REPLICA' = 130, 'SYSTEM SYNC TRANSACTION LOG' = 131, 'SYSTEM SYNC FILE 
CACHE' = 132, 'SYSTEM FLUSH DISTRIBUTED' = 133, 'SYSTEM FLUSH LOGS' = 134, 'SYSTEM FLUSH' = 135, 'SYSTEM THREAD FUZZER' = 136, 'SYSTEM UNFREEZE' = 137, 'SYSTEM FAILPOINT' = 138, 'SYSTEM' = 139, 'dictGet' = 140, 'displaySecretsInShowAndSelect' = 141, 'addressToLine' = 142, 'addressToLineWithInlines' = 143, 'addressToSymbol' = 144, 'demangle' = 145, 'INTROSPECTION' = 146, 'FILE' = 147, 'URL' = 148, 'REMOTE' = 149, 'MONGO' = 150, 'MEILISEARCH' = 151, 'MYSQL' = 152, 'POSTGRES' = 153, 'SQLITE' = 154, 'ODBC' = 155, 'JDBC' = 156, 'HDFS' = 157, 'S3' = 158, 'HIVE' = 159, 'SOURCES' = 160, 'CLUSTER' = 161, 'ALL' = 162, 'NONE' = 163), + `privilege` Enum16('SHOW DATABASES' = 0, 'SHOW TABLES' = 1, 'SHOW COLUMNS' = 2, 'SHOW DICTIONARIES' = 3, 'SHOW' = 4, 'SHOW FILESYSTEM CACHES' = 5, 'SELECT' = 6, 'INSERT' = 7, 'ALTER UPDATE' = 8, 'ALTER DELETE' = 9, 'ALTER ADD COLUMN' = 10, 'ALTER MODIFY COLUMN' = 11, 'ALTER DROP COLUMN' = 12, 'ALTER COMMENT COLUMN' = 13, 'ALTER CLEAR COLUMN' = 14, 'ALTER RENAME COLUMN' = 15, 'ALTER MATERIALIZE COLUMN' = 16, 'ALTER COLUMN' = 17, 'ALTER MODIFY COMMENT' = 18, 'ALTER ORDER BY' = 19, 'ALTER SAMPLE BY' = 20, 'ALTER ADD INDEX' = 21, 'ALTER DROP INDEX' = 22, 'ALTER MATERIALIZE INDEX' = 23, 'ALTER CLEAR INDEX' = 24, 'ALTER INDEX' = 25, 'ALTER ADD PROJECTION' = 26, 'ALTER DROP PROJECTION' = 27, 'ALTER MATERIALIZE PROJECTION' = 28, 'ALTER CLEAR PROJECTION' = 29, 'ALTER PROJECTION' = 30, 'ALTER ADD CONSTRAINT' = 31, 'ALTER DROP CONSTRAINT' = 32, 'ALTER CONSTRAINT' = 33, 'ALTER TTL' = 34, 'ALTER MATERIALIZE TTL' = 35, 'ALTER SETTINGS' = 36, 'ALTER MOVE PARTITION' = 37, 'ALTER FETCH PARTITION' = 38, 'ALTER FREEZE PARTITION' = 39, 'ALTER DATABASE SETTINGS' = 40, 'ALTER NAMED COLLECTION' = 41, 'ALTER TABLE' = 42, 'ALTER DATABASE' = 43, 'ALTER VIEW REFRESH' = 44, 'ALTER VIEW MODIFY QUERY' = 45, 'ALTER VIEW' = 46, 'ALTER' = 47, 'CREATE DATABASE' = 48, 'CREATE TABLE' = 49, 'CREATE VIEW' = 50, 'CREATE DICTIONARY' = 51, 'CREATE TEMPORARY TABLE' = 52, 'CREATE ARBITRARY TEMPORARY TABLE' = 53, 'CREATE FUNCTION' = 54, 'CREATE NAMED COLLECTION' = 55, 'CREATE' = 56, 'DROP DATABASE' = 57, 'DROP TABLE' = 58, 'DROP VIEW' = 59, 'DROP DICTIONARY' = 60, 'DROP FUNCTION' = 61, 'DROP NAMED COLLECTION' = 62, 'DROP' = 63, 'UNDROP TABLE' = 64, 'TRUNCATE' = 65, 'OPTIMIZE' = 66, 'BACKUP' = 67, 'KILL QUERY' = 68, 'KILL TRANSACTION' = 69, 'MOVE PARTITION BETWEEN SHARDS' = 70, 'CREATE USER' = 71, 'ALTER USER' = 72, 'DROP USER' = 73, 'CREATE ROLE' = 74, 'ALTER ROLE' = 75, 'DROP ROLE' = 76, 'ROLE ADMIN' = 77, 'CREATE ROW POLICY' = 78, 'ALTER ROW POLICY' = 79, 'DROP ROW POLICY' = 80, 'CREATE QUOTA' = 81, 'ALTER QUOTA' = 82, 'DROP QUOTA' = 83, 'CREATE SETTINGS PROFILE' = 84, 'ALTER SETTINGS PROFILE' = 85, 'DROP SETTINGS PROFILE' = 86, 'SHOW USERS' = 87, 'SHOW ROLES' = 88, 'SHOW ROW POLICIES' = 89, 'SHOW QUOTAS' = 90, 'SHOW SETTINGS PROFILES' = 91, 'SHOW ACCESS' = 92, 'ACCESS MANAGEMENT' = 93, 'SHOW NAMED COLLECTIONS' = 94, 'SHOW NAMED COLLECTIONS SECRETS' = 95, 'NAMED COLLECTION CONTROL' = 96, 'SYSTEM SHUTDOWN' = 97, 'SYSTEM DROP DNS CACHE' = 98, 'SYSTEM DROP MARK CACHE' = 99, 'SYSTEM DROP UNCOMPRESSED CACHE' = 100, 'SYSTEM DROP MMAP CACHE' = 101, 'SYSTEM DROP QUERY CACHE' = 102, 'SYSTEM DROP COMPILED EXPRESSION CACHE' = 103, 'SYSTEM DROP FILESYSTEM CACHE' = 104, 'SYSTEM DROP SCHEMA CACHE' = 105, 'SYSTEM DROP S3 CLIENT CACHE' = 106, 'SYSTEM DROP CACHE' = 107, 'SYSTEM RELOAD CONFIG' = 108, 'SYSTEM RELOAD USERS' = 109, 'SYSTEM RELOAD SYMBOLS' = 110, 'SYSTEM RELOAD DICTIONARY' = 111, 'SYSTEM RELOAD MODEL' = 112, 
'SYSTEM RELOAD FUNCTION' = 113, 'SYSTEM RELOAD EMBEDDED DICTIONARIES' = 114, 'SYSTEM RELOAD' = 115, 'SYSTEM RESTART DISK' = 116, 'SYSTEM MERGES' = 117, 'SYSTEM TTL MERGES' = 118, 'SYSTEM FETCHES' = 119, 'SYSTEM MOVES' = 120, 'SYSTEM DISTRIBUTED SENDS' = 121, 'SYSTEM REPLICATED SENDS' = 122, 'SYSTEM SENDS' = 123, 'SYSTEM REPLICATION QUEUES' = 124, 'SYSTEM DROP REPLICA' = 125, 'SYSTEM SYNC REPLICA' = 126, 'SYSTEM RESTART REPLICA' = 127, 'SYSTEM RESTORE REPLICA' = 128, 'SYSTEM WAIT LOADING PARTS' = 129, 'SYSTEM SYNC DATABASE REPLICA' = 130, 'SYSTEM SYNC TRANSACTION LOG' = 131, 'SYSTEM SYNC FILE CACHE' = 132, 'SYSTEM FLUSH DISTRIBUTED' = 133, 'SYSTEM FLUSH LOGS' = 134, 'SYSTEM FLUSH' = 135, 'SYSTEM THREAD FUZZER' = 136, 'SYSTEM UNFREEZE' = 137, 'SYSTEM FAILPOINT' = 138, 'SYSTEM' = 139, 'dictGet' = 140, 'displaySecretsInShowAndSelect' = 141, 'addressToLine' = 142, 'addressToLineWithInlines' = 143, 'addressToSymbol' = 144, 'demangle' = 145, 'INTROSPECTION' = 146, 'FILE' = 147, 'URL' = 148, 'REMOTE' = 149, 'MONGO' = 150, 'MEILISEARCH' = 151, 'MYSQL' = 152, 'POSTGRES' = 153, 'SQLITE' = 154, 'ODBC' = 155, 'JDBC' = 156, 'HDFS' = 157, 'S3' = 158, 'HIVE' = 159, 'AZURE' = 160, 'SOURCES' = 161, 'CLUSTER' = 162, 'ALL' = 163, 'NONE' = 164), `aliases` Array(String), `level` Nullable(Enum8('GLOBAL' = 0, 'DATABASE' = 1, 'TABLE' = 2, 'DICTIONARY' = 3, 'VIEW' = 4, 'COLUMN' = 5, 'NAMED_COLLECTION' = 6)), - `parent_group` Nullable(Enum16('SHOW DATABASES' = 0, 'SHOW TABLES' = 1, 'SHOW COLUMNS' = 2, 'SHOW DICTIONARIES' = 3, 'SHOW' = 4, 'SHOW FILESYSTEM CACHES' = 5, 'SELECT' = 6, 'INSERT' = 7, 'ALTER UPDATE' = 8, 'ALTER DELETE' = 9, 'ALTER ADD COLUMN' = 10, 'ALTER MODIFY COLUMN' = 11, 'ALTER DROP COLUMN' = 12, 'ALTER COMMENT COLUMN' = 13, 'ALTER CLEAR COLUMN' = 14, 'ALTER RENAME COLUMN' = 15, 'ALTER MATERIALIZE COLUMN' = 16, 'ALTER COLUMN' = 17, 'ALTER MODIFY COMMENT' = 18, 'ALTER ORDER BY' = 19, 'ALTER SAMPLE BY' = 20, 'ALTER ADD INDEX' = 21, 'ALTER DROP INDEX' = 22, 'ALTER MATERIALIZE INDEX' = 23, 'ALTER CLEAR INDEX' = 24, 'ALTER INDEX' = 25, 'ALTER ADD PROJECTION' = 26, 'ALTER DROP PROJECTION' = 27, 'ALTER MATERIALIZE PROJECTION' = 28, 'ALTER CLEAR PROJECTION' = 29, 'ALTER PROJECTION' = 30, 'ALTER ADD CONSTRAINT' = 31, 'ALTER DROP CONSTRAINT' = 32, 'ALTER CONSTRAINT' = 33, 'ALTER TTL' = 34, 'ALTER MATERIALIZE TTL' = 35, 'ALTER SETTINGS' = 36, 'ALTER MOVE PARTITION' = 37, 'ALTER FETCH PARTITION' = 38, 'ALTER FREEZE PARTITION' = 39, 'ALTER DATABASE SETTINGS' = 40, 'ALTER NAMED COLLECTION' = 41, 'ALTER TABLE' = 42, 'ALTER DATABASE' = 43, 'ALTER VIEW REFRESH' = 44, 'ALTER VIEW MODIFY QUERY' = 45, 'ALTER VIEW' = 46, 'ALTER' = 47, 'CREATE DATABASE' = 48, 'CREATE TABLE' = 49, 'CREATE VIEW' = 50, 'CREATE DICTIONARY' = 51, 'CREATE TEMPORARY TABLE' = 52, 'CREATE ARBITRARY TEMPORARY TABLE' = 53, 'CREATE FUNCTION' = 54, 'CREATE NAMED COLLECTION' = 55, 'CREATE' = 56, 'DROP DATABASE' = 57, 'DROP TABLE' = 58, 'DROP VIEW' = 59, 'DROP DICTIONARY' = 60, 'DROP FUNCTION' = 61, 'DROP NAMED COLLECTION' = 62, 'DROP' = 63, 'UNDROP TABLE' = 64, 'TRUNCATE' = 65, 'OPTIMIZE' = 66, 'BACKUP' = 67, 'KILL QUERY' = 68, 'KILL TRANSACTION' = 69, 'MOVE PARTITION BETWEEN SHARDS' = 70, 'CREATE USER' = 71, 'ALTER USER' = 72, 'DROP USER' = 73, 'CREATE ROLE' = 74, 'ALTER ROLE' = 75, 'DROP ROLE' = 76, 'ROLE ADMIN' = 77, 'CREATE ROW POLICY' = 78, 'ALTER ROW POLICY' = 79, 'DROP ROW POLICY' = 80, 'CREATE QUOTA' = 81, 'ALTER QUOTA' = 82, 'DROP QUOTA' = 83, 'CREATE SETTINGS PROFILE' = 84, 'ALTER SETTINGS PROFILE' = 85, 'DROP SETTINGS PROFILE' = 86, 'SHOW 
USERS' = 87, 'SHOW ROLES' = 88, 'SHOW ROW POLICIES' = 89, 'SHOW QUOTAS' = 90, 'SHOW SETTINGS PROFILES' = 91, 'SHOW ACCESS' = 92, 'ACCESS MANAGEMENT' = 93, 'SHOW NAMED COLLECTIONS' = 94, 'SHOW NAMED COLLECTIONS SECRETS' = 95, 'NAMED COLLECTION CONTROL' = 96, 'SYSTEM SHUTDOWN' = 97, 'SYSTEM DROP DNS CACHE' = 98, 'SYSTEM DROP MARK CACHE' = 99, 'SYSTEM DROP UNCOMPRESSED CACHE' = 100, 'SYSTEM DROP MMAP CACHE' = 101, 'SYSTEM DROP QUERY CACHE' = 102, 'SYSTEM DROP COMPILED EXPRESSION CACHE' = 103, 'SYSTEM DROP FILESYSTEM CACHE' = 104, 'SYSTEM DROP SCHEMA CACHE' = 105, 'SYSTEM DROP S3 CLIENT CACHE' = 106, 'SYSTEM DROP CACHE' = 107, 'SYSTEM RELOAD CONFIG' = 108, 'SYSTEM RELOAD USERS' = 109, 'SYSTEM RELOAD SYMBOLS' = 110, 'SYSTEM RELOAD DICTIONARY' = 111, 'SYSTEM RELOAD MODEL' = 112, 'SYSTEM RELOAD FUNCTION' = 113, 'SYSTEM RELOAD EMBEDDED DICTIONARIES' = 114, 'SYSTEM RELOAD' = 115, 'SYSTEM RESTART DISK' = 116, 'SYSTEM MERGES' = 117, 'SYSTEM TTL MERGES' = 118, 'SYSTEM FETCHES' = 119, 'SYSTEM MOVES' = 120, 'SYSTEM DISTRIBUTED SENDS' = 121, 'SYSTEM REPLICATED SENDS' = 122, 'SYSTEM SENDS' = 123, 'SYSTEM REPLICATION QUEUES' = 124, 'SYSTEM DROP REPLICA' = 125, 'SYSTEM SYNC REPLICA' = 126, 'SYSTEM RESTART REPLICA' = 127, 'SYSTEM RESTORE REPLICA' = 128, 'SYSTEM WAIT LOADING PARTS' = 129, 'SYSTEM SYNC DATABASE REPLICA' = 130, 'SYSTEM SYNC TRANSACTION LOG' = 131, 'SYSTEM SYNC FILE CACHE' = 132, 'SYSTEM FLUSH DISTRIBUTED' = 133, 'SYSTEM FLUSH LOGS' = 134, 'SYSTEM FLUSH' = 135, 'SYSTEM THREAD FUZZER' = 136, 'SYSTEM UNFREEZE' = 137, 'SYSTEM FAILPOINT' = 138, 'SYSTEM' = 139, 'dictGet' = 140, 'displaySecretsInShowAndSelect' = 141, 'addressToLine' = 142, 'addressToLineWithInlines' = 143, 'addressToSymbol' = 144, 'demangle' = 145, 'INTROSPECTION' = 146, 'FILE' = 147, 'URL' = 148, 'REMOTE' = 149, 'MONGO' = 150, 'MEILISEARCH' = 151, 'MYSQL' = 152, 'POSTGRES' = 153, 'SQLITE' = 154, 'ODBC' = 155, 'JDBC' = 156, 'HDFS' = 157, 'S3' = 158, 'HIVE' = 159, 'SOURCES' = 160, 'CLUSTER' = 161, 'ALL' = 162, 'NONE' = 163)) + `parent_group` Nullable(Enum16('SHOW DATABASES' = 0, 'SHOW TABLES' = 1, 'SHOW COLUMNS' = 2, 'SHOW DICTIONARIES' = 3, 'SHOW' = 4, 'SHOW FILESYSTEM CACHES' = 5, 'SELECT' = 6, 'INSERT' = 7, 'ALTER UPDATE' = 8, 'ALTER DELETE' = 9, 'ALTER ADD COLUMN' = 10, 'ALTER MODIFY COLUMN' = 11, 'ALTER DROP COLUMN' = 12, 'ALTER COMMENT COLUMN' = 13, 'ALTER CLEAR COLUMN' = 14, 'ALTER RENAME COLUMN' = 15, 'ALTER MATERIALIZE COLUMN' = 16, 'ALTER COLUMN' = 17, 'ALTER MODIFY COMMENT' = 18, 'ALTER ORDER BY' = 19, 'ALTER SAMPLE BY' = 20, 'ALTER ADD INDEX' = 21, 'ALTER DROP INDEX' = 22, 'ALTER MATERIALIZE INDEX' = 23, 'ALTER CLEAR INDEX' = 24, 'ALTER INDEX' = 25, 'ALTER ADD PROJECTION' = 26, 'ALTER DROP PROJECTION' = 27, 'ALTER MATERIALIZE PROJECTION' = 28, 'ALTER CLEAR PROJECTION' = 29, 'ALTER PROJECTION' = 30, 'ALTER ADD CONSTRAINT' = 31, 'ALTER DROP CONSTRAINT' = 32, 'ALTER CONSTRAINT' = 33, 'ALTER TTL' = 34, 'ALTER MATERIALIZE TTL' = 35, 'ALTER SETTINGS' = 36, 'ALTER MOVE PARTITION' = 37, 'ALTER FETCH PARTITION' = 38, 'ALTER FREEZE PARTITION' = 39, 'ALTER DATABASE SETTINGS' = 40, 'ALTER NAMED COLLECTION' = 41, 'ALTER TABLE' = 42, 'ALTER DATABASE' = 43, 'ALTER VIEW REFRESH' = 44, 'ALTER VIEW MODIFY QUERY' = 45, 'ALTER VIEW' = 46, 'ALTER' = 47, 'CREATE DATABASE' = 48, 'CREATE TABLE' = 49, 'CREATE VIEW' = 50, 'CREATE DICTIONARY' = 51, 'CREATE TEMPORARY TABLE' = 52, 'CREATE ARBITRARY TEMPORARY TABLE' = 53, 'CREATE FUNCTION' = 54, 'CREATE NAMED COLLECTION' = 55, 'CREATE' = 56, 'DROP DATABASE' = 57, 'DROP TABLE' = 58, 'DROP VIEW' = 59, 
'DROP DICTIONARY' = 60, 'DROP FUNCTION' = 61, 'DROP NAMED COLLECTION' = 62, 'DROP' = 63, 'UNDROP TABLE' = 64, 'TRUNCATE' = 65, 'OPTIMIZE' = 66, 'BACKUP' = 67, 'KILL QUERY' = 68, 'KILL TRANSACTION' = 69, 'MOVE PARTITION BETWEEN SHARDS' = 70, 'CREATE USER' = 71, 'ALTER USER' = 72, 'DROP USER' = 73, 'CREATE ROLE' = 74, 'ALTER ROLE' = 75, 'DROP ROLE' = 76, 'ROLE ADMIN' = 77, 'CREATE ROW POLICY' = 78, 'ALTER ROW POLICY' = 79, 'DROP ROW POLICY' = 80, 'CREATE QUOTA' = 81, 'ALTER QUOTA' = 82, 'DROP QUOTA' = 83, 'CREATE SETTINGS PROFILE' = 84, 'ALTER SETTINGS PROFILE' = 85, 'DROP SETTINGS PROFILE' = 86, 'SHOW USERS' = 87, 'SHOW ROLES' = 88, 'SHOW ROW POLICIES' = 89, 'SHOW QUOTAS' = 90, 'SHOW SETTINGS PROFILES' = 91, 'SHOW ACCESS' = 92, 'ACCESS MANAGEMENT' = 93, 'SHOW NAMED COLLECTIONS' = 94, 'SHOW NAMED COLLECTIONS SECRETS' = 95, 'NAMED COLLECTION CONTROL' = 96, 'SYSTEM SHUTDOWN' = 97, 'SYSTEM DROP DNS CACHE' = 98, 'SYSTEM DROP MARK CACHE' = 99, 'SYSTEM DROP UNCOMPRESSED CACHE' = 100, 'SYSTEM DROP MMAP CACHE' = 101, 'SYSTEM DROP QUERY CACHE' = 102, 'SYSTEM DROP COMPILED EXPRESSION CACHE' = 103, 'SYSTEM DROP FILESYSTEM CACHE' = 104, 'SYSTEM DROP SCHEMA CACHE' = 105, 'SYSTEM DROP S3 CLIENT CACHE' = 106, 'SYSTEM DROP CACHE' = 107, 'SYSTEM RELOAD CONFIG' = 108, 'SYSTEM RELOAD USERS' = 109, 'SYSTEM RELOAD SYMBOLS' = 110, 'SYSTEM RELOAD DICTIONARY' = 111, 'SYSTEM RELOAD MODEL' = 112, 'SYSTEM RELOAD FUNCTION' = 113, 'SYSTEM RELOAD EMBEDDED DICTIONARIES' = 114, 'SYSTEM RELOAD' = 115, 'SYSTEM RESTART DISK' = 116, 'SYSTEM MERGES' = 117, 'SYSTEM TTL MERGES' = 118, 'SYSTEM FETCHES' = 119, 'SYSTEM MOVES' = 120, 'SYSTEM DISTRIBUTED SENDS' = 121, 'SYSTEM REPLICATED SENDS' = 122, 'SYSTEM SENDS' = 123, 'SYSTEM REPLICATION QUEUES' = 124, 'SYSTEM DROP REPLICA' = 125, 'SYSTEM SYNC REPLICA' = 126, 'SYSTEM RESTART REPLICA' = 127, 'SYSTEM RESTORE REPLICA' = 128, 'SYSTEM WAIT LOADING PARTS' = 129, 'SYSTEM SYNC DATABASE REPLICA' = 130, 'SYSTEM SYNC TRANSACTION LOG' = 131, 'SYSTEM SYNC FILE CACHE' = 132, 'SYSTEM FLUSH DISTRIBUTED' = 133, 'SYSTEM FLUSH LOGS' = 134, 'SYSTEM FLUSH' = 135, 'SYSTEM THREAD FUZZER' = 136, 'SYSTEM UNFREEZE' = 137, 'SYSTEM FAILPOINT' = 138, 'SYSTEM' = 139, 'dictGet' = 140, 'displaySecretsInShowAndSelect' = 141, 'addressToLine' = 142, 'addressToLineWithInlines' = 143, 'addressToSymbol' = 144, 'demangle' = 145, 'INTROSPECTION' = 146, 'FILE' = 147, 'URL' = 148, 'REMOTE' = 149, 'MONGO' = 150, 'MEILISEARCH' = 151, 'MYSQL' = 152, 'POSTGRES' = 153, 'SQLITE' = 154, 'ODBC' = 155, 'JDBC' = 156, 'HDFS' = 157, 'S3' = 158, 'HIVE' = 159, 'AZURE' = 160, 'SOURCES' = 161, 'CLUSTER' = 162, 'ALL' = 163, 'NONE' = 164)) ) ENGINE = SystemPrivileges COMMENT 'SYSTEM TABLE is built on the fly.' 
diff --git a/tests/queries/0_stateless/02273_full_sort_join.reference.j2 b/tests/queries/0_stateless/02273_full_sort_join.reference.j2 index 0af4158e971..98bfd9d9b2b 100644 --- a/tests/queries/0_stateless/02273_full_sort_join.reference.j2 +++ b/tests/queries/0_stateless/02273_full_sort_join.reference.j2 @@ -1,7 +1,7 @@ {% set table_size = 15 -%} {% for join_algorithm in ['default', 'full_sorting_merge', 'grace_hash'] -%} -- {{ join_algorithm }} -- -{% for block_size in range(1, table_size + 1, 4) -%} +{% for block_size in range(1, table_size + 1) -%} ALL INNER USING | bs = {{ block_size }} 4 0 0 5 0 0 @@ -50,6 +50,7 @@ ALL LEFT | bs = {{ block_size }} 14 14 val9 0 14 14 val9 0 ALL RIGHT | bs = {{ block_size }} +{% if join_algorithm != 'grace_hash' -%} 4 4 0 val10 5 5 0 val6 6 6 0 val8 @@ -63,6 +64,7 @@ ALL RIGHT | bs = {{ block_size }} 13 13 0 val9 14 14 0 val3 14 14 0 val7 +{% endif -%} ALL INNER | bs = {{ block_size }} | copmosite key 2 2 2 2 2 2 0 0 2 2 2 2 2 2 0 0 @@ -83,6 +85,7 @@ ALL LEFT | bs = {{ block_size }} | copmosite key 2 2 2 2 2 2 val12 0 2 2 2 2 2 2 val9 0 ALL RIGHT | bs = {{ block_size }} | copmosite key +{% if join_algorithm != 'grace_hash' -%} 0 \N 0 1 1 1 1 val2 0 \N 0 1 1 1 1 val7 0 \N 0 1 1 2 1 val5 @@ -96,6 +99,7 @@ ALL RIGHT | bs = {{ block_size }} | copmosite key 0 \N 0 2 2 \N 1 val9 2 2 2 2 2 2 0 val4 2 2 2 2 2 2 0 val4 +{% endif -%} ANY INNER USING | bs = {{ block_size }} 4 0 0 5 0 0 @@ -133,6 +137,7 @@ ANY LEFT | bs = {{ block_size }} 13 13 val13 0 14 14 val9 0 ANY RIGHT | bs = {{ block_size }} +{% if join_algorithm != 'grace_hash' -%} 4 4 0 val10 5 5 0 val6 6 6 0 val8 @@ -145,6 +150,7 @@ ANY RIGHT | bs = {{ block_size }} 13 13 0 val9 14 14 0 val3 14 14 0 val7 +{% endif -%} ANY INNER | bs = {{ block_size }} | copmosite key 2 2 2 2 2 2 0 0 ANY LEFT | bs = {{ block_size }} | copmosite key @@ -164,6 +170,7 @@ ANY LEFT | bs = {{ block_size }} | copmosite key 2 2 2 2 2 2 val12 0 2 2 2 2 2 2 val9 0 ANY RIGHT | bs = {{ block_size }} | copmosite key +{% if join_algorithm != 'grace_hash' -%} 0 \N 0 1 1 1 1 val2 0 \N 0 1 1 1 1 val7 0 \N 0 1 1 2 1 val5 @@ -176,6 +183,7 @@ ANY RIGHT | bs = {{ block_size }} | copmosite key 0 \N 0 2 1 \N 1 val3 0 \N 0 2 2 \N 1 val9 2 2 2 2 2 2 0 val4 +{% endif -%} {% endfor -%} ALL INNER | join_use_nulls = 1 4 4 0 0 @@ -211,6 +219,7 @@ ALL LEFT | join_use_nulls = 1 14 14 val9 0 14 14 val9 0 ALL RIGHT | join_use_nulls = 1 +{% if join_algorithm != 'grace_hash' -%} 4 4 0 val10 5 5 0 val6 6 6 0 val8 @@ -224,6 +233,7 @@ ALL RIGHT | join_use_nulls = 1 13 13 0 val9 14 14 0 val3 14 14 0 val7 +{% endif -%} ALL INNER | join_use_nulls = 1 | copmosite key 2 2 2 2 2 2 0 0 2 2 2 2 2 2 0 0 @@ -244,6 +254,7 @@ ALL LEFT | join_use_nulls = 1 | copmosite key 2 2 2 2 2 2 val12 0 2 2 2 2 2 2 val9 0 ALL RIGHT | join_use_nulls = 1 | copmosite key +{% if join_algorithm != 'grace_hash' -%} 2 2 2 2 2 2 0 val4 2 2 2 2 2 2 0 val4 \N \N \N 1 1 1 \N val2 @@ -257,6 +268,7 @@ ALL RIGHT | join_use_nulls = 1 | copmosite key \N \N \N 2 1 2 \N val8 \N \N \N 2 1 \N \N val3 \N \N \N 2 2 \N \N val9 +{% endif -%} ANY INNER | join_use_nulls = 1 4 4 0 0 5 5 0 0 @@ -284,6 +296,7 @@ ANY LEFT | join_use_nulls = 1 13 13 val13 0 14 14 val9 0 ANY RIGHT | join_use_nulls = 1 +{% if join_algorithm != 'grace_hash' -%} 4 4 0 val10 5 5 0 val6 6 6 0 val8 @@ -296,6 +309,7 @@ ANY RIGHT | join_use_nulls = 1 13 13 0 val9 14 14 0 val3 14 14 0 val7 +{% endif -%} ANY INNER | join_use_nulls = 1 | copmosite key 2 2 2 2 2 2 0 0 ANY LEFT | join_use_nulls = 1 | copmosite key @@ -315,6 +329,7 @@ ANY LEFT | 
join_use_nulls = 1 | copmosite key 2 2 2 2 2 2 val12 0 2 2 2 2 2 2 val9 0 ANY RIGHT | join_use_nulls = 1 | copmosite key +{% if join_algorithm != 'grace_hash' -%} 2 2 2 2 2 2 0 val4 \N \N \N 1 1 1 \N val2 \N \N \N 1 1 1 \N val7 @@ -327,4 +342,5 @@ ANY RIGHT | join_use_nulls = 1 | copmosite key \N \N \N 2 1 2 \N val8 \N \N \N 2 1 \N \N val3 \N \N \N 2 2 \N \N val9 +{% endif -%} {% endfor -%} diff --git a/tests/queries/0_stateless/02273_full_sort_join.sql.j2 b/tests/queries/0_stateless/02273_full_sort_join.sql.j2 index 6500306356c..8b739330364 100644 --- a/tests/queries/0_stateless/02273_full_sort_join.sql.j2 +++ b/tests/queries/0_stateless/02273_full_sort_join.sql.j2 @@ -26,7 +26,9 @@ INSERT INTO t2 'val' || toString(number) as s FROM numbers_mt({{ table_size - 3 }}); + {% macro is_implemented(join_algorithm) -%} +{% if join_algorithm == 'grace_hash' %} -- { serverError NOT_IMPLEMENTED } {% endif %} {% endmacro -%} {% for join_algorithm in ['default', 'full_sorting_merge', 'grace_hash'] -%} @@ -36,7 +38,7 @@ SET max_bytes_in_join = '{% if join_algorithm == 'grace_hash' %}10K{% else %}0{% SELECT '-- {{ join_algorithm }} --'; SET join_algorithm = '{{ join_algorithm }}'; -{% for block_size in range(1, table_size + 1, 4) -%} +{% for block_size in range(1, table_size + 1) -%} {% for kind in ['ALL', 'ANY'] -%} SET max_block_size = {{ block_size }}; diff --git a/tests/queries/0_stateless/02274_full_sort_join_nodistinct.reference.j2 b/tests/queries/0_stateless/02274_full_sort_join_nodistinct.reference.j2 index df968e86e8d..2cc6c6e85d6 100644 --- a/tests/queries/0_stateless/02274_full_sort_join_nodistinct.reference.j2 +++ b/tests/queries/0_stateless/02274_full_sort_join_nodistinct.reference.j2 @@ -1,6 +1,6 @@ {% for join_algorithm in ['full_sorting_merge', 'grace_hash'] -%} --- {{ join_algorithm }} --- -{% for block_size in range(1, 11, 4) -%} +{% for block_size in range(1, 11) -%} t1 ALL INNER JOIN t2 | bs = {{ block_size }} 1 1 4 5 1 1 4 5 @@ -108,6 +108,7 @@ t1 ALL LEFT JOIN t2 | bs = {{ block_size }} 2 2 val27 5 3 3 val3 4 t1 ALL RIGHT JOIN t2 | bs = {{ block_size }} +{% if join_algorithm != 'grace_hash' -%} 1 1 4 val11 1 1 4 val12 2 2 5 val22 @@ -160,6 +161,7 @@ t1 ALL RIGHT JOIN t2 | bs = {{ block_size }} 2 2 5 val28 2 2 5 val28 3 3 4 val3 +{% endif -%} t1 ANY INNER JOIN t2 | bs = {{ block_size }} 1 1 4 5 2 2 5 5 @@ -175,6 +177,7 @@ t1 ANY LEFT JOIN t2 | bs = {{ block_size }} 2 2 val27 5 3 3 val3 4 t1 ANY RIGHT JOIN t2 | bs = {{ block_size }} +{% if join_algorithm != 'grace_hash' -%} 1 1 4 val11 1 1 4 val12 2 2 5 val22 @@ -185,7 +188,9 @@ t1 ANY RIGHT JOIN t2 | bs = {{ block_size }} 2 2 5 val27 2 2 5 val28 3 3 4 val3 +{% endif -%} t1 ALL FULL JOIN t2 | bs = {{ block_size }} +{% if join_algorithm != 'grace_hash' -%} 1 1 4 5 1 1 4 5 2 2 5 5 @@ -238,7 +243,9 @@ t1 ALL FULL JOIN t2 | bs = {{ block_size }} 2 2 5 5 2 2 5 5 3 3 4 4 +{% endif -%} t1 ALL FULL JOIN USING t2 | bs = {{ block_size }} +{% if join_algorithm != 'grace_hash' -%} 1 4 5 1 4 5 2 5 5 @@ -291,6 +298,7 @@ t1 ALL FULL JOIN USING t2 | bs = {{ block_size }} 2 5 5 2 5 5 3 4 4 +{% endif -%} t1 ALL INNER JOIN tn2 | bs = {{ block_size }} 1 1 4 5 1 1 4 5 @@ -307,6 +315,7 @@ t1 ALL LEFT JOIN tn2 | bs = {{ block_size }} 2 \N val27 0 3 3 val3 4 t1 ALL RIGHT JOIN tn2 | bs = {{ block_size }} +{% if join_algorithm != 'grace_hash' -%} 0 \N 0 val22 0 \N 0 val23 0 \N 0 val24 @@ -317,6 +326,7 @@ t1 ALL RIGHT JOIN tn2 | bs = {{ block_size }} 1 1 4 val11 1 1 4 val12 3 3 4 val3 +{% endif -%} t1 ANY INNER JOIN tn2 | bs = {{ block_size }} 1 1 4 5 3 3 4 4 
@@ -331,6 +341,7 @@ t1 ANY LEFT JOIN tn2 | bs = {{ block_size }} 2 \N val27 0 3 3 val3 4 t1 ANY RIGHT JOIN tn2 | bs = {{ block_size }} +{% if join_algorithm != 'grace_hash' -%} 0 \N 0 val22 0 \N 0 val23 0 \N 0 val24 @@ -341,7 +352,9 @@ t1 ANY RIGHT JOIN tn2 | bs = {{ block_size }} 1 1 4 val11 1 1 4 val12 3 3 4 val3 +{% endif -%} t1 ALL FULL JOIN tn2 | bs = {{ block_size }} +{% if join_algorithm != 'grace_hash' -%} 0 \N 0 5 0 \N 0 5 0 \N 0 5 @@ -359,8 +372,9 @@ t1 ALL FULL JOIN tn2 | bs = {{ block_size }} 2 \N 5 0 2 \N 5 0 3 3 4 4 -{% if join_algorithm != 'grace_hash' -%} +{% endif -%} t1 ALL FULL JOIN USING tn2 | bs = {{ block_size }} +{% if join_algorithm != 'grace_hash' -%} 1 4 5 1 4 5 2 5 0 @@ -395,6 +409,7 @@ tn1 ALL LEFT JOIN t2 | bs = {{ block_size }} \N 0 val26 0 \N 0 val27 0 tn1 ALL RIGHT JOIN t2 | bs = {{ block_size }} +{% if join_algorithm != 'grace_hash' -%} 1 1 4 val11 1 1 4 val12 3 3 4 val3 @@ -405,6 +420,7 @@ tn1 ALL RIGHT JOIN t2 | bs = {{ block_size }} \N 2 0 val26 \N 2 0 val27 \N 2 0 val28 +{% endif -%} tn1 ANY INNER JOIN t2 | bs = {{ block_size }} 1 1 4 5 3 3 4 4 @@ -419,6 +435,7 @@ tn1 ANY LEFT JOIN t2 | bs = {{ block_size }} \N 0 val26 0 \N 0 val27 0 tn1 ANY RIGHT JOIN t2 | bs = {{ block_size }} +{% if join_algorithm != 'grace_hash' -%} 1 1 4 val11 1 1 4 val12 3 3 4 val3 @@ -429,7 +446,9 @@ tn1 ANY RIGHT JOIN t2 | bs = {{ block_size }} \N 2 0 val26 \N 2 0 val27 \N 2 0 val28 +{% endif -%} tn1 ALL FULL JOIN t2 | bs = {{ block_size }} +{% if join_algorithm != 'grace_hash' -%} 1 1 4 5 1 1 4 5 3 3 4 4 @@ -447,7 +466,9 @@ tn1 ALL FULL JOIN t2 | bs = {{ block_size }} \N 2 0 5 \N 2 0 5 \N 2 0 5 +{% endif -%} tn1 ALL FULL JOIN USING t2 | bs = {{ block_size }} +{% if join_algorithm != 'grace_hash' -%} 1 4 5 1 4 5 2 0 5 @@ -465,6 +486,7 @@ tn1 ALL FULL JOIN USING t2 | bs = {{ block_size }} \N 5 0 \N 5 0 \N 5 0 +{% endif -%} tn1 ALL INNER JOIN tn2 | bs = {{ block_size }} 1 1 4 5 1 1 4 5 @@ -481,6 +503,7 @@ tn1 ALL LEFT JOIN tn2 | bs = {{ block_size }} \N \N val26 0 \N \N val27 0 tn1 ALL RIGHT JOIN tn2 | bs = {{ block_size }} +{% if join_algorithm != 'grace_hash' -%} 1 1 4 val11 1 1 4 val12 3 3 4 val3 @@ -491,6 +514,7 @@ tn1 ALL RIGHT JOIN tn2 | bs = {{ block_size }} \N \N 0 val26 \N \N 0 val27 \N \N 0 val28 +{% endif -%} tn1 ANY INNER JOIN tn2 | bs = {{ block_size }} 1 1 4 5 3 3 4 4 @@ -505,6 +529,7 @@ tn1 ANY LEFT JOIN tn2 | bs = {{ block_size }} \N \N val26 0 \N \N val27 0 tn1 ANY RIGHT JOIN tn2 | bs = {{ block_size }} +{% if join_algorithm != 'grace_hash' -%} 1 1 4 val11 1 1 4 val12 3 3 4 val3 @@ -515,7 +540,9 @@ tn1 ANY RIGHT JOIN tn2 | bs = {{ block_size }} \N \N 0 val26 \N \N 0 val27 \N \N 0 val28 +{% endif -%} tn1 ALL FULL JOIN tn2 | bs = {{ block_size }} +{% if join_algorithm != 'grace_hash' -%} 1 1 4 5 1 1 4 5 3 3 4 4 @@ -533,8 +560,9 @@ tn1 ALL FULL JOIN tn2 | bs = {{ block_size }} \N \N 5 0 \N \N 5 0 \N \N 5 0 -{% if join_algorithm != 'grace_hash' -%} +{% endif -%} tn1 ALL FULL JOIN USING tn2 | bs = {{ block_size }} +{% if join_algorithm != 'grace_hash' -%} 1 4 5 1 4 5 3 4 4 diff --git a/tests/queries/0_stateless/02274_full_sort_join_nodistinct.sql.j2 b/tests/queries/0_stateless/02274_full_sort_join_nodistinct.sql.j2 index f8eb4b1a53e..613da65421e 100644 --- a/tests/queries/0_stateless/02274_full_sort_join_nodistinct.sql.j2 +++ b/tests/queries/0_stateless/02274_full_sort_join_nodistinct.sql.j2 @@ -16,6 +16,7 @@ INSERT INTO t2 VALUES (1, 'val11'), (1, 'val12'), (2, 'val22'), (2, 'val23'), (2 INSERT INTO tn2 VALUES (1, 'val11'), (1, 'val12'), (NULL, 'val22'), (NULL, 
'val23'), (NULL, 'val24'), (NULL, 'val25'), (NULL, 'val26'), (NULL, 'val27'), (NULL, 'val28'), (3, 'val3'); {% macro is_implemented(join_algorithm) -%} +{% if join_algorithm == 'grace_hash' %} -- { serverError NOT_IMPLEMENTED } {% endif %} {% endmacro -%} {% for join_algorithm in ['full_sorting_merge', 'grace_hash'] -%} @@ -26,7 +27,7 @@ SET join_algorithm = '{{ join_algorithm }}'; SELECT '--- {{ join_algorithm }} ---'; -{% for block_size in range(1, 11, 4) -%} +{% for block_size in range(1, 11) -%} SET max_block_size = {{ block_size }}; {% for t1, t2 in [('t1', 't2'), ('t1', 'tn2'), ('tn1', 't2'), ('tn1', 'tn2')] -%} @@ -46,10 +47,9 @@ SELECT t1.key, t2.key, length(t1.s), t2.s FROM {{ t1 }} AS t1 {{ kind }} RIGHT J SELECT '{{ t1 }} ALL FULL JOIN {{ t2 }} | bs = {{ block_size }}'; SELECT t1.key, t2.key, length(t1.s), length(t2.s) FROM {{ t1 }} AS t1 {{ kind }} FULL JOIN {{ t2 }} AS t2 ON t1.key == t2.key ORDER BY t1.key, t2.key, length(t1.s), length(t2.s); {{ is_implemented(join_algorithm) }} -{% if join_algorithm == 'full_sorting_merge' or t2 != 'tn2' -%} SELECT '{{ t1 }} ALL FULL JOIN USING {{ t2 }} | bs = {{ block_size }}'; SELECT key, length(t1.s), length(t2.s) FROM {{ t1 }} AS t1 ALL FULL JOIN {{ t2 }} AS t2 USING (key) ORDER BY key, length(t1.s), length(t2.s); {{ is_implemented(join_algorithm) }} -{% endif -%} + {% endfor -%} {% endfor -%} SET max_bytes_in_join = 0; diff --git a/tests/queries/0_stateless/02275_full_sort_join_long.reference b/tests/queries/0_stateless/02275_full_sort_join_long.reference index 73482358d12..9ec06aea3e6 100644 --- a/tests/queries/0_stateless/02275_full_sort_join_long.reference +++ b/tests/queries/0_stateless/02275_full_sort_join_long.reference @@ -41,34 +41,16 @@ ALL INNER ALL LEFT 50195752660639 500353531835 10369589 10369589 1000342 ALL RIGHT -500353531835 684008812186 1367170 1000342 1367170 +skipped ALL INNER 500353531835 500353531835 1000342 1000342 1000342 ALL LEFT 50195752660639 500353531835 10369589 10369589 1000342 ALL RIGHT -500353531835 684008812186 1367170 1000342 1367170 +skipped ALL INNER 500353531835 500353531835 1000342 1000342 1000342 ALL LEFT 50195752660639 500353531835 10369589 10369589 1000342 ALL RIGHT -500353531835 684008812186 1367170 1000342 1367170 -ANY INNER -199622811843 199622811843 399458 399458 399458 -ANY LEFT -50010619420459 315220291655 10000000 10000000 630753 -ANY RIGHT -316611844056 500267124407 1000000 633172 1000000 -ANY INNER -199622811843 199622811843 399458 399458 399458 -ANY LEFT -50010619420459 315220291655 10000000 10000000 630753 -ANY RIGHT -316611844056 500267124407 1000000 633172 1000000 -ANY INNER -199622811843 199622811843 399458 399458 399458 -ANY LEFT -50010619420459 315220291655 10000000 10000000 630753 -ANY RIGHT -316611844056 500267124407 1000000 633172 1000000 +skipped diff --git a/tests/queries/0_stateless/02275_full_sort_join_long.sql.j2 b/tests/queries/0_stateless/02275_full_sort_join_long.sql.j2 index 0b28fd67050..98cc46c9cb4 100644 --- a/tests/queries/0_stateless/02275_full_sort_join_long.sql.j2 +++ b/tests/queries/0_stateless/02275_full_sort_join_long.sql.j2 @@ -22,6 +22,11 @@ INSERT INTO t2 FROM numbers_mt({{ rtable_size }}) ; +{% macro is_implemented(join_algorithm) -%} +{% if join_algorithm == 'grace_hash' %} -- { serverError NOT_IMPLEMENTED } +SELECT 'skipped'; +{% endif -%} +{% endmacro -%} {% for join_algorithm in ['full_sorting_merge', 'grace_hash'] -%} @@ -35,6 +40,7 @@ SET join_algorithm = '{{ join_algorithm }}'; SET max_block_size = {{ block_size }}; +{% if not (kind == 'ANY' and 
join_algorithm == 'grace_hash') -%} SELECT '{{ kind }} INNER'; SELECT sum(t1.key), sum(t2.key), count(), countIf(t1.key != 0), countIf(t2.key != 0) FROM t1 @@ -52,8 +58,9 @@ SELECT '{{ kind }} RIGHT'; SELECT sum(t1.key), sum(t2.key), count(), countIf(t1.key != 0), countIf(t2.key != 0) FROM t1 {{ kind }} RIGHT JOIN t2 ON t1.key == t2.key -; +; {{ is_implemented(join_algorithm) }} +{% endif -%} {% endfor -%} {% endfor -%} diff --git a/tests/queries/0_stateless/02344_describe_cache.reference b/tests/queries/0_stateless/02344_describe_cache.reference index a803ca1fab1..d8a2ffab1fa 100644 --- a/tests/queries/0_stateless/02344_describe_cache.reference +++ b/tests/queries/0_stateless/02344_describe_cache.reference @@ -1,2 +1,2 @@ -134217728 1048576 8388608 1 0 0 0 /var/lib/clickhouse/caches/s3_cache/ 0 -134217728 1048576 104857600 0 0 0 0 /var/lib/clickhouse/caches/s3_cache_2/ 0 +134217728 10000000 8388608 1 0 0 0 /var/lib/clickhouse/caches/s3_cache/ 0 +134217728 10000000 104857600 0 0 0 0 /var/lib/clickhouse/caches/s3_cache_2/ 0 diff --git a/tests/queries/0_stateless/02354_annoy_index.sql b/tests/queries/0_stateless/02354_annoy_index.sql index 0168fa04c6f..abee5e8a6e4 100644 --- a/tests/queries/0_stateless/02354_annoy_index.sql +++ b/tests/queries/0_stateless/02354_annoy_index.sql @@ -1,4 +1,4 @@ --- Tags: no-fasttest, no-ubsan, no-cpu-aarch64, no-upgrade-check +-- Tags: disabled, no-fasttest, no-ubsan, no-cpu-aarch64, no-upgrade-check SET allow_experimental_annoy_index = 1; diff --git a/tests/queries/0_stateless/02461_prewhere_row_level_policy_lightweight_delete.sql.j2 b/tests/queries/0_stateless/02461_prewhere_row_level_policy_lightweight_delete.sql.j2 index 1e4258cef7e..0ec6b2ed144 100644 --- a/tests/queries/0_stateless/02461_prewhere_row_level_policy_lightweight_delete.sql.j2 +++ b/tests/queries/0_stateless/02461_prewhere_row_level_policy_lightweight_delete.sql.j2 @@ -8,25 +8,10 @@ ORDER BY (SiteId, DateVisit) SETTINGS index_granularity = {{ index_granularity }}, min_bytes_for_wide_part = 0; -- Insert some data to have 110K rows in the range 2022-08-10 .. 
2022-08-20 and some more rows before and after that range -insert into url_na_log select 209, '2022-08-09' from numbers(10000); -insert into url_na_log select 209, '2022-08-10' from numbers(10000); -insert into url_na_log select 209, '2022-08-11' from numbers(10000); -insert into url_na_log select 209, '2022-08-12' from numbers(10000); -insert into url_na_log select 209, '2022-08-13' from numbers(10000); -insert into url_na_log select 209, '2022-08-14' from numbers(10000); -insert into url_na_log select 209, '2022-08-15' from numbers(10000); -insert into url_na_log select 209, '2022-08-16' from numbers(10000); -insert into url_na_log select 209, '2022-08-17' from numbers(10000); -insert into url_na_log select 209, '2022-08-18' from numbers(10000); -insert into url_na_log select 209, '2022-08-19' from numbers(10000); -insert into url_na_log select 209, '2022-08-20' from numbers(10000); -insert into url_na_log select 209, '2022-08-21' from numbers(10000); - +insert into url_na_log select 209, ('2022-08-09'::Date + INTERVAL intDiv(number,10000) DAY) from numbers(130000) SETTINGS max_insert_block_size=200000; SET mutations_sync=2; -OPTIMIZE TABLE url_na_log FINAL; - -- { echoOn } SELECT count() FROM url_na_log; diff --git a/tests/queries/0_stateless/02525_different_engines_in_temporary_tables.sql b/tests/queries/0_stateless/02525_different_engines_in_temporary_tables.sql index 7ebc05dfece..58e9ecab30c 100644 --- a/tests/queries/0_stateless/02525_different_engines_in_temporary_tables.sql +++ b/tests/queries/0_stateless/02525_different_engines_in_temporary_tables.sql @@ -7,7 +7,9 @@ CREATE TEMPORARY TABLE table_merge_tree_02525 ENGINE = MergeTree ORDER BY id PRIMARY KEY id; -INSERT INTO table_merge_tree_02525 VALUES (1, 'a'), (2, 'b'), (3, 'c'); +INSERT INTO table_merge_tree_02525 VALUES (1, 'a'), (2, 'b'); +INSERT INTO table_merge_tree_02525 VALUES (3, 'c'); +OPTIMIZE TABLE table_merge_tree_02525 FINAL; SELECT * FROM table_merge_tree_02525; -- Check that temporary table with MergeTree is not sent to remote servers -- The query with remote() should not fail diff --git a/tests/queries/0_stateless/02724_show_indexes.reference b/tests/queries/0_stateless/02724_show_indexes.reference index 20af3954fa5..8365ade3231 100644 --- a/tests/queries/0_stateless/02724_show_indexes.reference +++ b/tests/queries/0_stateless/02724_show_indexes.reference @@ -14,6 +14,11 @@ tbl 0 mm1_idx \N \N \N \N \N \N \N minmax \N \N YES a, c, d tbl 0 mm2_idx \N \N \N \N \N \N \N minmax \N \N YES c, d, e tbl 0 PRIMARY \N \N A \N \N \N \N primary \N \N YES c, a tbl 0 set_idx \N \N \N \N \N \N \N set \N \N YES e +tbl 0 blf_idx \N \N \N \N \N \N \N bloom_filter \N \N YES d, b +tbl 0 mm1_idx \N \N \N \N \N \N \N minmax \N \N YES a, c, d +tbl 0 mm2_idx \N \N \N \N \N \N \N minmax \N \N YES c, d, e +tbl 0 PRIMARY \N \N A \N \N \N \N primary \N \N YES c, a +tbl 0 set_idx \N \N \N \N \N \N \N set \N \N YES e --- EXTENDED tbl 0 blf_idx \N \N \N \N \N \N \N bloom_filter \N \N YES d, b tbl 0 mm1_idx \N \N \N \N \N \N \N minmax \N \N YES a, c, d diff --git a/tests/queries/0_stateless/02724_show_indexes.sql b/tests/queries/0_stateless/02724_show_indexes.sql index ce8ed67c524..04a481fea4e 100644 --- a/tests/queries/0_stateless/02724_show_indexes.sql +++ b/tests/queries/0_stateless/02724_show_indexes.sql @@ -22,6 +22,7 @@ PRIMARY KEY (c, a); SELECT '--- Aliases of SHOW INDEX'; SHOW INDEX FROM tbl; SHOW INDEXES FROM tbl; +SHOW INDICES FROM tbl; SHOW KEYS FROM tbl; SELECT '--- EXTENDED'; diff --git 
a/tests/queries/0_stateless/02731_parallel_replicas_join_subquery.reference b/tests/queries/0_stateless/02731_parallel_replicas_join_subquery.reference index df606679523..9b08e69b9d3 100644 --- a/tests/queries/0_stateless/02731_parallel_replicas_join_subquery.reference +++ b/tests/queries/0_stateless/02731_parallel_replicas_join_subquery.reference @@ -39,6 +39,6 @@ U c 10 UlI+1 10 bX?}ix [ Ny]2 G 10 t toUInt64(\'1610517366120\')) GROUP BY `key`, `value1`, `value2` +0 3 SELECT `key`, `value1`, `value2` FROM `default`.`join_inner_table` PREWHERE (`id` = \'833c9e22-c245-4eb5-8745-117a9a1f26b1\') AND (`number` > toUInt64(\'1610517366120\')) GROUP BY `key`, `value1`, `value2` 0 3 SELECT `value1`, `value2`, count() AS `count` FROM `default`.`join_outer_table` ALL INNER JOIN `_data_11888098645495698704_17868075224240210014` USING (`key`) GROUP BY `key`, `value1`, `value2` 1 1 -- Parallel full query\nSELECT\n value1,\n value2,\n avg(count) AS avg\nFROM\n (\n SELECT\n key,\n value1,\n value2,\n count() AS count\n FROM join_outer_table\n INNER JOIN\n (\n SELECT\n key,\n value1,\n value2,\n toUInt64(min(time)) AS start_ts\n FROM join_inner_table\n PREWHERE (id = \'833c9e22-c245-4eb5-8745-117a9a1f26b1\') AND (number > toUInt64(\'1610517366120\'))\n GROUP BY key, value1, value2\n ) USING (key)\n GROUP BY key, value1, value2\n )\nGROUP BY value1, value2\nORDER BY value1, value2\nSETTINGS allow_experimental_parallel_reading_from_replicas = 1; diff --git a/tests/queries/0_stateless/02771_log_faminy_truncate_count.reference b/tests/queries/0_stateless/02771_log_faminy_truncate_count.reference new file mode 100644 index 00000000000..aa47d0d46d4 --- /dev/null +++ b/tests/queries/0_stateless/02771_log_faminy_truncate_count.reference @@ -0,0 +1,2 @@ +0 +0 diff --git a/tests/queries/0_stateless/02771_log_faminy_truncate_count.sql b/tests/queries/0_stateless/02771_log_faminy_truncate_count.sql new file mode 100644 index 00000000000..3fb22837f5b --- /dev/null +++ b/tests/queries/0_stateless/02771_log_faminy_truncate_count.sql @@ -0,0 +1,26 @@ +DROP TABLE IF EXISTS test_log; +CREATE TABLE test_log +( + `crypto_name` String, + `trade_date` Date +) +ENGINE = Log; + +INSERT INTO test_log (crypto_name, trade_date) VALUES ('abc', '2021-01-01'), ('def', '2022-02-02'); + +TRUNCATE TABLE test_log; +SELECT count() FROM test_log; + +DROP TABLE IF EXISTS test_log; +CREATE TABLE test_log +( + `crypto_name` String, + `trade_date` Date +) +ENGINE = StripeLog; + +INSERT INTO test_log (crypto_name, trade_date) VALUES ('abc', '2021-01-01'), ('def', '2022-02-02'); + +TRUNCATE TABLE test_log; +SELECT count() FROM test_log; +DROP TABLE test_log; diff --git a/tests/queries/0_stateless/02771_semi_join_use_nulls.reference b/tests/queries/0_stateless/02771_semi_join_use_nulls.reference new file mode 100644 index 00000000000..8d4b1a3a75e --- /dev/null +++ b/tests/queries/0_stateless/02771_semi_join_use_nulls.reference @@ -0,0 +1,112 @@ +0 0 +0 0 +0 \N +0 0 +0 0 +0 0 +0 0 +0 0 +0 \N +0 0 +0 0 +0 0 +0 1 +0 1 +0 0 +0 0 +0 \N +0 0 +0 0 +0 0 +0 0 +0 0 +0 \N +0 0 +0 0 +0 0 +0 2 +0 2 +0 \N +0 0 +0 \N +0 0 +0 \N +0 \N +0 \N +0 0 +0 \N +0 0 +0 \N +0 \N +0 1 +0 1 +0 0 +0 0 +0 0 +0 0 +0 0 +0 0 +0 0 +0 0 +0 0 +0 0 +0 0 +0 0 +\N 2 +\N 2 +0 0 +0 0 +0 0 +0 0 +0 0 +0 0 +0 0 +0 0 +0 0 +0 0 +0 0 +0 0 +0 1 +0 1 +0 0 +0 0 +0 0 +0 0 +0 0 +0 0 +0 0 +0 0 +0 0 +0 0 +0 0 +0 0 +1 2 +1 2 +0 \N +0 0 +0 0 +0 0 +0 \N +0 \N +0 \N +0 0 +0 0 +0 0 +0 \N +0 \N +0 1 +0 1 +0 0 +0 0 +0 0 +0 0 +0 0 +0 0 +0 0 +0 0 +0 0 +0 0 +0 0 +0 0 +1 2 +1 2 diff --git 
a/tests/queries/0_stateless/02771_semi_join_use_nulls.sql.j2 b/tests/queries/0_stateless/02771_semi_join_use_nulls.sql.j2
new file mode 100644
index 00000000000..37b2e63761b
--- /dev/null
+++ b/tests/queries/0_stateless/02771_semi_join_use_nulls.sql.j2
@@ -0,0 +1,51 @@
+
+{% for allow_experimental_analyzer in [0, 1] -%}
+{% for join_use_nulls in [0, 1] -%}
+{% for kind in ['LEFT', 'RIGHT'] -%}
+{% for strictness in ['SEMI', 'ANTI'] -%}
+{% for maybe_materialize in ['', 'materialize'] -%}
+
+SET allow_experimental_analyzer = {{ allow_experimental_analyzer }};
+
+SET join_use_nulls = {{ join_use_nulls }};
+
+SELECT id > 1, d.id FROM (SELECT {{ maybe_materialize }}(toLowCardinality(0)) AS id, 1 AS value) AS a
+{{ strictness }} {{ kind }} JOIN (SELECT {{ maybe_materialize }}(toLowCardinality(0)) AS id) AS d
+USING (id)
+;
+
+SELECT id > 1, d.id FROM (SELECT {{ maybe_materialize }}(toLowCardinality(toNullable(0))) AS id, 1 AS value) AS a
+{{ strictness }} {{ kind }} JOIN (SELECT {{ maybe_materialize }}(toLowCardinality(0)) AS id) AS d
+USING (id)
+;
+
+SELECT id > 1, d.id FROM (SELECT {{ maybe_materialize }}(toLowCardinality(0)) AS id, 1 AS value) AS a
+{{ strictness }} {{ kind }} JOIN (SELECT {{ maybe_materialize }}(toLowCardinality(toNullable(0))) AS id) AS d
+USING (id)
+;
+
+SELECT id > 1, d.id FROM (SELECT {{ maybe_materialize }}(toLowCardinality(toNullable(0))) AS id, 1 AS value) AS a
+{{ strictness }} {{ kind }} JOIN (SELECT {{ maybe_materialize }}(toLowCardinality(toNullable(0))) AS id) AS d
+USING (id)
+;
+
+SELECT id > 1, d.id FROM (SELECT {{ maybe_materialize }}(toLowCardinality(1)) AS id, 1 AS value) AS a
+{{ strictness }} {{ kind }} JOIN (SELECT {{ maybe_materialize }}(toLowCardinality(2)) AS id) AS d
+USING (id)
+;
+
+SELECT id > 1, d.id FROM (SELECT {{ maybe_materialize }}(toLowCardinality(0)) AS id, 1 AS value) AS a
+{{ strictness }} {{ kind }} JOIN (SELECT {{ maybe_materialize }}(toLowCardinality(0)) AS id) AS d
+ON a.id = d.id
+;
+
+SELECT id > 1, d.idd FROM (SELECT {{ maybe_materialize }}(toLowCardinality(0)) AS id, 1 AS value) AS a
+{{ strictness }} {{ kind }} JOIN (SELECT {{ maybe_materialize }}(toLowCardinality(0)) AS idd) AS d
+ON a.id = d.idd
+;
+
+{% endfor -%}
+{% endfor -%}
+{% endfor -%}
+{% endfor -%}
+{% endfor -%}
diff --git a/tests/queries/0_stateless/02782_avro_decimals.reference b/tests/queries/0_stateless/02782_avro_decimals.reference
index ed46f1c3758..a16e5e4ac56 100644
--- a/tests/queries/0_stateless/02782_avro_decimals.reference
+++ b/tests/queries/0_stateless/02782_avro_decimals.reference
@@ -11,3 +11,15 @@ d Decimal(14, 4)
 12345678.1234
 123456789.1234
 1234567890.1234
+0
+-1
+-1.1
+-12.12
+-123.123
+-1234.1234
+-12345.1234
+-123456.1234
+-1234567.1234
+-12345678.1234
+-123456789.1234
+-1234567890.1234
diff --git a/tests/queries/0_stateless/02782_uniq_exact_parallel_merging_bug.reference b/tests/queries/0_stateless/02782_uniq_exact_parallel_merging_bug.reference
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/tests/queries/0_stateless/02782_uniq_exact_parallel_merging_bug.sh b/tests/queries/0_stateless/02782_uniq_exact_parallel_merging_bug.sh
new file mode 100755
index 00000000000..d84ffd21b87
--- /dev/null
+++ b/tests/queries/0_stateless/02782_uniq_exact_parallel_merging_bug.sh
@@ -0,0 +1,21 @@
+#!/usr/bin/env bash
+# Tags: long, no-random-settings, no-tsan, no-asan, no-ubsan, no-msan
+
+# shellcheck disable=SC2154
+
+unset CLICKHOUSE_LOG_COMMENT
+
+CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
+# shellcheck source=../shell_config.sh
+. "$CURDIR"/../shell_config.sh
+
+
+clickhouse-client -q "
+    CREATE TABLE ${CLICKHOUSE_DATABASE}.t(s String)
+    ENGINE = MergeTree
+    ORDER BY tuple();
+"
+
+clickhouse-client -q "insert into ${CLICKHOUSE_DATABASE}.t select number%10==0 ? toString(number) : '' from numbers_mt(1e7)"
+
+clickhouse-benchmark -q "select count(distinct s) from ${CLICKHOUSE_DATABASE}.t settings max_memory_usage = '50Mi'" --ignore-error -c 16 -i 1000 2>/dev/null
diff --git a/tests/queries/0_stateless/02784_move_all_conditions_to_prewhere_analyzer_asan.reference b/tests/queries/0_stateless/02784_move_all_conditions_to_prewhere_analyzer_asan.reference
new file mode 100644
index 00000000000..a08e8a1d440
--- /dev/null
+++ b/tests/queries/0_stateless/02784_move_all_conditions_to_prewhere_analyzer_asan.reference
@@ -0,0 +1,4 @@
+0 0
+0 0
+0 0
+0 0
diff --git a/tests/queries/0_stateless/02784_move_all_conditions_to_prewhere_analyzer_asan.sql b/tests/queries/0_stateless/02784_move_all_conditions_to_prewhere_analyzer_asan.sql
new file mode 100644
index 00000000000..44b9ce4fdc1
--- /dev/null
+++ b/tests/queries/0_stateless/02784_move_all_conditions_to_prewhere_analyzer_asan.sql
@@ -0,0 +1,15 @@
+DROP TABLE IF EXISTS t_02784;
+
+CREATE TABLE t_02784 (c1 UInt64, c2 UInt64) ENGINE=MergeTree() ORDER BY c1 SETTINGS min_bytes_for_wide_part=1;
+
+INSERT INTO t_02784 SELECT number, number FROM numbers(1);
+
+SET allow_experimental_analyzer=1;
+SET move_all_conditions_to_prewhere=1;
+
+SELECT c1, c2 FROM t_02784 WHERE c1 = 0 AND c2 = 0;
+SELECT c1, c2 FROM t_02784 WHERE c2 = 0 AND c1 = 0;
+SELECT c2, c1 FROM t_02784 WHERE c1 = 0 AND c2 = 0;
+SELECT c2, c1 FROM t_02784 WHERE c2 = 0 AND c1 = 0;
+
+DROP TABLE t_02784;
diff --git a/tests/queries/0_stateless/02784_schema_inference_null_as_default.reference b/tests/queries/0_stateless/02784_schema_inference_null_as_default.reference
new file mode 100644
index 00000000000..c83819ab2e2
--- /dev/null
+++ b/tests/queries/0_stateless/02784_schema_inference_null_as_default.reference
@@ -0,0 +1,9 @@
+x Nullable(Int64)
+\N
+42
+x Nullable(Int64)
+\N
+42
+x Int64
+0
+42
diff --git a/tests/queries/0_stateless/02784_schema_inference_null_as_default.sql b/tests/queries/0_stateless/02784_schema_inference_null_as_default.sql
new file mode 100644
index 00000000000..9c9f99d8283
--- /dev/null
+++ b/tests/queries/0_stateless/02784_schema_inference_null_as_default.sql
@@ -0,0 +1,7 @@
+desc format(JSONEachRow, '{"x" : null}, {"x" : 42}') settings schema_inference_make_columns_nullable=1;
+select * from format(JSONEachRow, '{"x" : null}, {"x" : 42}') settings schema_inference_make_columns_nullable=1;
+desc format(JSONEachRow, '{"x" : null}, {"x" : 42}') settings schema_inference_make_columns_nullable=0, input_format_null_as_default=0;
+select * from format(JSONEachRow, '{"x" : null}, {"x" : 42}') settings schema_inference_make_columns_nullable=0, input_format_null_as_default=0;
+desc format(JSONEachRow, '{"x" : null}, {"x" : 42}') settings schema_inference_make_columns_nullable=0, input_format_null_as_default=1;
+select * from format(JSONEachRow, '{"x" : null}, {"x" : 42}') settings schema_inference_make_columns_nullable=0, input_format_null_as_default=1;
+
diff --git a/tests/queries/0_stateless/02785_global_join_too_many_columns.reference b/tests/queries/0_stateless/02785_global_join_too_many_columns.reference
new file mode 100644
index 00000000000..425151f3a41
--- /dev/null
+++ b/tests/queries/0_stateless/02785_global_join_too_many_columns.reference
@@ -0,0 +1 @@
+40
diff --git a/tests/queries/0_stateless/02785_global_join_too_many_columns.sql b/tests/queries/0_stateless/02785_global_join_too_many_columns.sql
new file mode 100644
index 00000000000..a49aae25f1c
--- /dev/null
+++ b/tests/queries/0_stateless/02785_global_join_too_many_columns.sql
@@ -0,0 +1,14 @@
+drop table if exists local;
+drop table if exists distr;
+
+create table local (a UInt64, b UInt64, c UInt64, d UInt64, e UInt64, f UInt64, g UInt64, h UInt64) engine = Log;
+create table distr as local engine = Distributed('test_cluster_two_shards', currentDatabase(), local);
+
+insert into local (a) select number from numbers(10);
+
+set max_columns_to_read=1;
+select count() from distr as l global all left join distr as r on l.a = r.a;
+
+drop table if exists local;
+drop table if exists distr;
+
diff --git a/tests/queries/0_stateless/02786_transform_float.reference b/tests/queries/0_stateless/02786_transform_float.reference
new file mode 100644
index 00000000000..3fbb2492f2e
--- /dev/null
+++ b/tests/queries/0_stateless/02786_transform_float.reference
@@ -0,0 +1,10 @@
+1
+1
+1
+---
+1
+1
+1
+1
+1
+1
diff --git a/tests/queries/0_stateless/02786_transform_float.sql b/tests/queries/0_stateless/02786_transform_float.sql
new file mode 100644
index 00000000000..4229425b084
--- /dev/null
+++ b/tests/queries/0_stateless/02786_transform_float.sql
@@ -0,0 +1,3 @@
+select transform(number, [1], [toFloat32(1)], toFloat32(1)) from numbers(3);
+SELECT '---';
+select transform(number, [3], [toFloat32(1)], toFloat32(1)) from numbers(6);
diff --git a/tests/queries/0_stateless/02787_transform_null.reference b/tests/queries/0_stateless/02787_transform_null.reference
new file mode 100644
index 00000000000..a650dbbd173
--- /dev/null
+++ b/tests/queries/0_stateless/02787_transform_null.reference
@@ -0,0 +1,9 @@
+ZERO
+ZERO
+ONE
+ONE
+a
+a
+ \N 0 \N 0 \N
+1 1 1 \N 1 1
+a \N 3 3 3 3
diff --git a/tests/queries/0_stateless/02787_transform_null.sql b/tests/queries/0_stateless/02787_transform_null.sql
new file mode 100644
index 00000000000..64a771f0f4b
--- /dev/null
+++ b/tests/queries/0_stateless/02787_transform_null.sql
@@ -0,0 +1,40 @@
+SELECT transform(0, [0, 1], ['ZERO', 'ONE'], 'DEFAULT') AS result;
+SELECT transform(0, [0, 1], ['ZERO', 'ONE'], NULL) AS result;
+
+SELECT CASE 1
+    WHEN 0 THEN 'ZERO'
+    WHEN 1 THEN 'ONE'
+    ELSE 'NONE'
+END AS result;
+
+SELECT CASE 1
+    WHEN 0 THEN NULL
+    WHEN 1 THEN 'ONE'
+    ELSE 'NONE'
+END AS result;
+
+select
+    case 1
+        when 1 then 'a'
+        else 'b'
+    end value;
+
+select
+    case 1
+        when 1 then 'a'
+    end value;
+
+SELECT
+    d,
+    toInt16OrNull(d),
+    caseWithExpression(d, 'a', 3, toInt16OrZero(d)) AS case_zero,
+    caseWithExpression(d, 'a', 3, toInt16OrNull(d)) AS case_null,
+    if(d = 'a', 3, toInt16OrZero(d)) AS if_zero,
+    if(d = 'a', 3, toInt16OrNull(d)) AS if_null
+FROM
+(
+    SELECT arrayJoin(['', '1', 'a']) AS d
+)
+ORDER BY
+    case_zero ASC,
+    d ASC;
diff --git a/tests/queries/0_stateless/data_avro/decimals.avro b/tests/queries/0_stateless/data_avro/decimals.avro
index 5c29ac235d5..630d6864047 100644
Binary files a/tests/queries/0_stateless/data_avro/decimals.avro and b/tests/queries/0_stateless/data_avro/decimals.avro differ
diff --git a/utils/check-style/aspell-ignore/en/aspell-dict.txt b/utils/check-style/aspell-ignore/en/aspell-dict.txt
index d6cef1883f4..a01b67b26b1 100644
--- a/utils/check-style/aspell-ignore/en/aspell-dict.txt
+++ b/utils/check-style/aspell-ignore/en/aspell-dict.txt
@@ -1081,6 +1081,7 @@ avgweighted
 avro
 avx
 aws
+azureBlobStorage
 backend
 backoff
 backticks
diff --git a/utils/list-versions/version_date.tsv b/utils/list-versions/version_date.tsv
index 653a0cd5388..4647bcb4af1 100644
--- a/utils/list-versions/version_date.tsv
+++ b/utils/list-versions/version_date.tsv
@@ -1,7 +1,10 @@
+v23.5.2.7-stable 2023-06-10
+v23.5.1.3174-stable 2023-06-09
 v23.4.2.11-stable 2023-05-02
 v23.4.1.1943-stable 2023-04-27
 v23.3.2.37-lts 2023-04-22
 v23.3.1.2823-lts 2023-03-31
+v23.2.7.32-stable 2023-06-09
 v23.2.6.34-stable 2023-04-23
 v23.2.5.46-stable 2023-04-03
 v23.2.4.12-stable 2023-03-10