Mirror of https://github.com/ClickHouse/ClickHouse.git (synced 2024-11-25 17:12:03 +00:00)

Merge branch 'issue_15357_MaterializeMySQL_support_drop_mulit_table' of github.com:zzsmdfj/ClickHouse into issue_15357_MaterializeMySQL_support_drop_mulit_table

This commit is contained in: commit e09092c834

@@ -7,6 +7,8 @@ assignees: ''

---

**I have tried the following solutions**: https://clickhouse.com/docs/en/faq/troubleshooting/#troubleshooting-installation-errors

**Installation type**

Packages, docker, single binary, curl?

CHANGELOG.md (110 changed lines)

@@ -1,4 +1,5 @@
### Table of Contents
**[ClickHouse release v22.11, 2022-11-17](#2211)**<br/>
**[ClickHouse release v22.10, 2022-10-25](#2210)**<br/>
**[ClickHouse release v22.9, 2022-09-22](#229)**<br/>
**[ClickHouse release v22.8-lts, 2022-08-18](#228)**<br/>
@@ -11,6 +12,109 @@
**[ClickHouse release v22.1, 2022-01-18](#221)**<br/>
**[Changelog for 2021](https://clickhouse.com/docs/en/whats-new/changelog/2021/)**<br/>

### <a id="2211"></a> ClickHouse release 22.11, 2022-11-17

#### Backward Incompatible Change
* `JSONExtract` family of functions will now attempt to coerce to the requested type. [#41502](https://github.com/ClickHouse/ClickHouse/pull/41502) ([Márcio Martins](https://github.com/marcioapm)).

#### New Feature
* Add support for retries during INSERTs into ReplicatedMergeTree when a session with ClickHouse Keeper is lost. Apart from fault tolerance, this aims to provide a better user experience: avoid returning an error to the user during an insert if Keeper is restarted (for example, due to an upgrade). This is controlled by the `insert_keeper_max_retries` setting, which is disabled by default. [#42607](https://github.com/ClickHouse/ClickHouse/pull/42607) ([Igor Nikonov](https://github.com/devcrafter)).
* Add `Hudi` and `DeltaLake` table engines, read-only, only for tables on S3. [#41054](https://github.com/ClickHouse/ClickHouse/pull/41054) ([Daniil Rubin](https://github.com/rubin-do), [Kseniia Sumarokova](https://github.com/kssenii)).
* Add table functions `hudi` and `deltaLake`. [#43080](https://github.com/ClickHouse/ClickHouse/pull/43080) ([flynn](https://github.com/ucasfl)).
* Support for composite time intervals. 1. Add, subtract and negate operations are now available on Intervals; if the Interval types differ, they are transformed into a Tuple of those types. 2. A tuple of intervals can be added to or subtracted from a Date/DateTime field. 3. Added parsing of Intervals with different types, for example: `INTERVAL '1 HOUR 1 MINUTE 1 SECOND'`. See the sketch after this list. [#42195](https://github.com/ClickHouse/ClickHouse/pull/42195) ([Nikolay Degterinsky](https://github.com/evillique)).
* Added `**` glob support for recursive directory traversal of the filesystem and S3. Resolves [#36316](https://github.com/ClickHouse/ClickHouse/issues/36316). [#42376](https://github.com/ClickHouse/ClickHouse/pull/42376) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
* Introduce `s3_plain` disk type for write-once-read-many operations. Implement `ATTACH` of `MergeTree` table for `s3_plain` disk. [#42628](https://github.com/ClickHouse/ClickHouse/pull/42628) ([Azat Khuzhin](https://github.com/azat)).
* Added applied row-level policies to `system.query_log`. [#39819](https://github.com/ClickHouse/ClickHouse/pull/39819) ([Vladimir Chebotaryov](https://github.com/quickhouse)).
* Add four-letter command `csnp` for manually creating snapshots in ClickHouse Keeper. Additionally, `lgif` was added to get Raft information for a specific node (e.g. index of the last created snapshot, last committed log index). [#41766](https://github.com/ClickHouse/ClickHouse/pull/41766) ([JackyWoo](https://github.com/JackyWoo)).
* Add function `ascii`, as in Apache Spark: https://spark.apache.org/docs/latest/api/sql/#ascii. [#42670](https://github.com/ClickHouse/ClickHouse/pull/42670) ([李扬](https://github.com/taiyang-li)).
* Add function `positive_modulo` (`pmod`), which returns a non-negative result of the modulo operation. [#42755](https://github.com/ClickHouse/ClickHouse/pull/42755) ([李扬](https://github.com/taiyang-li)).
* Add function `formatReadableDecimalSize`. [#42774](https://github.com/ClickHouse/ClickHouse/pull/42774) ([Alejandro](https://github.com/alexon1234)).
* Add function `randCanonical`, which is similar to the `rand` function in Apache Spark or Impala. The function generates pseudo-random results that are independently and identically uniformly distributed in [0, 1). [#43124](https://github.com/ClickHouse/ClickHouse/pull/43124) ([李扬](https://github.com/taiyang-li)).
* Add function `displayName`, closes [#36770](https://github.com/ClickHouse/ClickHouse/issues/36770). [#37681](https://github.com/ClickHouse/ClickHouse/pull/37681) ([hongbin](https://github.com/xlwh)).
* Add `min_age_to_force_merge_on_partition_only` setting to optimize old parts for the entire partition only. [#42659](https://github.com/ClickHouse/ClickHouse/pull/42659) ([Antonio Andelic](https://github.com/antonio2368)).
* Add generic implementation for arbitrary structured named collections, access type and `system.named_collections`. [#43147](https://github.com/ClickHouse/ClickHouse/pull/43147) ([Kseniia Sumarokova](https://github.com/kssenii)).
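A minimal sketch of a few of the new functions and the composite interval syntax listed above, assuming the 22.11 behavior described in these entries; the literal values and expected results are illustrative only:

```sql
-- Composite intervals: mixed-unit parsing and arithmetic with DateTime.
SELECT now() + INTERVAL '1 HOUR 1 MINUTE 1 SECOND';

-- positive_modulo / pmod: non-negative result even for a negative dividend.
SELECT positive_modulo(-3, 5);              -- expected: 2

-- Human-readable decimal sizes.
SELECT formatReadableDecimalSize(1500000);  -- expected: '1.50 MB'

-- Pseudo-random value uniformly distributed in [0, 1), as in Spark/Impala.
SELECT randCanonical();
```
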

#### Performance Improvement
* Parallelized merging of `uniqExact` states for aggregation without key, i.e. queries like `SELECT uniqExact(number) FROM table`. The improvement becomes noticeable when the number of unique keys approaches 10^6. Also `uniq` performance is slightly optimized. [#43072](https://github.com/ClickHouse/ClickHouse/pull/43072) ([Nikita Taranov](https://github.com/nickitat)).
* `match` function can use the index if it's a condition on a string prefix. This closes [#37333](https://github.com/ClickHouse/ClickHouse/issues/37333). [#42458](https://github.com/ClickHouse/ClickHouse/pull/42458) ([clarkcaoliu](https://github.com/Clark0)).
* Speed up AND and OR operators when they are sequenced. [#42214](https://github.com/ClickHouse/ClickHouse/pull/42214) ([Zhiguo Zhou](https://github.com/ZhiguoZh)).
* Support parallel parsing for `LineAsString` input format. This improves performance just slightly. This closes [#42502](https://github.com/ClickHouse/ClickHouse/issues/42502). [#42780](https://github.com/ClickHouse/ClickHouse/pull/42780) ([Kruglov Pavel](https://github.com/Avogar)).
* ClickHouse Keeper performance improvement: improve commit performance for cases when many different nodes have uncommitted states. This should help with cases when a follower node can't sync fast enough. [#42926](https://github.com/ClickHouse/ClickHouse/pull/42926) ([Antonio Andelic](https://github.com/antonio2368)).
* A condition like `NOT LIKE 'prefix%'` can use the primary index; see the sketch after this list. [#42209](https://github.com/ClickHouse/ClickHouse/pull/42209) ([Duc Canh Le](https://github.com/canhld94)).
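A short illustration of the negated-prefix optimization above, using a hypothetical table; the table name, schema and data are assumptions, not part of the release notes:

```sql
-- Hypothetical table with URL as the primary key.
CREATE TABLE hits_example (URL String, Visits UInt64) ENGINE = MergeTree ORDER BY URL;

-- Since 22.11 a negated prefix condition like this can also be evaluated using the primary index.
SELECT count() FROM hits_example WHERE URL NOT LIKE 'https://%';
```
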

#### Experimental Feature
* Support type `Object` inside other types, e.g. `Array(JSON)`. See the sketch after this list. [#36969](https://github.com/ClickHouse/ClickHouse/pull/36969) ([Anton Popov](https://github.com/CurtizJ)).
* Ignore MySQL binlog SAVEPOINT event for MaterializedMySQL. [#42931](https://github.com/ClickHouse/ClickHouse/pull/42931) ([zzsmdfj](https://github.com/zzsmdfj)). Handle (ignore) SAVEPOINT queries in MaterializedMySQL. [#43086](https://github.com/ClickHouse/ClickHouse/pull/43086) ([Stig Bakken](https://github.com/stigsb)).
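A minimal sketch of nesting the experimental `Object` type inside another type, as described above; it assumes the experimental JSON type is enabled via `allow_experimental_object_type`, and the table name and values are illustrative:

```sql
SET allow_experimental_object_type = 1;

-- JSON objects nested inside an Array column.
CREATE TABLE t_json_array (id UInt64, payload Array(JSON)) ENGINE = MergeTree ORDER BY id;
INSERT INTO t_json_array VALUES (1, ['{"k": 1}', '{"k": 2}']);
SELECT id, payload FROM t_json_array;
```
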

#### Improvement
* Trivial queries with a small LIMIT will properly determine the number of estimated rows to read, so that the threshold will be checked properly. Closes [#7071](https://github.com/ClickHouse/ClickHouse/issues/7071). [#42580](https://github.com/ClickHouse/ClickHouse/pull/42580) ([Han Fei](https://github.com/hanfei1991)).
* Add support for interactive parameters in INSERT VALUES queries. [#43077](https://github.com/ClickHouse/ClickHouse/pull/43077) ([Nikolay Degterinsky](https://github.com/evillique)).
* Added a new field `allow_readonly` to `system.table_functions`, allowing table functions to be used in readonly mode. Resolves [#42414](https://github.com/ClickHouse/ClickHouse/issues/42414). A test was added (tests/queries/0_stateless/02473_functions_in_readonly_mode.sh) and the English documentation for table functions was updated. [#42708](https://github.com/ClickHouse/ClickHouse/pull/42708) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
* The `system.asynchronous_metrics` table gets embedded documentation. This documentation is also exported to Prometheus. Fixed an error with the metrics about `cache` disks - they were calculated only for one arbitrary cache disk instead of all of them. This closes [#7644](https://github.com/ClickHouse/ClickHouse/issues/7644). [#43194](https://github.com/ClickHouse/ClickHouse/pull/43194) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Throttling algorithm changed to token bucket. [#42665](https://github.com/ClickHouse/ClickHouse/pull/42665) ([Sergei Trifonov](https://github.com/serxa)).
* Mask passwords and secret keys both in `system.query_log` and `/var/log/clickhouse-server/*.log` and also in error messages. [#42484](https://github.com/ClickHouse/ClickHouse/pull/42484) ([Vitaly Baranov](https://github.com/vitlibar)).
* Remove covered parts for a fetched part (to avoid possible growth of replication delay). [#39737](https://github.com/ClickHouse/ClickHouse/pull/39737) ([Azat Khuzhin](https://github.com/azat)).
* If `/dev/tty` is available, the progress in clickhouse-client and clickhouse-local will be rendered directly to the terminal, without writing to STDERR. It allows getting progress even if STDERR is redirected to a file, and the file will not be polluted by terminal escape sequences. The progress can be disabled by `--progress false`. This closes [#32238](https://github.com/ClickHouse/ClickHouse/issues/32238). [#42003](https://github.com/ClickHouse/ClickHouse/pull/42003) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Add support for `FixedString` input to base64 coding functions. [#42285](https://github.com/ClickHouse/ClickHouse/pull/42285) ([ltrk2](https://github.com/ltrk2)).
* Add columns `bytes_on_disk` and `path` to `system.detached_parts`. Closes [#42264](https://github.com/ClickHouse/ClickHouse/issues/42264). [#42303](https://github.com/ClickHouse/ClickHouse/pull/42303) ([chen](https://github.com/xiedeyantu)).
* Improve using the structure from the insertion table in table functions: the setting `use_structure_from_insertion_table_in_table_functions` has a new possible value, `2`, which means that ClickHouse will automatically determine whether the structure from the insertion table can be used. Closes [#40028](https://github.com/ClickHouse/ClickHouse/issues/40028). [#42320](https://github.com/ClickHouse/ClickHouse/pull/42320) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix missing progress indication on INSERT FROM INFILE. Closes [#42548](https://github.com/ClickHouse/ClickHouse/issues/42548). [#42634](https://github.com/ClickHouse/ClickHouse/pull/42634) ([chen](https://github.com/xiedeyantu)).
* Refactor the `tokens` function to allow limiting the maximum number of tokens returned by related functions (disabled by default). [#42673](https://github.com/ClickHouse/ClickHouse/pull/42673) ([李扬](https://github.com/taiyang-li)).
* Allow using `Date32` arguments for the `formatDateTime` and `FROM_UNIXTIME` functions; see the example after this list. [#42737](https://github.com/ClickHouse/ClickHouse/pull/42737) ([Roman Vasin](https://github.com/rvasin)).
* Update tzdata to 2022f. Mexico will no longer observe DST except near the US border: https://www.timeanddate.com/news/time/mexico-abolishes-dst-2022.html. Chihuahua moves to year-round UTC-6 on 2022-10-30. Fiji no longer observes DST. See https://github.com/google/cctz/pull/235 and https://bugs.launchpad.net/ubuntu/+source/tzdata/+bug/1995209. [#42796](https://github.com/ClickHouse/ClickHouse/pull/42796) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Add `FailedAsyncInsertQuery` event metric for async inserts. [#42814](https://github.com/ClickHouse/ClickHouse/pull/42814) ([Krzysztof Góralski](https://github.com/kgoralski)).
* Implement `read-in-order` optimization on top of the query plan. It is enabled by default. Set `query_plan_read_in_order = 0` to use the previous AST-based version. [#42829](https://github.com/ClickHouse/ClickHouse/pull/42829) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Increase the size of upload parts exponentially for backups to S3, to avoid errors about the 10,000-part limit of S3 multipart uploads. [#42833](https://github.com/ClickHouse/ClickHouse/pull/42833) ([Vitaly Baranov](https://github.com/vitlibar)).
* Previously, when merge tasks were continuously busy and disk space was insufficient, completely expired parts could not be selected and dropped, leaving disk space exhausted. Now, dropping an entirely expired part does not require reserving additional disk space, which ensures that TTL can be applied normally. [#42869](https://github.com/ClickHouse/ClickHouse/pull/42869) ([zhongyuankai](https://github.com/zhongyuankai)).
* Add the `oss` table function and `OSS` table engine for convenience. OSS is fully compatible with S3. [#43155](https://github.com/ClickHouse/ClickHouse/pull/43155) ([zzsmdfj](https://github.com/zzsmdfj)).
* Improve error reporting in the collection of OS-related info for the `system.asynchronous_metrics` table. [#43192](https://github.com/ClickHouse/ClickHouse/pull/43192) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Modify the `INFORMATION_SCHEMA` tables so that ClickHouse can connect to itself using the MySQL compatibility protocol. Add columns instead of aliases (related to [#9769](https://github.com/ClickHouse/ClickHouse/issues/9769)). It will improve the compatibility with various MySQL clients. [#43198](https://github.com/ClickHouse/ClickHouse/pull/43198) ([Filatenkov Artur](https://github.com/FArthur-cmd)).
* Add some functions for compatibility with Power BI when it connects using the MySQL protocol. [#42612](https://github.com/ClickHouse/ClickHouse/pull/42612) ([Filatenkov Artur](https://github.com/FArthur-cmd)).
* Better usability for Dashboard on changes. [#42872](https://github.com/ClickHouse/ClickHouse/pull/42872) ([Vladimir C](https://github.com/vdimir)).
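A small example of the `Date32` support in `formatDateTime` and `FROM_UNIXTIME` mentioned above; the values are illustrative only:

```sql
-- Date32 arguments (dates outside the old Date range) are now accepted.
SELECT formatDateTime(toDate32('1960-06-15'), '%Y-%m-%d');

-- FROM_UNIXTIME with an explicit format string.
SELECT FROM_UNIXTIME(1669122000, '%Y-%m-%d');
```
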

#### Build/Testing/Packaging Improvement
* Run SQLancer for each pull request and commit to master. [SQLancer](https://github.com/sqlancer/sqlancer) is an OpenSource fuzzer that focuses on automatic detection of logical bugs. [#42397](https://github.com/ClickHouse/ClickHouse/pull/42397) ([Ilya Yatsishin](https://github.com/qoega)).
* Update to latest zlib-ng. [#42463](https://github.com/ClickHouse/ClickHouse/pull/42463) ([Boris Kuschel](https://github.com/bkuschel)).
* Add support for testing ClickHouse server with Jepsen. By the way, we already have support for testing ClickHouse Keeper with Jepsen. This pull request extends it to Replicated tables. [#42619](https://github.com/ClickHouse/ClickHouse/pull/42619) ([Antonio Andelic](https://github.com/antonio2368)).
* Use https://github.com/matus-chochlik/ctcache for clang-tidy results caching. [#42913](https://github.com/ClickHouse/ClickHouse/pull/42913) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Before the fix, the user-defined config was preserved by RPM in `$file.rpmsave`. The PR fixes it and won't replace the user's files from packages. [#42936](https://github.com/ClickHouse/ClickHouse/pull/42936) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Remove some libraries from Ubuntu Docker image. [#42622](https://github.com/ClickHouse/ClickHouse/pull/42622) ([Alexey Milovidov](https://github.com/alexey-milovidov)).

#### Bug Fix (user-visible misbehavior in official stable or prestable release)

* Updated the normalizer to clone the alias AST. Resolves [#42452](https://github.com/ClickHouse/ClickHouse/issues/42452). QueryNormalizer now clones the alias AST when it is replaced; previously, assigning the same AST led to an exception in LogicalExpressionsOptimizer because the same parent was inserted again. The bug is not observed with the new analyzer (`allow_experimental_analyzer`), so no changes were needed there. A test was added. [#42827](https://github.com/ClickHouse/ClickHouse/pull/42827) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
* Fix a race for backups of tables in `Lazy` databases. [#43104](https://github.com/ClickHouse/ClickHouse/pull/43104) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix for `skip_unavailable_shards`: it did not work with the `s3Cluster` table function. [#43131](https://github.com/ClickHouse/ClickHouse/pull/43131) ([chen](https://github.com/xiedeyantu)).
* Fix schema inference in `s3Cluster` and improvement in `hdfsCluster`. [#41979](https://github.com/ClickHouse/ClickHouse/pull/41979) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix retries while reading from URL table engines / table function (retriable errors could be retried more times than needed, and non-retriable errors resulted in a failed assertion in the code). [#42224](https://github.com/ClickHouse/ClickHouse/pull/42224) ([Kseniia Sumarokova](https://github.com/kssenii)).
* A segmentation fault related to DNS & c-ares has been reported and fixed. [#42234](https://github.com/ClickHouse/ClickHouse/pull/42234) ([Arthur Passos](https://github.com/arthurpassos)).
* Fix `LOGICAL_ERROR` `Arguments of 'plus' have incorrect data types` which may happen in PK analysis (monotonicity check). Fix invalid PK analysis for monotonic binary functions with a first constant argument. [#42410](https://github.com/ClickHouse/ClickHouse/pull/42410) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix incorrect key analysis when key types cannot be inside Nullable. This fixes [#42456](https://github.com/ClickHouse/ClickHouse/issues/42456). [#42469](https://github.com/ClickHouse/ClickHouse/pull/42469) ([Amos Bird](https://github.com/amosbird)).
* Fix a typo in a setting name that led to bad usage of the schema inference cache while using the setting `input_format_csv_use_best_effort_in_schema_inference`. Closes [#41735](https://github.com/ClickHouse/ClickHouse/issues/41735). [#42536](https://github.com/ClickHouse/ClickHouse/pull/42536) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix creating a Set with a wrong header when the data type is LowCardinality. Closes [#42460](https://github.com/ClickHouse/ClickHouse/issues/42460). [#42579](https://github.com/ClickHouse/ClickHouse/pull/42579) ([flynn](https://github.com/ucasfl)).
* `(U)Int128` and `(U)Int256` values are now correctly checked in `PREWHERE`. [#42605](https://github.com/ClickHouse/ClickHouse/pull/42605) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix a bug in the functions parser that could have led to a segmentation fault. [#42724](https://github.com/ClickHouse/ClickHouse/pull/42724) ([Nikolay Degterinsky](https://github.com/evillique)).
* Fix the locking in `TRUNCATE TABLE`. [#42728](https://github.com/ClickHouse/ClickHouse/pull/42728) ([flynn](https://github.com/ucasfl)).
* Fix a possible crash in `web` disks when a file does not exist (`OPTIMIZE TABLE FINAL` could also eventually hit the same error). [#42767](https://github.com/ClickHouse/ClickHouse/pull/42767) ([Azat Khuzhin](https://github.com/azat)).
* Fix `auth_type` mapping in `system.session_log`, by including `SSL_CERTIFICATE` for the enum values. [#42782](https://github.com/ClickHouse/ClickHouse/pull/42782) ([Miel Donkers](https://github.com/mdonkers)).
* Fix stack-use-after-return under the ASAN build in the CREATE USER query parser. [#42804](https://github.com/ClickHouse/ClickHouse/pull/42804) ([Nikolay Degterinsky](https://github.com/evillique)).
* Fix `lowerUTF8`/`upperUTF8` when a symbol crosses a 16-byte boundary (a very frequent case if you have strings longer than 16 bytes). [#42812](https://github.com/ClickHouse/ClickHouse/pull/42812) ([Azat Khuzhin](https://github.com/azat)).
* An additional bound check was added to the LZ4 decompression routine to fix misbehaviour in case of malformed input. [#42868](https://github.com/ClickHouse/ClickHouse/pull/42868) ([Nikita Taranov](https://github.com/nickitat)).
* Fix a rare possible hang on query cancellation. [#42874](https://github.com/ClickHouse/ClickHouse/pull/42874) ([Azat Khuzhin](https://github.com/azat)).
* Fix incorrect behavior with multiple disjuncts in hash join, close [#42832](https://github.com/ClickHouse/ClickHouse/issues/42832). [#42876](https://github.com/ClickHouse/ClickHouse/pull/42876) ([Vladimir C](https://github.com/vdimir)).
* Fix a null pointer that could be generated when selecting `if` with an alias from a three-table join. [#42883](https://github.com/ClickHouse/ClickHouse/pull/42883) ([zzsmdfj](https://github.com/zzsmdfj)).
* Fix a memory sanitizer report in Cluster Discovery, close [#42763](https://github.com/ClickHouse/ClickHouse/issues/42763). [#42905](https://github.com/ClickHouse/ClickHouse/pull/42905) ([Vladimir C](https://github.com/vdimir)).
* Improve DateTime schema inference in the case of an empty string. [#42911](https://github.com/ClickHouse/ClickHouse/pull/42911) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix a rare NOT_FOUND_COLUMN_IN_BLOCK error when a projection could be used but no projection is available. This fixes [#42771](https://github.com/ClickHouse/ClickHouse/issues/42771). The bug was introduced in https://github.com/ClickHouse/ClickHouse/pull/25563. [#42938](https://github.com/ClickHouse/ClickHouse/pull/42938) ([Amos Bird](https://github.com/amosbird)).
* Fix ATTACH TABLE in the `PostgreSQL` database engine if the table contains the DATETIME data type. Closes [#42817](https://github.com/ClickHouse/ClickHouse/issues/42817). [#42960](https://github.com/ClickHouse/ClickHouse/pull/42960) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix lambda parsing. Closes [#41848](https://github.com/ClickHouse/ClickHouse/issues/41848). [#42979](https://github.com/ClickHouse/ClickHouse/pull/42979) ([Nikolay Degterinsky](https://github.com/evillique)).
* Fix incorrect key analysis when nullable keys appear in the middle of a hyperrectangle. This fixes [#43111](https://github.com/ClickHouse/ClickHouse/issues/43111). [#43133](https://github.com/ClickHouse/ClickHouse/pull/43133) ([Amos Bird](https://github.com/amosbird)).
* Fix several buffer over-reads in deserialization of carefully crafted aggregate function states. [#43159](https://github.com/ClickHouse/ClickHouse/pull/43159) ([Raúl Marín](https://github.com/Algunenano)).
* Fix the `if` function in the case of NULL and constant Nullable arguments. Closes [#43069](https://github.com/ClickHouse/ClickHouse/issues/43069). [#43178](https://github.com/ClickHouse/ClickHouse/pull/43178) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix decimal math overflow when parsing DateTime with the 'best effort' algorithm. Closes [#43061](https://github.com/ClickHouse/ClickHouse/issues/43061). [#43180](https://github.com/ClickHouse/ClickHouse/pull/43180) ([Kruglov Pavel](https://github.com/Avogar)).
* The `indent` field produced by the `git-import` tool was miscalculated. See https://clickhouse.com/docs/en/getting-started/example-datasets/github/. [#43191](https://github.com/ClickHouse/ClickHouse/pull/43191) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fixed unexpected behaviour of `Interval` types with subquery and casting. [#43193](https://github.com/ClickHouse/ClickHouse/pull/43193) ([jh0x](https://github.com/jh0x)).

### <a id="2210"></a> ClickHouse release 22.10, 2022-10-26

#### Backward Incompatible Change

@@ -570,7 +674,7 @@
* Support SQL standard CREATE INDEX and DROP INDEX syntax. [#35166](https://github.com/ClickHouse/ClickHouse/pull/35166) ([Jianmei Zhang](https://github.com/zhangjmruc)).
* Send profile events for INSERT queries (previously only SELECT was supported). [#37391](https://github.com/ClickHouse/ClickHouse/pull/37391) ([Azat Khuzhin](https://github.com/azat)).
* Implement in order aggregation (`optimize_aggregation_in_order`) for fully materialized projections. [#37469](https://github.com/ClickHouse/ClickHouse/pull/37469) ([Azat Khuzhin](https://github.com/azat)).
* Remove subprocess run for kerberos initialization. Added new integration test. Closes [#27651](https://github.com/ClickHouse/ClickHouse/issues/27651). [#38105](https://github.com/ClickHouse/ClickHouse/pull/38105) ([Roman Vasin](https://github.com/rvasin)).
* Remove subprocess run for Kerberos initialization. Added new integration test. Closes [#27651](https://github.com/ClickHouse/ClickHouse/issues/27651). [#38105](https://github.com/ClickHouse/ClickHouse/pull/38105) ([Roman Vasin](https://github.com/rvasin)).
* Add setting `multiple_joins_try_to_keep_original_names` to not rewrite identifier names on multiple JOINs rewrite, close [#34697](https://github.com/ClickHouse/ClickHouse/issues/34697). [#38149](https://github.com/ClickHouse/ClickHouse/pull/38149) ([Vladimir C](https://github.com/vdimir)).
* Improved trace-visualizer UX. [#38169](https://github.com/ClickHouse/ClickHouse/pull/38169) ([Sergei Trifonov](https://github.com/serxa)).
* Enable stack trace collection and query profiler for AArch64. [#38181](https://github.com/ClickHouse/ClickHouse/pull/38181) ([Maksim Kita](https://github.com/kitaisreal)).
@@ -850,8 +954,8 @@

#### Upgrade Notes

* Now, background merges, mutations and `OPTIMIZE` will not increment `SelectedRows` and `SelectedBytes` metrics. They (still) will increment `MergedRows` and `MergedUncompressedBytes` as it was before. This only affects the metric values, and makes them better. This change does not introduce any incompatibility, but you may wonder about the changes of metrics, so we put in this category. [#37040](https://github.com/ClickHouse/ClickHouse/pull/37040) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Updated the BoringSSL module to the official FIPS compliant version. This makes ClickHouse FIPS compliant. [#35914](https://github.com/ClickHouse/ClickHouse/pull/35914) ([Meena-Renganathan](https://github.com/Meena-Renganathan)). The ciphers `aes-192-cfb128` and `aes-256-cfb128` were removed, because they are not included in the FIPS certified version of BoringSSL.
* Now, background merges, mutations, and `OPTIMIZE` will not increment `SelectedRows` and `SelectedBytes` metrics. They (still) will increment `MergedRows` and `MergedUncompressedBytes` as it was before. This only affects the metric values and makes them better. This change does not introduce any incompatibility, but you may wonder about the changes to the metrics, so we put in this category. [#37040](https://github.com/ClickHouse/ClickHouse/pull/37040) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Updated the BoringSSL module to the official FIPS compliant version. This makes ClickHouse FIPS compliant in this area. [#35914](https://github.com/ClickHouse/ClickHouse/pull/35914) ([Meena-Renganathan](https://github.com/Meena-Renganathan)). The ciphers `aes-192-cfb128` and `aes-256-cfb128` were removed, because they are not included in the FIPS certified version of BoringSSL.
* `max_memory_usage` setting is removed from the default user profile in `users.xml`. This enables flexible memory limits for queries instead of the old rigid limit of 10 GB.
* Disable `log_query_threads` setting by default. It controls the logging of statistics about every thread participating in query execution. After supporting asynchronous reads, the total number of distinct thread ids became too large, and logging into the `query_thread_log` has become too heavy. [#37077](https://github.com/ClickHouse/ClickHouse/pull/37077) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Remove function `groupArraySorted` which has a bug. [#36822](https://github.com/ClickHouse/ClickHouse/pull/36822) ([Alexey Milovidov](https://github.com/alexey-milovidov)).

@@ -10,6 +10,7 @@ The following versions of ClickHouse server are currently being supported with s

| Version | Supported |
|:-|:-|
| 22.11 | ✔️ |
| 22.10 | ✔️ |
| 22.9 | ✔️ |
| 22.8 | ✔️ |

@@ -2,11 +2,11 @@

# NOTE: has nothing common with DBMS_TCP_PROTOCOL_VERSION,
# only DBMS_TCP_PROTOCOL_VERSION should be incremented on protocol changes.
SET(VERSION_REVISION 54468)
SET(VERSION_REVISION 54469)
SET(VERSION_MAJOR 22)
SET(VERSION_MINOR 11)
SET(VERSION_MINOR 12)
SET(VERSION_PATCH 1)
SET(VERSION_GITHASH 98ab5a3c189232ea2a3dddb9d2be7196ae8b3434)
SET(VERSION_DESCRIBE v22.11.1.1-testing)
SET(VERSION_STRING 22.11.1.1)
SET(VERSION_GITHASH 0d211ed19849fe44b0e43fdebe2c15d76d560a77)
SET(VERSION_DESCRIBE v22.12.1.1-testing)
SET(VERSION_STRING 22.12.1.1)
# end of autochange

@@ -33,7 +33,7 @@ RUN arch=${TARGETARCH:-amd64} \

# lts / testing / prestable / etc
ARG REPO_CHANNEL="stable"
ARG REPOSITORY="https://packages.clickhouse.com/tgz/${REPO_CHANNEL}"
ARG VERSION="22.10.2.11"
ARG VERSION="22.11.1.1360"
ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static"

# user/group precreated explicitly with fixed uid/gid on purpose.

@@ -21,7 +21,7 @@ RUN sed -i "s|http://archive.ubuntu.com|${apt_archive}|g" /etc/apt/sources.list

ARG REPO_CHANNEL="stable"
ARG REPOSITORY="deb https://packages.clickhouse.com/deb ${REPO_CHANNEL} main"
ARG VERSION="22.10.2.11"
ARG VERSION="22.11.1.1360"
ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static"

# set non-empty deb_location_url url to create a docker image

@@ -388,6 +388,9 @@ else
rm -f /etc/clickhouse-server/config.d/storage_conf.xml ||:
rm -f /etc/clickhouse-server/config.d/azure_storage_conf.xml ||:

# it uses recently introduced settings which previous versions may not have
rm -f /etc/clickhouse-server/users.d/insert_keeper_retries.xml ||:

start

clickhouse-client --query="SELECT 'Server version: ', version()"

docs/changelogs/v22.11.1.1360-stable.md (new file, 249 lines)

@@ -0,0 +1,249 @@
---
sidebar_position: 1
sidebar_label: 2022
---

# 2022 Changelog

### ClickHouse release v22.11.1.1360-stable (0d211ed1984) FIXME as compared to v22.10.1.1877-stable (98ab5a3c189)

#### Backward Incompatible Change
* JSONExtract family of functions will now attempt to coerce to the requested type. [#41502](https://github.com/ClickHouse/ClickHouse/pull/41502) ([Márcio Martins](https://github.com/marcioapm)).

#### New Feature
* Add function `displayName`, closes [#36770](https://github.com/ClickHouse/ClickHouse/issues/36770). [#37681](https://github.com/ClickHouse/ClickHouse/pull/37681) ([hongbin](https://github.com/xlwh)).
* Added applied row-level policies to `system.query_log`. [#39819](https://github.com/ClickHouse/ClickHouse/pull/39819) ([Vladimir Chebotaryov](https://github.com/quickhouse)).
* Add Hudi and DeltaLake table engines, read-only, only for tables on S3. [#41054](https://github.com/ClickHouse/ClickHouse/pull/41054) ([Daniil Rubin](https://github.com/rubin-do)).
* Add 4LW command `csnp` for manually creating snapshots. Additionally, `lgif` was added to get Raft information for a specific node (e.g. index of last created snapshot, last committed log index). [#41766](https://github.com/ClickHouse/ClickHouse/pull/41766) ([JackyWoo](https://github.com/JackyWoo)).
* Support for Keeper request retries during inserts into replicated merge trees. Apart from fault tolerance, it aims to provide a better user experience: avoid returning an error to the user during an insert if Keeper is restarted (for example, due to an upgrade). [#42607](https://github.com/ClickHouse/ClickHouse/pull/42607) ([Igor Nikonov](https://github.com/devcrafter)).
* Add function `ascii` as in Spark: https://spark.apache.org/docs/latest/api/sql/#ascii. [#42670](https://github.com/ClickHouse/ClickHouse/pull/42670) ([李扬](https://github.com/taiyang-li)).
* Add function `pmod`, which returns a non-negative result of the modulo operation. [#42755](https://github.com/ClickHouse/ClickHouse/pull/42755) ([李扬](https://github.com/taiyang-li)).
* Published function `formatReadableDecimalSize`. [#42774](https://github.com/ClickHouse/ClickHouse/pull/42774) ([Alejandro](https://github.com/alexon1234)).
* Added S3 PUT and GET request-per-second rate throttling. The settings `s3_max_get_rps`, `s3_max_get_burst`, `s3_max_put_rps`, `s3_max_put_burst` are used to configure the token bucket throttler. Can be used with both S3 ObjectStorage and the S3 table function. Different limits can be configured for different S3 disks or endpoints. [#43014](https://github.com/ClickHouse/ClickHouse/pull/43014) ([Sergei Trifonov](https://github.com/serxa)).
* Add table functions `hudi` and `deltaLake`. [#43080](https://github.com/ClickHouse/ClickHouse/pull/43080) ([flynn](https://github.com/ucasfl)).
* Add function `factorial`, as in Impala or Spark. [#43110](https://github.com/ClickHouse/ClickHouse/pull/43110) ([李扬](https://github.com/taiyang-li)).
* Add function `randCanonical`, which is similar to the `rand` function in Spark or Impala. The function generates pseudo-random results that are independently and identically uniformly distributed in [0, 1). [#43124](https://github.com/ClickHouse/ClickHouse/pull/43124) ([李扬](https://github.com/taiyang-li)).

#### Performance Improvement
* Currently, the only saturable operators are And and Or, and their code paths are affected by this change. [#42214](https://github.com/ClickHouse/ClickHouse/pull/42214) ([Zhiguo Zhou](https://github.com/ZhiguoZh)).
* `match` function can use the index if it's a condition on string prefix. This closes [#37333](https://github.com/ClickHouse/ClickHouse/issues/37333). [#42458](https://github.com/ClickHouse/ClickHouse/pull/42458) ([clarkcaoliu](https://github.com/Clark0)).
* Fixed slowness in JSONExtract with LowCardinality(String) tuples. [#42761](https://github.com/ClickHouse/ClickHouse/pull/42761) ([AlfVII](https://github.com/AlfVII)).
* Support parallel parsing for LineAsString input format. This improves performance just slightly. This closes [#42502](https://github.com/ClickHouse/ClickHouse/issues/42502). [#42780](https://github.com/ClickHouse/ClickHouse/pull/42780) ([Kruglov Pavel](https://github.com/Avogar)).
* Keeper performance improvement: improve commit performance for cases when many different nodes have uncommitted states. This should help with cases when a follower node can't sync fast enough. [#42926](https://github.com/ClickHouse/ClickHouse/pull/42926) ([Antonio Andelic](https://github.com/antonio2368)).
* Parallelized merging of `uniqExact` states for aggregation without a key, i.e. queries like `SELECT uniqExact(number) FROM table`. The improvement becomes noticeable when the number of unique keys approaches 10^6. Also `uniq` performance is slightly optimized. This closes [#4510](https://github.com/ClickHouse/ClickHouse/issues/4510). [#43072](https://github.com/ClickHouse/ClickHouse/pull/43072) ([Nikita Taranov](https://github.com/nickitat)).

#### Improvement
* Support type `Object` inside other types, e.g. `Array(JSON)`. [#36969](https://github.com/ClickHouse/ClickHouse/pull/36969) ([Anton Popov](https://github.com/CurtizJ)).
* Remove covered parts for a fetched part (to avoid possible growth of replication delay). [#39737](https://github.com/ClickHouse/ClickHouse/pull/39737) ([Azat Khuzhin](https://github.com/azat)).
* ClickHouse Client and ClickHouse Local will show progress by default even in non-interactive mode. If `/dev/tty` is available, the progress will be rendered directly to the terminal, without writing to stderr. It allows getting progress even if stderr is redirected to a file, and the file will not be polluted by terminal escape sequences. The progress can be disabled by `--progress false`. This closes [#32238](https://github.com/ClickHouse/ClickHouse/issues/32238). [#42003](https://github.com/ClickHouse/ClickHouse/pull/42003) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* 1. Add, subtract and negate operations are now available on Intervals; if the Interval types differ, they are transformed into a Tuple of those types. 2. A tuple of intervals can be added to or subtracted from a Date/DateTime field. 3. Added parsing of Intervals with different types, for example: `INTERVAL '1 HOUR 1 MINUTE 1 SECOND'`. [#42195](https://github.com/ClickHouse/ClickHouse/pull/42195) ([Nikolay Degterinsky](https://github.com/evillique)).
* Add `notLike` to the key condition atom map, so a condition like `NOT LIKE 'prefix%'` can use the primary index. [#42209](https://github.com/ClickHouse/ClickHouse/pull/42209) ([Duc Canh Le](https://github.com/canhld94)).
* Add support for FixedString input to base64 coding functions. [#42285](https://github.com/ClickHouse/ClickHouse/pull/42285) ([ltrk2](https://github.com/ltrk2)).
* Add columns `bytes_on_disk` and `path` to `system.detached_parts`. Closes [#42264](https://github.com/ClickHouse/ClickHouse/issues/42264). [#42303](https://github.com/ClickHouse/ClickHouse/pull/42303) ([chen](https://github.com/xiedeyantu)).
* Improve using the structure from the insertion table in table functions: the setting `use_structure_from_insertion_table_in_table_functions` has a new possible value, `2`, which means that ClickHouse will automatically determine whether the structure from the insertion table can be used. Closes [#40028](https://github.com/ClickHouse/ClickHouse/issues/40028). [#42320](https://github.com/ClickHouse/ClickHouse/pull/42320) ([Kruglov Pavel](https://github.com/Avogar)).
* Added `**` glob support for recursive directory traversal of the filesystem and S3. Resolves [#36316](https://github.com/ClickHouse/ClickHouse/issues/36316). [#42376](https://github.com/ClickHouse/ClickHouse/pull/42376) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
* Mask passwords and secret keys both in `system.query_log` and `/var/log/clickhouse-server/*.log` and also in error messages. [#42484](https://github.com/ClickHouse/ClickHouse/pull/42484) ([Vitaly Baranov](https://github.com/vitlibar)).
* Add a new variable called `limit` to query_info, indicating whether this query is a trivial LIMIT query. If so, the approximate total rows is adjusted for later estimation. Closes [#7071](https://github.com/ClickHouse/ClickHouse/issues/7071). [#42580](https://github.com/ClickHouse/ClickHouse/pull/42580) ([Han Fei](https://github.com/hanfei1991)).
* Implement `ATTACH` of `MergeTree` table for `s3_plain` disk (plus some fixes for `s3_plain`). [#42628](https://github.com/ClickHouse/ClickHouse/pull/42628) ([Azat Khuzhin](https://github.com/azat)).
* Fix missing progress indication on INSERT FROM INFILE. Closes [#42548](https://github.com/ClickHouse/ClickHouse/issues/42548). [#42634](https://github.com/ClickHouse/ClickHouse/pull/42634) ([chen](https://github.com/xiedeyantu)).
* Add `min_age_to_force_merge_on_partition_only` setting to optimize old parts for the entire partition only. [#42659](https://github.com/ClickHouse/ClickHouse/pull/42659) ([Antonio Andelic](https://github.com/antonio2368)).
* Throttling algorithm changed to token bucket. [#42665](https://github.com/ClickHouse/ClickHouse/pull/42665) ([Sergei Trifonov](https://github.com/serxa)).
* Refactor FunctionTokens to allow limiting the maximum number of tokens returned by related functions (disabled by default). [#42673](https://github.com/ClickHouse/ClickHouse/pull/42673) ([李扬](https://github.com/taiyang-li)).
* Added a new field `allow_readonly` to `system.table_functions`, allowing table functions to be used in readonly mode. Resolves [#42414](https://github.com/ClickHouse/ClickHouse/issues/42414). A test was added (tests/queries/0_stateless/02473_functions_in_readonly_mode.sh) and the English documentation for table functions was updated. [#42708](https://github.com/ClickHouse/ClickHouse/pull/42708) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
* Allow using Date32 arguments for formatDateTime and FROM_UNIXTIME functions. [#42737](https://github.com/ClickHouse/ClickHouse/pull/42737) ([Roman Vasin](https://github.com/rvasin)).
* Update tzdata to 2022f. Mexico will no longer observe DST except near the US border: https://www.timeanddate.com/news/time/mexico-abolishes-dst-2022.html. Chihuahua moves to year-round UTC-6 on 2022-10-30. Fiji no longer observes DST. See https://github.com/google/cctz/pull/235 and https://bugs.launchpad.net/ubuntu/+source/tzdata/+bug/1995209. [#42796](https://github.com/ClickHouse/ClickHouse/pull/42796) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Add `FailedAsyncInsertQuery` event metric for async inserts. [#42814](https://github.com/ClickHouse/ClickHouse/pull/42814) ([Krzysztof Góralski](https://github.com/kgoralski)).
* Implement `read-in-order` optimization on top of query plan. It is enabled by default. Set `query_plan_read_in_order = 0` to use previous AST-based version. [#42829](https://github.com/ClickHouse/ClickHouse/pull/42829) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Increase the size of upload parts exponentially for backup to S3. [#42833](https://github.com/ClickHouse/ClickHouse/pull/42833) ([Vitaly Baranov](https://github.com/vitlibar)).
* Previously, when merge tasks were continuously busy and disk space was insufficient, completely expired parts could not be selected and dropped, leaving disk space exhausted. Now, dropping an entirely expired part does not require reserving additional disk space, which ensures that TTL can be applied normally. [#42869](https://github.com/ClickHouse/ClickHouse/pull/42869) ([zhongyuankai](https://github.com/zhongyuankai)).
* Bug fix: ignore MySQL binlog SAVEPOINT events ([#42856](https://github.com/ClickHouse/ClickHouse/issues/42856)). [#42931](https://github.com/ClickHouse/ClickHouse/pull/42931) ([zzsmdfj](https://github.com/zzsmdfj)).
* Add support for interactive parameters in INSERT VALUES queries. [#43077](https://github.com/ClickHouse/ClickHouse/pull/43077) ([Nikolay Degterinsky](https://github.com/evillique)).
* Add generic implementation for arbitrary structured named collections, access type and system.named_collections. [#43147](https://github.com/ClickHouse/ClickHouse/pull/43147) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Add the `oss` function and StorageOSS for convenience. OSS is fully compatible with S3. [#43155](https://github.com/ClickHouse/ClickHouse/pull/43155) ([zzsmdfj](https://github.com/zzsmdfj)).
* Improve error reporting in the collection of OS-related info for the `system.asynchronous_metrics` table. [#43192](https://github.com/ClickHouse/ClickHouse/pull/43192) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* The `system.asynchronous_metrics` gets embedded documentation. This documentation is also exported to Prometheus. Fixed an error with the metrics about `cache` disks - they were calculated only for one arbitrary cache disk instead of all of them. This closes [#7644](https://github.com/ClickHouse/ClickHouse/issues/7644). [#43194](https://github.com/ClickHouse/ClickHouse/pull/43194) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Modify the `INFORMATION_SCHEMA` tables so that ClickHouse can connect to itself using the MySQL compatibility protocol. Add columns instead of aliases (related to [#9769](https://github.com/ClickHouse/ClickHouse/issues/9769)). It will improve the compatibility with various MySQL clients. [#43198](https://github.com/ClickHouse/ClickHouse/pull/43198) ([Filatenkov Artur](https://github.com/FArthur-cmd)).
* Disable `deltaLake` and `hudi` table functions in readonly mode. [#43316](https://github.com/ClickHouse/ClickHouse/pull/43316) ([Antonio Andelic](https://github.com/antonio2368)).

#### Bug Fix
* Updated the normalizer to clone the alias AST. Resolves [#42452](https://github.com/ClickHouse/ClickHouse/issues/42452). QueryNormalizer now clones the alias AST when it is replaced; previously, assigning the same AST led to an exception in LogicalExpressionsOptimizer because the same parent was inserted again. The bug is not observed with the new analyzer (allow_experimental_analyzer), so no changes were needed there. A test was added. [#42827](https://github.com/ClickHouse/ClickHouse/pull/42827) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
* Fix race for backup of tables in Lazy databases. [#43104](https://github.com/ClickHouse/ClickHouse/pull/43104) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix `skip_unavailable_shards` not working with the s3Cluster table function. [#43131](https://github.com/ClickHouse/ClickHouse/pull/43131) ([chen](https://github.com/xiedeyantu)).

#### Build/Testing/Packaging Improvement
* Run SQLancer for each pull request and commit to master. [SQLancer](https://github.com/sqlancer/sqlancer) is an OpenSource fuzzer that focuses on automatic detection of logical bugs. [#42397](https://github.com/ClickHouse/ClickHouse/pull/42397) ([Ilya Yatsishin](https://github.com/qoega)).
* Update to latest zlib-ng. [#42463](https://github.com/ClickHouse/ClickHouse/pull/42463) ([Boris Kuschel](https://github.com/bkuschel)).
* Use llvm `l64.lld` in macOS to suppress ld warnings, close [#42282](https://github.com/ClickHouse/ClickHouse/issues/42282). [#42470](https://github.com/ClickHouse/ClickHouse/pull/42470) ([Lloyd-Pottiger](https://github.com/Lloyd-Pottiger)).
* Add support for testing ClickHouse server with Jepsen. By the way, we already have support for testing ClickHouse Keeper with Jepsen. This pull request extends it to Replicated tables. [#42619](https://github.com/ClickHouse/ClickHouse/pull/42619) ([Antonio Andelic](https://github.com/antonio2368)).
* Improve the bugfix validation check: fix a bug with skipping the check, report a separate status in CI, and run it after the label and style checks. Close [#40349](https://github.com/ClickHouse/ClickHouse/issues/40349). [#42702](https://github.com/ClickHouse/ClickHouse/pull/42702) ([Vladimir C](https://github.com/vdimir)).
* Wait until all files are in sync before archiving them in integration tests. [#42891](https://github.com/ClickHouse/ClickHouse/pull/42891) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Use https://github.com/matus-chochlik/ctcache for clang-tidy results caching. [#42913](https://github.com/ClickHouse/ClickHouse/pull/42913) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Before the fix, the user-defined config was preserved by RPM in `$file.rpmsave`. The PR fixes it and won't replace the user's files from packages. [#42936](https://github.com/ClickHouse/ClickHouse/pull/42936) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Add a CI step to mark commits as ready for release; soft-forbid launching a release script from branches other than master. [#43017](https://github.com/ClickHouse/ClickHouse/pull/43017) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).

#### Bug Fix (user-visible misbehavior in official stable or prestable release)

* Fix schema inference in s3Cluster and improve it in hdfsCluster. [#41979](https://github.com/ClickHouse/ClickHouse/pull/41979) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix retries while reading from http table engines / table function (retriable errors could be retried more times than needed, non-retriable errors resulted in a failed assertion in the code). [#42224](https://github.com/ClickHouse/ClickHouse/pull/42224) ([Kseniia Sumarokova](https://github.com/kssenii)).
* A segmentation fault related to DNS & c-ares has been reported. The error occurred in multiple threads; the stack trace pointed to `Poco::Net::IPAddress::family()` (version 22.8.5.29, official build): "Received signal Segmentation fault (11). Address: 0xf Access: write. Address not mapped to object." [#42234](https://github.com/ClickHouse/ClickHouse/pull/42234) ([Arthur Passos](https://github.com/arthurpassos)).
* Fix `LOGICAL_ERROR` `Arguments of 'plus' have incorrect data types` which may happen in PK analysis (monotonicity check). Fix invalid PK analysis for monotonic binary functions with first constant argument. [#42410](https://github.com/ClickHouse/ClickHouse/pull/42410) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix incorrect key analysis when key types cannot be inside Nullable. This fixes [#42456](https://github.com/ClickHouse/ClickHouse/issues/42456). [#42469](https://github.com/ClickHouse/ClickHouse/pull/42469) ([Amos Bird](https://github.com/amosbird)).
* Fix typo in setting name that led to bad usage of schema inference cache while using setting `input_format_csv_use_best_effort_in_schema_inference`. Closes [#41735](https://github.com/ClickHouse/ClickHouse/issues/41735). [#42536](https://github.com/ClickHouse/ClickHouse/pull/42536) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix creating a Set with a wrong header when the data type is LowCardinality. Closes [#42460](https://github.com/ClickHouse/ClickHouse/issues/42460). [#42579](https://github.com/ClickHouse/ClickHouse/pull/42579) ([flynn](https://github.com/ucasfl)).
* `(U)Int128` and `(U)Int256` values are correctly checked in `PREWHERE`. [#42605](https://github.com/ClickHouse/ClickHouse/pull/42605) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix a bug in ParserFunction that could have led to a segmentation fault. [#42724](https://github.com/ClickHouse/ClickHouse/pull/42724) ([Nikolay Degterinsky](https://github.com/evillique)).
* Fix TRUNCATE TABLE not holding the lock correctly. [#42728](https://github.com/ClickHouse/ClickHouse/pull/42728) ([flynn](https://github.com/ucasfl)).
* Fix possible SIGSEGV for web disks when a file does not exist (`OPTIMIZE TABLE FINAL` could also eventually hit the same error). [#42767](https://github.com/ClickHouse/ClickHouse/pull/42767) ([Azat Khuzhin](https://github.com/azat)).
* Fix `auth_type` mapping in `system.session_log`, by including `SSL_CERTIFICATE` for the enum values. [#42782](https://github.com/ClickHouse/ClickHouse/pull/42782) ([Miel Donkers](https://github.com/mdonkers)).
* Fix stack-use-after-return under ASAN build in ParserCreateUserQuery. [#42804](https://github.com/ClickHouse/ClickHouse/pull/42804) ([Nikolay Degterinsky](https://github.com/evillique)).
* Fix lowerUTF8()/upperUTF8() when a symbol crosses a 16-byte boundary (a very frequent case if you have strings longer than 16 bytes). [#42812](https://github.com/ClickHouse/ClickHouse/pull/42812) ([Azat Khuzhin](https://github.com/azat)).
* An additional bound check was added to the lz4 decompression routine to fix misbehaviour in case of malformed input. [#42868](https://github.com/ClickHouse/ClickHouse/pull/42868) ([Nikita Taranov](https://github.com/nickitat)).
* Fix a rare possible hang on query cancellation. [#42874](https://github.com/ClickHouse/ClickHouse/pull/42874) ([Azat Khuzhin](https://github.com/azat)).
* Fix incorrect saved_block_sample with multiple disjuncts in hash join, close [#42832](https://github.com/ClickHouse/ClickHouse/issues/42832). [#42876](https://github.com/ClickHouse/ClickHouse/pull/42876) ([Vladimir C](https://github.com/vdimir)).
* Fix a null pointer that could be generated when selecting `if` with an alias from a three-table join. [#42883](https://github.com/ClickHouse/ClickHouse/pull/42883) ([zzsmdfj](https://github.com/zzsmdfj)).
* Fix memory sanitizer report in ClusterDiscovery, close [#42763](https://github.com/ClickHouse/ClickHouse/issues/42763). [#42905](https://github.com/ClickHouse/ClickHouse/pull/42905) ([Vladimir C](https://github.com/vdimir)).
* Fix DateTime schema inference in the case of an empty string. [#42911](https://github.com/ClickHouse/ClickHouse/pull/42911) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix rare NOT_FOUND_COLUMN_IN_BLOCK error when a projection could be used but no projection is available. This fixes [#42771](https://github.com/ClickHouse/ClickHouse/issues/42771). The bug was introduced in https://github.com/ClickHouse/ClickHouse/pull/25563. [#42938](https://github.com/ClickHouse/ClickHouse/pull/42938) ([Amos Bird](https://github.com/amosbird)).
* Fixes for the s3_plain disk that allow attaching Wide parts. [#42950](https://github.com/ClickHouse/ClickHouse/pull/42950) ([Azat Khuzhin](https://github.com/azat)).
* Fix ATTACH TABLE in the PostgreSQL database engine if the table contains the DATETIME data type. Closes [#42817](https://github.com/ClickHouse/ClickHouse/issues/42817). [#42960](https://github.com/ClickHouse/ClickHouse/pull/42960) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix lambda parsing. Closes [#41848](https://github.com/ClickHouse/ClickHouse/issues/41848). [#42979](https://github.com/ClickHouse/ClickHouse/pull/42979) ([Nikolay Degterinsky](https://github.com/evillique)).
* Handle (ignore) SAVEPOINT queries in MaterializedMySQL. [#43086](https://github.com/ClickHouse/ClickHouse/pull/43086) ([Stig Bakken](https://github.com/stigsb)).
* Fix incorrect key analysis when nullable keys appear in the middle of a hyperrectangle. This fixes [#43111](https://github.com/ClickHouse/ClickHouse/issues/43111). [#43133](https://github.com/ClickHouse/ClickHouse/pull/43133) ([Amos Bird](https://github.com/amosbird)).
* Fix several buffer over-reads. [#43159](https://github.com/ClickHouse/ClickHouse/pull/43159) ([Raúl Marín](https://github.com/Algunenano)).
* Fix the `if` function in the case of NULL and constant Nullable arguments. Closes [#43069](https://github.com/ClickHouse/ClickHouse/issues/43069). [#43178](https://github.com/ClickHouse/ClickHouse/pull/43178) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix decimal math overflow when parsing DateTime with the 'best effort' algorithm. Closes [#43061](https://github.com/ClickHouse/ClickHouse/issues/43061). [#43180](https://github.com/ClickHouse/ClickHouse/pull/43180) ([Kruglov Pavel](https://github.com/Avogar)).
* The `indent` field produced by the `git-import` tool was miscalculated. See https://clickhouse.com/docs/en/getting-started/example-datasets/github/. [#43191](https://github.com/ClickHouse/ClickHouse/pull/43191) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fixed unexpected behaviour of Interval types with subquery and casting. [#43193](https://github.com/ClickHouse/ClickHouse/pull/43193) ([jh0x](https://github.com/jh0x)).
* Fix logical error in `sumMap/minMap/maxMap` functions executing `TOTALS/ROLLUP/CUBE` on `NULL` values. Close [#43022](https://github.com/ClickHouse/ClickHouse/issues/43022). [#43232](https://github.com/ClickHouse/ClickHouse/pull/43232) ([Vladimir C](https://github.com/vdimir)).
* Fix UBSan in AggregateFunctionMinMaxAny::read with high sizes. [#43249](https://github.com/ClickHouse/ClickHouse/pull/43249) ([Raúl Marín](https://github.com/Algunenano)).
* Fix IS (NOT) NULL operator priority in regard to other operators. [#43265](https://github.com/ClickHouse/ClickHouse/pull/43265) ([Nikolay Degterinsky](https://github.com/evillique)).

#### Build Improvement

* Add support for the IPv6 format on s390x. [#42412](https://github.com/ClickHouse/ClickHouse/pull/42412) ([Suzy Wang](https://github.com/SuzyWangIBMer)).

#### NO CL ENTRY

* NO CL ENTRY: 'Revert "Sonar Cloud Workflow"'. [#42725](https://github.com/ClickHouse/ClickHouse/pull/42725) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* NO CL ENTRY: 'Revert " Keeper retries during insert (clean)"'. [#43116](https://github.com/ClickHouse/ClickHouse/pull/43116) ([Alexander Tokmakov](https://github.com/tavplubix)).
* NO CL ENTRY: 'Revert "Revert " Keeper retries during insert (clean)""'. [#43122](https://github.com/ClickHouse/ClickHouse/pull/43122) ([Igor Nikonov](https://github.com/devcrafter)).
* NO CL ENTRY: 'Revert "Optimize TTL merge, completely expired parts can be removed in time"'. [#43134](https://github.com/ClickHouse/ClickHouse/pull/43134) ([Alexander Tokmakov](https://github.com/tavplubix)).
* NO CL ENTRY: 'Revert "Randomize keeper fault injection settings in stress tests"'. [#43218](https://github.com/ClickHouse/ClickHouse/pull/43218) ([Alexander Gololobov](https://github.com/davenger)).
* NO CL ENTRY: 'Revert "S3 request per second rate throttling"'. [#43306](https://github.com/ClickHouse/ClickHouse/pull/43306) ([Alexander Tokmakov](https://github.com/tavplubix)).

#### NOT FOR CHANGELOG / INSIGNIFICANT

* Better logging for docs builder [#41903](https://github.com/ClickHouse/ClickHouse/pull/41903) ([filimonov](https://github.com/filimonov)).
* Save full server log in AST Fuzzer checks [#42316](https://github.com/ClickHouse/ClickHouse/pull/42316) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Build with libcxx(abi) 15 [#42513](https://github.com/ClickHouse/ClickHouse/pull/42513) ([Robert Schulze](https://github.com/rschu1ze)).
* Sonar Cloud Workflow [#42534](https://github.com/ClickHouse/ClickHouse/pull/42534) ([Julio Jimenez](https://github.com/juliojimenez)).
* Invalid type in where for Merge table (logical error) [#42576](https://github.com/ClickHouse/ClickHouse/pull/42576) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix frequent memory drift message and clarify things in comments [#42582](https://github.com/ClickHouse/ClickHouse/pull/42582) ([Azat Khuzhin](https://github.com/azat)).
* Add functions for PowerBI connect [#42612](https://github.com/ClickHouse/ClickHouse/pull/42612) ([Filatenkov Artur](https://github.com/FArthur-cmd)).
* Try to save `IDataPartStorage` interface [#42618](https://github.com/ClickHouse/ClickHouse/pull/42618) ([Anton Popov](https://github.com/CurtizJ)).
* Remove Ubuntu cruft [#42622](https://github.com/ClickHouse/ClickHouse/pull/42622) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Analyzer change setting into allow_experimental_analyzer [#42649](https://github.com/ClickHouse/ClickHouse/pull/42649) ([Maksim Kita](https://github.com/kitaisreal)).
* Analyzer IQueryTreeNode remove getName method [#42651](https://github.com/ClickHouse/ClickHouse/pull/42651) ([Maksim Kita](https://github.com/kitaisreal)).
* Minor fix iotest_nonblock build [#42658](https://github.com/ClickHouse/ClickHouse/pull/42658) ([Jordi Villar](https://github.com/jrdi)).
* Add tests and doc for some url-related functions [#42664](https://github.com/ClickHouse/ClickHouse/pull/42664) ([Vladimir C](https://github.com/vdimir)).
* Update version_date.tsv and changelogs after v22.10.1.1875-stable [#42676](https://github.com/ClickHouse/ClickHouse/pull/42676) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Fix error handling in clickhouse_helper.py [#42678](https://github.com/ClickHouse/ClickHouse/pull/42678) ([Ilya Yatsishin](https://github.com/qoega)).
* Fix execution of version_helper.py to use git tweaks [#42679](https://github.com/ClickHouse/ClickHouse/pull/42679) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* MergeTree indexes use RPNBuilderTree [#42681](https://github.com/ClickHouse/ClickHouse/pull/42681) ([Maksim Kita](https://github.com/kitaisreal)).
* Always run `BuilderReport` and `BuilderSpecialReport` in all CI types [#42684](https://github.com/ClickHouse/ClickHouse/pull/42684) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Support optimize_syntax_fuse_functions for sum/count/avg via analyzer [#42689](https://github.com/ClickHouse/ClickHouse/pull/42689) ([Vladimir C](https://github.com/vdimir)).
* Update version after release [#42699](https://github.com/ClickHouse/ClickHouse/pull/42699) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Update version_date.tsv and changelogs after v22.10.1.1877-stable [#42700](https://github.com/ClickHouse/ClickHouse/pull/42700) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* OrderByLimitByDuplicateEliminationPass improve performance [#42704](https://github.com/ClickHouse/ClickHouse/pull/42704) ([Maksim Kita](https://github.com/kitaisreal)).
* Analyzer improve subqueries representation [#42705](https://github.com/ClickHouse/ClickHouse/pull/42705) ([Maksim Kita](https://github.com/kitaisreal)).
* Update version_date.tsv and changelogs after v22.9.4.32-stable [#42712](https://github.com/ClickHouse/ClickHouse/pull/42712) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Update version_date.tsv and changelogs after v22.8.7.34-lts [#42713](https://github.com/ClickHouse/ClickHouse/pull/42713) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Update version_date.tsv and changelogs after v22.7.7.24-stable [#42714](https://github.com/ClickHouse/ClickHouse/pull/42714) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Move SonarCloud Job to nightly [#42718](https://github.com/ClickHouse/ClickHouse/pull/42718) ([Julio Jimenez](https://github.com/juliojimenez)).
* Update version_date.tsv and changelogs after v22.8.8.3-lts [#42738](https://github.com/ClickHouse/ClickHouse/pull/42738) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Minor fix implicit cast CaresPTRResolver [#42747](https://github.com/ClickHouse/ClickHouse/pull/42747) ([Jordi Villar](https://github.com/jrdi)).
* Fix build on master [#42752](https://github.com/ClickHouse/ClickHouse/pull/42752) ([Igor Nikonov](https://github.com/devcrafter)).
* Update version_date.tsv and changelogs after v22.3.14.18-lts [#42759](https://github.com/ClickHouse/ClickHouse/pull/42759) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Fix anchor links [#42760](https://github.com/ClickHouse/ClickHouse/pull/42760) ([Sergei Trifonov](https://github.com/serxa)).
* Update version_date.tsv and changelogs after v22.3.14.23-lts [#42764](https://github.com/ClickHouse/ClickHouse/pull/42764) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Update README.md [#42783](https://github.com/ClickHouse/ClickHouse/pull/42783) ([Yuko Takagi](https://github.com/yukotakagi)).
* Slightly better code with projections [#42794](https://github.com/ClickHouse/ClickHouse/pull/42794) ([Anton Popov](https://github.com/CurtizJ)).
* Fix some races in MergeTree [#42805](https://github.com/ClickHouse/ClickHouse/pull/42805) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Fix typo in comments [#42809](https://github.com/ClickHouse/ClickHouse/pull/42809) ([Gabriel](https://github.com/Gabriel39)).
* Fix compilation of LLVM with cmake cache [#42816](https://github.com/ClickHouse/ClickHouse/pull/42816) ([Azat Khuzhin](https://github.com/azat)).
* Fix link in docs [#42821](https://github.com/ClickHouse/ClickHouse/pull/42821) ([Sergei Trifonov](https://github.com/serxa)).
* Link to proper place in docs [#42822](https://github.com/ClickHouse/ClickHouse/pull/42822) ([Sergei Trifonov](https://github.com/serxa)).
* Fix argument type check in AggregateFunctionAnalysisOfVariance [#42823](https://github.com/ClickHouse/ClickHouse/pull/42823) ([Vladimir C](https://github.com/vdimir)).
* Tests/lambda analyzer [#42824](https://github.com/ClickHouse/ClickHouse/pull/42824) ([Denny Crane](https://github.com/den-crane)).
* Fix Missing Quotes - Sonar Nightly [#42831](https://github.com/ClickHouse/ClickHouse/pull/42831) ([Julio Jimenez](https://github.com/juliojimenez)).
* Add exclusions from the Snyk scan [#42834](https://github.com/ClickHouse/ClickHouse/pull/42834) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix Missing Env Vars - Sonar Nightly [#42843](https://github.com/ClickHouse/ClickHouse/pull/42843) ([Julio Jimenez](https://github.com/juliojimenez)).
* Fix typo [#42855](https://github.com/ClickHouse/ClickHouse/pull/42855) ([GoGoWen](https://github.com/GoGoWen)).
* Add timezone to 02458_datediff_date32 [#42857](https://github.com/ClickHouse/ClickHouse/pull/42857) ([Vladimir C](https://github.com/vdimir)).
* Adjust cancel and rerun workflow names to the actual [#42862](https://github.com/ClickHouse/ClickHouse/pull/42862) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Analyzer subquery in JOIN TREE with aggregation [#42865](https://github.com/ClickHouse/ClickHouse/pull/42865) ([Maksim Kita](https://github.com/kitaisreal)).
* Fix getauxval for sanitizer builds [#42866](https://github.com/ClickHouse/ClickHouse/pull/42866) ([Amos Bird](https://github.com/amosbird)).
* Update version_date.tsv and changelogs after v22.10.2.11-stable [#42871](https://github.com/ClickHouse/ClickHouse/pull/42871) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Better usability for dashboard.html on changes [#42872](https://github.com/ClickHouse/ClickHouse/pull/42872) ([Vladimir C](https://github.com/vdimir)).
* Some fixes for ReplicatedMergeTree [#42878](https://github.com/ClickHouse/ClickHouse/pull/42878) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Validate Query Tree in debug [#42879](https://github.com/ClickHouse/ClickHouse/pull/42879) ([Dmitry Novik](https://github.com/novikd)).
* changed type name for s3 plain storage [#42890](https://github.com/ClickHouse/ClickHouse/pull/42890) ([Aleksandr](https://github.com/AVMusorin)).
* Cleanup implementation of regexpReplace(All|One) [#42907](https://github.com/ClickHouse/ClickHouse/pull/42907) ([Robert Schulze](https://github.com/rschu1ze)).
* Do not show status for Bugfix validate check in non bugfix PRs [#42932](https://github.com/ClickHouse/ClickHouse/pull/42932) ([Vladimir C](https://github.com/vdimir)).
* fix(typo): Passible -> Possible [#42933](https://github.com/ClickHouse/ClickHouse/pull/42933) ([Yakko Majuri](https://github.com/yakkomajuri)).
* Pin the cryptography version to not break lambdas [#42934](https://github.com/ClickHouse/ClickHouse/pull/42934) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Fix: bad cast from type DB::ColumnLowCardinality to DB::ColumnString [#42937](https://github.com/ClickHouse/ClickHouse/pull/42937) ([Igor Nikonov](https://github.com/devcrafter)).
* Attach thread pool for loading parts to the query [#42947](https://github.com/ClickHouse/ClickHouse/pull/42947) ([Azat Khuzhin](https://github.com/azat)).
* Fix macOS M1 builds due to sprintf deprecation [#42962](https://github.com/ClickHouse/ClickHouse/pull/42962) ([Jordi Villar](https://github.com/jrdi)).
* Less use of CH-specific bit_cast() [#42968](https://github.com/ClickHouse/ClickHouse/pull/42968) ([Robert Schulze](https://github.com/rschu1ze)).
* Remove some utils [#42972](https://github.com/ClickHouse/ClickHouse/pull/42972) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix a bug in CAST function parser [#42980](https://github.com/ClickHouse/ClickHouse/pull/42980) ([Nikolay Degterinsky](https://github.com/evillique)).
* Fix old bug to remove `refs/head` from ref name [#42981](https://github.com/ClickHouse/ClickHouse/pull/42981) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Add debug information to nightly builds [#42997](https://github.com/ClickHouse/ClickHouse/pull/42997) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Add some guard rails around aggregation memory management [#42999](https://github.com/ClickHouse/ClickHouse/pull/42999) ([Raúl Marín](https://github.com/Algunenano)).
* Add `on: workflow_call` to debug CI [#43000](https://github.com/ClickHouse/ClickHouse/pull/43000) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Analyzer added identifier typo corrections [#43002](https://github.com/ClickHouse/ClickHouse/pull/43002) ([Maksim Kita](https://github.com/kitaisreal)).
* Simple fixes for restart replica description [#43004](https://github.com/ClickHouse/ClickHouse/pull/43004) ([Igor Nikonov](https://github.com/devcrafter)).
* Cleanup match code [#43006](https://github.com/ClickHouse/ClickHouse/pull/43006) ([Robert Schulze](https://github.com/rschu1ze)).
* Fix TSan errors (correctly ignore _exit interception) [#43009](https://github.com/ClickHouse/ClickHouse/pull/43009) ([Azat Khuzhin](https://github.com/azat)).
* fix bandwidth throttlers initialization order [#43015](https://github.com/ClickHouse/ClickHouse/pull/43015) ([Sergei Trifonov](https://github.com/serxa)).
* Add test for issue [#42520](https://github.com/ClickHouse/ClickHouse/issues/42520) [#43027](https://github.com/ClickHouse/ClickHouse/pull/43027) ([Robert Schulze](https://github.com/rschu1ze)).
* Analyzer improve ARRAY JOIN with JOIN [#43048](https://github.com/ClickHouse/ClickHouse/pull/43048) ([Maksim Kita](https://github.com/kitaisreal)).
* Fix projection part removal with zero-copy replication [#43060](https://github.com/ClickHouse/ClickHouse/pull/43060) ([alesapin](https://github.com/alesapin)).
* Fix msan warning [#43065](https://github.com/ClickHouse/ClickHouse/pull/43065) ([Raúl Marín](https://github.com/Algunenano)).
* Analyzer AST key condition crash fix [#43070](https://github.com/ClickHouse/ClickHouse/pull/43070) ([Maksim Kita](https://github.com/kitaisreal)).
* Better logging for mark range filtering on projection parts [#43076](https://github.com/ClickHouse/ClickHouse/pull/43076) ([Duc Canh Le](https://github.com/canhld94)).
* Fix ub type punning [#43088](https://github.com/ClickHouse/ClickHouse/pull/43088) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
* Analyzer improve aliases support for table expressions [#43089](https://github.com/ClickHouse/ClickHouse/pull/43089) ([Maksim Kita](https://github.com/kitaisreal)).
* Throw not implemented for window frame type 'groups' in analyzer [#43090](https://github.com/ClickHouse/ClickHouse/pull/43090) ([Vladimir C](https://github.com/vdimir)).
* Disable clickhouse local and client non-interactive progress by default. [#43092](https://github.com/ClickHouse/ClickHouse/pull/43092) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Make error message after dropping current user more correct. [#43097](https://github.com/ClickHouse/ClickHouse/pull/43097) ([Vitaly Baranov](https://github.com/vitlibar)).
* More stable test [#43102](https://github.com/ClickHouse/ClickHouse/pull/43102) ([alesapin](https://github.com/alesapin)).
* Rewrite tests for memory overcommit [#43105](https://github.com/ClickHouse/ClickHouse/pull/43105) ([Dmitry Novik](https://github.com/novikd)).
* Fix trailing \n from SQLancer status [#43114](https://github.com/ClickHouse/ClickHouse/pull/43114) ([Ilya Yatsishin](https://github.com/qoega)).
* Fix `test_keeper_four_word_command::test_cmd_stat` [#43115](https://github.com/ClickHouse/ClickHouse/pull/43115) ([Antonio Andelic](https://github.com/antonio2368)).
* Enable keeper fault injection for inserts in functional tests [#43117](https://github.com/ClickHouse/ClickHouse/pull/43117) ([Igor Nikonov](https://github.com/devcrafter)).
* Analyzer aggregation crash fix [#43118](https://github.com/ClickHouse/ClickHouse/pull/43118) ([Maksim Kita](https://github.com/kitaisreal)).
* Analyzer aggregation totals crash fix [#43119](https://github.com/ClickHouse/ClickHouse/pull/43119) ([Maksim Kita](https://github.com/kitaisreal)).
* Improve commit_status_helper.py [#43121](https://github.com/ClickHouse/ClickHouse/pull/43121) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Skip hash logging on sanitizer builds [#43129](https://github.com/ClickHouse/ClickHouse/pull/43129) ([Raúl Marín](https://github.com/Algunenano)).
* Analyzer improve JOIN with constants [#43141](https://github.com/ClickHouse/ClickHouse/pull/43141) ([Maksim Kita](https://github.com/kitaisreal)).
* Remove POCO_CLICKHOUSE_PATCH [#43146](https://github.com/ClickHouse/ClickHouse/pull/43146) ([Azat Khuzhin](https://github.com/azat)).
* Update CompressionCodecDeflateQpl.cpp [#43150](https://github.com/ClickHouse/ClickHouse/pull/43150) ([Tiaonmmn](https://github.com/Tiaonmmn)).
* Randomize keeper fault injection settings in stress tests [#43187](https://github.com/ClickHouse/ClickHouse/pull/43187) ([Igor Nikonov](https://github.com/devcrafter)).
* Fix for missing columns bug with projections and ALTER UPDATE [#43189](https://github.com/ClickHouse/ClickHouse/pull/43189) ([Alexander Gololobov](https://github.com/davenger)).
* A workaround for LLVM bug, https://github.com/llvm/llvm-project/issues/58633 [#43195](https://github.com/ClickHouse/ClickHouse/pull/43195) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Stop `ConfigReloader` first to avoid data race [#43201](https://github.com/ClickHouse/ClickHouse/pull/43201) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix typo [#43203](https://github.com/ClickHouse/ClickHouse/pull/43203) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Miscellaneous changes [#43206](https://github.com/ClickHouse/ClickHouse/pull/43206) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix flaky 02449_check_dependencies_and_table_shutdown [#43212](https://github.com/ClickHouse/ClickHouse/pull/43212) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Add test to check [#43167](https://github.com/ClickHouse/ClickHouse/issues/43167) for all builds [#43216](https://github.com/ClickHouse/ClickHouse/pull/43216) ([Ilya Yatsishin](https://github.com/qoega)).
* Don't throw if shared ID already created in `StorageReplicatedMergeTree` [#43244](https://github.com/ClickHouse/ClickHouse/pull/43244) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix nullptr dereference in collectScopeValidIdentifiersForTypoCorrection [#43245](https://github.com/ClickHouse/ClickHouse/pull/43245) ([Vladimir C](https://github.com/vdimir)).
* Better message in wait_zookeeper_to_start [#43256](https://github.com/ClickHouse/ClickHouse/pull/43256) ([Vladimir C](https://github.com/vdimir)).
* Make test_global_overcommit_tracker non-parallel [#43266](https://github.com/ClickHouse/ClickHouse/pull/43266) ([Dmitry Novik](https://github.com/novikd)).
* Rename canonicalRand to randCanonical [#43283](https://github.com/ClickHouse/ClickHouse/pull/43283) ([Nikita Taranov](https://github.com/nickitat)).
* check limits for an AST in select parser fuzzer [#43285](https://github.com/ClickHouse/ClickHouse/pull/43285) ([Sema Checherinda](https://github.com/CheSema)).
* Allow autoremoval of old parts if detach_not_byte_identical_parts enabled [#43287](https://github.com/ClickHouse/ClickHouse/pull/43287) ([filimonov](https://github.com/filimonov)).
* `pmod`: compatibility with Spark, better documentation [#43313](https://github.com/ClickHouse/ClickHouse/pull/43313) ([Alexey Milovidov](https://github.com/alexey-milovidov)).

@ -28,7 +28,7 @@ Engines:
|
||||
|
||||
During `INSERT` queries, the table is locked, and other queries for reading and writing data both wait for the table to unlock. If there are no data writing queries, any number of data reading queries can be performed concurrently.
|
||||
|
||||
- Do not support [mutations](/docs/en/sql-reference/statements/alter/index.md/#alter-mutations).
|
||||
- Do not support [mutations](/docs/en/sql-reference/statements/alter/index.md#alter-mutations).
|
||||
|
||||
- Do not support indexes.
|
||||
|
||||
|
@ -537,7 +537,7 @@ TTL time_column
|
||||
TTL time_column + interval
|
||||
```
|
||||
|
||||
To define `interval`, use [time interval](/docs/en/sql-reference/operators/index.md/#operators-datetime) operators, for example:
|
||||
To define `interval`, use [time interval](/docs/en/sql-reference/operators/index.md#operators-datetime) operators, for example:
|
||||
|
||||
``` sql
|
||||
TTL date_time + INTERVAL 1 MONTH
|
||||
@ -860,7 +860,7 @@ The number of threads performing background moves of data parts can be changed b
|
||||
In the case of `MergeTree` tables, data is getting to disk in different ways:
|
||||
|
||||
- As a result of an insert (`INSERT` query).
|
||||
- During background merges and [mutations](/docs/en/sql-reference/statements/alter/index.md/#alter-mutations).
|
||||
- During background merges and [mutations](/docs/en/sql-reference/statements/alter/index.md#alter-mutations).
|
||||
- When downloading from another replica.
|
||||
- As a result of partition freezing [ALTER TABLE … FREEZE PARTITION](/docs/en/sql-reference/statements/alter/partition.md/#alter_freeze-partition).
|
||||
|
||||
|
@ -20,7 +20,7 @@ Replication works at the level of an individual table, not the entire server. A
|
||||
|
||||
Replication does not depend on sharding. Each shard has its own independent replication.
|
||||
|
||||
Compressed data for `INSERT` and `ALTER` queries is replicated (for more information, see the documentation for [ALTER](/docs/en/sql-reference/statements/alter/index.md/#query_language_queries_alter)).
|
||||
Compressed data for `INSERT` and `ALTER` queries is replicated (for more information, see the documentation for [ALTER](/docs/en/sql-reference/statements/alter/index.md#query_language_queries_alter)).
|
||||
|
||||
`CREATE`, `DROP`, `ATTACH`, `DETACH` and `RENAME` queries are executed on a single server and are not replicated:
|
||||
|
||||
|
@ -59,7 +59,7 @@ Main use-cases for `Join`-engine tables are following:
|
||||
|
||||
### Deleting Data {#deleting-data}
|
||||
|
||||
`ALTER DELETE` queries for `Join`-engine tables are implemented as [mutations](/docs/en/sql-reference/statements/alter/index.md/#mutations). A `DELETE` mutation reads the filtered data and overwrites the data in memory and on disk.
`ALTER DELETE` queries for `Join`-engine tables are implemented as [mutations](/docs/en/sql-reference/statements/alter/index.md#mutations). A `DELETE` mutation reads the filtered data and overwrites the data in memory and on disk.
|
||||
|
||||
### Limitations and Settings {#join-limitations-and-settings}
|
||||
|
||||
|
@ -26,7 +26,7 @@ Ways to configure settings, in order of priority:
|
||||
|
||||
- When starting the ClickHouse console client in non-interactive mode, set the startup parameter `--setting=value`.
|
||||
- When using the HTTP API, pass CGI parameters (`URL?setting_1=value&setting_2=value...`).
|
||||
- Make settings in the [SETTINGS](../../sql-reference/statements/select/index.md#settings-in-select) clause of the SELECT query. The setting value is applied only to that query and is reset to default or previous value after the query is executed.
|
||||
- Make settings in the [SETTINGS](../../sql-reference/statements/select/index.md#settings-in-select-query) clause of the SELECT query. The setting value is applied only to that query and is reset to default or previous value after the query is executed.
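
  For illustration, a query-level override might look like the following sketch (the table name is hypothetical):

  ``` sql
  -- max_threads and max_block_size apply only to this query
  SELECT count() FROM hits SETTINGS max_threads = 4, max_block_size = 65536;
  ```
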
|
||||
|
||||
Settings that can only be made in the server config file are not covered in this section.
|
||||
|
||||
|
@ -276,7 +276,7 @@ Default value: 0.
|
||||
Enables or disables the insertion of [default values](../../sql-reference/statements/create/table.md/#create-default-values) instead of [NULL](../../sql-reference/syntax.md/#null-literal) into columns with not [nullable](../../sql-reference/data-types/nullable.md/#data_type-nullable) data type.
|
||||
If column type is not nullable and this setting is disabled, then inserting `NULL` causes an exception. If column type is nullable, then `NULL` values are inserted as is, regardless of this setting.
|
||||
|
||||
This setting is applicable to [INSERT ... SELECT](../../sql-reference/statements/insert-into.md/#insert_query_insert-select) queries. Note that `SELECT` subqueries may be concatenated with `UNION ALL` clause.
|
||||
This setting is applicable to [INSERT ... SELECT](../../sql-reference/statements/insert-into.md/#inserting-the-results-of-select) queries. Note that `SELECT` subqueries may be concatenated with `UNION ALL` clause.
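
A minimal sketch of the behaviour, assuming hypothetical tables where `dest.x` is a non-Nullable `Int32` with `DEFAULT 0` and `src.x` is `Nullable(Int32)`:

``` sql
SET insert_null_as_default = 1;
-- NULL values coming from src are written to dest as the column default (0)
INSERT INTO dest (x) SELECT x FROM src;
```
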
|
||||
|
||||
Possible values:
|
||||
|
||||
@ -1619,8 +1619,8 @@ These functions can be transformed:
|
||||
- [length](../../sql-reference/functions/array-functions.md/#array_functions-length) to read the [size0](../../sql-reference/data-types/array.md/#array-size) subcolumn.
|
||||
- [empty](../../sql-reference/functions/array-functions.md/#function-empty) to read the [size0](../../sql-reference/data-types/array.md/#array-size) subcolumn.
|
||||
- [notEmpty](../../sql-reference/functions/array-functions.md/#function-notempty) to read the [size0](../../sql-reference/data-types/array.md/#array-size) subcolumn.
|
||||
- [isNull](../../sql-reference/operators/index.md/#operator-is-null) to read the [null](../../sql-reference/data-types/nullable.md/#finding-null) subcolumn.
|
||||
- [isNotNull](../../sql-reference/operators/index.md/#is-not-null) to read the [null](../../sql-reference/data-types/nullable.md/#finding-null) subcolumn.
|
||||
- [isNull](../../sql-reference/operators/index.md#operator-is-null) to read the [null](../../sql-reference/data-types/nullable.md/#finding-null) subcolumn.
|
||||
- [isNotNull](../../sql-reference/operators/index.md#is-not-null) to read the [null](../../sql-reference/data-types/nullable.md/#finding-null) subcolumn.
|
||||
- [count](../../sql-reference/aggregate-functions/reference/count.md) to read the [null](../../sql-reference/data-types/nullable.md/#finding-null) subcolumn.
|
||||
- [mapKeys](../../sql-reference/functions/tuple-map-functions.md/#mapkeys) to read the [keys](../../sql-reference/data-types/map.md/#map-subcolumns) subcolumn.
|
||||
- [mapValues](../../sql-reference/functions/tuple-map-functions.md/#mapvalues) to read the [values](../../sql-reference/data-types/map.md/#map-subcolumns) subcolumn.
|
||||
@ -2041,7 +2041,7 @@ Default value: 16.
|
||||
|
||||
## validate_polygons {#validate_polygons}
|
||||
|
||||
Enables or disables throwing an exception in the [pointInPolygon](../../sql-reference/functions/geo/index.md/#pointinpolygon) function, if the polygon is self-intersecting or self-tangent.
|
||||
Enables or disables throwing an exception in the [pointInPolygon](../../sql-reference/functions/geo/index.md#pointinpolygon) function, if the polygon is self-intersecting or self-tangent.
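
As an illustrative sketch, the polygon below is self-intersecting, so the query only succeeds with the check disabled; with `validate_polygons = 1` the same query would throw an exception:

``` sql
SELECT pointInPolygon((1.5, 1.5), [(0, 0), (2, 0), (0, 2), (2, 2)]) SETTINGS validate_polygons = 0;
```
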
|
||||
|
||||
Possible values:
|
||||
|
||||
@ -2227,7 +2227,7 @@ Default value: `0`.
|
||||
|
||||
## mutations_sync {#mutations_sync}
|
||||
|
||||
Allows to execute `ALTER TABLE ... UPDATE|DELETE` queries ([mutations](../../sql-reference/statements/alter/index.md/#mutations)) synchronously.
|
||||
Allows to execute `ALTER TABLE ... UPDATE|DELETE` queries ([mutations](../../sql-reference/statements/alter/index.md#mutations)) synchronously.
|
||||
|
||||
Possible values:
|
||||
|
||||
@ -2239,8 +2239,8 @@ Default value: `0`.
|
||||
|
||||
**See Also**
|
||||
|
||||
- [Synchronicity of ALTER Queries](../../sql-reference/statements/alter/index.md/#synchronicity-of-alter-queries)
|
||||
- [Mutations](../../sql-reference/statements/alter/index.md/#mutations)
|
||||
- [Synchronicity of ALTER Queries](../../sql-reference/statements/alter/index.md#synchronicity-of-alter-queries)
|
||||
- [Mutations](../../sql-reference/statements/alter/index.md#mutations)
|
||||
|
||||
## ttl_only_drop_parts {#ttl_only_drop_parts}
|
||||
|
||||
|
@ -3,7 +3,7 @@ slug: /en/operations/system-tables/mutations
|
||||
---
|
||||
# mutations
|
||||
|
||||
The table contains information about [mutations](/docs/en/sql-reference/statements/alter/index.md/#mutations) of [MergeTree](/docs/en/engines/table-engines/mergetree-family/mergetree.md) tables and their progress. Each mutation command is represented by a single row.
|
||||
The table contains information about [mutations](/docs/en/sql-reference/statements/alter/index.md#mutations) of [MergeTree](/docs/en/engines/table-engines/mergetree-family/mergetree.md) tables and their progress. Each mutation command is represented by a single row.
|
||||
|
||||
Columns:
|
||||
|
||||
@ -45,7 +45,7 @@ If there were problems with mutating some data parts, the following columns cont
|
||||
|
||||
**See Also**
|
||||
|
||||
- [Mutations](/docs/en/sql-reference/statements/alter/index.md/#mutations)
|
||||
- [Mutations](/docs/en/sql-reference/statements/alter/index.md#mutations)
|
||||
- [MergeTree](/docs/en/engines/table-engines/mergetree-family/mergetree.md) table engine
|
||||
- [ReplicatedMergeTree](/docs/en/engines/table-engines/mergetree-family/replication.md) family
|
||||
|
||||
|
@ -9,7 +9,7 @@ Each row describes one data part.
|
||||
|
||||
Columns:
|
||||
|
||||
- `partition` ([String](../../sql-reference/data-types/string.md)) – The partition name. To learn what a partition is, see the description of the [ALTER](../../sql-reference/statements/alter/index.md/#query_language_queries_alter) query.
|
||||
- `partition` ([String](../../sql-reference/data-types/string.md)) – The partition name. To learn what a partition is, see the description of the [ALTER](../../sql-reference/statements/alter/index.md#query_language_queries_alter) query.
|
||||
|
||||
Formats:
|
||||
|
||||
|
@ -9,7 +9,7 @@ Each row describes one data part.
|
||||
|
||||
Columns:
|
||||
|
||||
- `partition` ([String](../../sql-reference/data-types/string.md)) — The partition name. To learn what a partition is, see the description of the [ALTER](../../sql-reference/statements/alter/index.md/#query_language_queries_alter) query.
|
||||
- `partition` ([String](../../sql-reference/data-types/string.md)) — The partition name. To learn what a partition is, see the description of the [ALTER](../../sql-reference/statements/alter/index.md#query_language_queries_alter) query.
|
||||
|
||||
Formats:
|
||||
|
||||
|
@ -6,7 +6,7 @@ sidebar_label: Date32
|
||||
|
||||
# Date32
|
||||
|
||||
A date. Supports the same date range as [DateTime64](../../sql-reference/data-types/datetime64.md). Stored in four bytes as the number of days since 1900-01-01. Allows storing values till 2299-12-31.
A date. Supports the same date range as [DateTime64](../../sql-reference/data-types/datetime64.md). Stored as a signed 32-bit integer in native byte order, with the value representing the days since 1970-01-01 (0 represents 1970-01-01 and negative values represent the days before 1970).
|
||||
|
||||
**Examples**
|
||||
|
||||
|
@ -7,7 +7,9 @@ import CloudDetails from '@site/docs/en/sql-reference/dictionaries/external-dict
|
||||
|
||||
# Dictionaries
|
||||
|
||||
<CloudDetails />
|
||||
:::tip Tutorial
|
||||
If you are getting started with Dictionaries in ClickHouse we have a tutorial that covers that topic. Take a look [here](/docs/en/tutorial.md).
|
||||
:::
|
||||
|
||||
You can add your own dictionaries from various data sources. The source for a dictionary can be a ClickHouse table, a local text or executable file, an HTTP(s) resource, or another DBMS. For more information, see “[Dictionary Sources](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md)”.
|
||||
|
||||
@ -27,6 +29,8 @@ The [dictionaries](../../../operations/system-tables/dictionaries.md#system_tabl
|
||||
- Configuration parameters.
|
||||
- Metrics like amount of RAM allocated for the dictionary or a number of queries since the dictionary was successfully loaded.
|
||||
|
||||
<CloudDetails />
|
||||
|
||||
## Creating a dictionary with a DDL query
|
||||
|
||||
Dictionaries can be created with [DDL queries](../../../sql-reference/statements/create/dictionary.md), and this is the recommended method because with DDL created dictionaries:
|
||||
|
@ -185,7 +185,7 @@ unhex(arg)
|
||||
|
||||
**Arguments**
|
||||
|
||||
- `arg` — A string containing any number of hexadecimal digits. Type: [String](../../sql-reference/data-types/string.md).
|
||||
- `arg` — A string containing any number of hexadecimal digits. Type: [String](../../sql-reference/data-types/string.md), [FixedString](../../sql-reference/data-types/fixedstring.md).
|
||||
|
||||
Supports both uppercase and lowercase letters `A-F`. The number of hexadecimal digits does not have to be even. If it is odd, the last digit is interpreted as the least significant half of the `00-0F` byte. If the argument string contains anything other than hexadecimal digits, some implementation-defined result is returned (an exception isn’t thrown). For a numeric argument the inverse of hex(N) is not performed by unhex().
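
A quick illustrative sketch of the round trip with `hex` (the values are only an example):

``` sql
SELECT unhex('4D7953514C') AS decoded, hex('MySQL') AS encoded;
-- decoded = 'MySQL', encoded = '4D7953514C'
```
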
|
||||
|
||||
|
@ -549,3 +549,33 @@ Result:
|
||||
│ 3.141592653589793 │
|
||||
└───────────────────┘
|
||||
```
|
||||
|
||||
|
||||
## factorial(n)
|
||||
|
||||
Computes the factorial of an integer value. It works with any native integer type including UInt(8|16|32|64) and Int(8|16|32|64). The return type is UInt64.
|
||||
|
||||
The factorial of 0 is 1. Likewise, the factorial() function returns 1 for any negative value. The maximum positive value for the input argument is 20; a value of 21 or greater will cause an exception to be thrown.
|
||||
|
||||
|
||||
**Syntax**
|
||||
|
||||
``` sql
|
||||
factorial(n)
|
||||
```
|
||||
|
||||
**Example**
|
||||
|
||||
Query:
|
||||
|
||||
``` sql
|
||||
SELECT factorial(10);
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
``` text
|
||||
┌─factorial(10)─┐
|
||||
│ 3628800 │
|
||||
└───────────────┘
|
||||
```
|
||||
|
@ -24,7 +24,7 @@ Returns a pseudo-random UInt64 number, evenly distributed among all UInt64-type
|
||||
|
||||
Uses a linear congruential generator.
|
||||
|
||||
## canonicalRand
|
||||
## randCanonical
|
||||
The function generates pseudo-random results with independent and identically distributed values, uniformly distributed in [0, 1).
|
||||
|
||||
Non-deterministic. Return type is Float64.
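
A minimal usage sketch:

``` sql
SELECT randCanonical() AS x FROM numbers(3);  -- three pseudo-random values in [0, 1)
```
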
|
||||
|
@ -254,7 +254,7 @@ The `ALTER` query lets you create and delete separate elements (columns) in nest
|
||||
|
||||
There is no support for deleting columns in the primary key or the sampling key (columns that are used in the `ENGINE` expression). Changing the type for columns that are included in the primary key is only possible if this change does not cause the data to be modified (for example, you are allowed to add values to an Enum or to change a type from `DateTime` to `UInt32`).
|
||||
|
||||
If the `ALTER` query is not sufficient to make the table changes you need, you can create a new table, copy the data to it using the [INSERT SELECT](/docs/en/sql-reference/statements/insert-into.md/#insert_query_insert-select) query, then switch the tables using the [RENAME](/docs/en/sql-reference/statements/rename.md/#rename-table) query and delete the old table. You can use the [clickhouse-copier](/docs/en/operations/utilities/clickhouse-copier.md) as an alternative to the `INSERT SELECT` query.
|
||||
If the `ALTER` query is not sufficient to make the table changes you need, you can create a new table, copy the data to it using the [INSERT SELECT](/docs/en/sql-reference/statements/insert-into.md/#inserting-the-results-of-select) query, then switch the tables using the [RENAME](/docs/en/sql-reference/statements/rename.md/#rename-table) query and delete the old table. You can use the [clickhouse-copier](/docs/en/operations/utilities/clickhouse-copier.md) as an alternative to the `INSERT SELECT` query.
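
A sketch of that workflow, with hypothetical table names and structure (adjust the new table definition to whatever the `ALTER` could not express):

``` sql
CREATE TABLE hits_new (id UInt64, url String, ts DateTime) ENGINE = MergeTree ORDER BY id;
INSERT INTO hits_new SELECT id, url, ts FROM hits;
RENAME TABLE hits TO hits_old, hits_new TO hits;
DROP TABLE hits_old;
```
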
|
||||
|
||||
The `ALTER` query blocks all reads and writes for the table. In other words, if a long `SELECT` is running at the time of the `ALTER` query, the `ALTER` query will wait for it to complete. At the same time, all new queries to the same table will wait while this `ALTER` is running.
|
||||
|
||||
|
@ -10,7 +10,7 @@ sidebar_label: DELETE
|
||||
ALTER TABLE [db.]table [ON CLUSTER cluster] DELETE WHERE filter_expr
|
||||
```
|
||||
|
||||
Deletes data matching the specified filtering expression. Implemented as a [mutation](/docs/en/sql-reference/statements/alter/index.md/#mutations).
|
||||
Deletes data matching the specified filtering expression. Implemented as a [mutation](/docs/en/sql-reference/statements/alter/index.md#mutations).
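
For example, assuming a hypothetical table `visits` with an `event_date` column:

``` sql
ALTER TABLE visits DELETE WHERE event_date < '2020-01-01';
```
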
|
||||
|
||||
|
||||
:::note
|
||||
@ -25,6 +25,6 @@ The synchronicity of the query processing is defined by the [mutations_sync](/do
|
||||
|
||||
**See also**
|
||||
|
||||
- [Mutations](/docs/en/sql-reference/statements/alter/index.md/#mutations)
|
||||
- [Synchronicity of ALTER Queries](/docs/en/sql-reference/statements/alter/index.md/#synchronicity-of-alter-queries)
|
||||
- [Mutations](/docs/en/sql-reference/statements/alter/index.md#mutations)
|
||||
- [Synchronicity of ALTER Queries](/docs/en/sql-reference/statements/alter/index.md#synchronicity-of-alter-queries)
|
||||
- [mutations_sync](/docs/en/operations/settings/settings.md/#mutations_sync) setting
|
||||
|
@ -270,7 +270,7 @@ ALTER TABLE hits MOVE PARTITION '2019-09-01' TO DISK 'fast_ssd'
|
||||
|
||||
## UPDATE IN PARTITION
|
||||
|
||||
Manipulates data in the specified partition matching the specified filtering expression. Implemented as a [mutation](/docs/en/sql-reference/statements/alter/index.md/#mutations).
Manipulates data in the specified partition matching the specified filtering expression. Implemented as a [mutation](/docs/en/sql-reference/statements/alter/index.md#mutations).
|
||||
|
||||
Syntax:
|
||||
|
||||
@ -290,7 +290,7 @@ ALTER TABLE mt UPDATE x = x + 1 IN PARTITION 2 WHERE p = 2;
|
||||
|
||||
## DELETE IN PARTITION
|
||||
|
||||
Deletes data in the specified partition matching the specified filtering expression. Implemented as a [mutation](/docs/en/sql-reference/statements/alter/index.md/#mutations).
Deletes data in the specified partition matching the specified filtering expression. Implemented as a [mutation](/docs/en/sql-reference/statements/alter/index.md#mutations).
|
||||
|
||||
Syntax:
|
||||
|
||||
|
@ -138,15 +138,15 @@ The following operations with [projections](/docs/en/engines/table-engines/merge
|
||||
|
||||
## DROP PROJECTION
|
||||
|
||||
`ALTER TABLE [db].name DROP PROJECTION name` - Removes projection description from tables metadata and deletes projection files from disk. Implemented as a [mutation](/docs/en/sql-reference/statements/alter/index.md/#mutations).
|
||||
`ALTER TABLE [db].name DROP PROJECTION name` - Removes projection description from tables metadata and deletes projection files from disk. Implemented as a [mutation](/docs/en/sql-reference/statements/alter/index.md#mutations).
|
||||
|
||||
## MATERIALIZE PROJECTION
|
||||
|
||||
`ALTER TABLE [db.]table MATERIALIZE PROJECTION name IN PARTITION partition_name` - The query rebuilds the projection `name` in the partition `partition_name`. Implemented as a [mutation](/docs/en/sql-reference/statements/alter/index.md/#mutations).
|
||||
`ALTER TABLE [db.]table MATERIALIZE PROJECTION name IN PARTITION partition_name` - The query rebuilds the projection `name` in the partition `partition_name`. Implemented as a [mutation](/docs/en/sql-reference/statements/alter/index.md#mutations).
|
||||
|
||||
## CLEAR PROJECTION
|
||||
|
||||
`ALTER TABLE [db.]table CLEAR PROJECTION name IN PARTITION partition_name` - Deletes projection files from disk without removing description. Implemented as a [mutation](/docs/en/sql-reference/statements/alter/index.md/#mutations).
|
||||
`ALTER TABLE [db.]table CLEAR PROJECTION name IN PARTITION partition_name` - Deletes projection files from disk without removing description. Implemented as a [mutation](/docs/en/sql-reference/statements/alter/index.md#mutations).
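
A usage sketch, assuming a hypothetical table `visits` partitioned by `toYYYYMM(date)` with a projection named `daily_agg`:

``` sql
ALTER TABLE visits MATERIALIZE PROJECTION daily_agg IN PARTITION 202211;
ALTER TABLE visits CLEAR PROJECTION daily_agg IN PARTITION 202211;
ALTER TABLE visits DROP PROJECTION daily_agg;
```
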
|
||||
|
||||
|
||||
The commands `ADD`, `DROP` and `CLEAR` are lightweight in a sense that they only change metadata or remove files.
|
||||
|
@ -14,7 +14,7 @@ The following operations are available:
|
||||
|
||||
- `ALTER TABLE [db].table_name [ON CLUSTER cluster] DROP INDEX name` - Removes index description from tables metadata and deletes index files from disk.
|
||||
|
||||
- `ALTER TABLE [db.]table_name [ON CLUSTER cluster] MATERIALIZE INDEX name [IN PARTITION partition_name]` - Rebuilds the secondary index `name` for the specified `partition_name`. Implemented as a [mutation](/docs/en/sql-reference/statements/alter/index.md/#mutations). If `IN PARTITION` part is omitted then it rebuilds the index for the whole table data.
|
||||
- `ALTER TABLE [db.]table_name [ON CLUSTER cluster] MATERIALIZE INDEX name [IN PARTITION partition_name]` - Rebuilds the secondary index `name` for the specified `partition_name`. Implemented as a [mutation](/docs/en/sql-reference/statements/alter/index.md#mutations). If `IN PARTITION` part is omitted then it rebuilds the index for the whole table data.
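
  A usage sketch, assuming a hypothetical table `visits` with a secondary index `idx_url` and `toYYYYMM(date)` partitioning:

  ``` sql
  ALTER TABLE visits MATERIALIZE INDEX idx_url IN PARTITION 202211;
  ```
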
|
||||
|
||||
The first two commands are lightweight in a sense that they only change metadata or remove files.
|
||||
|
||||
|
@ -10,7 +10,7 @@ sidebar_label: UPDATE
|
||||
ALTER TABLE [db.]table [ON CLUSTER cluster] UPDATE column1 = expr1 [, ...] WHERE filter_expr
|
||||
```
|
||||
|
||||
Manipulates data matching the specified filtering expression. Implemented as a [mutation](/docs/en/sql-reference/statements/alter/index.md/#mutations).
|
||||
Manipulates data matching the specified filtering expression. Implemented as a [mutation](/docs/en/sql-reference/statements/alter/index.md#mutations).
|
||||
|
||||
:::note
|
||||
The `ALTER TABLE` prefix makes this syntax different from most other systems supporting SQL. It is intended to signify that unlike similar queries in OLTP databases this is a heavy operation not designed for frequent use.
|
||||
@ -24,7 +24,7 @@ The synchronicity of the query processing is defined by the [mutations_sync](/do
|
||||
|
||||
**See also**
|
||||
|
||||
- [Mutations](/docs/en/sql-reference/statements/alter/index.md/#mutations)
|
||||
- [Synchronicity of ALTER Queries](/docs/en/sql-reference/statements/alter/index.md/#synchronicity-of-alter-queries)
|
||||
- [Mutations](/docs/en/sql-reference/statements/alter/index.md#mutations)
|
||||
- [Synchronicity of ALTER Queries](/docs/en/sql-reference/statements/alter/index.md#synchronicity-of-alter-queries)
|
||||
- [mutations_sync](/docs/en/operations/settings/settings.md/#mutations_sync) setting
|
||||
|
||||
|
@ -24,7 +24,7 @@ slug: /ru/operations/settings/
|
||||
|
||||
- При запуске консольного клиента ClickHouse в не интерактивном режиме установите параметр запуска `--setting=value`.
|
||||
- При использовании HTTP API передавайте cgi-параметры (`URL?setting_1=value&setting_2=value...`).
|
||||
- Укажите необходимые настройки в секции [SETTINGS](../../sql-reference/statements/select/index.md#settings-in-select) запроса SELECT. Эти настройки действуют только в рамках данного запроса, а после его выполнения сбрасываются до предыдущего значения или значения по умолчанию.
|
||||
- Укажите необходимые настройки в секции [SETTINGS](../../sql-reference/statements/select/index.md#settings-in-select-query) запроса SELECT. Эти настройки действуют только в рамках данного запроса, а после его выполнения сбрасываются до предыдущего значения или значения по умолчанию.
|
||||
|
||||
Настройки, которые можно задать только в конфигурационном файле сервера, в разделе не рассматриваются.
|
||||
|
||||
|
@ -479,7 +479,7 @@ SELECT * FROM table_with_enum_column_for_tsv_insert;
|
||||
Включает или отключает вставку [значений по умолчанию](../../sql-reference/statements/create/table.md#create-default-values) вместо [NULL](../../sql-reference/syntax.md#null-literal) в столбцы, которые не позволяют [хранить NULL](../../sql-reference/data-types/nullable.md#data_type-nullable).
|
||||
Если столбец не позволяет хранить `NULL` и эта настройка отключена, то вставка `NULL` приведет к возникновению исключения. Если столбец позволяет хранить `NULL`, то значения `NULL` вставляются независимо от этой настройки.
|
||||
|
||||
Эта настройка используется для запросов [INSERT ... SELECT](../../sql-reference/statements/insert-into.md#insert_query_insert-select). При этом подзапросы `SELECT` могут объединяться с помощью `UNION ALL`.
|
||||
Эта настройка используется для запросов [INSERT ... SELECT](../../sql-reference/statements/insert-into.md#inserting-the-results-of-select). При этом подзапросы `SELECT` могут объединяться с помощью `UNION ALL`.
|
||||
|
||||
Возможные значения:
|
||||
|
||||
|
@ -254,7 +254,7 @@ SELECT groupArray(x), groupArray(s) FROM tmp;
|
||||
|
||||
Отсутствует возможность удалять столбцы, входящие в первичный ключ или ключ для сэмплирования (в общем, входящие в выражение `ENGINE`). Изменение типа у столбцов, входящих в первичный ключ возможно только в том случае, если это изменение не приводит к изменению данных (например, разрешено добавление значения в Enum или изменение типа с `DateTime` на `UInt32`).
|
||||
|
||||
Если возможностей запроса `ALTER` не хватает для нужного изменения таблицы, вы можете создать новую таблицу, скопировать туда данные с помощью запроса [INSERT SELECT](../insert-into.md#insert_query_insert-select), затем поменять таблицы местами с помощью запроса [RENAME](../rename.md#rename-table), и удалить старую таблицу. В качестве альтернативы для запроса `INSERT SELECT`, можно использовать инструмент [clickhouse-copier](../../../sql-reference/statements/alter/index.md).
|
||||
Если возможностей запроса `ALTER` не хватает для нужного изменения таблицы, вы можете создать новую таблицу, скопировать туда данные с помощью запроса [INSERT SELECT](../insert-into.md#inserting-the-results-of-select), затем поменять таблицы местами с помощью запроса [RENAME](../rename.md#rename-table), и удалить старую таблицу. В качестве альтернативы для запроса `INSERT SELECT`, можно использовать инструмент [clickhouse-copier](../../../sql-reference/statements/alter/index.md).
|
||||
|
||||
Запрос `ALTER` блокирует все чтения и записи для таблицы. То есть если на момент запроса `ALTER` выполнялся долгий `SELECT`, то запрос `ALTER` сначала дождётся его выполнения. И в это время все новые запросы к той же таблице будут ждать, пока завершится этот `ALTER`.
|
||||
|
||||
|
@ -95,7 +95,7 @@ INSERT INTO t FORMAT TabSeparated
|
||||
|
||||
Если в таблице объявлены [ограничения](../../sql-reference/statements/create/table.md#constraints), то их выполнимость будет проверена для каждой вставляемой строки. Если для хотя бы одной строки ограничения не будут выполнены, запрос будет остановлен.
|
||||
|
||||
### Вставка результатов `SELECT` {#insert_query_insert-select}
|
||||
### Вставка результатов `SELECT` {#inserting-the-results-of-select}
|
||||
|
||||
**Синтаксис**
|
||||
|
||||
|
@ -270,7 +270,7 @@ SELECT * REPLACE(i + 1 AS i) EXCEPT (j) APPLY(sum) from columns_transformers;
|
||||
└─────────────────┴────────┘
|
||||
```
|
||||
|
||||
## SETTINGS в запросе SELECT {#settings-in-select}
|
||||
## SETTINGS в запросе SELECT {#settings-in-select-query}
|
||||
|
||||
Вы можете задать значения необходимых настроек непосредственно в запросе `SELECT` в секции `SETTINGS`. Эти настройки действуют только в рамках данного запроса, а после его выполнения сбрасываются до предыдущего значения или значения по умолчанию.
|
||||
|
||||
|
@ -181,7 +181,7 @@ unhex(arg)
|
||||
|
||||
**参数**
|
||||
|
||||
- `arg` — 包含任意数量的十六进制数字的字符串。类型为:[String](../../sql-reference/data-types/string.md)。
|
||||
- `arg` — 包含任意数量的十六进制数字的字符串。类型为:[String](../../sql-reference/data-types/string.md),[FixedString](../../sql-reference/data-types/fixedstring.md)。
|
||||
|
||||
支持大写和小写字母A-F。十六进制数字的数量不必是偶数。如果是奇数,则最后一位数被解释为00-0F字节的低位。如果参数字符串包含除十六进制数字以外的任何内容,则返回一些实现定义的结果(不抛出异常)。对于数字参数, unhex()不执行 hex(N) 的倒数。
|
||||
|
||||
|
@ -150,7 +150,7 @@ ALTER TABLE visits MODIFY COLUMN browser Array(String)
|
||||
|
||||
不支持对primary key或者sampling key中的列(在 `ENGINE` 表达式中用到的列)进行删除操作。改变包含在primary key中的列的类型时,如果操作不会导致数据的变化(例如,往Enum中添加一个值,或者将`DateTime` 类型改成 `UInt32`),那么这种操作是可行的。
|
||||
|
||||
如果 `ALTER` 操作不足以完成你想要的表变动操作,你可以创建一张新的表,通过 [INSERT SELECT](../../sql-reference/statements/insert-into.md#insert_query_insert-select)将数据拷贝进去,然后通过 [RENAME](../../sql-reference/statements/misc.md#misc_operations-rename)将新的表改成和原有表一样的名称,并删除原有的表。你可以使用 [clickhouse-copier](../../operations/utilities/clickhouse-copier.md) 代替 `INSERT SELECT`。
|
||||
如果 `ALTER` 操作不足以完成你想要的表变动操作,你可以创建一张新的表,通过 [INSERT SELECT](../../sql-reference/statements/insert-into.md#inserting-the-results-of-select)将数据拷贝进去,然后通过 [RENAME](../../sql-reference/statements/misc.md#misc_operations-rename)将新的表改成和原有表一样的名称,并删除原有的表。你可以使用 [clickhouse-copier](../../operations/utilities/clickhouse-copier.md) 代替 `INSERT SELECT`。
|
||||
|
||||
`ALTER` 操作会阻塞对表的所有读写操作。换句话说,当一个大的 `SELECT` 语句和 `ALTER`同时执行时,`ALTER`会等待,直到 `SELECT` 执行结束。与此同时,当 `ALTER` 运行时,新的 sql 语句将会等待。
|
||||
|
||||
|
@ -90,7 +90,7 @@ INSERT INTO t FORMAT TabSeparated
|
||||
|
||||
如果表中有一些[限制](../../sql-reference/statements/create/table.mdx#constraints),,数据插入时会逐行进行数据校验,如果这里面包含了不符合限制条件的数据,服务将会抛出包含限制信息的异常,这个语句也会被停止执行。
|
||||
|
||||
### 使用`SELECT`的结果写入 {#insert_query_insert-select}
|
||||
### 使用`SELECT`的结果写入 {#inserting-the-results-of-select}
|
||||
|
||||
``` sql
|
||||
INSERT INTO [db.]table [(c1, c2, c3)] SELECT ...
|
||||
|
@ -29,6 +29,7 @@ namespace ErrorCodes
|
||||
{
|
||||
extern const int ILLEGAL_TYPE_OF_ARGUMENT;
|
||||
extern const int NOT_IMPLEMENTED;
|
||||
extern const int TOO_LARGE_STRING_SIZE;
|
||||
}
|
||||
|
||||
/** Aggregate functions that store one of passed values.
|
||||
@ -521,7 +522,11 @@ public:
|
||||
{
|
||||
if (capacity < rhs_size)
|
||||
{
|
||||
capacity = static_cast<UInt32>(roundUpToPowerOfTwoOrZero(rhs_size));
|
||||
capacity = static_cast<Int32>(roundUpToPowerOfTwoOrZero(rhs_size));
|
||||
/// It might happen if the size was too big and the rounded value does not fit a size_t
|
||||
if (unlikely(capacity < rhs_size))
|
||||
throw Exception(ErrorCodes::TOO_LARGE_STRING_SIZE, "String size is too big ({})", rhs_size);
|
||||
|
||||
/// Don't free large_data here.
|
||||
large_data = arena->alloc(capacity);
|
||||
}
|
||||
|
@ -202,7 +202,7 @@ public:
|
||||
auto & merged_maps = this->data(place).merged_maps;
|
||||
for (size_t col = 0, size = values_types.size(); col < size; ++col)
|
||||
{
|
||||
const auto & array_column = assert_cast<const ColumnArray&>(*columns[col + 1]);
|
||||
const auto & array_column = assert_cast<const ColumnArray &>(*columns[col + 1]);
|
||||
const IColumn & value_column = array_column.getData();
|
||||
const IColumn::Offsets & offsets = array_column.getOffsets();
|
||||
const size_t values_vec_offset = offsets[row_num - 1];
|
||||
@ -532,7 +532,12 @@ private:
|
||||
public:
|
||||
explicit FieldVisitorMax(const Field & rhs_) : rhs(rhs_) {}
|
||||
|
||||
bool operator() (Null &) const { throw Exception("Cannot compare Nulls", ErrorCodes::LOGICAL_ERROR); }
|
||||
bool operator() (Null &) const
|
||||
{
|
||||
/// Do not update current value, skip nulls
|
||||
return false;
|
||||
}
|
||||
|
||||
bool operator() (AggregateFunctionStateData &) const { throw Exception("Cannot compare AggregateFunctionStates", ErrorCodes::LOGICAL_ERROR); }
|
||||
|
||||
bool operator() (Array & x) const { return compareImpl<Array>(x); }
|
||||
@ -567,7 +572,13 @@ private:
|
||||
public:
|
||||
explicit FieldVisitorMin(const Field & rhs_) : rhs(rhs_) {}
|
||||
|
||||
bool operator() (Null &) const { throw Exception("Cannot compare Nulls", ErrorCodes::LOGICAL_ERROR); }
|
||||
|
||||
bool operator() (Null &) const
|
||||
{
|
||||
/// Do not update current value, skip nulls
|
||||
return false;
|
||||
}
|
||||
|
||||
bool operator() (AggregateFunctionStateData &) const { throw Exception("Cannot sum AggregateFunctionStates", ErrorCodes::LOGICAL_ERROR); }
|
||||
|
||||
bool operator() (Array & x) const { return compareImpl<Array>(x); }
|
||||
|
@ -9,6 +9,7 @@
|
||||
#include <DataTypes/DataTypeTuple.h>
|
||||
#include <DataTypes/DataTypeUUID.h>
|
||||
|
||||
#include <Core/Settings.h>
|
||||
|
||||
namespace DB
|
||||
{
|
||||
@ -28,8 +29,9 @@ namespace
|
||||
/** `DataForVariadic` is a data structure that will be used for `uniq` aggregate function of multiple arguments.
|
||||
* It differs, for example, in that it uses a trivial hash function, since `uniq` of many arguments first hashes them out itself.
|
||||
*/
|
||||
template <typename Data, typename DataForVariadic>
|
||||
AggregateFunctionPtr createAggregateFunctionUniq(const std::string & name, const DataTypes & argument_types, const Array & params, const Settings *)
|
||||
template <typename Data, template <bool, bool> typename DataForVariadic>
|
||||
AggregateFunctionPtr
|
||||
createAggregateFunctionUniq(const std::string & name, const DataTypes & argument_types, const Array & params, const Settings *)
|
||||
{
|
||||
assertNoParameters(name, params);
|
||||
|
||||
@ -61,21 +63,22 @@ AggregateFunctionPtr createAggregateFunctionUniq(const std::string & name, const
|
||||
else if (which.isTuple())
|
||||
{
|
||||
if (use_exact_hash_function)
|
||||
return std::make_shared<AggregateFunctionUniqVariadic<DataForVariadic, true, true>>(argument_types);
|
||||
return std::make_shared<AggregateFunctionUniqVariadic<DataForVariadic<true, true>>>(argument_types);
|
||||
else
|
||||
return std::make_shared<AggregateFunctionUniqVariadic<DataForVariadic, false, true>>(argument_types);
|
||||
return std::make_shared<AggregateFunctionUniqVariadic<DataForVariadic<false, true>>>(argument_types);
|
||||
}
|
||||
}
|
||||
|
||||
/// "Variadic" method also works as a fallback generic case for single argument.
|
||||
if (use_exact_hash_function)
|
||||
return std::make_shared<AggregateFunctionUniqVariadic<DataForVariadic, true, false>>(argument_types);
|
||||
return std::make_shared<AggregateFunctionUniqVariadic<DataForVariadic<true, false>>>(argument_types);
|
||||
else
|
||||
return std::make_shared<AggregateFunctionUniqVariadic<DataForVariadic, false, false>>(argument_types);
|
||||
return std::make_shared<AggregateFunctionUniqVariadic<DataForVariadic<false, false>>>(argument_types);
|
||||
}
|
||||
|
||||
template <bool is_exact, template <typename> class Data, typename DataForVariadic>
|
||||
AggregateFunctionPtr createAggregateFunctionUniq(const std::string & name, const DataTypes & argument_types, const Array & params, const Settings *)
|
||||
template <bool is_exact, template <typename, bool> typename Data, template <bool, bool, bool> typename DataForVariadic, bool is_able_to_parallelize_merge>
|
||||
AggregateFunctionPtr
|
||||
createAggregateFunctionUniq(const std::string & name, const DataTypes & argument_types, const Array & params, const Settings *)
|
||||
{
|
||||
assertNoParameters(name, params);
|
||||
|
||||
@ -91,35 +94,35 @@ AggregateFunctionPtr createAggregateFunctionUniq(const std::string & name, const
|
||||
{
|
||||
const IDataType & argument_type = *argument_types[0];
|
||||
|
||||
AggregateFunctionPtr res(createWithNumericType<AggregateFunctionUniq, Data>(*argument_types[0], argument_types));
|
||||
AggregateFunctionPtr res(createWithNumericType<AggregateFunctionUniq, Data, is_able_to_parallelize_merge>(*argument_types[0], argument_types));
|
||||
|
||||
WhichDataType which(argument_type);
|
||||
if (res)
|
||||
return res;
|
||||
else if (which.isDate())
|
||||
return std::make_shared<AggregateFunctionUniq<DataTypeDate::FieldType, Data<DataTypeDate::FieldType>>>(argument_types);
|
||||
return std::make_shared<AggregateFunctionUniq<DataTypeDate::FieldType, Data<DataTypeDate::FieldType, is_able_to_parallelize_merge>>>(argument_types);
|
||||
else if (which.isDate32())
|
||||
return std::make_shared<AggregateFunctionUniq<DataTypeDate32::FieldType, Data<DataTypeDate32::FieldType>>>(argument_types);
|
||||
return std::make_shared<AggregateFunctionUniq<DataTypeDate32::FieldType, Data<DataTypeDate32::FieldType, is_able_to_parallelize_merge>>>(argument_types);
|
||||
else if (which.isDateTime())
|
||||
return std::make_shared<AggregateFunctionUniq<DataTypeDateTime::FieldType, Data<DataTypeDateTime::FieldType>>>(argument_types);
|
||||
return std::make_shared<AggregateFunctionUniq<DataTypeDateTime::FieldType, Data<DataTypeDateTime::FieldType, is_able_to_parallelize_merge>>>(argument_types);
|
||||
else if (which.isStringOrFixedString())
|
||||
return std::make_shared<AggregateFunctionUniq<String, Data<String>>>(argument_types);
|
||||
return std::make_shared<AggregateFunctionUniq<String, Data<String, is_able_to_parallelize_merge>>>(argument_types);
|
||||
else if (which.isUUID())
|
||||
return std::make_shared<AggregateFunctionUniq<DataTypeUUID::FieldType, Data<DataTypeUUID::FieldType>>>(argument_types);
|
||||
return std::make_shared<AggregateFunctionUniq<DataTypeUUID::FieldType, Data<DataTypeUUID::FieldType, is_able_to_parallelize_merge>>>(argument_types);
|
||||
else if (which.isTuple())
|
||||
{
|
||||
if (use_exact_hash_function)
|
||||
return std::make_shared<AggregateFunctionUniqVariadic<DataForVariadic, true, true>>(argument_types);
|
||||
return std::make_shared<AggregateFunctionUniqVariadic<DataForVariadic<true, true, is_able_to_parallelize_merge>>>(argument_types);
|
||||
else
|
||||
return std::make_shared<AggregateFunctionUniqVariadic<DataForVariadic, false, true>>(argument_types);
|
||||
return std::make_shared<AggregateFunctionUniqVariadic<DataForVariadic<false, true, is_able_to_parallelize_merge>>>(argument_types);
|
||||
}
|
||||
}
|
||||
|
||||
/// "Variadic" method also works as a fallback generic case for single argument.
|
||||
if (use_exact_hash_function)
|
||||
return std::make_shared<AggregateFunctionUniqVariadic<DataForVariadic, true, false>>(argument_types);
|
||||
return std::make_shared<AggregateFunctionUniqVariadic<DataForVariadic<true, false, is_able_to_parallelize_merge>>>(argument_types);
|
||||
else
|
||||
return std::make_shared<AggregateFunctionUniqVariadic<DataForVariadic, false, false>>(argument_types);
|
||||
return std::make_shared<AggregateFunctionUniqVariadic<DataForVariadic<false, false, is_able_to_parallelize_merge>>>(argument_types);
|
||||
}
|
||||
|
||||
}
|
||||
@@ -132,14 +135,23 @@ void registerAggregateFunctionsUniq(AggregateFunctionFactory & factory)
        {createAggregateFunctionUniq<AggregateFunctionUniqUniquesHashSetData, AggregateFunctionUniqUniquesHashSetDataForVariadic>, properties});

    factory.registerFunction("uniqHLL12",
        {createAggregateFunctionUniq<false, AggregateFunctionUniqHLL12Data, AggregateFunctionUniqHLL12DataForVariadic>, properties});
        {createAggregateFunctionUniq<false, AggregateFunctionUniqHLL12Data, AggregateFunctionUniqHLL12DataForVariadic, false /* is_able_to_parallelize_merge */>, properties});

    factory.registerFunction("uniqExact",
        {createAggregateFunctionUniq<true, AggregateFunctionUniqExactData, AggregateFunctionUniqExactData<String>>, properties});
    auto assign_bool_param = [](const std::string & name, const DataTypes & argument_types, const Array & params, const Settings * settings)
    {
        /// Using two level hash set if we wouldn't be able to merge in parallel can cause ~10% slowdown.
        if (settings && settings->max_threads > 1)
            return createAggregateFunctionUniq<
                true, AggregateFunctionUniqExactData, AggregateFunctionUniqExactDataForVariadic, true /* is_able_to_parallelize_merge */>(name, argument_types, params, settings);
        else
            return createAggregateFunctionUniq<
                true, AggregateFunctionUniqExactData, AggregateFunctionUniqExactDataForVariadic, false /* is_able_to_parallelize_merge */>(name, argument_types, params, settings);
    };
    factory.registerFunction("uniqExact", {assign_bool_param, properties});

#if USE_DATASKETCHES
    factory.registerFunction("uniqTheta",
        {createAggregateFunctionUniq<AggregateFunctionUniqThetaData, AggregateFunctionUniqThetaData>, properties});
        {createAggregateFunctionUniq<AggregateFunctionUniqThetaData, AggregateFunctionUniqThetaDataForVariadic>, properties});
#endif

}

@ -1,7 +1,10 @@
|
||||
#pragma once
|
||||
|
||||
#include <city.h>
|
||||
#include <atomic>
|
||||
#include <memory>
|
||||
#include <type_traits>
|
||||
#include <utility>
|
||||
#include <city.h>
|
||||
|
||||
#include <base/bit_cast.h>
|
||||
|
||||
@ -13,17 +16,18 @@
|
||||
|
||||
#include <Interpreters/AggregationCommon.h>
|
||||
|
||||
#include <Common/CombinedCardinalityEstimator.h>
|
||||
#include <Common/HashTable/Hash.h>
|
||||
#include <Common/HashTable/HashSet.h>
|
||||
#include <Common/HyperLogLogWithSmallSetOptimization.h>
|
||||
#include <Common/CombinedCardinalityEstimator.h>
|
||||
#include <Common/typeid_cast.h>
|
||||
#include <Common/assert_cast.h>
|
||||
#include <Common/typeid_cast.h>
|
||||
|
||||
#include <AggregateFunctions/UniquesHashSet.h>
|
||||
#include <AggregateFunctions/IAggregateFunction.h>
|
||||
#include <AggregateFunctions/ThetaSketchData.h>
|
||||
#include <AggregateFunctions/UniqExactSet.h>
|
||||
#include <AggregateFunctions/UniqVariadicHash.h>
|
||||
#include <AggregateFunctions/UniquesHashSet.h>
|
||||
|
||||
|
||||
namespace DB
|
||||
@ -37,94 +41,128 @@ struct AggregateFunctionUniqUniquesHashSetData
|
||||
using Set = UniquesHashSet<DefaultHash<UInt64>>;
|
||||
Set set;
|
||||
|
||||
constexpr static bool is_able_to_parallelize_merge = false;
|
||||
constexpr static bool is_variadic = false;
|
||||
|
||||
static String getName() { return "uniq"; }
|
||||
};
|
||||
|
||||
/// For a function that takes multiple arguments. Such a function pre-hashes them in advance, so TrivialHash is used here.
|
||||
template <bool is_exact_, bool argument_is_tuple_>
|
||||
struct AggregateFunctionUniqUniquesHashSetDataForVariadic
|
||||
{
|
||||
using Set = UniquesHashSet<TrivialHash>;
|
||||
Set set;
|
||||
|
||||
constexpr static bool is_able_to_parallelize_merge = false;
|
||||
constexpr static bool is_variadic = true;
|
||||
constexpr static bool is_exact = is_exact_;
|
||||
constexpr static bool argument_is_tuple = argument_is_tuple_;
|
||||
|
||||
static String getName() { return "uniq"; }
|
||||
};
|
||||
|
||||
|
||||
/// uniqHLL12
|
||||
|
||||
template <typename T>
|
||||
template <typename T, bool is_able_to_parallelize_merge_>
|
||||
struct AggregateFunctionUniqHLL12Data
|
||||
{
|
||||
using Set = HyperLogLogWithSmallSetOptimization<T, 16, 12>;
|
||||
Set set;
|
||||
|
||||
static String getName() { return "uniqHLL12"; }
|
||||
};
|
||||
|
||||
template <>
|
||||
struct AggregateFunctionUniqHLL12Data<String>
|
||||
{
|
||||
using Set = HyperLogLogWithSmallSetOptimization<UInt64, 16, 12>;
|
||||
Set set;
|
||||
constexpr static bool is_able_to_parallelize_merge = is_able_to_parallelize_merge_;
|
||||
constexpr static bool is_variadic = false;
|
||||
|
||||
static String getName() { return "uniqHLL12"; }
|
||||
};
|
||||
|
||||
template <>
|
||||
struct AggregateFunctionUniqHLL12Data<UUID>
|
||||
struct AggregateFunctionUniqHLL12Data<String, false>
|
||||
{
|
||||
using Set = HyperLogLogWithSmallSetOptimization<UInt64, 16, 12>;
|
||||
Set set;
|
||||
|
||||
constexpr static bool is_able_to_parallelize_merge = false;
|
||||
constexpr static bool is_variadic = false;
|
||||
|
||||
static String getName() { return "uniqHLL12"; }
|
||||
};
|
||||
|
||||
template <>
|
||||
struct AggregateFunctionUniqHLL12Data<UUID, false>
|
||||
{
|
||||
using Set = HyperLogLogWithSmallSetOptimization<UInt64, 16, 12>;
|
||||
Set set;
|
||||
|
||||
constexpr static bool is_able_to_parallelize_merge = false;
|
||||
constexpr static bool is_variadic = false;
|
||||
|
||||
static String getName() { return "uniqHLL12"; }
|
||||
};
|
||||
|
||||
template <bool is_exact_, bool argument_is_tuple_, bool is_able_to_parallelize_merge_>
|
||||
struct AggregateFunctionUniqHLL12DataForVariadic
|
||||
{
|
||||
using Set = HyperLogLogWithSmallSetOptimization<UInt64, 16, 12, TrivialHash>;
|
||||
Set set;
|
||||
|
||||
constexpr static bool is_able_to_parallelize_merge = is_able_to_parallelize_merge_;
|
||||
constexpr static bool is_variadic = true;
|
||||
constexpr static bool is_exact = is_exact_;
|
||||
constexpr static bool argument_is_tuple = argument_is_tuple_;
|
||||
|
||||
static String getName() { return "uniqHLL12"; }
|
||||
};
|
||||
|
||||
|
||||
/// uniqExact
|
||||
|
||||
template <typename T>
|
||||
template <typename T, bool is_able_to_parallelize_merge_>
|
||||
struct AggregateFunctionUniqExactData
|
||||
{
|
||||
using Key = T;
|
||||
|
||||
/// When creating, the hash table must be small.
|
||||
using Set = HashSet<
|
||||
Key,
|
||||
HashCRC32<Key>,
|
||||
HashTableGrower<4>,
|
||||
HashTableAllocatorWithStackMemory<sizeof(Key) * (1 << 4)>>;
|
||||
using SingleLevelSet = HashSet<Key, HashCRC32<Key>, HashTableGrower<4>, HashTableAllocatorWithStackMemory<sizeof(Key) * (1 << 4)>>;
|
||||
using TwoLevelSet = TwoLevelHashSet<Key, HashCRC32<Key>>;
|
||||
using Set = UniqExactSet<SingleLevelSet, TwoLevelSet>;
|
||||
|
||||
Set set;
|
||||
|
||||
constexpr static bool is_able_to_parallelize_merge = is_able_to_parallelize_merge_;
|
||||
constexpr static bool is_variadic = false;
|
||||
|
||||
static String getName() { return "uniqExact"; }
|
||||
};
|
||||
|
||||
/// For rows, we put the SipHash values (128 bits) into the hash table.
|
||||
template <>
|
||||
struct AggregateFunctionUniqExactData<String>
|
||||
template <bool is_able_to_parallelize_merge_>
|
||||
struct AggregateFunctionUniqExactData<String, is_able_to_parallelize_merge_>
|
||||
{
|
||||
using Key = UInt128;
|
||||
|
||||
/// When creating, the hash table must be small.
|
||||
using Set = HashSet<
|
||||
Key,
|
||||
UInt128TrivialHash,
|
||||
HashTableGrower<3>,
|
||||
HashTableAllocatorWithStackMemory<sizeof(Key) * (1 << 3)>>;
|
||||
using SingleLevelSet = HashSet<Key, UInt128TrivialHash, HashTableGrower<3>, HashTableAllocatorWithStackMemory<sizeof(Key) * (1 << 3)>>;
|
||||
using TwoLevelSet = TwoLevelHashSet<Key, UInt128TrivialHash>;
|
||||
using Set = UniqExactSet<SingleLevelSet, TwoLevelSet>;
|
||||
|
||||
Set set;
|
||||
|
||||
constexpr static bool is_able_to_parallelize_merge = is_able_to_parallelize_merge_;
|
||||
constexpr static bool is_variadic = false;
|
||||
|
||||
static String getName() { return "uniqExact"; }
|
||||
};
|
||||
|
||||
template <bool is_exact_, bool argument_is_tuple_, bool is_able_to_parallelize_merge_>
|
||||
struct AggregateFunctionUniqExactDataForVariadic : AggregateFunctionUniqExactData<String, is_able_to_parallelize_merge_>
|
||||
{
|
||||
constexpr static bool is_able_to_parallelize_merge = is_able_to_parallelize_merge_;
|
||||
constexpr static bool is_variadic = true;
|
||||
constexpr static bool is_exact = is_exact_;
|
||||
constexpr static bool argument_is_tuple = argument_is_tuple_;
|
||||
};
|
||||
|
||||
/// uniqTheta
|
||||
#if USE_DATASKETCHES
|
||||
@ -134,14 +172,37 @@ struct AggregateFunctionUniqThetaData
|
||||
using Set = ThetaSketchData<UInt64>;
|
||||
Set set;
|
||||
|
||||
constexpr static bool is_able_to_parallelize_merge = false;
|
||||
constexpr static bool is_variadic = false;
|
||||
|
||||
static String getName() { return "uniqTheta"; }
|
||||
};
|
||||
|
||||
template <bool is_exact_, bool argument_is_tuple_>
|
||||
struct AggregateFunctionUniqThetaDataForVariadic : AggregateFunctionUniqThetaData
|
||||
{
|
||||
constexpr static bool is_able_to_parallelize_merge = false;
|
||||
constexpr static bool is_variadic = true;
|
||||
constexpr static bool is_exact = is_exact_;
|
||||
constexpr static bool argument_is_tuple = argument_is_tuple_;
|
||||
};
|
||||
|
||||
#endif
|
||||
|
||||
namespace detail
|
||||
{
|
||||
|
||||
template <typename T>
|
||||
struct IsUniqExactSet : std::false_type
|
||||
{
|
||||
};
|
||||
|
||||
template <typename T1, typename T2>
|
||||
struct IsUniqExactSet<UniqExactSet<T1, T2>> : std::true_type
|
||||
{
|
||||
};
|
||||
|
||||
|
||||
/** Hash function for uniq.
|
||||
*/
|
||||
template <typename T> struct AggregateFunctionUniqTraits
|
||||
@ -162,17 +223,31 @@ template <typename T> struct AggregateFunctionUniqTraits
|
||||
};
|
||||
|
||||
|
||||
/** The structure for the delegation work to add one element to the `uniq` aggregate functions.
|
||||
/** The structure for the delegation work to add elements to the `uniq` aggregate functions.
|
||||
* Used for partial specialization to add strings.
|
||||
*/
|
||||
template <typename T, typename Data>
|
||||
struct OneAdder
|
||||
struct Adder
|
||||
{
|
||||
static void ALWAYS_INLINE add(Data & data, const IColumn & column, size_t row_num)
|
||||
/// We have to introduce this template parameter (and a bunch of ugly code dealing with it), because we cannot
|
||||
/// add runtime branches in whatever_hash_set::insert - it will immediately pop up in the perf top.
|
||||
template <bool use_single_level_hash_table = true>
|
||||
static void ALWAYS_INLINE add(Data & data, const IColumn ** columns, size_t num_args, size_t row_num)
|
||||
{
|
||||
if constexpr (std::is_same_v<Data, AggregateFunctionUniqUniquesHashSetData>
|
||||
|| std::is_same_v<Data, AggregateFunctionUniqHLL12Data<T>>)
|
||||
if constexpr (Data::is_variadic)
|
||||
{
|
||||
if constexpr (IsUniqExactSet<typename Data::Set>::value)
|
||||
data.set.template insert<T, use_single_level_hash_table>(
|
||||
UniqVariadicHash<Data::is_exact, Data::argument_is_tuple>::apply(num_args, columns, row_num));
|
||||
else
|
||||
data.set.insert(T{UniqVariadicHash<Data::is_exact, Data::argument_is_tuple>::apply(num_args, columns, row_num)});
|
||||
}
|
||||
else if constexpr (
|
||||
std::is_same_v<
|
||||
Data,
|
||||
AggregateFunctionUniqUniquesHashSetData> || std::is_same_v<Data, AggregateFunctionUniqHLL12Data<T, Data::is_able_to_parallelize_merge>>)
|
||||
{
|
||||
const auto & column = *columns[0];
|
||||
if constexpr (!std::is_same_v<T, String>)
|
||||
{
|
||||
using ValueType = typename decltype(data.set)::value_type;
|
||||
@ -185,11 +260,13 @@ struct OneAdder
|
||||
data.set.insert(CityHash_v1_0_2::CityHash64(value.data, value.size));
|
||||
}
|
||||
}
|
||||
else if constexpr (std::is_same_v<Data, AggregateFunctionUniqExactData<T>>)
|
||||
else if constexpr (std::is_same_v<Data, AggregateFunctionUniqExactData<T, Data::is_able_to_parallelize_merge>>)
|
||||
{
|
||||
const auto & column = *columns[0];
|
||||
if constexpr (!std::is_same_v<T, String>)
|
||||
{
|
||||
data.set.insert(assert_cast<const ColumnVector<T> &>(column).getData()[row_num]);
|
||||
data.set.template insert<const T &, use_single_level_hash_table>(
|
||||
assert_cast<const ColumnVector<T> &>(column).getData()[row_num]);
|
||||
}
|
||||
else
|
||||
{
|
||||
@ -200,16 +277,72 @@ struct OneAdder
|
||||
hash.update(value.data, value.size);
|
||||
hash.get128(key);
|
||||
|
||||
data.set.insert(key);
|
||||
data.set.template insert<const UInt128 &, use_single_level_hash_table>(key);
|
||||
}
|
||||
}
|
||||
#if USE_DATASKETCHES
|
||||
else if constexpr (std::is_same_v<Data, AggregateFunctionUniqThetaData>)
|
||||
{
|
||||
const auto & column = *columns[0];
|
||||
data.set.insertOriginal(column.getDataAt(row_num));
|
||||
}
|
||||
#endif
|
||||
}
|
||||
|
||||
static void ALWAYS_INLINE
|
||||
add(Data & data, const IColumn ** columns, size_t num_args, size_t row_begin, size_t row_end, const char8_t * flags, const UInt8 * null_map)
|
||||
{
|
||||
bool use_single_level_hash_table = true;
|
||||
if constexpr (Data::is_able_to_parallelize_merge)
|
||||
use_single_level_hash_table = data.set.isSingleLevel();
|
||||
|
||||
if (use_single_level_hash_table)
|
||||
addImpl<true>(data, columns, num_args, row_begin, row_end, flags, null_map);
|
||||
else
|
||||
addImpl<false>(data, columns, num_args, row_begin, row_end, flags, null_map);
|
||||
|
||||
if constexpr (Data::is_able_to_parallelize_merge)
|
||||
{
|
||||
if (data.set.isSingleLevel() && data.set.size() > 100'000)
|
||||
data.set.convertToTwoLevel();
|
||||
}
|
||||
}
|
||||
|
||||
private:
|
||||
template <bool use_single_level_hash_table>
|
||||
static void ALWAYS_INLINE
|
||||
addImpl(Data & data, const IColumn ** columns, size_t num_args, size_t row_begin, size_t row_end, const char8_t * flags, const UInt8 * null_map)
|
||||
{
|
||||
if (!flags)
|
||||
{
|
||||
if (!null_map)
|
||||
{
|
||||
for (size_t row = row_begin; row < row_end; ++row)
|
||||
add<use_single_level_hash_table>(data, columns, num_args, row);
|
||||
}
|
||||
else
|
||||
{
|
||||
for (size_t row = row_begin; row < row_end; ++row)
|
||||
if (!null_map[row])
|
||||
add<use_single_level_hash_table>(data, columns, num_args, row);
|
||||
}
|
||||
}
|
||||
else
|
||||
{
|
||||
if (!null_map)
|
||||
{
|
||||
for (size_t row = row_begin; row < row_end; ++row)
|
||||
if (flags[row])
|
||||
add<use_single_level_hash_table>(data, columns, num_args, row);
|
||||
}
|
||||
else
|
||||
{
|
||||
for (size_t row = row_begin; row < row_end; ++row)
|
||||
if (!null_map[row] && flags[row])
|
||||
add<use_single_level_hash_table>(data, columns, num_args, row);
|
||||
}
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
}
|
||||
@ -219,9 +352,15 @@ struct OneAdder
|
||||
template <typename T, typename Data>
|
||||
class AggregateFunctionUniq final : public IAggregateFunctionDataHelper<Data, AggregateFunctionUniq<T, Data>>
|
||||
{
|
||||
private:
|
||||
static constexpr size_t num_args = 1;
|
||||
static constexpr bool is_able_to_parallelize_merge = Data::is_able_to_parallelize_merge;
|
||||
|
||||
public:
|
||||
AggregateFunctionUniq(const DataTypes & argument_types_)
|
||||
: IAggregateFunctionDataHelper<Data, AggregateFunctionUniq<T, Data>>(argument_types_, {}) {}
|
||||
explicit AggregateFunctionUniq(const DataTypes & argument_types_)
|
||||
: IAggregateFunctionDataHelper<Data, AggregateFunctionUniq<T, Data>>(argument_types_, {})
|
||||
{
|
||||
}
|
||||
|
||||
String getName() const override { return Data::getName(); }
|
||||
|
||||
@ -235,7 +374,18 @@ public:
|
||||
/// ALWAYS_INLINE is required to have better code layout for uniqHLL12 function
|
||||
void ALWAYS_INLINE add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena *) const override
|
||||
{
|
||||
detail::OneAdder<T, Data>::add(this->data(place), *columns[0], row_num);
|
||||
detail::Adder<T, Data>::add(this->data(place), columns, num_args, row_num);
|
||||
}
|
||||
|
||||
void ALWAYS_INLINE addBatchSinglePlace(
|
||||
size_t row_begin, size_t row_end, AggregateDataPtr __restrict place, const IColumn ** columns, Arena *, ssize_t if_argument_pos)
|
||||
const override
|
||||
{
|
||||
const char8_t * flags = nullptr;
|
||||
if (if_argument_pos >= 0)
|
||||
flags = assert_cast<const ColumnUInt8 &>(*columns[if_argument_pos]).getData().data();
|
||||
|
||||
detail::Adder<T, Data>::add(this->data(place), columns, num_args, row_begin, row_end, flags, nullptr /* null_map */);
|
||||
}
|
||||
|
||||
void addManyDefaults(
|
||||
@ -244,7 +394,23 @@ public:
|
||||
size_t /*length*/,
|
||||
Arena * /*arena*/) const override
|
||||
{
|
||||
detail::OneAdder<T, Data>::add(this->data(place), *columns[0], 0);
|
||||
detail::Adder<T, Data>::add(this->data(place), columns, num_args, 0);
|
||||
}
|
||||
|
||||
void addBatchSinglePlaceNotNull(
|
||||
size_t row_begin,
|
||||
size_t row_end,
|
||||
AggregateDataPtr __restrict place,
|
||||
const IColumn ** columns,
|
||||
const UInt8 * null_map,
|
||||
Arena *,
|
||||
ssize_t if_argument_pos) const override
|
||||
{
|
||||
const char8_t * flags = nullptr;
|
||||
if (if_argument_pos >= 0)
|
||||
flags = assert_cast<const ColumnUInt8 &>(*columns[if_argument_pos]).getData().data();
|
||||
|
||||
detail::Adder<T, Data>::add(this->data(place), columns, num_args, row_begin, row_end, flags, null_map);
|
||||
}
|
||||
|
||||
void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena *) const override
|
||||
@ -252,6 +418,16 @@ public:
|
||||
this->data(place).set.merge(this->data(rhs).set);
|
||||
}
|
||||
|
||||
bool isAbleToParallelizeMerge() const override { return is_able_to_parallelize_merge; }
|
||||
|
||||
void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, ThreadPool & thread_pool, Arena *) const override
|
||||
{
|
||||
if constexpr (is_able_to_parallelize_merge)
|
||||
this->data(place).set.merge(this->data(rhs).set, &thread_pool);
|
||||
else
|
||||
this->data(place).set.merge(this->data(rhs).set);
|
||||
}
|
||||
|
||||
void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf, std::optional<size_t> /* version */) const override
|
||||
{
|
||||
this->data(place).set.write(buf);
|
||||
@ -273,15 +449,20 @@ public:
|
||||
* You can pass multiple arguments as is; You can also pass one argument - a tuple.
|
||||
* But (for the possibility of efficient implementation), you can not pass several arguments, among which there are tuples.
|
||||
*/
|
||||
template <typename Data, bool is_exact, bool argument_is_tuple>
|
||||
class AggregateFunctionUniqVariadic final : public IAggregateFunctionDataHelper<Data, AggregateFunctionUniqVariadic<Data, is_exact, argument_is_tuple>>
|
||||
template <typename Data>
|
||||
class AggregateFunctionUniqVariadic final : public IAggregateFunctionDataHelper<Data, AggregateFunctionUniqVariadic<Data>>
|
||||
{
|
||||
private:
|
||||
using T = typename Data::Set::value_type;
|
||||
|
||||
static constexpr size_t is_able_to_parallelize_merge = Data::is_able_to_parallelize_merge;
|
||||
static constexpr size_t argument_is_tuple = Data::argument_is_tuple;
|
||||
|
||||
size_t num_args = 0;
|
||||
|
||||
public:
|
||||
AggregateFunctionUniqVariadic(const DataTypes & arguments)
|
||||
: IAggregateFunctionDataHelper<Data, AggregateFunctionUniqVariadic<Data, is_exact, argument_is_tuple>>(arguments, {})
|
||||
explicit AggregateFunctionUniqVariadic(const DataTypes & arguments)
|
||||
: IAggregateFunctionDataHelper<Data, AggregateFunctionUniqVariadic<Data>>(arguments, {})
|
||||
{
|
||||
if (argument_is_tuple)
|
||||
num_args = typeid_cast<const DataTypeTuple &>(*arguments[0]).getElements().size();
|
||||
@ -300,8 +481,34 @@ public:
|
||||
|
||||
void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena *) const override
|
||||
{
|
||||
this->data(place).set.insert(typename Data::Set::value_type(
|
||||
UniqVariadicHash<is_exact, argument_is_tuple>::apply(num_args, columns, row_num)));
|
||||
detail::Adder<T, Data>::add(this->data(place), columns, num_args, row_num);
|
||||
}
|
||||
|
||||
void addBatchSinglePlace(
|
||||
size_t row_begin, size_t row_end, AggregateDataPtr __restrict place, const IColumn ** columns, Arena *, ssize_t if_argument_pos)
|
||||
const override
|
||||
{
|
||||
const char8_t * flags = nullptr;
|
||||
if (if_argument_pos >= 0)
|
||||
flags = assert_cast<const ColumnUInt8 &>(*columns[if_argument_pos]).getData().data();
|
||||
|
||||
detail::Adder<T, Data>::add(this->data(place), columns, num_args, row_begin, row_end, flags, nullptr /* null_map */);
|
||||
}
|
||||
|
||||
void addBatchSinglePlaceNotNull(
|
||||
size_t row_begin,
|
||||
size_t row_end,
|
||||
AggregateDataPtr __restrict place,
|
||||
const IColumn ** columns,
|
||||
const UInt8 * null_map,
|
||||
Arena *,
|
||||
ssize_t if_argument_pos) const override
|
||||
{
|
||||
const char8_t * flags = nullptr;
|
||||
if (if_argument_pos >= 0)
|
||||
flags = assert_cast<const ColumnUInt8 &>(*columns[if_argument_pos]).getData().data();
|
||||
|
||||
detail::Adder<T, Data>::add(this->data(place), columns, num_args, row_begin, row_end, flags, null_map);
|
||||
}
|
||||
|
||||
void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena *) const override
|
||||
@ -309,6 +516,16 @@ public:
|
||||
this->data(place).set.merge(this->data(rhs).set);
|
||||
}
|
||||
|
||||
bool isAbleToParallelizeMerge() const override { return is_able_to_parallelize_merge; }
|
||||
|
||||
void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, ThreadPool & thread_pool, Arena *) const override
|
||||
{
|
||||
if constexpr (is_able_to_parallelize_merge)
|
||||
this->data(place).set.merge(this->data(rhs).set, &thread_pool);
|
||||
else
|
||||
this->data(place).set.merge(this->data(rhs).set);
|
||||
}
|
||||
|
||||
void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf, std::optional<size_t> /* version */) const override
|
||||
{
|
||||
this->data(place).set.write(buf);
|
||||
|
@ -74,6 +74,19 @@ static IAggregateFunction * createWithNumericType(const IDataType & argument_typ
|
||||
return nullptr;
|
||||
}
|
||||
|
||||
template <template <typename, typename> class AggregateFunctionTemplate, template <typename, bool> class Data, bool bool_param, typename... TArgs>
|
||||
static IAggregateFunction * createWithNumericType(const IDataType & argument_type, TArgs && ... args)
|
||||
{
|
||||
WhichDataType which(argument_type);
|
||||
#define DISPATCH(TYPE) \
|
||||
if (which.idx == TypeIndex::TYPE) return new AggregateFunctionTemplate<TYPE, Data<TYPE, bool_param>>(std::forward<TArgs>(args)...); /// NOLINT
|
||||
FOR_NUMERIC_TYPES(DISPATCH)
|
||||
#undef DISPATCH
|
||||
if (which.idx == TypeIndex::Enum8) return new AggregateFunctionTemplate<Int8, Data<Int8, bool_param>>(std::forward<TArgs>(args)...);
|
||||
if (which.idx == TypeIndex::Enum16) return new AggregateFunctionTemplate<Int16, Data<Int16, bool_param>>(std::forward<TArgs>(args)...);
|
||||
return nullptr;
|
||||
}
|
||||
|
||||
template <template <typename, typename> class AggregateFunctionTemplate, template <typename> class Data, typename... TArgs>
|
||||
static IAggregateFunction * createWithUnsignedIntegerType(const IDataType & argument_type, TArgs && ... args)
|
||||
{
|
||||
|
@ -1,14 +1,15 @@
|
||||
#pragma once
|
||||
|
||||
#include <Columns/ColumnSparse.h>
|
||||
#include <Columns/ColumnTuple.h>
|
||||
#include <Columns/ColumnsNumber.h>
|
||||
#include <Columns/ColumnSparse.h>
|
||||
#include <Core/Block.h>
|
||||
#include <Core/ColumnNumbers.h>
|
||||
#include <Core/Field.h>
|
||||
#include <Interpreters/Context_fwd.h>
|
||||
#include <Common/Exception.h>
|
||||
#include <base/types.h>
|
||||
#include <Common/Exception.h>
|
||||
#include <Common/ThreadPool.h>
|
||||
|
||||
#include "config.h"
|
||||
|
||||
@ -147,6 +148,16 @@ public:
|
||||
/// Merges state (on which place points to) with other state of current aggregation function.
|
||||
virtual void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena * arena) const = 0;
|
||||
|
||||
/// Tells if merge() with thread pool parameter could be used.
|
||||
virtual bool isAbleToParallelizeMerge() const { return false; }
|
||||
|
||||
/// Should be used only if isAbleToParallelizeMerge() returned true.
|
||||
virtual void
|
||||
merge(AggregateDataPtr __restrict /*place*/, ConstAggregateDataPtr /*rhs*/, ThreadPool & /*thread_pool*/, Arena * /*arena*/) const
|
||||
{
|
||||
throw Exception(ErrorCodes::NOT_IMPLEMENTED, "merge() with thread pool parameter isn't implemented for {} ", getName());
|
||||
}
|
||||
|
||||
/// Serializes state (to transmit it over the network, for example).
|
||||
virtual void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf, std::optional<size_t> version = std::nullopt) const = 0; /// NOLINT
|
||||
|
||||
|
112
src/AggregateFunctions/UniqExactSet.h
Normal file
@@ -0,0 +1,112 @@
#pragma once

#include <Common/CurrentThread.h>
#include <Common/HashTable/HashSet.h>
#include <Common/ThreadPool.h>
#include <Common/setThreadName.h>


namespace DB
{

template <typename SingleLevelSet, typename TwoLevelSet>
class UniqExactSet
{
    static_assert(std::is_same_v<typename SingleLevelSet::value_type, typename TwoLevelSet::value_type>);

public:
    using value_type = typename SingleLevelSet::value_type;

    template <typename Arg, bool use_single_level_hash_table = true>
    auto ALWAYS_INLINE insert(Arg && arg)
    {
        if constexpr (use_single_level_hash_table)
            asSingleLevel().insert(std::forward<Arg>(arg));
        else
            asTwoLevel().insert(std::forward<Arg>(arg));
    }

    auto merge(const UniqExactSet & other, ThreadPool * thread_pool = nullptr)
    {
        if (isSingleLevel() && other.isTwoLevel())
            convertToTwoLevel();

        if (isSingleLevel())
        {
            asSingleLevel().merge(other.asSingleLevel());
        }
        else
        {
            auto & lhs = asTwoLevel();
            const auto rhs_ptr = other.getTwoLevelSet();
            const auto & rhs = *rhs_ptr;
            if (!thread_pool)
            {
                for (size_t i = 0; i < rhs.NUM_BUCKETS; ++i)
                    lhs.impls[i].merge(rhs.impls[i]);
            }
            else
            {
                auto next_bucket_to_merge = std::make_shared<std::atomic_uint32_t>(0);

                auto thread_func = [&lhs, &rhs, next_bucket_to_merge, thread_group = CurrentThread::getGroup()]()
                {
                    if (thread_group)
                        CurrentThread::attachToIfDetached(thread_group);
                    setThreadName("UniqExactMerger");

                    while (true)
                    {
                        const auto bucket = next_bucket_to_merge->fetch_add(1);
                        if (bucket >= rhs.NUM_BUCKETS)
                            return;
                        lhs.impls[bucket].merge(rhs.impls[bucket]);
                    }
                };

                for (size_t i = 0; i < std::min<size_t>(thread_pool->getMaxThreads(), rhs.NUM_BUCKETS); ++i)
                    thread_pool->scheduleOrThrowOnError(thread_func);
                thread_pool->wait();
            }
        }
    }

    void read(ReadBuffer & in) { asSingleLevel().read(in); }

    void write(WriteBuffer & out) const
    {
        if (isSingleLevel())
            asSingleLevel().write(out);
        else
            /// We have to preserve compatibility with the old implementation that used only single level hash sets.
            asTwoLevel().writeAsSingleLevel(out);
    }

    size_t size() const { return isSingleLevel() ? asSingleLevel().size() : asTwoLevel().size(); }

    /// To convert set to two level before merging (we cannot just call convertToTwoLevel() on right hand side set, because it is declared const).
    std::shared_ptr<TwoLevelSet> getTwoLevelSet() const
    {
        return two_level_set ? two_level_set : std::make_shared<TwoLevelSet>(asSingleLevel());
    }

    void convertToTwoLevel()
    {
        two_level_set = getTwoLevelSet();
        single_level_set.clear();
    }

    bool isSingleLevel() const { return !two_level_set; }
    bool isTwoLevel() const { return !!two_level_set; }

private:
    SingleLevelSet & asSingleLevel() { return single_level_set; }
    const SingleLevelSet & asSingleLevel() const { return single_level_set; }

    TwoLevelSet & asTwoLevel() { return *two_level_set; }
    const TwoLevelSet & asTwoLevel() const { return *two_level_set; }

    SingleLevelSet single_level_set;
    std::shared_ptr<TwoLevelSet> two_level_set;
};
}
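A minimal usage sketch of the new file above (not part of the commit): accumulate into the flat set, switch to the bucketed two-level set once it grows past the same 100'000-element threshold the aggregate function uses, then merge another set bucket by bucket, optionally across a thread pool. The function and variable names (sketch, keys, rhs, pool) are hypothetical; the types come from the ClickHouse headers shown in this diff.

#include <vector>
#include <AggregateFunctions/UniqExactSet.h>
#include <Common/HashTable/Hash.h>
#include <Common/HashTable/HashSet.h>
#include <Common/ThreadPool.h>

using Key = UInt64;
using SingleLevel = HashSet<Key, HashCRC32<Key>>;
using TwoLevel = TwoLevelHashSet<Key, HashCRC32<Key>>;
using Set = DB::UniqExactSet<SingleLevel, TwoLevel>;

/// Hypothetical driver: fill a set, convert it once it gets large, then merge another set into it.
void sketch(const std::vector<Key> & keys, const Set & rhs, ThreadPool * pool)
{
    Set set;
    for (const auto & key : keys)
        set.insert(key);                  /// starts out as a single-level hash set

    if (set.isSingleLevel() && set.size() > 100'000)
        set.convertToTwoLevel();          /// same threshold as in the Adder code above

    set.merge(rhs, pool);                 /// with a pool, buckets are merged by several threads
}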
@ -329,7 +329,7 @@ public:
|
||||
free();
|
||||
}
|
||||
|
||||
void insert(Value x)
|
||||
void ALWAYS_INLINE insert(Value x)
|
||||
{
|
||||
HashValue hash_value = hash(x);
|
||||
if (!good(hash_value))
|
||||
|
@ -1517,6 +1517,7 @@ void QueryAnalyzer::collectScopeValidIdentifiersForTypoCorrection(
|
||||
{
|
||||
for (const auto & [name, expression] : scope.alias_name_to_expression_node)
|
||||
{
|
||||
assert(expression);
|
||||
auto expression_identifier = Identifier(name);
|
||||
valid_identifiers_result.insert(expression_identifier);
|
||||
|
||||
@ -2170,6 +2171,18 @@ QueryTreeNodePtr QueryAnalyzer::tryResolveIdentifierFromAliases(const Identifier
|
||||
auto & alias_identifier_node = it->second->as<IdentifierNode &>();
|
||||
auto identifier = alias_identifier_node.getIdentifier();
|
||||
auto lookup_result = tryResolveIdentifier(IdentifierLookup{identifier, identifier_lookup.lookup_context}, scope, identifier_resolve_settings);
|
||||
if (!lookup_result.isResolved())
|
||||
{
|
||||
std::unordered_set<Identifier> valid_identifiers;
|
||||
collectScopeWithParentScopesValidIdentifiersForTypoCorrection(identifier, scope, true, false, false, valid_identifiers);
|
||||
|
||||
auto hints = collectIdentifierTypoHints(identifier, valid_identifiers);
|
||||
throw Exception(ErrorCodes::UNKNOWN_IDENTIFIER, "Unknown {} identifier '{}' in scope {}{}",
|
||||
toStringLowercase(IdentifierLookupContext::EXPRESSION),
|
||||
identifier.getFullName(),
|
||||
scope.scope_node->formatASTForErrorMessage(),
|
||||
getHintsErrorMessageSuffix(hints));
|
||||
}
|
||||
it->second = lookup_result.resolved_identifier;
|
||||
|
||||
/** During collection of aliases if node is identifier and has alias, we cannot say if it is
|
||||
|
@ -152,16 +152,16 @@ MutableColumnPtr ColumnAggregateFunction::convertToValues(MutableColumnPtr colum
|
||||
/// If there are references to states in final column, we must hold their ownership
|
||||
/// by holding arenas and source.
|
||||
|
||||
auto callback = [&](auto & subcolumn)
|
||||
auto callback = [&](IColumn & subcolumn)
|
||||
{
|
||||
if (auto * aggregate_subcolumn = typeid_cast<ColumnAggregateFunction *>(subcolumn.get()))
|
||||
if (auto * aggregate_subcolumn = typeid_cast<ColumnAggregateFunction *>(&subcolumn))
|
||||
{
|
||||
aggregate_subcolumn->foreign_arenas = concatArenas(column_aggregate_func.foreign_arenas, column_aggregate_func.my_arena);
|
||||
aggregate_subcolumn->src = column_aggregate_func.getPtr();
|
||||
}
|
||||
};
|
||||
|
||||
callback(res);
|
||||
callback(*res);
|
||||
res->forEachSubcolumnRecursively(callback);
|
||||
|
||||
for (auto * val : data)
|
||||
|
@ -151,17 +151,17 @@ public:
|
||||
|
||||
ColumnPtr compress() const override;
|
||||
|
||||
void forEachSubcolumn(ColumnCallback callback) override
|
||||
void forEachSubcolumn(ColumnCallback callback) const override
|
||||
{
|
||||
callback(offsets);
|
||||
callback(data);
|
||||
}
|
||||
|
||||
void forEachSubcolumnRecursively(ColumnCallback callback) override
|
||||
void forEachSubcolumnRecursively(RecursiveColumnCallback callback) const override
|
||||
{
|
||||
callback(offsets);
|
||||
callback(*offsets);
|
||||
offsets->forEachSubcolumnRecursively(callback);
|
||||
callback(data);
|
||||
callback(*data);
|
||||
data->forEachSubcolumnRecursively(callback);
|
||||
}
|
||||
|
||||
|
@ -230,14 +230,14 @@ public:
|
||||
data->getExtremes(min, max);
|
||||
}
|
||||
|
||||
void forEachSubcolumn(ColumnCallback callback) override
|
||||
void forEachSubcolumn(ColumnCallback callback) const override
|
||||
{
|
||||
callback(data);
|
||||
}
|
||||
|
||||
void forEachSubcolumnRecursively(ColumnCallback callback) override
|
||||
void forEachSubcolumnRecursively(RecursiveColumnCallback callback) const override
|
||||
{
|
||||
callback(data);
|
||||
callback(*data);
|
||||
data->forEachSubcolumnRecursively(callback);
|
||||
}
|
||||
|
||||
|
@ -164,7 +164,7 @@ public:
|
||||
size_t byteSizeAt(size_t n) const override { return getDictionary().byteSizeAt(getIndexes().getUInt(n)); }
|
||||
size_t allocatedBytes() const override { return idx.getPositions()->allocatedBytes() + getDictionary().allocatedBytes(); }
|
||||
|
||||
void forEachSubcolumn(ColumnCallback callback) override
|
||||
void forEachSubcolumn(ColumnCallback callback) const override
|
||||
{
|
||||
callback(idx.getPositionsPtr());
|
||||
|
||||
@ -173,15 +173,15 @@ public:
|
||||
callback(dictionary.getColumnUniquePtr());
|
||||
}
|
||||
|
||||
void forEachSubcolumnRecursively(ColumnCallback callback) override
|
||||
void forEachSubcolumnRecursively(RecursiveColumnCallback callback) const override
|
||||
{
|
||||
callback(idx.getPositionsPtr());
|
||||
callback(*idx.getPositionsPtr());
|
||||
idx.getPositionsPtr()->forEachSubcolumnRecursively(callback);
|
||||
|
||||
/// Column doesn't own dictionary if it's shared.
|
||||
if (!dictionary.isShared())
|
||||
{
|
||||
callback(dictionary.getColumnUniquePtr());
|
||||
callback(*dictionary.getColumnUniquePtr());
|
||||
dictionary.getColumnUniquePtr()->forEachSubcolumnRecursively(callback);
|
||||
}
|
||||
}
|
||||
@ -278,6 +278,7 @@ public:
|
||||
|
||||
const ColumnPtr & getPositions() const { return positions; }
|
||||
WrappedPtr & getPositionsPtr() { return positions; }
|
||||
const WrappedPtr & getPositionsPtr() const { return positions; }
|
||||
size_t getPositionAt(size_t row) const;
|
||||
void insertPosition(UInt64 position);
|
||||
void insertPositionsRange(const IColumn & column, UInt64 offset, UInt64 limit);
|
||||
|
@ -273,14 +273,14 @@ void ColumnMap::getExtremes(Field & min, Field & max) const
|
||||
max = std::move(map_max_value);
|
||||
}
|
||||
|
||||
void ColumnMap::forEachSubcolumn(ColumnCallback callback)
|
||||
void ColumnMap::forEachSubcolumn(ColumnCallback callback) const
|
||||
{
|
||||
callback(nested);
|
||||
}
|
||||
|
||||
void ColumnMap::forEachSubcolumnRecursively(ColumnCallback callback)
|
||||
void ColumnMap::forEachSubcolumnRecursively(RecursiveColumnCallback callback) const
|
||||
{
|
||||
callback(nested);
|
||||
callback(*nested);
|
||||
nested->forEachSubcolumnRecursively(callback);
|
||||
}
|
||||
|
||||
|
@ -88,8 +88,8 @@ public:
|
||||
size_t byteSizeAt(size_t n) const override;
|
||||
size_t allocatedBytes() const override;
|
||||
void protect() override;
|
||||
void forEachSubcolumn(ColumnCallback callback) override;
|
||||
void forEachSubcolumnRecursively(ColumnCallback callback) override;
|
||||
void forEachSubcolumn(ColumnCallback callback) const override;
|
||||
void forEachSubcolumnRecursively(RecursiveColumnCallback callback) const override;
|
||||
bool structureEquals(const IColumn & rhs) const override;
|
||||
double getRatioOfDefaultRows(double sample_ratio) const override;
|
||||
void getIndicesOfNonDefaultRows(Offsets & indices, size_t from, size_t limit) const override;
|
||||
|
@ -130,17 +130,17 @@ public:
|
||||
|
||||
ColumnPtr compress() const override;
|
||||
|
||||
void forEachSubcolumn(ColumnCallback callback) override
|
||||
void forEachSubcolumn(ColumnCallback callback) const override
|
||||
{
|
||||
callback(nested_column);
|
||||
callback(null_map);
|
||||
}
|
||||
|
||||
void forEachSubcolumnRecursively(ColumnCallback callback) override
|
||||
void forEachSubcolumnRecursively(RecursiveColumnCallback callback) const override
|
||||
{
|
||||
callback(nested_column);
|
||||
callback(*nested_column);
|
||||
nested_column->forEachSubcolumnRecursively(callback);
|
||||
callback(null_map);
|
||||
callback(*null_map);
|
||||
null_map->forEachSubcolumnRecursively(callback);
|
||||
}
|
||||
|
||||
|
@ -664,20 +664,20 @@ size_t ColumnObject::allocatedBytes() const
|
||||
return res;
|
||||
}
|
||||
|
||||
void ColumnObject::forEachSubcolumn(ColumnCallback callback)
|
||||
void ColumnObject::forEachSubcolumn(ColumnCallback callback) const
|
||||
{
|
||||
for (auto & entry : subcolumns)
|
||||
for (auto & part : entry->data.data)
|
||||
for (const auto & entry : subcolumns)
|
||||
for (const auto & part : entry->data.data)
|
||||
callback(part);
|
||||
}
|
||||
|
||||
void ColumnObject::forEachSubcolumnRecursively(ColumnCallback callback)
|
||||
void ColumnObject::forEachSubcolumnRecursively(RecursiveColumnCallback callback) const
|
||||
{
|
||||
for (auto & entry : subcolumns)
|
||||
for (const auto & entry : subcolumns)
|
||||
{
|
||||
for (auto & part : entry->data.data)
|
||||
for (const auto & part : entry->data.data)
|
||||
{
|
||||
callback(part);
|
||||
callback(*part);
|
||||
part->forEachSubcolumnRecursively(callback);
|
||||
}
|
||||
}
|
||||
|
@ -206,8 +206,8 @@ public:
|
||||
size_t size() const override;
|
||||
size_t byteSize() const override;
|
||||
size_t allocatedBytes() const override;
|
||||
void forEachSubcolumn(ColumnCallback callback) override;
|
||||
void forEachSubcolumnRecursively(ColumnCallback callback) override;
|
||||
void forEachSubcolumn(ColumnCallback callback) const override;
|
||||
void forEachSubcolumnRecursively(RecursiveColumnCallback callback) const override;
|
||||
void insert(const Field & field) override;
|
||||
void insertDefault() override;
|
||||
void insertFrom(const IColumn & src, size_t n) override;
|
||||
|
@ -744,17 +744,17 @@ bool ColumnSparse::structureEquals(const IColumn & rhs) const
|
||||
return false;
|
||||
}
|
||||
|
||||
void ColumnSparse::forEachSubcolumn(ColumnCallback callback)
|
||||
void ColumnSparse::forEachSubcolumn(ColumnCallback callback) const
|
||||
{
|
||||
callback(values);
|
||||
callback(offsets);
|
||||
}
|
||||
|
||||
void ColumnSparse::forEachSubcolumnRecursively(ColumnCallback callback)
|
||||
void ColumnSparse::forEachSubcolumnRecursively(RecursiveColumnCallback callback) const
|
||||
{
|
||||
callback(values);
|
||||
callback(*values);
|
||||
values->forEachSubcolumnRecursively(callback);
|
||||
callback(offsets);
|
||||
callback(*offsets);
|
||||
offsets->forEachSubcolumnRecursively(callback);
|
||||
}
|
||||
|
||||
|
@ -139,8 +139,8 @@ public:
|
||||
|
||||
ColumnPtr compress() const override;
|
||||
|
||||
void forEachSubcolumn(ColumnCallback callback) override;
|
||||
void forEachSubcolumnRecursively(ColumnCallback callback) override;
|
||||
void forEachSubcolumn(ColumnCallback callback) const override;
|
||||
void forEachSubcolumnRecursively(RecursiveColumnCallback callback) const override;
|
||||
|
||||
bool structureEquals(const IColumn & rhs) const override;
|
||||
|
||||
|
@ -495,17 +495,17 @@ void ColumnTuple::getExtremes(Field & min, Field & max) const
|
||||
max = max_tuple;
|
||||
}
|
||||
|
||||
void ColumnTuple::forEachSubcolumn(ColumnCallback callback)
|
||||
void ColumnTuple::forEachSubcolumn(ColumnCallback callback) const
|
||||
{
|
||||
for (auto & column : columns)
|
||||
for (const auto & column : columns)
|
||||
callback(column);
|
||||
}
|
||||
|
||||
void ColumnTuple::forEachSubcolumnRecursively(ColumnCallback callback)
|
||||
void ColumnTuple::forEachSubcolumnRecursively(RecursiveColumnCallback callback) const
|
||||
{
|
||||
for (auto & column : columns)
|
||||
for (const auto & column : columns)
|
||||
{
|
||||
callback(column);
|
||||
callback(*column);
|
||||
column->forEachSubcolumnRecursively(callback);
|
||||
}
|
||||
}
|
||||
|
@ -96,8 +96,8 @@ public:
|
||||
size_t byteSizeAt(size_t n) const override;
|
||||
size_t allocatedBytes() const override;
|
||||
void protect() override;
|
||||
void forEachSubcolumn(ColumnCallback callback) override;
|
||||
void forEachSubcolumnRecursively(ColumnCallback callback) override;
|
||||
void forEachSubcolumn(ColumnCallback callback) const override;
|
||||
void forEachSubcolumnRecursively(RecursiveColumnCallback callback) const override;
|
||||
bool structureEquals(const IColumn & rhs) const override;
|
||||
bool isCollationSupported() const override;
|
||||
ColumnPtr compress() const override;
|
||||
|
@ -105,7 +105,13 @@ public:
|
||||
return column_holder->allocatedBytes() + reverse_index.allocatedBytes()
|
||||
+ (nested_null_mask ? nested_null_mask->allocatedBytes() : 0);
|
||||
}
|
||||
void forEachSubcolumn(IColumn::ColumnCallback callback) override
|
||||
|
||||
void forEachSubcolumn(IColumn::ColumnCallback callback) const override
|
||||
{
|
||||
callback(column_holder);
|
||||
}
|
||||
|
||||
void forEachSubcolumn(IColumn::MutableColumnCallback callback) override
|
||||
{
|
||||
callback(column_holder);
|
||||
reverse_index.setColumn(getRawColumnPtr());
|
||||
@ -113,9 +119,15 @@ public:
|
||||
nested_column_nullable = ColumnNullable::create(column_holder, nested_null_mask);
|
||||
}
|
||||
|
||||
void forEachSubcolumnRecursively(IColumn::ColumnCallback callback) override
|
||||
void forEachSubcolumnRecursively(IColumn::RecursiveColumnCallback callback) const override
|
||||
{
|
||||
callback(column_holder);
|
||||
callback(*column_holder);
|
||||
column_holder->forEachSubcolumnRecursively(callback);
|
||||
}
|
||||
|
||||
void forEachSubcolumnRecursively(IColumn::RecursiveMutableColumnCallback callback) override
|
||||
{
|
||||
callback(*column_holder);
|
||||
column_holder->forEachSubcolumnRecursively(callback);
|
||||
reverse_index.setColumn(getRawColumnPtr());
|
||||
if (is_nullable)
|
||||
|
@@ -20,12 +20,10 @@ String IColumn::dumpStructure() const
    WriteBufferFromOwnString res;
    res << getFamilyName() << "(size = " << size();

    ColumnCallback callback = [&](ColumnPtr & subcolumn)
    forEachSubcolumn([&](const auto & subcolumn)
    {
        res << ", " << subcolumn->dumpStructure();
    };

    const_cast<IColumn*>(this)->forEachSubcolumn(callback);
    });

    res << ")";
    return res.str();
@@ -64,6 +62,22 @@ ColumnPtr IColumn::createWithOffsets(const Offsets & offsets, const Field & defa
    return res;
}

void IColumn::forEachSubcolumn(MutableColumnCallback callback)
{
    std::as_const(*this).forEachSubcolumn([&callback](const WrappedPtr & subcolumn)
    {
        callback(const_cast<WrappedPtr &>(subcolumn));
    });
}

void IColumn::forEachSubcolumnRecursively(RecursiveMutableColumnCallback callback)
{
    std::as_const(*this).forEachSubcolumnRecursively([&callback](const IColumn & subcolumn)
    {
        callback(const_cast<IColumn &>(subcolumn));
    });
}

bool isColumnNullable(const IColumn & column)
{
    return checkColumn<ColumnNullable>(column);
@@ -411,11 +411,22 @@

    /// If the column contains subcolumns (such as Array, Nullable, etc), do callback on them.
    /// Shallow: doesn't do recursive calls; don't do call for itself.
    using ColumnCallback = std::function<void(WrappedPtr&)>;
    virtual void forEachSubcolumn(ColumnCallback) {}

    using ColumnCallback = std::function<void(const WrappedPtr &)>;
    virtual void forEachSubcolumn(ColumnCallback) const {}

    using MutableColumnCallback = std::function<void(WrappedPtr &)>;
    virtual void forEachSubcolumn(MutableColumnCallback callback);

    /// Similar to forEachSubcolumn but it also do recursive calls.
    virtual void forEachSubcolumnRecursively(ColumnCallback) {}
    /// In recursive calls it's prohibited to replace pointers
    /// to subcolumns, so we use another callback function.

    using RecursiveColumnCallback = std::function<void(const IColumn &)>;
    virtual void forEachSubcolumnRecursively(RecursiveColumnCallback) const {}

    using RecursiveMutableColumnCallback = std::function<void(IColumn &)>;
    virtual void forEachSubcolumnRecursively(RecursiveMutableColumnCallback callback);

    /// Columns have equal structure.
    /// If true - you can use "compareAt", "insertFrom", etc. methods.
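The hunk above makes the shallow and recursive traversals const by default. As a small, hypothetical illustration (not part of the commit) of how a caller can now inspect nested subcolumns of a const column without const_cast, collecting their family names the way dumpStructure() does:

#include <string>
#include <vector>
#include <Columns/IColumn.h>

/// Hypothetical helper: gather the family names of all nested subcolumns via the new const recursive callback.
std::vector<std::string> collectSubcolumnFamilyNames(const DB::IColumn & column)
{
    std::vector<std::string> names;
    column.forEachSubcolumnRecursively([&](const DB::IColumn & subcolumn)
    {
        names.emplace_back(subcolumn.getFamilyName());   /// read-only: the callback cannot replace subcolumn pointers
    });
    return names;
}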
27
src/Columns/tests/gtest_column_dump_structure.cpp
Normal file
@@ -0,0 +1,27 @@
#include <Columns/ColumnLowCardinality.h>
#include <DataTypes/DataTypeString.h>
#include <DataTypes/DataTypeLowCardinality.h>
#include <gtest/gtest.h>
#include <thread>

using namespace DB;

TEST(IColumn, dumpStructure)
{
    auto type_lc = std::make_shared<DataTypeLowCardinality>(std::make_shared<DataTypeString>());
    ColumnPtr column_lc = type_lc->createColumn();
    String expected_structure = "ColumnLowCardinality(size = 0, UInt8(size = 0), ColumnUnique(size = 1, String(size = 1)))";

    std::vector<std::thread> threads;
    for (size_t i = 0; i < 6; ++i)
    {
        threads.emplace_back([&]
        {
            for (size_t j = 0; j < 10000; ++j)
                ASSERT_EQ(column_lc->dumpStructure(), expected_structure);
        });
    }

    for (auto & t : threads)
        t.join();
}
@@ -141,7 +141,7 @@ public:
    /// Get piece of memory, without alignment.
    char * alloc(size_t size)
    {
        if (unlikely(head->pos + size > head->end))
        if (unlikely(static_cast<std::ptrdiff_t>(size) > head->end - head->pos))
            addMemoryChunk(size);

        char * res = head->pos;
@@ -21,7 +21,12 @@ bool FieldVisitorSum::operator() (UInt64 & x) const

bool FieldVisitorSum::operator() (Float64 & x) const { x += rhs.get<Float64>(); return x != 0; }

bool FieldVisitorSum::operator() (Null &) const { throw Exception("Cannot sum Nulls", ErrorCodes::LOGICAL_ERROR); }
bool FieldVisitorSum::operator() (Null &) const
{
    /// Do not add anything
    return false;
}

bool FieldVisitorSum::operator() (String &) const { throw Exception("Cannot sum Strings", ErrorCodes::LOGICAL_ERROR); }
bool FieldVisitorSum::operator() (Array &) const { throw Exception("Cannot sum Arrays", ErrorCodes::LOGICAL_ERROR); }
bool FieldVisitorSum::operator() (Tuple &) const { throw Exception("Cannot sum Tuples", ErrorCodes::LOGICAL_ERROR); }
@ -3,6 +3,7 @@
|
||||
#include <Common/HashTable/Hash.h>
|
||||
#include <Common/HashTable/HashTable.h>
|
||||
#include <Common/HashTable/HashTableAllocator.h>
|
||||
#include <Common/HashTable/TwoLevelHashTable.h>
|
||||
|
||||
#include <IO/WriteBuffer.h>
|
||||
#include <IO/WriteHelpers.h>
|
||||
@ -10,6 +11,14 @@
|
||||
#include <IO/ReadHelpers.h>
|
||||
#include <IO/VarInt.h>
|
||||
|
||||
namespace DB
|
||||
{
|
||||
namespace ErrorCodes
|
||||
{
|
||||
extern const int LOGICAL_ERROR;
|
||||
}
|
||||
}
|
||||
|
||||
/** NOTE HashSet could only be used for memmoveable (position independent) types.
|
||||
* Example: std::string is not position independent in libstdc++ with C++11 ABI or in libc++.
|
||||
* Also, key must be of type, that zero bytes is compared equals to zero key.
|
||||
@ -64,6 +73,47 @@ public:
|
||||
};
|
||||
|
||||
|
||||
template <
|
||||
typename Key,
|
||||
typename TCell, /// Supposed to have no state (HashTableNoState)
|
||||
typename Hash = DefaultHash<Key>,
|
||||
typename Grower = TwoLevelHashTableGrower<>,
|
||||
typename Allocator = HashTableAllocator>
|
||||
class TwoLevelHashSetTable
|
||||
: public TwoLevelHashTable<Key, TCell, Hash, Grower, Allocator, HashSetTable<Key, TCell, Hash, Grower, Allocator>>
|
||||
{
|
||||
public:
|
||||
using Self = TwoLevelHashSetTable;
|
||||
using Base = TwoLevelHashTable<Key, TCell, Hash, Grower, Allocator, HashSetTable<Key, TCell, Hash, Grower, Allocator>>;
|
||||
|
||||
using Base::Base;
|
||||
|
||||
/// Writes its content in a way that it will be correctly read by HashSetTable.
|
||||
/// Used by uniqExact to preserve backward compatibility.
|
||||
void writeAsSingleLevel(DB::WriteBuffer & wb) const
|
||||
{
|
||||
DB::writeVarUInt(this->size(), wb);
|
||||
|
||||
bool zero_written = false;
|
||||
for (size_t i = 0; i < Base::NUM_BUCKETS; ++i)
|
||||
{
|
||||
if (this->impls[i].hasZero())
|
||||
{
|
||||
if (zero_written)
|
||||
throw DB::Exception(DB::ErrorCodes::LOGICAL_ERROR, "No more than one zero value expected");
|
||||
this->impls[i].zeroValue()->write(wb);
|
||||
zero_written = true;
|
||||
}
|
||||
}
|
||||
|
||||
static constexpr HashTableNoState state;
|
||||
for (auto ptr = this->begin(); ptr != this->end(); ++ptr)
|
||||
if (!ptr.getPtr()->isZero(state))
|
||||
ptr.getPtr()->write(wb);
|
||||
}
|
||||
};
|
||||
|
||||
|
||||
template <typename Key, typename Hash, typename TState = HashTableNoState>
|
||||
struct HashSetCellWithSavedHash : public HashTableCell<Key, Hash, TState>
|
||||
{
|
||||
@ -89,6 +139,13 @@ template <
|
||||
typename Allocator = HashTableAllocator>
|
||||
using HashSet = HashSetTable<Key, HashTableCell<Key, Hash>, Hash, Grower, Allocator>;
|
||||
|
||||
template <
|
||||
typename Key,
|
||||
typename Hash = DefaultHash<Key>,
|
||||
typename Grower = TwoLevelHashTableGrower<>,
|
||||
typename Allocator = HashTableAllocator>
|
||||
using TwoLevelHashSet = TwoLevelHashSetTable<Key, HashTableCell<Key, Hash>, Hash, Grower, Allocator>;
|
||||
|
||||
template <typename Key, typename Hash, size_t initial_size_degree>
|
||||
using HashSetWithStackMemory = HashSet<
|
||||
Key,
|
||||
|
@ -432,20 +432,12 @@ struct AllocatorBufferDeleter<true, Allocator, Cell>
|
||||
|
||||
|
||||
// The HashTable
|
||||
template
|
||||
<
|
||||
typename Key,
|
||||
typename Cell,
|
||||
typename Hash,
|
||||
typename Grower,
|
||||
typename Allocator
|
||||
>
|
||||
class HashTable :
|
||||
private boost::noncopyable,
|
||||
protected Hash,
|
||||
protected Allocator,
|
||||
protected Cell::State,
|
||||
protected ZeroValueStorage<Cell::need_zero_value_storage, Cell> /// empty base optimization
|
||||
template <typename Key, typename Cell, typename Hash, typename Grower, typename Allocator>
|
||||
class HashTable : private boost::noncopyable,
|
||||
protected Hash,
|
||||
protected Allocator,
|
||||
protected Cell::State,
|
||||
public ZeroValueStorage<Cell::need_zero_value_storage, Cell> /// empty base optimization
|
||||
{
|
||||
public:
|
||||
// If we use an allocator with inline memory, check that the initial
|
||||
|
@ -159,14 +159,16 @@ public:
|
||||
|
||||
class const_iterator /// NOLINT
|
||||
{
|
||||
Self * container{};
|
||||
const Self * container{};
|
||||
size_t bucket{};
|
||||
typename Impl::const_iterator current_it{};
|
||||
|
||||
friend class TwoLevelHashTable;
|
||||
|
||||
const_iterator(Self * container_, size_t bucket_, typename Impl::const_iterator current_it_)
|
||||
: container(container_), bucket(bucket_), current_it(current_it_) {}
|
||||
const_iterator(const Self * container_, size_t bucket_, typename Impl::const_iterator current_it_)
|
||||
: container(container_), bucket(bucket_), current_it(current_it_)
|
||||
{
|
||||
}
|
||||
|
||||
public:
|
||||
const_iterator() = default;
|
||||
|
@ -27,7 +27,7 @@ int main(int, char **)
|
||||
std::cerr << x.getValue() << std::endl;
|
||||
|
||||
DB::WriteBufferFromOwnString wb;
|
||||
cont.writeText(wb);
|
||||
cont.write(wb);
|
||||
|
||||
std::cerr << "dump: " << wb.str() << std::endl;
|
||||
}
|
||||
|
@ -15,6 +15,17 @@
|
||||
|
||||
using namespace DB;
|
||||
|
||||
namespace
|
||||
{
|
||||
std::vector<UInt64> getVectorWithNumbersUpToN(size_t n)
|
||||
{
|
||||
std::vector<UInt64> res(n);
|
||||
std::iota(res.begin(), res.end(), 0);
|
||||
return res;
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
|
||||
/// To test dump functionality without using other hashes that can change
|
||||
template <typename T>
|
||||
@ -371,3 +382,48 @@ TEST(HashTable, Resize)
|
||||
ASSERT_EQ(actual, expected);
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
using HashSetContent = std::vector<UInt64>;
|
||||
|
||||
class TwoLevelHashSetFixture : public ::testing::TestWithParam<HashSetContent>
|
||||
{
|
||||
};
|
||||
|
||||
|
||||
TEST_P(TwoLevelHashSetFixture, WriteAsSingleLevel)
|
||||
{
|
||||
using Key = UInt64;
|
||||
|
||||
{
|
||||
const auto & hash_set_content = GetParam();
|
||||
|
||||
TwoLevelHashSet<Key, HashCRC32<Key>> two_level;
|
||||
for (const auto & elem : hash_set_content)
|
||||
two_level.insert(elem);
|
||||
|
||||
WriteBufferFromOwnString wb;
|
||||
two_level.writeAsSingleLevel(wb);
|
||||
|
||||
ReadBufferFromString rb(wb.str());
|
||||
HashSet<Key, HashCRC32<Key>> single_level;
|
||||
single_level.read(rb);
|
||||
|
||||
EXPECT_EQ(single_level.size(), hash_set_content.size());
|
||||
for (const auto & elem : hash_set_content)
|
||||
EXPECT_NE(single_level.find(elem), nullptr);
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
INSTANTIATE_TEST_SUITE_P(
|
||||
TwoLevelHashSetTests,
|
||||
TwoLevelHashSetFixture,
|
||||
::testing::Values(
|
||||
HashSetContent{},
|
||||
getVectorWithNumbersUpToN(1),
|
||||
getVectorWithNumbersUpToN(100),
|
||||
getVectorWithNumbersUpToN(1000),
|
||||
getVectorWithNumbersUpToN(10000),
|
||||
getVectorWithNumbersUpToN(100000),
|
||||
getVectorWithNumbersUpToN(1000000)));
|
||||
|
@ -566,7 +566,8 @@ public:
|
||||
|
||||
DataTypePtr getReturnTypeImpl(const DataTypes & arguments) const override
|
||||
{
|
||||
if (!isString(arguments[0]))
|
||||
WhichDataType which(arguments[0]);
|
||||
if (!which.isStringOrFixedString())
|
||||
throw Exception("Illegal type " + arguments[0]->getName() + " of argument of function " + getName(),
|
||||
ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
|
||||
|
||||
@ -612,6 +613,39 @@ public:
|
||||
|
||||
return col_res;
|
||||
}
|
||||
else if (const ColumnFixedString * col_fix_string = checkAndGetColumn<ColumnFixedString>(column.get()))
|
||||
{
|
||||
auto col_res = ColumnString::create();
|
||||
|
||||
ColumnString::Chars & out_vec = col_res->getChars();
|
||||
ColumnString::Offsets & out_offsets = col_res->getOffsets();
|
||||
|
||||
const ColumnString::Chars & in_vec = col_fix_string->getChars();
|
||||
size_t n = col_fix_string->getN();
|
||||
|
||||
size_t size = col_fix_string->size();
|
||||
out_offsets.resize(size);
|
||||
out_vec.resize(in_vec.size() / word_size + size);
|
||||
|
||||
char * begin = reinterpret_cast<char *>(out_vec.data());
|
||||
char * pos = begin;
|
||||
size_t prev_offset = 0;
|
||||
|
||||
for (size_t i = 0; i < size; ++i)
|
||||
{
|
||||
size_t new_offset = prev_offset + n;
|
||||
|
||||
Impl::decode(reinterpret_cast<const char *>(&in_vec[prev_offset]), reinterpret_cast<const char *>(&in_vec[new_offset]), pos);
|
||||
|
||||
out_offsets[i] = pos - begin;
|
||||
|
||||
prev_offset = new_offset;
|
||||
}
|
||||
|
||||
out_vec.resize(pos - begin);
|
||||
|
||||
return col_res;
|
||||
}
|
||||
else
|
||||
{
|
||||
throw Exception("Illegal column " + arguments[0].column->getName()
|
||||
|
@ -20,17 +20,19 @@
|
||||
#include <Columns/ColumnArray.h>
|
||||
#include <Columns/ColumnTuple.h>
|
||||
|
||||
#include <DataTypes/Serializations/SerializationDecimal.h>
|
||||
#include <DataTypes/DataTypesNumber.h>
|
||||
#include <DataTypes/DataTypeLowCardinality.h>
|
||||
#include <DataTypes/DataTypeString.h>
|
||||
#include <DataTypes/DataTypesDecimal.h>
|
||||
#include <DataTypes/DataTypeArray.h>
|
||||
#include <DataTypes/DataTypeEnum.h>
|
||||
#include <DataTypes/DataTypeFactory.h>
|
||||
#include <DataTypes/DataTypeFixedString.h>
|
||||
#include <DataTypes/DataTypeLowCardinality.h>
|
||||
#include <DataTypes/DataTypeNothing.h>
|
||||
#include <DataTypes/DataTypeNullable.h>
|
||||
#include <DataTypes/DataTypeArray.h>
|
||||
#include <DataTypes/DataTypeString.h>
|
||||
#include <DataTypes/DataTypeTuple.h>
|
||||
#include <DataTypes/DataTypeUUID.h>
|
||||
#include <DataTypes/DataTypesDecimal.h>
|
||||
#include <DataTypes/DataTypesNumber.h>
|
||||
#include <DataTypes/Serializations/SerializationDecimal.h>
|
||||
|
||||
#include <Functions/FunctionFactory.h>
|
||||
#include <Functions/IFunction.h>
|
||||
@ -720,8 +722,16 @@ public:
|
||||
return false;
|
||||
}
|
||||
|
||||
auto & col_vec = assert_cast<ColumnVector<NumberType> &>(dest);
|
||||
col_vec.insertValue(value);
|
||||
if (dest.getDataType() == TypeIndex::LowCardinality)
|
||||
{
|
||||
ColumnLowCardinality & col_low = assert_cast<ColumnLowCardinality &>(dest);
|
||||
col_low.insertData(reinterpret_cast<const char *>(&value), sizeof(value));
|
||||
}
|
||||
else
|
||||
{
|
||||
auto & col_vec = assert_cast<ColumnVector<NumberType> &>(dest);
|
||||
col_vec.insertValue(value);
|
||||
}
|
||||
return true;
|
||||
}
|
||||
};
|
||||
@ -825,8 +835,17 @@ public:
|
||||
return JSONExtractRawImpl<JSONParser>::insertResultToColumn(dest, element, {});
|
||||
|
||||
auto str = element.getString();
|
||||
ColumnString & col_str = assert_cast<ColumnString &>(dest);
|
||||
col_str.insertData(str.data(), str.size());
|
||||
|
||||
if (dest.getDataType() == TypeIndex::LowCardinality)
|
||||
{
|
||||
ColumnLowCardinality & col_low = assert_cast<ColumnLowCardinality &>(dest);
|
||||
col_low.insertData(str.data(), str.size());
|
||||
}
|
||||
else
|
||||
{
|
||||
ColumnString & col_str = assert_cast<ColumnString &>(dest);
|
||||
col_str.insertData(str.data(), str.size());
|
||||
}
|
||||
return true;
|
||||
}
|
||||
};
|
||||
@ -855,25 +874,41 @@ struct JSONExtractTree
|
||||
}
|
||||
};
|
||||
|
||||
class LowCardinalityNode : public Node
|
||||
class LowCardinalityFixedStringNode : public Node
|
||||
{
|
||||
public:
|
||||
LowCardinalityNode(DataTypePtr dictionary_type_, std::unique_ptr<Node> impl_)
|
||||
: dictionary_type(dictionary_type_), impl(std::move(impl_)) {}
|
||||
explicit LowCardinalityFixedStringNode(const size_t fixed_length_) : fixed_length(fixed_length_) { }
|
||||
bool insertResultToColumn(IColumn & dest, const Element & element) override
|
||||
{
|
||||
auto from_col = dictionary_type->createColumn();
|
||||
if (impl->insertResultToColumn(*from_col, element))
|
||||
// If element is an object we delegate the insertion to JSONExtractRawImpl
|
||||
if (element.isObject())
|
||||
return JSONExtractRawImpl<JSONParser>::insertResultToLowCardinalityFixedStringColumn(dest, element, fixed_length);
|
||||
else if (!element.isString())
|
||||
return false;
|
||||
|
||||
auto str = element.getString();
|
||||
if (str.size() > fixed_length)
|
||||
return false;
|
||||
|
||||
// For the non-LowCardinality case of FixedString, the padding is done in the FixedString column implementation.
// In order to avoid having to pass the data to a FixedString column and read it back (which would slow down the execution),
// the data is padded here and written directly to the LowCardinality column.
|
||||
if (str.size() == fixed_length)
|
||||
{
|
||||
std::string_view value = from_col->getDataAt(0).toView();
|
||||
assert_cast<ColumnLowCardinality &>(dest).insertData(value.data(), value.size());
|
||||
return true;
|
||||
assert_cast<ColumnLowCardinality &>(dest).insertData(str.data(), str.size());
|
||||
}
|
||||
return false;
|
||||
else
|
||||
{
|
||||
String padded_str(str);
|
||||
padded_str.resize(fixed_length, '\0');
|
||||
|
||||
assert_cast<ColumnLowCardinality &>(dest).insertData(padded_str.data(), padded_str.size());
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
private:
|
||||
DataTypePtr dictionary_type;
|
||||
std::unique_ptr<Node> impl;
|
||||
const size_t fixed_length;
|
||||
};
|
||||
|
||||
class UUIDNode : public Node
|
||||
@ -885,7 +920,15 @@ struct JSONExtractTree
|
||||
return false;
|
||||
|
||||
auto uuid = parseFromString<UUID>(element.getString());
|
||||
assert_cast<ColumnUUID &>(dest).insert(uuid);
|
||||
if (dest.getDataType() == TypeIndex::LowCardinality)
|
||||
{
|
||||
ColumnLowCardinality & col_low = assert_cast<ColumnLowCardinality &>(dest);
|
||||
col_low.insertData(reinterpret_cast<const char *>(&uuid), sizeof(uuid));
|
||||
}
|
||||
else
|
||||
{
|
||||
assert_cast<ColumnUUID &>(dest).insert(uuid);
|
||||
}
|
||||
return true;
|
||||
}
|
||||
};
|
||||
@ -928,6 +971,7 @@ struct JSONExtractTree
|
||||
assert_cast<ColumnDecimal<DecimalType> &>(dest).insert(value);
|
||||
return true;
|
||||
}
|
||||
|
||||
private:
|
||||
DataTypePtr data_type;
|
||||
};
|
||||
@ -946,13 +990,18 @@ struct JSONExtractTree
|
||||
public:
|
||||
bool insertResultToColumn(IColumn & dest, const Element & element) override
|
||||
{
|
||||
if (!element.isString())
|
||||
if (element.isNull())
|
||||
return false;
|
||||
auto & col_str = assert_cast<ColumnFixedString &>(dest);
|
||||
|
||||
if (!element.isString())
|
||||
return JSONExtractRawImpl<JSONParser>::insertResultToFixedStringColumn(dest, element, {});
|
||||
|
||||
auto str = element.getString();
|
||||
auto & col_str = assert_cast<ColumnFixedString &>(dest);
|
||||
if (str.size() > col_str.getN())
|
||||
return false;
|
||||
col_str.insertData(str.data(), str.size());
|
||||
|
||||
return true;
|
||||
}
|
||||
};
|
||||
@ -1178,9 +1227,18 @@ struct JSONExtractTree
|
||||
case TypeIndex::UUID: return std::make_unique<UUIDNode>();
|
||||
case TypeIndex::LowCardinality:
|
||||
{
|
||||
// The LowCardinality case is handled in two different ways:
// For the FixedString type, a special class is implemented for inserting the data into the destination column,
// as the string length must be passed in order to check and pad the incoming data.
// For the rest of the LowCardinality types, the insertion is done in their corresponding class, adapting the data
// as needed for the insertData function of ColumnLowCardinality.
|
||||
auto dictionary_type = typeid_cast<const DataTypeLowCardinality *>(type.get())->getDictionaryType();
|
||||
auto impl = build(function_name, dictionary_type);
|
||||
return std::make_unique<LowCardinalityNode>(dictionary_type, std::move(impl));
|
||||
if ((*dictionary_type).getTypeId() == TypeIndex::FixedString)
|
||||
{
|
||||
auto fixed_length = typeid_cast<const DataTypeFixedString *>(dictionary_type.get())->getN();
|
||||
return std::make_unique<LowCardinalityFixedStringNode>(fixed_length);
|
||||
}
|
||||
return build(function_name, dictionary_type);
|
||||
}
|
||||
case TypeIndex::Decimal256: return std::make_unique<DecimalNode<Decimal256>>(type);
|
||||
case TypeIndex::Decimal128: return std::make_unique<DecimalNode<Decimal128>>(type);
|
||||
@ -1332,13 +1390,63 @@ public:
|
||||
|
||||
static bool insertResultToColumn(IColumn & dest, const Element & element, std::string_view)
|
||||
{
|
||||
ColumnString & col_str = assert_cast<ColumnString &>(dest);
|
||||
auto & chars = col_str.getChars();
|
||||
WriteBufferFromVector<ColumnString::Chars> buf(chars, AppendModeTag());
|
||||
if (dest.getDataType() == TypeIndex::LowCardinality)
|
||||
{
|
||||
ColumnString::Chars chars;
|
||||
WriteBufferFromVector<ColumnString::Chars> buf(chars, AppendModeTag());
|
||||
traverse(element, buf);
|
||||
buf.finalize();
|
||||
assert_cast<ColumnLowCardinality &>(dest).insertData(reinterpret_cast<const char *>(chars.data()), chars.size());
|
||||
}
|
||||
else
|
||||
{
|
||||
ColumnString & col_str = assert_cast<ColumnString &>(dest);
|
||||
auto & chars = col_str.getChars();
|
||||
WriteBufferFromVector<ColumnString::Chars> buf(chars, AppendModeTag());
|
||||
traverse(element, buf);
|
||||
buf.finalize();
|
||||
chars.push_back(0);
|
||||
col_str.getOffsets().push_back(chars.size());
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
// We use insertResultToFixedStringColumn in case we are inserting raw data in a FixedString column
|
||||
static bool insertResultToFixedStringColumn(IColumn & dest, const Element & element, std::string_view)
|
||||
{
|
||||
ColumnFixedString::Chars chars;
|
||||
WriteBufferFromVector<ColumnFixedString::Chars> buf(chars, AppendModeTag());
|
||||
traverse(element, buf);
|
||||
buf.finalize();
|
||||
|
||||
|
||||
auto & col_str = assert_cast<ColumnFixedString &>(dest);
|
||||
|
||||
if (chars.size() > col_str.getN())
|
||||
return false;
|
||||
|
||||
chars.resize_fill(col_str.getN());
|
||||
col_str.insertData(reinterpret_cast<const char *>(chars.data()), chars.size());
|
||||
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
// We use insertResultToLowCardinalityFixedStringColumn in case we are inserting raw data in a Low Cardinality FixedString column
|
||||
static bool insertResultToLowCardinalityFixedStringColumn(IColumn & dest, const Element & element, size_t fixed_length)
|
||||
{
|
||||
if (element.getObject().size() > fixed_length)
|
||||
return false;
|
||||
|
||||
ColumnFixedString::Chars chars;
|
||||
WriteBufferFromVector<ColumnFixedString::Chars> buf(chars, AppendModeTag());
|
||||
traverse(element, buf);
|
||||
buf.finalize();
|
||||
|
||||
if (chars.size() > fixed_length)
|
||||
return false;
|
||||
chars.resize_fill(fixed_length);
|
||||
assert_cast<ColumnLowCardinality &>(dest).insertData(reinterpret_cast<const char *>(chars.data()), chars.size());
|
||||
|
||||
return true;
|
||||
}
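Taken together, these changes let JSONExtract fill LowCardinality destination types directly. A minimal usage sketch (illustrative only; the JSON literal and tuple layout are made up, mirroring the tests added further below):

-- Extract JSON fields straight into LowCardinality columns
SELECT JSONExtract('{"a": "hi", "b": 42}',
    'Tuple(a LowCardinality(String), b LowCardinality(Int16))');
-- For LowCardinality(FixedString(N)), an input longer than N fails the length check above
-- and the element falls back to its default value.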
|
||||
|
||||
|
@ -34,7 +34,7 @@ private:
|
||||
|
||||
struct NameCanonicalRand
|
||||
{
|
||||
static constexpr auto name = "canonicalRand";
|
||||
static constexpr auto name = "randCanonical";
|
||||
};
|
||||
|
||||
class FunctionCanonicalRand : public FunctionRandomImpl<CanonicalRandImpl, Float64, NameCanonicalRand>
|
||||
@ -52,7 +52,7 @@ REGISTER_FUNCTION(CanonicalRand)
|
||||
The function generates pseudo-random results with independent and identically distributed values, uniformly distributed in [0, 1).
|
||||
Non-deterministic. Return type is Float64.
|
||||
)",
|
||||
Documentation::Examples{{"canonicalRand", "SELECT canonicalRand()"}},
|
||||
Documentation::Examples{{"randCanonical", "SELECT randCanonical()"}},
|
||||
Documentation::Categories{"Mathematical"}});
|
||||
}
|
||||
|
||||
|
113
src/Functions/factorial.cpp
Normal file
@ -0,0 +1,113 @@
|
||||
#include <Functions/FunctionFactory.h>
|
||||
#include <Functions/FunctionUnaryArithmetic.h>
|
||||
#include <DataTypes/NumberTraits.h>
|
||||
#include <Common/FieldVisitorConvertToNumber.h>
|
||||
|
||||
namespace DB
|
||||
{
|
||||
|
||||
namespace ErrorCodes
|
||||
{
|
||||
extern const int ILLEGAL_TYPE_OF_ARGUMENT;
|
||||
extern const int BAD_ARGUMENTS;
|
||||
}
|
||||
|
||||
template <typename A>
|
||||
struct FactorialImpl
|
||||
{
|
||||
using ResultType = UInt64;
|
||||
static const constexpr bool allow_decimal = false;
|
||||
static const constexpr bool allow_fixed_string = false;
|
||||
static const constexpr bool allow_string_integer = false;
|
||||
|
||||
static inline NO_SANITIZE_UNDEFINED ResultType apply(A a)
|
||||
{
|
||||
if constexpr (std::is_floating_point_v<A> || is_over_big_int<A>)
|
||||
throw Exception(
|
||||
ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT,
|
||||
"Illegal type of argument of function factorial, should not be floating point or big int");
|
||||
|
||||
if constexpr (is_integer<A>)
|
||||
{
|
||||
if (a > 20)
|
||||
throw Exception(ErrorCodes::BAD_ARGUMENTS, "The maximum value for the input argument of function factorial is 20");
|
||||
|
||||
if constexpr (is_unsigned_v<A>)
|
||||
return factorials[a];
|
||||
else if constexpr (is_signed_v<A>)
|
||||
return a >= 0 ? factorials[a] : 1;
|
||||
}
|
||||
}
|
||||
|
||||
#if USE_EMBEDDED_COMPILER
|
||||
static constexpr bool compilable = false; /// special type handling, some other time
|
||||
#endif
|
||||
|
||||
private:
|
||||
static const constexpr ResultType factorials[21]
|
||||
= {1,
|
||||
1,
|
||||
2,
|
||||
6,
|
||||
24,
|
||||
120,
|
||||
720,
|
||||
5040,
|
||||
40320,
|
||||
362880,
|
||||
3628800,
|
||||
39916800,
|
||||
479001600,
|
||||
6227020800L,
|
||||
87178291200L,
|
||||
1307674368000L,
|
||||
20922789888000L,
|
||||
355687428096000L,
|
||||
6402373705728000L,
|
||||
121645100408832000L,
|
||||
2432902008176640000L};
|
||||
};
|
||||
|
||||
struct NameFactorial { static constexpr auto name = "factorial"; };
|
||||
using FunctionFactorial = FunctionUnaryArithmetic<FactorialImpl, NameFactorial, false>;
|
||||
|
||||
template <> struct FunctionUnaryArithmeticMonotonicity<NameFactorial>
|
||||
{
|
||||
static bool has() { return true; }
|
||||
|
||||
static IFunction::Monotonicity get(const Field & left, const Field & right)
|
||||
{
|
||||
bool is_strict = false;
|
||||
if (!left.isNull() && !right.isNull())
|
||||
{
|
||||
auto left_value = applyVisitor(FieldVisitorConvertToNumber<Int128>(), left);
|
||||
auto right_value = applyVisitor(FieldVisitorConvertToNumber<Int128>(), right);
|
||||
if (1 <= left_value && left_value <= right_value && right_value <= 20)
|
||||
is_strict = true;
|
||||
}
|
||||
|
||||
return {
|
||||
.is_monotonic = true,
|
||||
.is_positive = true,
|
||||
.is_always_monotonic = true,
|
||||
.is_strict = is_strict,
|
||||
};
|
||||
}
|
||||
};
|
||||
|
||||
|
||||
REGISTER_FUNCTION(Factorial)
|
||||
{
|
||||
factory.registerFunction<FunctionFactorial>(
|
||||
{
|
||||
R"(
|
||||
Computes the factorial of an integer value. It works with any native integer type including UInt(8|16|32|64) and Int(8|16|32|64). The return type is UInt64.

The factorial of 0 is 1. Likewise, the factorial() function returns 1 for any negative value. The maximum positive value for the input argument is 20; a value of 21 or greater will cause an exception to be thrown.
|
||||
)",
|
||||
Documentation::Examples{{"factorial", "SELECT factorial(10)"}},
|
||||
Documentation::Categories{"Mathematical"}},
|
||||
FunctionFactory::CaseInsensitive);
|
||||
}
|
||||
|
||||
}
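A brief usage sketch based on the documented behavior above (return type UInt64, negative inputs return 1, arguments above 20 throw):

SELECT factorial(10);   -- 3628800
SELECT factorial(0);    -- 1
SELECT factorial(-3);   -- 1, negative values return 1
SELECT factorial(21);   -- throws BAD_ARGUMENTS, the maximum allowed argument is 20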
|
@ -182,7 +182,7 @@ REGISTER_FUNCTION(ModuloLegacy)
|
||||
|
||||
struct NamePositiveModulo
|
||||
{
|
||||
static constexpr auto name = "positive_modulo";
|
||||
static constexpr auto name = "positiveModulo";
|
||||
};
|
||||
using FunctionPositiveModulo = BinaryArithmeticOverloadResolver<PositiveModuloImpl, NamePositiveModulo, false>;
|
||||
|
||||
@ -191,11 +191,17 @@ REGISTER_FUNCTION(PositiveModulo)
|
||||
factory.registerFunction<FunctionPositiveModulo>(
|
||||
{
|
||||
R"(
|
||||
Calculates the remainder when dividing `a` by `b`. Similar to function `modulo` except that `positive_modulo` always return non-negative number.
|
||||
Calculates the remainder when dividing `a` by `b`. Similar to the function `modulo`, except that `positiveModulo` always returns a non-negative number.
|
||||
Returns the difference between `a` and the nearest integer not greater than `a` divisible by `b`.
|
||||
In other words, the function returns the modulus (modulo) in terms of modular arithmetic.
|
||||
)",
|
||||
Documentation::Examples{{"positive_modulo", "SELECT positive_modulo(-1000, 32);"}},
|
||||
Documentation::Examples{{"positiveModulo", "SELECT positiveModulo(-1, 10);"}},
|
||||
Documentation::Categories{"Arithmetic"}},
|
||||
FunctionFactory::CaseInsensitive);
|
||||
|
||||
factory.registerAlias("positive_modulo", "positiveModulo", FunctionFactory::CaseInsensitive);
|
||||
/// Compatibility with Spark:
|
||||
factory.registerAlias("pmod", "positiveModulo", FunctionFactory::CaseInsensitive);
|
||||
}
|
||||
|
||||
}
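A short sketch of the renamed function and its registered aliases:

SELECT positiveModulo(-1, 10);   -- 9, always non-negative
SELECT positive_modulo(-1, 10);  -- 9, kept as a case-insensitive alias
SELECT pmod(-1, 10);             -- 9, Spark-compatible alias
SELECT modulo(-1, 10);           -- -1, shown for comparison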
|
||||
|
@ -50,14 +50,15 @@ OutputBlockColumns prepareOutputBlockColumns(
|
||||
|
||||
if (aggregate_functions[i]->isState())
|
||||
{
|
||||
auto callback = [&](auto & subcolumn)
|
||||
auto callback = [&](IColumn & subcolumn)
|
||||
{
|
||||
/// The ColumnAggregateFunction column captures the shared ownership of the arena with aggregate function states.
|
||||
if (auto * column_aggregate_func = typeid_cast<ColumnAggregateFunction *>(subcolumn.get()))
|
||||
if (auto * column_aggregate_func = typeid_cast<ColumnAggregateFunction *>(&subcolumn))
|
||||
for (auto & pool : aggregates_pools)
|
||||
column_aggregate_func->addArena(pool);
|
||||
};
|
||||
callback(final_aggregate_columns[i]);
|
||||
|
||||
callback(*final_aggregate_columns[i]);
|
||||
final_aggregate_columns[i]->forEachSubcolumnRecursively(callback);
|
||||
}
|
||||
}
|
||||
|
@ -2508,6 +2508,8 @@ void NO_INLINE Aggregator::mergeDataOnlyExistingKeysImpl(
|
||||
void NO_INLINE Aggregator::mergeWithoutKeyDataImpl(
|
||||
ManyAggregatedDataVariants & non_empty_data) const
|
||||
{
|
||||
ThreadPool thread_pool{params.max_threads};
|
||||
|
||||
AggregatedDataVariantsPtr & res = non_empty_data[0];
|
||||
|
||||
/// We merge all aggregation results to the first.
|
||||
@ -2517,7 +2519,15 @@ void NO_INLINE Aggregator::mergeWithoutKeyDataImpl(
|
||||
AggregatedDataWithoutKey & current_data = non_empty_data[result_num]->without_key;
|
||||
|
||||
for (size_t i = 0; i < params.aggregates_size; ++i)
|
||||
aggregate_functions[i]->merge(res_data + offsets_of_aggregate_states[i], current_data + offsets_of_aggregate_states[i], res->aggregates_pool);
|
||||
if (aggregate_functions[i]->isAbleToParallelizeMerge())
|
||||
aggregate_functions[i]->merge(
|
||||
res_data + offsets_of_aggregate_states[i],
|
||||
current_data + offsets_of_aggregate_states[i],
|
||||
thread_pool,
|
||||
res->aggregates_pool);
|
||||
else
|
||||
aggregate_functions[i]->merge(
|
||||
res_data + offsets_of_aggregate_states[i], current_data + offsets_of_aggregate_states[i], res->aggregates_pool);
|
||||
|
||||
for (size_t i = 0; i < params.aggregates_size; ++i)
|
||||
aggregate_functions[i]->destroy(current_data + offsets_of_aggregate_states[i]);
|
||||
|
@ -63,6 +63,7 @@ ASTPtr ASTIdentifier::clone() const
|
||||
{
|
||||
auto ret = std::make_shared<ASTIdentifier>(*this);
|
||||
ret->semantic = std::make_shared<IdentifierSemanticImpl>(*ret->semantic);
|
||||
ret->cloneChildren();
|
||||
return ret;
|
||||
}
|
||||
|
||||
|
@ -2199,40 +2199,40 @@ std::vector<std::pair<const char *, Operator>> ParserExpressionImpl::operators_t
|
||||
{"AND", Operator("and", 4, 2, OperatorType::Mergeable)},
|
||||
{"BETWEEN", Operator("", 6, 0, OperatorType::StartBetween)},
|
||||
{"NOT BETWEEN", Operator("", 6, 0, OperatorType::StartNotBetween)},
|
||||
{"IS NULL", Operator("isNull", 8, 1, OperatorType::IsNull)},
|
||||
{"IS NOT NULL", Operator("isNotNull", 8, 1, OperatorType::IsNull)},
|
||||
{"==", Operator("equals", 9, 2, OperatorType::Comparison)},
|
||||
{"!=", Operator("notEquals", 9, 2, OperatorType::Comparison)},
|
||||
{"<>", Operator("notEquals", 9, 2, OperatorType::Comparison)},
|
||||
{"<=", Operator("lessOrEquals", 9, 2, OperatorType::Comparison)},
|
||||
{">=", Operator("greaterOrEquals", 9, 2, OperatorType::Comparison)},
|
||||
{"<", Operator("less", 9, 2, OperatorType::Comparison)},
|
||||
{">", Operator("greater", 9, 2, OperatorType::Comparison)},
|
||||
{"=", Operator("equals", 9, 2, OperatorType::Comparison)},
|
||||
{"LIKE", Operator("like", 9, 2)},
|
||||
{"ILIKE", Operator("ilike", 9, 2)},
|
||||
{"NOT LIKE", Operator("notLike", 9, 2)},
|
||||
{"NOT ILIKE", Operator("notILike", 9, 2)},
|
||||
{"IN", Operator("in", 9, 2)},
|
||||
{"NOT IN", Operator("notIn", 9, 2)},
|
||||
{"GLOBAL IN", Operator("globalIn", 9, 2)},
|
||||
{"GLOBAL NOT IN", Operator("globalNotIn", 9, 2)},
|
||||
{"||", Operator("concat", 10, 2, OperatorType::Mergeable)},
|
||||
{"+", Operator("plus", 11, 2)},
|
||||
{"-", Operator("minus", 11, 2)},
|
||||
{"*", Operator("multiply", 12, 2)},
|
||||
{"/", Operator("divide", 12, 2)},
|
||||
{"%", Operator("modulo", 12, 2)},
|
||||
{"MOD", Operator("modulo", 12, 2)},
|
||||
{"DIV", Operator("intDiv", 12, 2)},
|
||||
{".", Operator("tupleElement", 14, 2, OperatorType::TupleElement)},
|
||||
{"[", Operator("arrayElement", 14, 2, OperatorType::ArrayElement)},
|
||||
{"::", Operator("CAST", 14, 2, OperatorType::Cast)},
|
||||
{"==", Operator("equals", 8, 2, OperatorType::Comparison)},
|
||||
{"!=", Operator("notEquals", 8, 2, OperatorType::Comparison)},
|
||||
{"<>", Operator("notEquals", 8, 2, OperatorType::Comparison)},
|
||||
{"<=", Operator("lessOrEquals", 8, 2, OperatorType::Comparison)},
|
||||
{">=", Operator("greaterOrEquals", 8, 2, OperatorType::Comparison)},
|
||||
{"<", Operator("less", 8, 2, OperatorType::Comparison)},
|
||||
{">", Operator("greater", 8, 2, OperatorType::Comparison)},
|
||||
{"=", Operator("equals", 8, 2, OperatorType::Comparison)},
|
||||
{"LIKE", Operator("like", 8, 2)},
|
||||
{"ILIKE", Operator("ilike", 8, 2)},
|
||||
{"NOT LIKE", Operator("notLike", 8, 2)},
|
||||
{"NOT ILIKE", Operator("notILike", 8, 2)},
|
||||
{"IN", Operator("in", 8, 2)},
|
||||
{"NOT IN", Operator("notIn", 8, 2)},
|
||||
{"GLOBAL IN", Operator("globalIn", 8, 2)},
|
||||
{"GLOBAL NOT IN", Operator("globalNotIn", 8, 2)},
|
||||
{"||", Operator("concat", 9, 2, OperatorType::Mergeable)},
|
||||
{"+", Operator("plus", 10, 2)},
|
||||
{"-", Operator("minus", 10, 2)},
|
||||
{"*", Operator("multiply", 11, 2)},
|
||||
{"/", Operator("divide", 11, 2)},
|
||||
{"%", Operator("modulo", 11, 2)},
|
||||
{"MOD", Operator("modulo", 11, 2)},
|
||||
{"DIV", Operator("intDiv", 11, 2)},
|
||||
{".", Operator("tupleElement", 13, 2, OperatorType::TupleElement)},
|
||||
{"[", Operator("arrayElement", 13, 2, OperatorType::ArrayElement)},
|
||||
{"::", Operator("CAST", 13, 2, OperatorType::Cast)},
|
||||
{"IS NULL", Operator("isNull", 13, 1, OperatorType::IsNull)},
|
||||
{"IS NOT NULL", Operator("isNotNull", 13, 1, OperatorType::IsNull)},
|
||||
});
|
||||
|
||||
std::vector<std::pair<const char *, Operator>> ParserExpressionImpl::unary_operators_table({
|
||||
{"NOT", Operator("not", 5, 1)},
|
||||
{"-", Operator("negate", 13, 1)}
|
||||
{"-", Operator("negate", 12, 1)}
|
||||
});
|
||||
|
||||
Operator ParserExpressionImpl::finish_between_operator = Operator("", 7, 0, OperatorType::FinishBetween);
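To illustrate the renumbering: `IS NULL` and `IS NOT NULL` now carry a higher priority than the comparison operators. Assuming, as the arithmetic entries in the same table suggest, that a larger priority value binds more tightly, the postfix check now applies to the nearest operand (table and column names below are hypothetical):

-- Previously grouped as (a = b) IS NULL; with the new priorities it reads as:
SELECT a = b IS NULL FROM t;   -- parsed as a = (b IS NULL)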
|
||||
|
@ -12,7 +12,15 @@ try
|
||||
std::string input = std::string(reinterpret_cast<const char*>(data), size);
|
||||
|
||||
DB::ParserQueryWithOutput parser(input.data() + input.size());
|
||||
DB::ASTPtr ast = parseQuery(parser, input.data(), input.data() + input.size(), "", 0, 1000);
|
||||
|
||||
const UInt64 max_parser_depth = 1000;
|
||||
DB::ASTPtr ast = parseQuery(parser, input.data(), input.data() + input.size(), "", 0, max_parser_depth);
|
||||
|
||||
const UInt64 max_ast_depth = 1000;
|
||||
ast->checkDepth(max_ast_depth);
|
||||
|
||||
const UInt64 max_ast_elements = 50000;
|
||||
ast->checkSize(max_ast_elements);
|
||||
|
||||
DB::WriteBufferFromOwnString wb;
|
||||
DB::formatAST(*ast, wb);
|
||||
|
@ -498,17 +498,6 @@ void Planner::buildQueryPlanIfNeeded()
|
||||
should_produce_results_in_order_of_bucket_number);
|
||||
query_plan.addStep(std::move(aggregating_step));
|
||||
|
||||
if (query_node.isGroupByWithRollup())
|
||||
{
|
||||
auto rollup_step = std::make_unique<RollupStep>(query_plan.getCurrentDataStream(), std::move(aggregator_params), true /*final*/, settings.group_by_use_nulls);
|
||||
query_plan.addStep(std::move(rollup_step));
|
||||
}
|
||||
else if (query_node.isGroupByWithCube())
|
||||
{
|
||||
auto cube_step = std::make_unique<CubeStep>(query_plan.getCurrentDataStream(), std::move(aggregator_params), true /*final*/, settings.group_by_use_nulls);
|
||||
query_plan.addStep(std::move(cube_step));
|
||||
}
|
||||
|
||||
if (query_node.isGroupByWithTotals())
|
||||
{
|
||||
const auto & having_analysis_result = expression_analysis_result.getHaving();
|
||||
@ -528,6 +517,17 @@ void Planner::buildQueryPlanIfNeeded()
|
||||
|
||||
query_plan.addStep(std::move(totals_having_step));
|
||||
}
|
||||
|
||||
if (query_node.isGroupByWithRollup())
|
||||
{
|
||||
auto rollup_step = std::make_unique<RollupStep>(query_plan.getCurrentDataStream(), std::move(aggregator_params), true /*final*/, settings.group_by_use_nulls);
|
||||
query_plan.addStep(std::move(rollup_step));
|
||||
}
|
||||
else if (query_node.isGroupByWithCube())
|
||||
{
|
||||
auto cube_step = std::make_unique<CubeStep>(query_plan.getCurrentDataStream(), std::move(aggregator_params), true /*final*/, settings.group_by_use_nulls);
|
||||
query_plan.addStep(std::move(cube_step));
|
||||
}
|
||||
}
|
||||
|
||||
if (!having_executed && expression_analysis_result.hasHaving())
|
||||
|
@ -154,6 +154,8 @@ struct DetachedPartInfo : public MergeTreePartInfo
|
||||
"deleting",
|
||||
"tmp-fetch",
|
||||
"covered-by-broken",
|
||||
"merge-not-byte-identical",
|
||||
"mutate-not-byte-identical"
|
||||
});
|
||||
|
||||
static constexpr auto DETACHED_REASONS_REMOVABLE_BY_TIMEOUT = std::to_array<std::string_view>({
|
||||
@ -163,7 +165,9 @@ struct DetachedPartInfo : public MergeTreePartInfo
|
||||
"ignored",
|
||||
"broken-on-start",
|
||||
"deleting",
|
||||
"clone"
|
||||
"clone",
|
||||
"merge-not-byte-identical",
|
||||
"mutate-not-byte-identical"
|
||||
});
|
||||
|
||||
/// NOTE: It may parse part info incorrectly.
|
||||
|
@ -10,6 +10,7 @@ const char * auto_contributors[] {
|
||||
"546",
|
||||
"7",
|
||||
"821008736@qq.com",
|
||||
"94rain",
|
||||
"ANDREI STAROVEROV",
|
||||
"Aaron Katz",
|
||||
"Adam Rutkowski",
|
||||
@ -21,6 +22,7 @@ const char * auto_contributors[] {
|
||||
"Alain BERRIER",
|
||||
"Albert Kidrachev",
|
||||
"Alberto",
|
||||
"Alejandro",
|
||||
"Aleksandr",
|
||||
"Aleksandr Karo",
|
||||
"Aleksandr Musorin",
|
||||
@ -63,6 +65,7 @@ const char * auto_contributors[] {
|
||||
"Alexander Sapin",
|
||||
"Alexander Tokmakov",
|
||||
"Alexander Tretiakov",
|
||||
"Alexander Yakovlev",
|
||||
"Alexandr Kondratev",
|
||||
"Alexandr Krasheninnikov",
|
||||
"Alexandr Orlov",
|
||||
@ -200,6 +203,7 @@ const char * auto_contributors[] {
|
||||
"Brett Hoerner",
|
||||
"Brian Hunter",
|
||||
"Bulat Gaifullin",
|
||||
"Camilo Sierra",
|
||||
"Carbyn",
|
||||
"Carlos Rodríguez Hernández",
|
||||
"Caspian",
|
||||
@ -235,6 +239,7 @@ const char * auto_contributors[] {
|
||||
"Daniel Dao",
|
||||
"Daniel Kutenin",
|
||||
"Daniel Qin",
|
||||
"Daniil Rubin",
|
||||
"Danila Kutenin",
|
||||
"Dao",
|
||||
"Dao Minh Thuc",
|
||||
@ -332,6 +337,7 @@ const char * auto_contributors[] {
|
||||
"Fullstop000",
|
||||
"Fuwang Hu",
|
||||
"G5.Qin",
|
||||
"Gabriel",
|
||||
"Gagan Arneja",
|
||||
"Gao Qiang",
|
||||
"Gary Dotzler",
|
||||
@ -345,6 +351,7 @@ const char * auto_contributors[] {
|
||||
"Gleb Kanterov",
|
||||
"Gleb Novikov",
|
||||
"Gleb-Tretyakov",
|
||||
"GoGoWen2021",
|
||||
"Gregory",
|
||||
"Grigory",
|
||||
"Grigory Buteyko",
|
||||
@ -432,6 +439,7 @@ const char * auto_contributors[] {
|
||||
"Jiang Tao",
|
||||
"Jianmei Zhang",
|
||||
"Jiebin Sun",
|
||||
"Joanna Hulboj",
|
||||
"Jochen Schalanda",
|
||||
"John",
|
||||
"John Hummel",
|
||||
@ -475,6 +483,7 @@ const char * auto_contributors[] {
|
||||
"Kostiantyn Storozhuk",
|
||||
"Kozlov Ivan",
|
||||
"Kruglov Pavel",
|
||||
"Krzysztof Góralski",
|
||||
"Kseniia Sumarokova",
|
||||
"Kuz Le",
|
||||
"Ky Li",
|
||||
@ -604,6 +613,7 @@ const char * auto_contributors[] {
|
||||
"Mr.General",
|
||||
"Murat Kabilov",
|
||||
"MyroTk",
|
||||
"Márcio Martins",
|
||||
"Mátyás Jani",
|
||||
"N. Kolotov",
|
||||
"NIKITA MIKHAILOV",
|
||||
@ -698,11 +708,13 @@ const char * auto_contributors[] {
|
||||
"Pysaoke",
|
||||
"Quanfa Fu",
|
||||
"Quid37",
|
||||
"Radistka-75",
|
||||
"Rafael Acevedo",
|
||||
"Rafael David Tinoco",
|
||||
"Rajkumar",
|
||||
"Rajkumar Varada",
|
||||
"Ramazan Polat",
|
||||
"Rami Dridi",
|
||||
"Ravengg",
|
||||
"Raúl Marín",
|
||||
"Realist007",
|
||||
@ -787,6 +799,7 @@ const char * auto_contributors[] {
|
||||
"SkyhotQin",
|
||||
"Slach",
|
||||
"Smita Kulkarni",
|
||||
"SmitaRKulkarni",
|
||||
"Snow",
|
||||
"Sofia Antipushina",
|
||||
"Stanislav Pavlovichev",
|
||||
@ -1007,6 +1020,7 @@ const char * auto_contributors[] {
|
||||
"bobrovskij artemij",
|
||||
"booknouse",
|
||||
"bseng",
|
||||
"canenoneko",
|
||||
"caspian",
|
||||
"cekc",
|
||||
"centos7",
|
||||
@ -1026,6 +1040,7 @@ const char * auto_contributors[] {
|
||||
"chertus",
|
||||
"chou.fan",
|
||||
"christophe.kalenzaga",
|
||||
"clarkcaoliu",
|
||||
"clickhouse-robot-curie",
|
||||
"cms",
|
||||
"cmsxbc",
|
||||
@ -1209,6 +1224,7 @@ const char * auto_contributors[] {
|
||||
"liuneng1994",
|
||||
"liuyangkuan",
|
||||
"liuyimin",
|
||||
"lixuchun",
|
||||
"liyang",
|
||||
"liyang830",
|
||||
"lokax",
|
||||
@ -1340,6 +1356,7 @@ const char * auto_contributors[] {
|
||||
"shangshujie",
|
||||
"shedx",
|
||||
"shuchaome",
|
||||
"shuyang",
|
||||
"simon-says",
|
||||
"snyk-bot",
|
||||
"songenjie",
|
||||
@ -1361,6 +1378,7 @@ const char * auto_contributors[] {
|
||||
"taiyang-li",
|
||||
"tangjiangling",
|
||||
"tao jiang",
|
||||
"taojiatao",
|
||||
"tavplubix",
|
||||
"tchepavel",
|
||||
"tcoyvwac",
|
||||
|
@ -13,7 +13,7 @@
|
||||
# include <Storages/StorageDelta.h>
|
||||
# include <Storages/StorageURL.h>
|
||||
# include <Storages/checkAndGetLiteralArgument.h>
|
||||
# include <TableFunctions/TableFunctionDelta.h>
|
||||
# include <TableFunctions/TableFunctionDeltaLake.h>
|
||||
# include <TableFunctions/TableFunctionFactory.h>
|
||||
# include "registerTableFunctions.h"
|
||||
|
||||
@ -160,9 +160,9 @@ void registerTableFunctionDelta(TableFunctionFactory & factory)
|
||||
factory.registerFunction<TableFunctionDelta>(
|
||||
{.documentation
|
||||
= {R"(The table function can be used to read the DeltaLake table stored on object store.)",
|
||||
Documentation::Examples{{"hudi", "SELECT * FROM deltaLake(url, access_key_id, secret_access_key)"}},
|
||||
Documentation::Examples{{"deltaLake", "SELECT * FROM deltaLake(url, access_key_id, secret_access_key)"}},
|
||||
Documentation::Categories{"DataLake"}},
|
||||
.allow_readonly = true});
|
||||
.allow_readonly = false});
|
||||
}
|
||||
|
||||
}
|
@ -162,7 +162,7 @@ void registerTableFunctionHudi(TableFunctionFactory & factory)
|
||||
= {R"(The table function can be used to read the Hudi table stored on object store.)",
|
||||
Documentation::Examples{{"hudi", "SELECT * FROM hudi(url, access_key_id, secret_access_key)"}},
|
||||
Documentation::Categories{"DataLake"}},
|
||||
.allow_readonly = true});
|
||||
.allow_readonly = false});
|
||||
}
|
||||
}
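Both table functions provide read-only access to data-lake tables stored on object storage; a usage sketch with placeholder URL and credentials:

SELECT count() FROM hudi('https://bucket.s3.amazonaws.com/hudi_table/', 'access_key_id', 'secret_access_key');
SELECT * FROM deltaLake('https://bucket.s3.amazonaws.com/delta_table/', 'access_key_id', 'secret_access_key') LIMIT 10;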
|
||||
|
||||
|
@ -1,5 +1,14 @@
|
||||
#!/usr/bin/env python
|
||||
|
||||
"""
|
||||
script to create releases for ClickHouse
|
||||
|
||||
The `gh` CLI is preferred over PyGithub to provide an easy way to roll back a bad
release from the command line by simply executing the given rollback commands
|
||||
|
||||
On the other hand, PyGithub is used to conveniently get a commit's status from the API
|
||||
"""
|
||||
|
||||
|
||||
from contextlib import contextmanager
|
||||
from typing import List, Optional
|
||||
@ -8,6 +17,8 @@ import logging
|
||||
import subprocess
|
||||
|
||||
from git_helper import commit, release_branch
|
||||
from github_helper import GitHub
|
||||
from mark_release_ready import RELEASE_READY_STATUS
|
||||
from version_helper import (
|
||||
FILE_WITH_VERSION_PATH,
|
||||
GENERATED_CONTRIBUTORS,
|
||||
@ -67,12 +78,12 @@ class Release:
|
||||
self._release_branch = ""
|
||||
self._rollback_stack = [] # type: List[str]
|
||||
|
||||
def run(self, cmd: str, cwd: Optional[str] = None) -> str:
|
||||
def run(self, cmd: str, cwd: Optional[str] = None, **kwargs) -> str:
|
||||
cwd_text = ""
|
||||
if cwd:
|
||||
cwd_text = f" (CWD='{cwd}')"
|
||||
logging.info("Running command%s:\n %s", cwd_text, cmd)
|
||||
return self._git.run(cmd, cwd)
|
||||
return self._git.run(cmd, cwd, **kwargs)
|
||||
|
||||
def set_release_branch(self):
|
||||
# Fetch release commit in case it does not exist locally
|
||||
@ -94,6 +105,38 @@ class Release:
|
||||
return VersionType.LTS
|
||||
return VersionType.STABLE
|
||||
|
||||
def check_commit_release_ready(self):
|
||||
# First, get the auth token from gh cli
|
||||
auth_status = self.run(
|
||||
"gh auth status -t", stderr=subprocess.STDOUT
|
||||
).splitlines()
|
||||
token = ""
|
||||
for line in auth_status:
|
||||
if "✓ Token:" in line:
|
||||
token = line.split()[-1]
|
||||
if not token:
|
||||
logging.error("Can not extract token from `gh auth`")
|
||||
raise subprocess.SubprocessError("Can not extract token from `gh auth`")
|
||||
gh = GitHub(token, per_page=100)
|
||||
repo = gh.get_repo(str(self.repo))
|
||||
|
||||
# Statuses are ordered by descending updated_at, so the first necessary
|
||||
# status in the list is the most recent
|
||||
statuses = repo.get_commit(self.release_commit).get_statuses()
|
||||
for status in statuses:
|
||||
if status.context == RELEASE_READY_STATUS:
|
||||
if status.state == "success":
|
||||
return
|
||||
|
||||
raise Exception(
|
||||
f"the status {RELEASE_READY_STATUS} is {status.state}, not success"
|
||||
)
|
||||
|
||||
raise Exception(
|
||||
f"the status {RELEASE_READY_STATUS} "
|
||||
f"is not found for commit {self.release_commit}"
|
||||
)
|
||||
|
||||
def check_prerequisites(self):
|
||||
"""
|
||||
Check tooling installed in the system, `git` is checked by Git() init
|
||||
@ -108,6 +151,8 @@ class Release:
|
||||
)
|
||||
raise
|
||||
|
||||
self.check_commit_release_ready()
|
||||
|
||||
def do(self, check_dirty: bool, check_branch: bool, with_release_branch: bool):
|
||||
self.check_prerequisites()
|
||||
|
||||
|
8
tests/config/users.d/insert_keeper_retries.xml
Normal file
@ -0,0 +1,8 @@
|
||||
<clickhouse>
|
||||
<profiles>
|
||||
<default>
|
||||
<insert_keeper_max_retries>20</insert_keeper_max_retries>
|
||||
<insert_keeper_fault_injection_probability>0.01</insert_keeper_fault_injection_probability>
|
||||
</default>
|
||||
</profiles>
|
||||
</clickhouse>
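The same settings can also be applied per session instead of through a users.d profile; a hedged sketch (the target table is hypothetical):

SET insert_keeper_max_retries = 20;                    -- retry INSERTs into ReplicatedMergeTree if the Keeper session is lost
SET insert_keeper_fault_injection_probability = 0.01;  -- test-only fault injection, as in the profile above
INSERT INTO replicated_table VALUES (1);               -- hypothetical table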
|
@ -2070,10 +2070,12 @@ class ClickHouseCluster:
|
||||
logging.debug("All instances of ZooKeeper started")
|
||||
return
|
||||
except Exception as ex:
|
||||
logging.debug("Can't connect to ZooKeeper " + str(ex))
|
||||
logging.debug(f"Can't connect to ZooKeeper {instance}: {ex}")
|
||||
time.sleep(0.5)
|
||||
|
||||
raise Exception("Cannot wait ZooKeeper container")
|
||||
raise Exception(
|
||||
"Cannot wait ZooKeeper container (probably it's a `iptables-nft` issue, you may try to `sudo iptables -P FORWARD ACCEPT`)"
|
||||
)
|
||||
|
||||
def make_hdfs_api(self, timeout=180, kerberized=False):
|
||||
if kerberized:
|
||||
|
@ -48,6 +48,8 @@
|
||||
"test_system_replicated_fetches/test.py::test_system_replicated_fetches",
|
||||
"test_zookeeper_config_load_balancing/test.py::test_round_robin",
|
||||
|
||||
"test_global_overcommit_tracker/test.py::test_global_overcommit",
|
||||
|
||||
"test_user_ip_restrictions/test.py::test_ipv4",
|
||||
"test_user_ip_restrictions/test.py::test_ipv6"
|
||||
]
|
||||
|
@ -0,0 +1,228 @@
|
||||
import pytest
|
||||
|
||||
from helpers.cluster import ClickHouseCluster
|
||||
|
||||
cluster = ClickHouseCluster(__file__)
|
||||
node1 = cluster.add_instance(
|
||||
"node1",
|
||||
with_zookeeper=False,
|
||||
image="yandex/clickhouse-server",
|
||||
tag="19.16.9.37",
|
||||
stay_alive=True,
|
||||
with_installed_binary=True,
|
||||
)
|
||||
node2 = cluster.add_instance(
|
||||
"node2",
|
||||
with_zookeeper=False,
|
||||
image="yandex/clickhouse-server",
|
||||
tag="19.16.9.37",
|
||||
stay_alive=True,
|
||||
with_installed_binary=True,
|
||||
)
|
||||
node3 = cluster.add_instance("node3", with_zookeeper=False)
|
||||
node4 = cluster.add_instance("node4", with_zookeeper=False)
|
||||
|
||||
|
||||
@pytest.fixture(scope="module")
|
||||
def start_cluster():
|
||||
try:
|
||||
cluster.start()
|
||||
yield cluster
|
||||
|
||||
finally:
|
||||
cluster.shutdown()
|
||||
|
||||
|
||||
# We will test that serialization of internal state of "avg" function is compatible between different versions.
|
||||
# TODO Implement versioning of serialization format for aggregate function states.
|
||||
# NOTE This test is too ad-hoc.
|
||||
|
||||
|
||||
def test_backward_compatability_for_avg(start_cluster):
|
||||
node1.query("create table tab (x UInt64) engine = Memory")
|
||||
node2.query("create table tab (x UInt64) engine = Memory")
|
||||
node3.query("create table tab (x UInt64) engine = Memory")
|
||||
node4.query("create table tab (x UInt64) engine = Memory")
|
||||
|
||||
node1.query("INSERT INTO tab VALUES (1)")
|
||||
node2.query("INSERT INTO tab VALUES (2)")
|
||||
node3.query("INSERT INTO tab VALUES (3)")
|
||||
node4.query("INSERT INTO tab VALUES (4)")
|
||||
|
||||
assert (
|
||||
node1.query("SELECT avg(x) FROM remote('node{1..4}', default, tab)") == "2.5\n"
|
||||
)
|
||||
assert (
|
||||
node2.query("SELECT avg(x) FROM remote('node{1..4}', default, tab)") == "2.5\n"
|
||||
)
|
||||
assert (
|
||||
node3.query("SELECT avg(x) FROM remote('node{1..4}', default, tab)") == "2.5\n"
|
||||
)
|
||||
assert (
|
||||
node4.query("SELECT avg(x) FROM remote('node{1..4}', default, tab)") == "2.5\n"
|
||||
)
|
||||
|
||||
# Also check with persisted aggregate function state
|
||||
|
||||
node1.query("create table state (x AggregateFunction(avg, UInt64)) engine = Log")
|
||||
node1.query(
|
||||
"INSERT INTO state SELECT avgState(arrayJoin(CAST([1, 2, 3, 4] AS Array(UInt64))))"
|
||||
)
|
||||
|
||||
assert node1.query("SELECT avgMerge(x) FROM state") == "2.5\n"
|
||||
|
||||
node1.restart_with_latest_version(fix_metadata=True)
|
||||
|
||||
assert node1.query("SELECT avgMerge(x) FROM state") == "2.5\n"
|
||||
|
||||
node1.query("drop table tab")
|
||||
node1.query("drop table state")
|
||||
node2.query("drop table tab")
|
||||
node3.query("drop table tab")
|
||||
node4.query("drop table tab")
|
||||
|
||||
|
||||
@pytest.mark.parametrize("uniq_keys", [1000, 500000])
|
||||
def test_backward_compatability_for_uniq_exact(start_cluster, uniq_keys):
|
||||
node1.query(f"CREATE TABLE tab_{uniq_keys} (x UInt64) Engine = Memory")
|
||||
node2.query(f"CREATE TABLE tab_{uniq_keys} (x UInt64) Engine = Memory")
|
||||
node3.query(f"CREATE TABLE tab_{uniq_keys} (x UInt64) Engine = Memory")
|
||||
node4.query(f"CREATE TABLE tab_{uniq_keys} (x UInt64) Engine = Memory")
|
||||
|
||||
node1.query(
|
||||
f"INSERT INTO tab_{uniq_keys} SELECT number FROM numbers_mt(0, {uniq_keys})"
|
||||
)
|
||||
node2.query(
|
||||
f"INSERT INTO tab_{uniq_keys} SELECT number FROM numbers_mt(1, {uniq_keys})"
|
||||
)
|
||||
node3.query(
|
||||
f"INSERT INTO tab_{uniq_keys} SELECT number FROM numbers_mt(2, {uniq_keys})"
|
||||
)
|
||||
node4.query(
|
||||
f"INSERT INTO tab_{uniq_keys} SELECT number FROM numbers_mt(3, {uniq_keys})"
|
||||
)
|
||||
|
||||
assert (
|
||||
node1.query(
|
||||
f"SELECT uniqExact(x) FROM remote('node{{1..4}}', default, tab_{uniq_keys})"
|
||||
)
|
||||
== f"{uniq_keys + 3}\n"
|
||||
)
|
||||
assert (
|
||||
node2.query(
|
||||
f"SELECT uniqExact(x) FROM remote('node{{1..4}}', default, tab_{uniq_keys})"
|
||||
)
|
||||
== f"{uniq_keys + 3}\n"
|
||||
)
|
||||
assert (
|
||||
node3.query(
|
||||
f"SELECT uniqExact(x) FROM remote('node{{1..4}}', default, tab_{uniq_keys})"
|
||||
)
|
||||
== f"{uniq_keys + 3}\n"
|
||||
)
|
||||
assert (
|
||||
node4.query(
|
||||
f"SELECT uniqExact(x) FROM remote('node{{1..4}}', default, tab_{uniq_keys})"
|
||||
)
|
||||
== f"{uniq_keys + 3}\n"
|
||||
)
|
||||
|
||||
# Also check with persisted aggregate function state
|
||||
|
||||
node1.query(
|
||||
f"CREATE TABLE state_{uniq_keys} (x AggregateFunction(uniqExact, UInt64)) Engine = Log"
|
||||
)
|
||||
node1.query(
|
||||
f"INSERT INTO state_{uniq_keys} SELECT uniqExactState(number) FROM numbers_mt({uniq_keys})"
|
||||
)
|
||||
|
||||
assert (
|
||||
node1.query(f"SELECT uniqExactMerge(x) FROM state_{uniq_keys}")
|
||||
== f"{uniq_keys}\n"
|
||||
)
|
||||
|
||||
node1.restart_with_latest_version()
|
||||
|
||||
assert (
|
||||
node1.query(f"SELECT uniqExactMerge(x) FROM state_{uniq_keys}")
|
||||
== f"{uniq_keys}\n"
|
||||
)
|
||||
|
||||
node1.query(f"DROP TABLE state_{uniq_keys}")
|
||||
node1.query(f"DROP TABLE tab_{uniq_keys}")
|
||||
node2.query(f"DROP TABLE tab_{uniq_keys}")
|
||||
node3.query(f"DROP TABLE tab_{uniq_keys}")
|
||||
node4.query(f"DROP TABLE tab_{uniq_keys}")
|
||||
|
||||
|
||||
@pytest.mark.parametrize("uniq_keys", [1000, 500000])
|
||||
def test_backward_compatability_for_uniq_exact_variadic(start_cluster, uniq_keys):
|
||||
node1.query(f"CREATE TABLE tab_{uniq_keys} (x UInt64, y UInt64) Engine = Memory")
|
||||
node2.query(f"CREATE TABLE tab_{uniq_keys} (x UInt64, y UInt64) Engine = Memory")
|
||||
node3.query(f"CREATE TABLE tab_{uniq_keys} (x UInt64, y UInt64) Engine = Memory")
|
||||
node4.query(f"CREATE TABLE tab_{uniq_keys} (x UInt64, y UInt64) Engine = Memory")
|
||||
|
||||
node1.query(
|
||||
f"INSERT INTO tab_{uniq_keys} SELECT number, number/2 FROM numbers_mt(0, {uniq_keys})"
|
||||
)
|
||||
node2.query(
|
||||
f"INSERT INTO tab_{uniq_keys} SELECT number, number/2 FROM numbers_mt(1, {uniq_keys})"
|
||||
)
|
||||
node3.query(
|
||||
f"INSERT INTO tab_{uniq_keys} SELECT number, number/2 FROM numbers_mt(2, {uniq_keys})"
|
||||
)
|
||||
node4.query(
|
||||
f"INSERT INTO tab_{uniq_keys} SELECT number, number/2 FROM numbers_mt(3, {uniq_keys})"
|
||||
)
|
||||
|
||||
assert (
|
||||
node1.query(
|
||||
f"SELECT uniqExact(x, y) FROM remote('node{{1..4}}', default, tab_{uniq_keys})"
|
||||
)
|
||||
== f"{uniq_keys + 3}\n"
|
||||
)
|
||||
assert (
|
||||
node2.query(
|
||||
f"SELECT uniqExact(x, y) FROM remote('node{{1..4}}', default, tab_{uniq_keys})"
|
||||
)
|
||||
== f"{uniq_keys + 3}\n"
|
||||
)
|
||||
assert (
|
||||
node3.query(
|
||||
f"SELECT uniqExact(x, y) FROM remote('node{{1..4}}', default, tab_{uniq_keys})"
|
||||
)
|
||||
== f"{uniq_keys + 3}\n"
|
||||
)
|
||||
assert (
|
||||
node4.query(
|
||||
f"SELECT uniqExact(x, y) FROM remote('node{{1..4}}', default, tab_{uniq_keys})"
|
||||
)
|
||||
== f"{uniq_keys + 3}\n"
|
||||
)
|
||||
|
||||
# Also check with persisted aggregate function state
|
||||
|
||||
node1.query(
|
||||
f"CREATE TABLE state_{uniq_keys} (x AggregateFunction(uniqExact, UInt64, UInt64)) Engine = Log"
|
||||
)
|
||||
node1.query(
|
||||
f"INSERT INTO state_{uniq_keys} SELECT uniqExactState(number, intDiv(number,2)) FROM numbers_mt({uniq_keys})"
|
||||
)
|
||||
|
||||
assert (
|
||||
node1.query(f"SELECT uniqExactMerge(x) FROM state_{uniq_keys}")
|
||||
== f"{uniq_keys}\n"
|
||||
)
|
||||
|
||||
node1.restart_with_latest_version()
|
||||
|
||||
assert (
|
||||
node1.query(f"SELECT uniqExactMerge(x) FROM state_{uniq_keys}")
|
||||
== f"{uniq_keys}\n"
|
||||
)
|
||||
|
||||
node1.query(f"DROP TABLE state_{uniq_keys}")
|
||||
node1.query(f"DROP TABLE tab_{uniq_keys}")
|
||||
node2.query(f"DROP TABLE tab_{uniq_keys}")
|
||||
node3.query(f"DROP TABLE tab_{uniq_keys}")
|
||||
node4.query(f"DROP TABLE tab_{uniq_keys}")
|
@ -1,82 +0,0 @@
|
||||
import pytest
|
||||
|
||||
from helpers.cluster import ClickHouseCluster
|
||||
|
||||
cluster = ClickHouseCluster(__file__)
|
||||
node1 = cluster.add_instance(
|
||||
"node1",
|
||||
with_zookeeper=False,
|
||||
image="yandex/clickhouse-server",
|
||||
tag="19.16.9.37",
|
||||
stay_alive=True,
|
||||
with_installed_binary=True,
|
||||
)
|
||||
node2 = cluster.add_instance(
|
||||
"node2",
|
||||
with_zookeeper=False,
|
||||
image="yandex/clickhouse-server",
|
||||
tag="19.16.9.37",
|
||||
stay_alive=True,
|
||||
with_installed_binary=True,
|
||||
)
|
||||
node3 = cluster.add_instance("node3", with_zookeeper=False)
|
||||
node4 = cluster.add_instance("node4", with_zookeeper=False)
|
||||
|
||||
|
||||
@pytest.fixture(scope="module")
|
||||
def start_cluster():
|
||||
try:
|
||||
cluster.start()
|
||||
yield cluster
|
||||
|
||||
finally:
|
||||
cluster.shutdown()
|
||||
|
||||
|
||||
# We will test that serialization of internal state of "avg" function is compatible between different versions.
|
||||
# TODO Implement versioning of serialization format for aggregate function states.
|
||||
# NOTE This test is too ad-hoc.
|
||||
|
||||
|
||||
def test_backward_compatability(start_cluster):
|
||||
node1.query("create table tab (x UInt64) engine = Memory")
|
||||
node2.query("create table tab (x UInt64) engine = Memory")
|
||||
node3.query("create table tab (x UInt64) engine = Memory")
|
||||
node4.query("create table tab (x UInt64) engine = Memory")
|
||||
|
||||
node1.query("INSERT INTO tab VALUES (1)")
|
||||
node2.query("INSERT INTO tab VALUES (2)")
|
||||
node3.query("INSERT INTO tab VALUES (3)")
|
||||
node4.query("INSERT INTO tab VALUES (4)")
|
||||
|
||||
assert (
|
||||
node1.query("SELECT avg(x) FROM remote('node{1..4}', default, tab)") == "2.5\n"
|
||||
)
|
||||
assert (
|
||||
node2.query("SELECT avg(x) FROM remote('node{1..4}', default, tab)") == "2.5\n"
|
||||
)
|
||||
assert (
|
||||
node3.query("SELECT avg(x) FROM remote('node{1..4}', default, tab)") == "2.5\n"
|
||||
)
|
||||
assert (
|
||||
node4.query("SELECT avg(x) FROM remote('node{1..4}', default, tab)") == "2.5\n"
|
||||
)
|
||||
|
||||
# Also check with persisted aggregate function state
|
||||
|
||||
node1.query("create table state (x AggregateFunction(avg, UInt64)) engine = Log")
|
||||
node1.query(
|
||||
"INSERT INTO state SELECT avgState(arrayJoin(CAST([1, 2, 3, 4] AS Array(UInt64))))"
|
||||
)
|
||||
|
||||
assert node1.query("SELECT avgMerge(x) FROM state") == "2.5\n"
|
||||
|
||||
node1.restart_with_latest_version(fix_metadata=True)
|
||||
|
||||
assert node1.query("SELECT avgMerge(x) FROM state") == "2.5\n"
|
||||
|
||||
node1.query("drop table tab")
|
||||
node1.query("drop table state")
|
||||
node2.query("drop table tab")
|
||||
node3.query("drop table tab")
|
||||
node4.query("drop table tab")
|
73
tests/performance/low_cardinality_from_json.xml
Normal file
@ -0,0 +1,73 @@
|
||||
<test>
|
||||
|
||||
<substitutions>
|
||||
<substitution>
|
||||
<name>string_json</name>
|
||||
<values>
|
||||
<value>'{"a": "hi", "b": "hello", "c": "hola", "d": "see you, bye, bye"}'</value>
|
||||
</values>
|
||||
</substitution>
|
||||
<substitution>
|
||||
<name>int_json</name>
|
||||
<values>
|
||||
<value>'{"a": 11, "b": 2222, "c": 33333333, "d": 4444444444444444}'</value>
|
||||
</values>
|
||||
</substitution>
|
||||
<substitution>
|
||||
<name>uuid_json</name>
|
||||
<values>
|
||||
<value>'{"a": "2d49dc6e-ddce-4cd0-afb8-790956df54c4", "b": "2d49dc6e-ddce-4cd0-afb8-790956df54c3", "c": "2d49dc6e-ddce-4cd0-afb8-790956df54c1", "d": "2d49dc6e-ddce-4cd0-afb8-790956df54c1"}'</value>
|
||||
</values>
|
||||
</substitution>
|
||||
<substitution>
|
||||
<name>low_cardinality_tuple_string</name>
|
||||
<values>
|
||||
<value>'Tuple(a LowCardinality(String), b LowCardinality(String), c LowCardinality(String), d LowCardinality(String) )'</value>
|
||||
</values>
|
||||
</substitution>
|
||||
<substitution>
|
||||
<name>low_cardinality_tuple_fixed_string</name>
|
||||
<values>
|
||||
<value>'Tuple(a LowCardinality(FixedString(20)), b LowCardinality(FixedString(20)), c LowCardinality(FixedString(20)), d LowCardinality(FixedString(20)) )'</value>
|
||||
</values>
|
||||
</substitution>
|
||||
<substitution>
|
||||
<name>low_cardinality_tuple_int8</name>
|
||||
<values>
|
||||
<value>'Tuple(a LowCardinality(Int8), b LowCardinality(Int8), c LowCardinality(Int8), d LowCardinality(Int8) )'</value>
|
||||
</values>
|
||||
</substitution>
|
||||
<substitution>
|
||||
<name>low_cardinality_tuple_int16</name>
|
||||
<values>
|
||||
<value>'Tuple(a LowCardinality(Int16), b LowCardinality(Int16), c LowCardinality(Int16), d LowCardinality(Int16) )'</value>
|
||||
</values>
|
||||
</substitution>
|
||||
<substitution>
|
||||
<name>low_cardinality_tuple_int32</name>
|
||||
<values>
|
||||
<value>'Tuple(a LowCardinality(Int32), b LowCardinality(Int32), c LowCardinality(Int32), d LowCardinality(Int32) )'</value>
|
||||
</values>
|
||||
</substitution>
|
||||
<substitution>
|
||||
<name>low_cardinality_tuple_int64</name>
|
||||
<values>
|
||||
<value>'Tuple(a LowCardinality(Int64), b LowCardinality(Int64), c LowCardinality(Int64), d LowCardinality(Int64) )'</value>
|
||||
</values>
|
||||
</substitution>
|
||||
<substitution>
|
||||
<name>low_cardinality_tuple_uuid</name>
|
||||
<values>
|
||||
<value>'Tuple(a LowCardinality(UUID), b LowCardinality(UUID), c LowCardinality(UUID), d LowCardinality(UUID) )'</value>
|
||||
</values>
|
||||
</substitution>
|
||||
</substitutions>
|
||||
|
||||
<query>SELECT 'fixed_string_json' FROM zeros(500000) WHERE NOT ignore(JSONExtract(materialize({string_json}), {low_cardinality_tuple_fixed_string})) FORMAT Null </query>
|
||||
<query>SELECT 'string_json' FROM zeros(500000) WHERE NOT ignore(JSONExtract(materialize({string_json}), {low_cardinality_tuple_string})) FORMAT Null </query>
|
||||
<query>SELECT 'int8_json' FROM zeros(500000) WHERE NOT ignore(JSONExtract(materialize({int_json}), {low_cardinality_tuple_int8})) FORMAT Null </query>
|
||||
<query>SELECT 'int16_json' FROM zeros(500000) WHERE NOT ignore(JSONExtract(materialize({int_json}), {low_cardinality_tuple_int16})) FORMAT Null </query>
|
||||
<query>SELECT 'int32_json' FROM zeros(500000) WHERE NOT ignore(JSONExtract(materialize({int_json}), {low_cardinality_tuple_int32})) FORMAT Null </query>
|
||||
<query>SELECT 'int64_json' FROM zeros(500000) WHERE NOT ignore(JSONExtract(materialize({int_json}), {low_cardinality_tuple_int64})) FORMAT Null </query>
|
||||
<query>SELECT 'uuid_json' FROM zeros(500000) WHERE NOT ignore(JSONExtract(materialize({uuid_json}), {low_cardinality_tuple_uuid})) FORMAT Null </query>
|
||||
</test>
|
33
tests/performance/uniq_without_key.xml
Normal file
@ -0,0 +1,33 @@
|
||||
<test>
|
||||
<substitutions>
|
||||
<substitution>
|
||||
<name>uniq_keys</name>
|
||||
<values>
|
||||
<value>10000</value>
|
||||
<value>50000</value>
|
||||
<value>100000</value>
|
||||
<value>250000</value>
|
||||
<value>500000</value>
|
||||
<value>1000000</value>
|
||||
</values>
|
||||
</substitution>
|
||||
</substitutions>
|
||||
|
||||
<create_query>create table t_{uniq_keys}(a UInt64) engine=MergeTree order by tuple()</create_query>
|
||||
|
||||
<fill_query>insert into t_{uniq_keys} select number % {uniq_keys} from numbers_mt(5e7)</fill_query>
|
||||
|
||||
<query>SELECT count(distinct a) FROM t_{uniq_keys} GROUP BY a FORMAT Null</query>
|
||||
<query>SELECT uniqExact(a) FROM t_{uniq_keys} GROUP BY a FORMAT Null</query>
|
||||
|
||||
<query>SELECT count(distinct a) FROM t_{uniq_keys}</query>
|
||||
<query>SELECT uniqExact(a) FROM t_{uniq_keys}</query>
|
||||
|
||||
<query>SELECT uniqExact(number) from numbers_mt(1e7)</query>
|
||||
<query>SELECT uniqExact(number) from numbers_mt(5e7)</query>
|
||||
|
||||
<query>SELECT uniqExact(number, number) from numbers_mt(5e6)</query>
|
||||
<query>SELECT uniqExact(number, number) from numbers_mt(1e7)</query>
|
||||
|
||||
<drop_query>drop table t_{uniq_keys}</drop_query>
|
||||
</test>
|
@ -1,13 +1,13 @@
|
||||
select toTypeName(rand(cast(4 as Nullable(UInt8))));
|
||||
select toTypeName(canonicalRand(CAST(4 as Nullable(UInt8))));
|
||||
select toTypeName(randCanonical(CAST(4 as Nullable(UInt8))));
|
||||
select toTypeName(randConstant(CAST(4 as Nullable(UInt8))));
|
||||
select toTypeName(rand(Null));
|
||||
select toTypeName(canonicalRand(Null));
|
||||
select toTypeName(randCanonical(Null));
|
||||
select toTypeName(randConstant(Null));
|
||||
|
||||
select rand(cast(4 as Nullable(UInt8))) * 0;
|
||||
select canonicalRand(cast(4 as Nullable(UInt8))) * 0;
|
||||
select randCanonical(cast(4 as Nullable(UInt8))) * 0;
|
||||
select randConstant(CAST(4 as Nullable(UInt8))) * 0;
|
||||
select rand(Null) * 0;
|
||||
select canonicalRand(Null) * 0;
|
||||
select randCanonical(Null) * 0;
|
||||
select randConstant(Null) * 0;
|
||||
|
@ -0,0 +1,7 @@
|
||||
('hi','hello','hola','see you, bye, bye')
|
||||
('hi\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0','hello\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0','hola\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0','see you, bye, bye\0\0\0')
|
||||
(11,0,0,0)
|
||||
(11,2222,0,0)
|
||||
(11,2222,33333333,0)
|
||||
(11,2222,33333333,4444444444444444)
|
||||
('2d49dc6e-ddce-4cd0-afb8-790956df54c4','2d49dc6e-ddce-4cd0-afb8-790956df54c3','2d49dc6e-ddce-4cd0-afb8-790956df54c1','2d49dc6e-ddce-4cd0-afb8-790956df54c1')
|
55
tests/queries/0_stateless/02452_check_low_cardinality.sql
Normal file
@ -0,0 +1,55 @@
|
||||
-- Tags: no-fasttest
|
||||
DROP TABLE IF EXISTS test_low_cardinality_string;
|
||||
DROP TABLE IF EXISTS test_low_cardinality_uuid;
|
||||
DROP TABLE IF EXISTS test_low_cardinality_int;
|
||||
CREATE TABLE test_low_cardinality_string (data String) ENGINE MergeTree ORDER BY data;
|
||||
CREATE TABLE test_low_cardinality_uuid (data String) ENGINE MergeTree ORDER BY data;
|
||||
CREATE TABLE test_low_cardinality_int (data String) ENGINE MergeTree ORDER BY data;
|
||||
INSERT INTO test_low_cardinality_string (data) VALUES ('{"a": "hi", "b": "hello", "c": "hola", "d": "see you, bye, bye"}');
|
||||
INSERT INTO test_low_cardinality_int (data) VALUES ('{"a": 11, "b": 2222, "c": 33333333, "d": 4444444444444444}');
|
||||
INSERT INTO test_low_cardinality_uuid (data) VALUES ('{"a": "2d49dc6e-ddce-4cd0-afb8-790956df54c4", "b": "2d49dc6e-ddce-4cd0-afb8-790956df54c3", "c": "2d49dc6e-ddce-4cd0-afb8-790956df54c1", "d": "2d49dc6e-ddce-4cd0-afb8-790956df54c1"}');
|
||||
SELECT JSONExtract(data, 'Tuple(
|
||||
a LowCardinality(String),
|
||||
b LowCardinality(String),
|
||||
c LowCardinality(String),
|
||||
d LowCardinality(String)
|
||||
)') AS json FROM test_low_cardinality_string;
|
||||
SELECT JSONExtract(data, 'Tuple(
|
||||
a LowCardinality(FixedString(20)),
|
||||
b LowCardinality(FixedString(20)),
|
||||
c LowCardinality(FixedString(20)),
|
||||
d LowCardinality(FixedString(20))
|
||||
)') AS json FROM test_low_cardinality_string;
|
||||
SELECT JSONExtract(data, 'Tuple(
|
||||
a LowCardinality(Int8),
|
||||
b LowCardinality(Int8),
|
||||
c LowCardinality(Int8),
|
||||
d LowCardinality(Int8)
|
||||
)') AS json FROM test_low_cardinality_int;
|
||||
SELECT JSONExtract(data, 'Tuple(
|
||||
a LowCardinality(Int16),
|
||||
b LowCardinality(Int16),
|
||||
c LowCardinality(Int16),
|
||||
d LowCardinality(Int16)
|
||||
)') AS json FROM test_low_cardinality_int;
|
||||
SELECT JSONExtract(data, 'Tuple(
|
||||
a LowCardinality(Int32),
|
||||
b LowCardinality(Int32),
|
||||
c LowCardinality(Int32),
|
||||
d LowCardinality(Int32)
|
||||
)') AS json FROM test_low_cardinality_int;
|
||||
SELECT JSONExtract(data, 'Tuple(
|
||||
a LowCardinality(Int64),
|
||||
b LowCardinality(Int64),
|
||||
c LowCardinality(Int64),
|
||||
d LowCardinality(Int64)
|
||||
)') AS json FROM test_low_cardinality_int;
|
||||
SELECT JSONExtract(data, 'Tuple(
|
||||
a LowCardinality(UUID),
|
||||
b LowCardinality(UUID),
|
||||
c LowCardinality(UUID),
|
||||
d LowCardinality(UUID)
|
||||
)') AS json FROM test_low_cardinality_uuid;
|
||||
DROP TABLE test_low_cardinality_string;
|
||||
DROP TABLE test_low_cardinality_uuid;
|
||||
DROP TABLE test_low_cardinality_int;
|
@ -0,0 +1 @@
|
||||
('{"b":{"c":1,"d":"str"}}\0')
|
@ -0,0 +1,6 @@
|
||||
-- Tags: no-fasttest
|
||||
DROP TABLE IF EXISTS test_fixed_string_nested_json;
|
||||
CREATE TABLE test_fixed_string_nested_json (data String) ENGINE MergeTree ORDER BY data;
|
||||
INSERT INTO test_fixed_string_nested_json (data) VALUES ('{"a" : {"b" : {"c" : 1, "d" : "str"}}}');
|
||||
SELECT JSONExtract(data, 'Tuple(a FixedString(24))') AS json FROM test_fixed_string_nested_json;
|
||||
DROP TABLE test_fixed_string_nested_json;
|
@ -0,0 +1,2 @@
|
||||
('{"b":{"c":1,"d":"str"}}','','','')
|
||||
('{"b":{"c":1,"d":"str"}}','','','')
|
@ -0,0 +1,3 @@
|
||||
-- Tags: no-fasttest
|
||||
SELECT JSONExtract('{"a" : {"b" : {"c" : 1, "d" : "str"}}}', 'Tuple( a LowCardinality(String), b LowCardinality(String), c LowCardinality(String), d LowCardinality(String))');
|
||||
SELECT JSONExtract('{"a" : {"b" : {"c" : 1, "d" : "str"}}}', 'Tuple( a String, b LowCardinality(String), c LowCardinality(String), d LowCardinality(String))');
|