mirror of https://github.com/ClickHouse/ClickHouse.git
synced 2024-11-26 01:22:04 +00:00

Merge branch 'master' into window-function-expression

commit 670a63865e

4  .github/workflows/backport.yml (vendored)
@@ -9,9 +9,11 @@ concurrency:
 on: # yamllint disable-line rule:truthy
   schedule:
     - cron: '0 */3 * * *'
+  workflow_dispatch:
+
 jobs:
   CherryPick:
-    runs-on: [self-hosted, style-checker]
+    runs-on: [self-hosted, style-checker-aarch64]
     steps:
       - name: Set envs
         # https://docs.github.com/en/actions/learn-github-actions/workflow-commands-for-github-actions#multiline-strings

171  CHANGELOG.md
@@ -1,10 +1,177 @@
 ### Table of Contents
+**[ClickHouse release v22.6, 2022-06-16](#226)**<br>
 **[ClickHouse release v22.5, 2022-05-19](#225)**<br>
 **[ClickHouse release v22.4, 2022-04-20](#224)**<br>
 **[ClickHouse release v22.3-lts, 2022-03-17](#223)**<br>
 **[ClickHouse release v22.2, 2022-02-17](#222)**<br>
 **[ClickHouse release v22.1, 2022-01-18](#221)**<br>
-**[Changelog for 2021](https://github.com/ClickHouse/ClickHouse/blob/master/docs/en/whats-new/changelog/2021.md)**<br>
+**[Changelog for 2021](https://clickhouse.com/docs/en/whats-new/changelog/2021/)**<br>

### <a id="226"></a> ClickHouse release 22.6, 2022-06-16

#### Backward Incompatible Change

* Remove support for octal number literals in SQL. In previous versions they were parsed as Float64. [#37765](https://github.com/ClickHouse/ClickHouse/pull/37765) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
* Change how settings of type `seconds` are parsed, to support floating-point values (for example: `max_execution_time=0.5`). Infinity or NaN values now throw an exception. [#37187](https://github.com/ClickHouse/ClickHouse/pull/37187) ([Raúl Marín](https://github.com/Algunenano)).
* Changed the format of binary serialization for columns of the experimental type `Object`. The new format is more convenient for third-party clients to implement. [#37482](https://github.com/ClickHouse/ClickHouse/pull/37482) ([Anton Popov](https://github.com/CurtizJ)).
* Turn on the setting `output_format_json_named_tuples_as_objects` by default. It serializes named tuples as JSON objects in JSON formats (see the sketch after this list). [#37756](https://github.com/ClickHouse/ClickHouse/pull/37756) ([Anton Popov](https://github.com/CurtizJ)).
* LIKE patterns with a trailing escape symbol ('\\') are now disallowed (as mandated by the SQL standard). [#37764](https://github.com/ClickHouse/ClickHouse/pull/37764) ([Robert Schulze](https://github.com/rschu1ze)).
* If you run different ClickHouse versions on a cluster with AArch64 CPUs, or mix AArch64 and amd64 on a cluster, and use distributed queries with GROUP BY on multiple keys of a fixed-size type that fits in 256 bits but not in 64 bits, and the size of the result is huge, the data will not be fully aggregated in the result of these queries during the upgrade. Workaround: upgrade with downtime instead of a rolling upgrade.

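A minimal sketch of the new named-tuple JSON behavior; the tuple type and alias here are made up for illustration:

```sql
-- output_format_json_named_tuples_as_objects = 1 is now the default,
-- so a named tuple is rendered as a nested JSON object instead of an array:
SELECT CAST((1, 'a'), 'Tuple(id UInt8, s String)') AS t
FORMAT JSONEachRow;
-- {"t":{"id":1,"s":"a"}}
```
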
#### New Feature

* Add the `GROUPING` function. It allows disambiguating records in queries with `ROLLUP`, `CUBE` or `GROUPING SETS` (a usage sketch follows this list). Closes [#19426](https://github.com/ClickHouse/ClickHouse/issues/19426). [#37163](https://github.com/ClickHouse/ClickHouse/pull/37163) ([Dmitry Novik](https://github.com/novikd)).
* Add a new codec, [FPC](https://userweb.cs.txstate.edu/~burtscher/papers/dcc07a.pdf), for floating-point data compression. [#37553](https://github.com/ClickHouse/ClickHouse/pull/37553) ([Mikhail Guzov](https://github.com/koloshmet)).
* Add new columnar JSON formats: `JSONColumns`, `JSONCompactColumns`, `JSONColumnsWithMetadata`. Closes [#36338](https://github.com/ClickHouse/ClickHouse/issues/36338). Closes [#34509](https://github.com/ClickHouse/ClickHouse/issues/34509). [#36975](https://github.com/ClickHouse/ClickHouse/pull/36975) ([Kruglov Pavel](https://github.com/Avogar)).
* Added an OpenTelemetry trace visualization tool based on d3.js. [#37810](https://github.com/ClickHouse/ClickHouse/pull/37810) ([Sergei Trifonov](https://github.com/serxa)).
* Support INSERTs into the `system.zookeeper` table. Closes [#22130](https://github.com/ClickHouse/ClickHouse/issues/22130). [#37596](https://github.com/ClickHouse/ClickHouse/pull/37596) ([Han Fei](https://github.com/hanfei1991)).
* Support a non-constant pattern argument for the `LIKE`, `ILIKE` and `match` functions. [#37251](https://github.com/ClickHouse/ClickHouse/pull/37251) ([Robert Schulze](https://github.com/rschu1ze)).
* Executable user-defined functions now support parameters. Example: `SELECT test_function(parameters)(arguments)`. Closes [#37578](https://github.com/ClickHouse/ClickHouse/issues/37578). [#37720](https://github.com/ClickHouse/ClickHouse/pull/37720) ([Maksim Kita](https://github.com/kitaisreal)).
* Add a `merge_reason` column to the `system.part_log` table. [#36912](https://github.com/ClickHouse/ClickHouse/pull/36912) ([Sema Checherinda](https://github.com/CheSema)).
* Add support for Maps and Records in Avro format. Add a new setting `input_format_avro_null_as_default` that allows inserting NULL as the default value in Avro format. Closes [#18925](https://github.com/ClickHouse/ClickHouse/issues/18925). Closes [#37378](https://github.com/ClickHouse/ClickHouse/issues/37378). Closes [#32899](https://github.com/ClickHouse/ClickHouse/issues/32899). [#37525](https://github.com/ClickHouse/ClickHouse/pull/37525) ([Kruglov Pavel](https://github.com/Avogar)).
* Add a `clickhouse-disks` tool to introspect and operate on virtual filesystems configured for ClickHouse. [#36060](https://github.com/ClickHouse/ClickHouse/pull/36060) ([Artyom Yurkov](https://github.com/Varinara)).
* Add H3 unidirectional edge functions. [#36843](https://github.com/ClickHouse/ClickHouse/pull/36843) ([Bharat Nallan](https://github.com/bharatnc)).
* Add support for calculating [hashids](https://hashids.org/) from unsigned integers. [#37013](https://github.com/ClickHouse/ClickHouse/pull/37013) ([Michael Nutt](https://github.com/mnutt)).
* Allow explicit `SALT` specification for `CREATE USER <user> IDENTIFIED WITH sha256_hash`. [#37377](https://github.com/ClickHouse/ClickHouse/pull/37377) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
* Add two new settings, `input_format_csv_skip_first_lines`/`input_format_tsv_skip_first_lines`, to allow skipping a specified number of lines at the beginning of a file in CSV/TSV formats. [#37537](https://github.com/ClickHouse/ClickHouse/pull/37537) ([Kruglov Pavel](https://github.com/Avogar)).
* The `showCertificate` function shows the current server's SSL certificate. [#37540](https://github.com/ClickHouse/ClickHouse/pull/37540) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
* Support an HTTP source for data dictionaries in named collections. [#37581](https://github.com/ClickHouse/ClickHouse/pull/37581) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
* Added a new window function `nonNegativeDerivative(metric_column, timestamp_column[, INTERVAL x SECOND])`. [#37628](https://github.com/ClickHouse/ClickHouse/pull/37628) ([Andrey Zvonov](https://github.com/zvonand)).
* Implemented changing the comment of `ReplicatedMergeTree` tables. [#37416](https://github.com/ClickHouse/ClickHouse/pull/37416) ([Vasily Nemkov](https://github.com/Enmk)).
* Added a `SYSTEM UNFREEZE` query that deletes the whole backup, regardless of whether the corresponding table is deleted or not. [#36424](https://github.com/ClickHouse/ClickHouse/pull/36424) ([Vadim Volodin](https://github.com/PolyProgrammist)).

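A usage sketch for `GROUPING` and the new `nonNegativeDerivative` window function; the tables `sales(region, amount)` and `metrics(ts, bytes)` and their columns are hypothetical:

```sql
-- GROUPING flags which rows are ROLLUP subtotals and which are ordinary groups:
SELECT
    region,
    GROUPING(region) AS g,   -- distinguishes the ROLLUP total row from real groups
    sum(amount) AS total
FROM sales
GROUP BY ROLLUP(region);

-- nonNegativeDerivative computes a per-second rate over a window:
SELECT
    ts,
    nonNegativeDerivative(bytes, ts, INTERVAL 1 SECOND)
        OVER (ORDER BY ts) AS bytes_per_sec
FROM metrics;
```
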
#### Experimental Feature

* Enable `POPULATE` for `WINDOW VIEW` (a sketch follows this list). [#36945](https://github.com/ClickHouse/ClickHouse/pull/36945) ([vxider](https://github.com/Vxider)).
* Support `ALTER TABLE ... MODIFY QUERY` for `WINDOW VIEW`. [#37188](https://github.com/ClickHouse/ClickHouse/pull/37188) ([vxider](https://github.com/Vxider)).
* Change the behavior of the `ENGINE` syntax in `WINDOW VIEW` to match that of `MATERIALIZED VIEW`. [#37214](https://github.com/ClickHouse/ClickHouse/pull/37214) ([vxider](https://github.com/Vxider)).

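A minimal sketch of `POPULATE` for a window view, assuming a hypothetical source table `events(ts DateTime, ...)` and target table `agg`; the tumbling-window syntax follows the experimental WINDOW VIEW documentation:

```sql
SET allow_experimental_window_view = 1;

-- POPULATE backfills the view from data already present in `events`:
CREATE WINDOW VIEW wv TO agg POPULATE AS
SELECT
    count() AS cnt,
    tumbleStart(w) AS window_start
FROM events
GROUP BY tumble(ts, INTERVAL '10' SECOND) AS w;
```
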
#### Performance Improvement

* Added numerous optimizations for ARM NEON. [#38093](https://github.com/ClickHouse/ClickHouse/pull/38093) ([Daniel Kutenin](https://github.com/danlark1)), ([Alexandra Pilipyuk](https://github.com/chalice19)). Note: if you run different ClickHouse versions on a cluster with ARM CPUs and use distributed queries with GROUP BY on multiple keys of a fixed-size type that fits in 256 bits but not in 64 bits, the result of the aggregation query will be wrong during the upgrade. Workaround: upgrade with downtime instead of a rolling upgrade.
* Improve performance and memory usage when selecting a subset of columns for the formats Native, Protobuf, CapnProto, JSONEachRow, TSKV, and all formats with the WithNames/WithNamesAndTypes suffixes. Previously, when selecting only a subset of columns from files in these formats, all columns were read and stored in memory; now only the required columns are read. This also enables the setting `input_format_skip_unknown_fields` by default, because otherwise selecting a subset of columns would throw an exception. [#37192](https://github.com/ClickHouse/ClickHouse/pull/37192) ([Kruglov Pavel](https://github.com/Avogar)).
* More filters can now be pushed down for JOIN. [#37472](https://github.com/ClickHouse/ClickHouse/pull/37472) ([Amos Bird](https://github.com/amosbird)).
* Load marks only for the necessary columns when reading wide parts. [#36879](https://github.com/ClickHouse/ClickHouse/pull/36879) ([Anton Kozlov](https://github.com/tonickkozlov)).
* Improved the performance of aggregation when sparse columns (which can be enabled by the experimental setting `ratio_of_defaults_for_sparse_serialization` in `MergeTree` tables) are used as arguments of aggregate functions. [#37617](https://github.com/ClickHouse/ClickHouse/pull/37617) ([Anton Popov](https://github.com/CurtizJ)).
* Optimize the function `COALESCE` with two arguments. [#37666](https://github.com/ClickHouse/ClickHouse/pull/37666) ([Anton Popov](https://github.com/CurtizJ)).
* Replace `multiIf` with `if` when `multiIf` has only one condition, because the function `if` is more performant. [#37695](https://github.com/ClickHouse/ClickHouse/pull/37695) ([Anton Popov](https://github.com/CurtizJ)).
* Improve the performance of the `dictGetDescendants` and `dictGetChildren` functions: create the temporary parent-to-children hierarchical index once per query, not once per function call during the query. Allow specifying `BIDIRECTIONAL` for `HIERARCHICAL` attributes; the dictionary will then maintain a parent-to-children index in memory, so that `dictGetDescendants` and `dictGetChildren` do not create a temporary index per query at all. Closes [#32481](https://github.com/ClickHouse/ClickHouse/issues/32481). [#37148](https://github.com/ClickHouse/ClickHouse/pull/37148) ([Maksim Kita](https://github.com/kitaisreal)).
* Aggregate state destruction may now be posted to a thread pool. For queries with LIMIT and a big state it provides a significant speedup, e.g. `select uniq(number) from numbers_mt(1e7) group by number limit 100` became around 2.5x faster. [#37855](https://github.com/ClickHouse/ClickHouse/pull/37855) ([Nikita Taranov](https://github.com/nickitat)).
* Improve the performance of sorting by a single column. [#37195](https://github.com/ClickHouse/ClickHouse/pull/37195) ([Maksim Kita](https://github.com/kitaisreal)).
* Improve the performance of single-column sorting using sorting-queue specializations. [#37990](https://github.com/ClickHouse/ClickHouse/pull/37990) ([Maksim Kita](https://github.com/kitaisreal)).
* Improved the performance of array norm and distance functions by 2-4x. [#37394](https://github.com/ClickHouse/ClickHouse/pull/37394) ([Alexander Gololobov](https://github.com/davenger)).
* Improve the performance of number comparison functions using dynamic dispatch. [#37399](https://github.com/ClickHouse/ClickHouse/pull/37399) ([Maksim Kita](https://github.com/kitaisreal)).
* Improve the performance of ORDER BY with LIMIT. [#37481](https://github.com/ClickHouse/ClickHouse/pull/37481) ([Maksim Kita](https://github.com/kitaisreal)).
* Improve the performance of the `hasAll` function using the dynamic dispatch infrastructure. [#37484](https://github.com/ClickHouse/ClickHouse/pull/37484) ([Maksim Kita](https://github.com/kitaisreal)).
* Improve the performance of the `greatCircleAngle`, `greatCircleDistance`, and `geoDistance` functions. [#37524](https://github.com/ClickHouse/ClickHouse/pull/37524) ([Maksim Kita](https://github.com/kitaisreal)).
* Improve the performance of inserts into MergeTree when there are multiple columns in ORDER BY. [#35762](https://github.com/ClickHouse/ClickHouse/pull/35762) ([Maksim Kita](https://github.com/kitaisreal)).
* Fix excessive background CPU usage when there are a lot of tables. [#38028](https://github.com/ClickHouse/ClickHouse/pull/38028) ([Maksim Kita](https://github.com/kitaisreal)).
* Improve the performance of the `not` function using dynamic dispatch. [#38058](https://github.com/ClickHouse/ClickHouse/pull/38058) ([Maksim Kita](https://github.com/kitaisreal)).
* Optimized the internal caching of re2 patterns, which occur e.g. in the LIKE and MATCH functions. [#37544](https://github.com/ClickHouse/ClickHouse/pull/37544) ([Robert Schulze](https://github.com/rschu1ze)).
* Improve the filter bitmask generator function using AVX-512 instructions. [#37588](https://github.com/ClickHouse/ClickHouse/pull/37588) ([yaqi-zhao](https://github.com/yaqi-zhao)).
* Apply the `threadpool` read method to the Hive integration engine. This significantly speeds up reading. [#36328](https://github.com/ClickHouse/ClickHouse/pull/36328) ([李扬](https://github.com/taiyang-li)).
* When all the columns to read are partition keys, construct the columns from the file's row count without actually reading the Hive file. [#37103](https://github.com/ClickHouse/ClickHouse/pull/37103) ([lgbo](https://github.com/lgbo-ustc)).
* Support multiple disks for caching Hive files. [#37279](https://github.com/ClickHouse/ClickHouse/pull/37279) ([lgbo](https://github.com/lgbo-ustc)).
* Limit the maximum cache usage per query, which can effectively prevent cache pool contamination. [Related issues](https://github.com/ClickHouse/ClickHouse/issues/28961). [#37859](https://github.com/ClickHouse/ClickHouse/pull/37859) ([Han Shukai](https://github.com/KinderRiven)).
* Previously, ClickHouse directly downloaded all remote files to the local cache (even if they are only read once), which frequently caused IO on the local hard disk. In some scenarios these IOs are unnecessary and can easily cause negative optimization, as observed when running SSB Q1-Q4. [#37516](https://github.com/ClickHouse/ClickHouse/pull/37516) ([Han Shukai](https://github.com/KinderRiven)).
* Allow pruning the list of files via virtual columns such as `_file` and `_path` when reading from S3 (see the sketch after this list). This is for [#37174](https://github.com/ClickHouse/ClickHouse/issues/37174), [#23494](https://github.com/ClickHouse/ClickHouse/issues/23494). [#37356](https://github.com/ClickHouse/ClickHouse/pull/37356) ([Amos Bird](https://github.com/amosbird)).
* In `CompressedWriteBuffer::nextImpl()`, remove an unnecessary write-copy step that happened frequently during data insertion. Before: 1. compress `working_buffer` into `compressed_buffer`; 2. write-copy into `out`. After: compress `working_buffer` directly into `out`. [#37242](https://github.com/ClickHouse/ClickHouse/pull/37242) ([jasperzhu](https://github.com/jinjunzh)).

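A sketch of file pruning via the `_file` virtual column; the bucket URL and file-naming scheme are hypothetical:

```sql
-- Only objects whose _file matches the predicate are read at all:
SELECT count()
FROM s3('https://my-bucket.s3.amazonaws.com/logs/*.parquet', 'Parquet')
WHERE _file LIKE '2022-06-%';
```
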
#### Improvement

* Support types with non-standard defaults in ROLLUP, CUBE, GROUPING SETS. Closes [#37360](https://github.com/ClickHouse/ClickHouse/issues/37360). [#37667](https://github.com/ClickHouse/ClickHouse/pull/37667) ([Dmitry Novik](https://github.com/novikd)).
* Fix stack trace collection on ARM. Closes [#37044](https://github.com/ClickHouse/ClickHouse/issues/37044). Closes [#15638](https://github.com/ClickHouse/ClickHouse/issues/15638). [#37797](https://github.com/ClickHouse/ClickHouse/pull/37797) ([Maksim Kita](https://github.com/kitaisreal)).
* The client now tries every IP address returned by DNS resolution until a connection succeeds. [#37273](https://github.com/ClickHouse/ClickHouse/pull/37273) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
* Allow using the String type instead of Binary in the Arrow/Parquet/ORC formats. This introduces three new settings: `output_format_arrow_string_as_string`, `output_format_parquet_string_as_string`, `output_format_orc_string_as_string`. The default value for all of them is `false`. [#37327](https://github.com/ClickHouse/ClickHouse/pull/37327) ([Kruglov Pavel](https://github.com/Avogar)).
* Apply the setting `input_format_max_rows_to_read_for_schema_inference` to the total number of rows read from all files in a glob. Previously it was applied to each file in the glob separately, so with a huge number of NULLs we could read the first `input_format_max_rows_to_read_for_schema_inference` rows from each file and learn nothing. Also increase the default value of this setting to 25000. [#37332](https://github.com/ClickHouse/ClickHouse/pull/37332) ([Kruglov Pavel](https://github.com/Avogar)).
* Add a separate `CLUSTER` grant (and an `access_control_improvements.on_cluster_queries_require_cluster_grant` configuration directive for backward compatibility, defaulting to `false`). [#35767](https://github.com/ClickHouse/ClickHouse/pull/35767) ([Azat Khuzhin](https://github.com/azat)).
* Added support for schema inference for `hdfsCluster`. [#35812](https://github.com/ClickHouse/ClickHouse/pull/35812) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Implement the `least_used` load-balancing algorithm for disks inside a volume (multi-disk configuration). [#36686](https://github.com/ClickHouse/ClickHouse/pull/36686) ([Azat Khuzhin](https://github.com/azat)).
* Modify the HTTP endpoint to: return the full stats under the `X-ClickHouse-Summary` header when `send_progress_in_http_headers=0` (before, it returned all zeros); return the `X-ClickHouse-Exception-Code` header when progress has been sent before (`send_progress_in_http_headers=1`); return `HTTP_REQUEST_TIMEOUT` (408) instead of `HTTP_INTERNAL_SERVER_ERROR` (500) on `TIMEOUT_EXCEEDED` errors. [#36884](https://github.com/ClickHouse/ClickHouse/pull/36884) ([Raúl Marín](https://github.com/Algunenano)).
* Allow a user to inspect grants from granted roles. [#36941](https://github.com/ClickHouse/ClickHouse/pull/36941) ([nvartolomei](https://github.com/nvartolomei)).
* Do not calculate an integral numerically; use CDF functions instead. This speeds up execution and increases precision. This fixes [#36714](https://github.com/ClickHouse/ClickHouse/issues/36714). [#36953](https://github.com/ClickHouse/ClickHouse/pull/36953) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Add a default implementation for Nothing in functions. Now most functions return a column with type Nothing if one of their arguments is Nothing. This also solves the problem with functions like arrayMap/arrayFilter when they have an empty array as an argument. Previously, queries like `select arrayMap(x -> 2 * x, []);` failed because a function inside the lambda cannot work with type `Nothing`; now such queries return an empty array with type `Array(Nothing)`. Also add support for arrays of nullable types in functions like arrayFilter/arrayFill. Previously, queries like `select arrayFilter(x -> x % 2, [1, NULL])` failed; now they work (if the result of the lambda is NULL, the value is not included in the result). Closes [#37000](https://github.com/ClickHouse/ClickHouse/issues/37000). [#37048](https://github.com/ClickHouse/ClickHouse/pull/37048) ([Kruglov Pavel](https://github.com/Avogar)).
* Now, if a shard has a local replica, we create a local plan and a plan to read from all remote replicas. They share an initiator which coordinates reading. [#37204](https://github.com/ClickHouse/ClickHouse/pull/37204) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* No longer abort server startup if the configuration option `mark_cache_size` is not explicitly set. [#37326](https://github.com/ClickHouse/ClickHouse/pull/37326) ([Robert Schulze](https://github.com/rschu1ze)).
* Allow providing `NULL`/`NOT NULL` right after the type in a column declaration. [#37337](https://github.com/ClickHouse/ClickHouse/pull/37337) ([Igor Nikonov](https://github.com/devcrafter)).
* Optimize getting a read buffer for file segments in the `PARTIALLY_DOWNLOADED` state. [#37338](https://github.com/ClickHouse/ClickHouse/pull/37338) ([xiedeyantu](https://github.com/xiedeyantu)).
* Try to improve short-circuit function processing to fix problems with stress tests. [#37384](https://github.com/ClickHouse/ClickHouse/pull/37384) ([Kruglov Pavel](https://github.com/Avogar)).
* Closes [#37395](https://github.com/ClickHouse/ClickHouse/issues/37395). [#37415](https://github.com/ClickHouse/ClickHouse/pull/37415) ([Memo](https://github.com/Joeywzr)).
* Fix an extremely rare deadlock during part fetch in zero-copy replication. Fixes [#37423](https://github.com/ClickHouse/ClickHouse/issues/37423). [#37424](https://github.com/ClickHouse/ClickHouse/pull/37424) ([metahys](https://github.com/metahys)).
* Don't allow creating storage with an unknown data format. [#37450](https://github.com/ClickHouse/ClickHouse/pull/37450) ([Kruglov Pavel](https://github.com/Avogar)).
* Set the default value of `global_memory_usage_overcommit_max_wait_microseconds` to 5 seconds. Add info about `OvercommitTracker` to the OOM exception message. Add a `MemoryOvercommitWaitTimeMicroseconds` profile event. [#37460](https://github.com/ClickHouse/ClickHouse/pull/37460) ([Dmitry Novik](https://github.com/novikd)).
* Do not display `-0.0` CPU time in clickhouse-client. It can appear due to rounding errors. This closes [#38003](https://github.com/ClickHouse/ClickHouse/issues/38003). This closes [#38038](https://github.com/ClickHouse/ClickHouse/issues/38038). [#38064](https://github.com/ClickHouse/ClickHouse/pull/38064) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Play UI: keep controls in place when the page is scrolled horizontally. This makes editing comfortable even if the table is wide and has been scrolled far to the right. Feature proposed by Maksym Tereshchenko from CaspianDB. [#37470](https://github.com/ClickHouse/ClickHouse/pull/37470) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Make the query div in play.html extendable beyond 20% height. For very long queries it is helpful to extend the textarea element, but previously, since the div had a fixed height, the extended textarea hid the data div underneath. With this fix, extending the textarea pushes the data div down/up so that it is not hidden. Also keep the query box width at 100% even when the user adjusts the size of the query textarea. [#37488](https://github.com/ClickHouse/ClickHouse/pull/37488) ([guyco87](https://github.com/guyco87)).
* Added `ProfileEvents` for introspection of the type of written (inserted or merged) parts (`Inserted{Wide/Compact/InMemory}Parts`, `MergedInto{Wide/Compact/InMemory}Parts`). Added a `part_type` column to `system.part_log`. Resolves [#37495](https://github.com/ClickHouse/ClickHouse/issues/37495). [#37536](https://github.com/ClickHouse/ClickHouse/pull/37536) ([Anton Popov](https://github.com/CurtizJ)).
* clickhouse-keeper improvement: move broken logs to a timestamped folder. [#37565](https://github.com/ClickHouse/ClickHouse/pull/37565) ([Antonio Andelic](https://github.com/antonio2368)).
* Do not write columns expired by TTL after subsequent merges (previously, only the first merge/OPTIMIZE of a part skipped writing columns expired by TTL; all later merges wrote them again). [#37570](https://github.com/ClickHouse/ClickHouse/pull/37570) ([Azat Khuzhin](https://github.com/azat)).
* More precise result of the `dumpColumnStructure` miscellaneous function in the presence of LowCardinality or Sparse columns. In previous versions, these functions converted the argument to a full column before returning the result. This is needed to provide an answer in [#6935](https://github.com/ClickHouse/ClickHouse/issues/6935). [#37633](https://github.com/ClickHouse/ClickHouse/pull/37633) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* clickhouse-keeper: store only unique session IDs for watches. [#37641](https://github.com/ClickHouse/ClickHouse/pull/37641) ([Azat Khuzhin](https://github.com/azat)).
* Fix possible "Cannot write to finalized buffer". [#37645](https://github.com/ClickHouse/ClickHouse/pull/37645) ([Azat Khuzhin](https://github.com/azat)).
* Add a setting `support_batch_delete` for `DiskS3` to disable multi-object delete calls, which Google Cloud Storage doesn't support. [#37659](https://github.com/ClickHouse/ClickHouse/pull/37659) ([Fred Wulff](https://github.com/frew)).
* Add an option to disable connection pooling in the ODBC bridge. [#37705](https://github.com/ClickHouse/ClickHouse/pull/37705) ([Anton Kozlov](https://github.com/tonickkozlov)).
* The functions `dictGetHierarchy`, `dictIsIn`, `dictGetChildren`, and `dictGetDescendants` now support nullable `HIERARCHICAL` attributes in dictionaries. Closes [#35521](https://github.com/ClickHouse/ClickHouse/issues/35521). [#37805](https://github.com/ClickHouse/ClickHouse/pull/37805) ([Maksim Kita](https://github.com/kitaisreal)).
* Expose BoringSSL version-related info in the `system.build_options` table. [#37850](https://github.com/ClickHouse/ClickHouse/pull/37850) ([Bharat Nallan](https://github.com/bharatnc)).
* clickhouse-server now removes `delete_tmp` directories on server start. Fixes [#26503](https://github.com/ClickHouse/ClickHouse/issues/26503). [#37906](https://github.com/ClickHouse/ClickHouse/pull/37906) ([alesapin](https://github.com/alesapin)).
* Clean up broken detached parts after a timeout. Closes [#25195](https://github.com/ClickHouse/ClickHouse/issues/25195). [#37975](https://github.com/ClickHouse/ClickHouse/pull/37975) ([Kseniia Sumarokova](https://github.com/kssenii)).
* In the MergeTree table engine family, failed-to-move parts are now removed instantly. [#37994](https://github.com/ClickHouse/ClickHouse/pull/37994) ([alesapin](https://github.com/alesapin)).
* If the setting `always_fetch_merged_part` is enabled for ReplicatedMergeTree, merges will probe other replicas for parts less frequently, reducing the load on [Zoo]Keeper. [#37995](https://github.com/ClickHouse/ClickHouse/pull/37995) ([alesapin](https://github.com/alesapin)).
* Implicit grants now include the grant option too. For example, `GRANT CREATE TABLE ON test.* TO A WITH GRANT OPTION` now allows `A` to execute `GRANT CREATE VIEW ON test.* TO B` (see the sketch after this list). [#38017](https://github.com/ClickHouse/ClickHouse/pull/38017) ([Vitaly Baranov](https://github.com/vitlibar)).

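The implicit-grant behavior from the last item above, spelled out; user names `a` and `b` are placeholders:

```sql
GRANT CREATE TABLE ON test.* TO a WITH GRANT OPTION;
-- CREATE TABLE implicitly covers CREATE VIEW, and the grant option now
-- carries over to the implicit privilege, so user `a` may run:
GRANT CREATE VIEW ON test.* TO b;
```
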
#### Build/Testing/Packaging Improvement

* Use `clang-14` and LLVM infrastructure version 14 for builds. This closes [#34681](https://github.com/ClickHouse/ClickHouse/issues/34681). [#34754](https://github.com/ClickHouse/ClickHouse/pull/34754) ([Alexey Milovidov](https://github.com/alexey-milovidov)). Note: `clang-14` has [a bug](https://github.com/google/sanitizers/issues/1540) in ThreadSanitizer that makes our CI work worse.
* Allow dropping privileges at startup. This simplifies Docker images. Closes [#36293](https://github.com/ClickHouse/ClickHouse/issues/36293). [#36341](https://github.com/ClickHouse/ClickHouse/pull/36341) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Add docs spellcheck to CI. [#37790](https://github.com/ClickHouse/ClickHouse/pull/37790) ([Vladimir C](https://github.com/vdimir)).
* Fix overly aggressive stripping which removed the embedded hash required for checking the consistency of the executable. [#37993](https://github.com/ClickHouse/ClickHouse/pull/37993) ([Robert Schulze](https://github.com/rschu1ze)).

#### Bug Fix

* Fix `SELECT ... INTERSECT` and `EXCEPT SELECT` statements with constant string types. [#37738](https://github.com/ClickHouse/ClickHouse/pull/37738) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix `GROUP BY` over `AggregateFunction` (i.e., grouping by a column of `AggregateFunction` type). [#37093](https://github.com/ClickHouse/ClickHouse/pull/37093) ([Azat Khuzhin](https://github.com/azat)).
* (experimental WINDOW VIEW) Fix `addDependency` in WindowView. This bug can be reproduced as in [#37237](https://github.com/ClickHouse/ClickHouse/issues/37237). [#37224](https://github.com/ClickHouse/ClickHouse/pull/37224) ([vxider](https://github.com/Vxider)).
* Fix an inconsistency in the ORDER BY ... WITH FILL feature: a query containing ORDER BY ... WITH FILL could generate extra rows when multiple WITH FILL columns were present. [#38074](https://github.com/ClickHouse/ClickHouse/pull/38074) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
* Move `addDependency` from the constructor to `startup()` to avoid adding a dependency on a *dropped* table. Fixes [#37237](https://github.com/ClickHouse/ClickHouse/issues/37237). [#37243](https://github.com/ClickHouse/ClickHouse/pull/37243) ([vxider](https://github.com/Vxider)).
* Fix inserting defaults for missing values in columnar formats. Previously, missing columns were filled with defaults for types, not for columns. [#37253](https://github.com/ClickHouse/ClickHouse/pull/37253) ([Kruglov Pavel](https://github.com/Avogar)).
* (experimental Object type) Fix some cases of inserting nested arrays into columns of type `Object`. [#37305](https://github.com/ClickHouse/ClickHouse/pull/37305) ([Anton Popov](https://github.com/CurtizJ)).
* Fix unexpected errors with a clash of constant strings in aggregate functions, PREWHERE and JOIN. Closes [#36891](https://github.com/ClickHouse/ClickHouse/issues/36891). [#37336](https://github.com/ClickHouse/ClickHouse/pull/37336) ([Vladimir C](https://github.com/vdimir)).
* Fix projections with GROUP/ORDER BY in the query together with `optimize_aggregation_in_order` (before, the result was incorrect since only finish sorting was performed). [#37342](https://github.com/ClickHouse/ClickHouse/pull/37342) ([Azat Khuzhin](https://github.com/azat)).
* Fixed an error with symbols in key names in S3. Fixes [#33009](https://github.com/ClickHouse/ClickHouse/issues/33009). [#37344](https://github.com/ClickHouse/ClickHouse/pull/37344) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Throw an exception when GROUPING SETS is used with ROLLUP or CUBE. [#37367](https://github.com/ClickHouse/ClickHouse/pull/37367) ([Dmitry Novik](https://github.com/novikd)).
* Fix a LOGICAL_ERROR in `getMaxSourcePartsSizeForMerge` during merges (when non-standard, larger values of `background_pool_size`/`background_merges_mutations_concurrency_ratio` have been specified in `config.xml` (the new way) rather than in `users.xml` (the deprecated way)). [#37413](https://github.com/ClickHouse/ClickHouse/pull/37413) ([Azat Khuzhin](https://github.com/azat)).
* Stop removing the UTF-8 BOM in the RowBinary format. [#37428](https://github.com/ClickHouse/ClickHouse/pull/37428) ([Paul Loyd](https://github.com/loyd)).
* clickhouse-keeper bugfix: fix force recovery for a single-node cluster. [#37440](https://github.com/ClickHouse/ClickHouse/pull/37440) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix a logical error in the normalizeUTF8 functions. Closes [#37298](https://github.com/ClickHouse/ClickHouse/issues/37298). [#37443](https://github.com/ClickHouse/ClickHouse/pull/37443) ([Maksim Kita](https://github.com/kitaisreal)).
* Fix casting LowCardinality of Nullable in JoinSwitcher. Closes [#37385](https://github.com/ClickHouse/ClickHouse/issues/37385). [#37453](https://github.com/ClickHouse/ClickHouse/pull/37453) ([Vladimir C](https://github.com/vdimir)).
* Fix named tuple output in the ORC/Arrow/Parquet formats. [#37458](https://github.com/ClickHouse/ClickHouse/pull/37458) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix the optimization of monotonous functions in the ORDER BY clause in the presence of GROUPING SETS. Fixes [#37401](https://github.com/ClickHouse/ClickHouse/issues/37401). [#37493](https://github.com/ClickHouse/ClickHouse/pull/37493) ([Dmitry Novik](https://github.com/novikd)).
* Fix an error when joining with a dictionary under some conditions. Closes [#37386](https://github.com/ClickHouse/ClickHouse/issues/37386). [#37530](https://github.com/ClickHouse/ClickHouse/pull/37530) ([Vladimir C](https://github.com/vdimir)).
* Prohibit `optimize_aggregation_in_order` with `GROUPING SETS` (fixes a `LOGICAL_ERROR`). [#37542](https://github.com/ClickHouse/ClickHouse/pull/37542) ([Azat Khuzhin](https://github.com/azat)).
* Fix wrong dump information of ActionsDAG. [#37587](https://github.com/ClickHouse/ClickHouse/pull/37587) ([zhanglistar](https://github.com/zhanglistar)).
* Fix type conversion for UNION queries (could produce a LOGICAL_ERROR). [#37593](https://github.com/ClickHouse/ClickHouse/pull/37593) ([Azat Khuzhin](https://github.com/azat)).
* Fix the `WITH FILL` modifier with negative intervals in the `STEP` clause (see the sketch after this list). Fixes [#37514](https://github.com/ClickHouse/ClickHouse/issues/37514). [#37600](https://github.com/ClickHouse/ClickHouse/pull/37600) ([Anton Popov](https://github.com/CurtizJ)).
* Fix illegal joinGet array usage when `join_use_nulls = 1`. This fixes [#37562](https://github.com/ClickHouse/ClickHouse/issues/37562). [#37650](https://github.com/ClickHouse/ClickHouse/pull/37650) ([Amos Bird](https://github.com/amosbird)).
* Fix a column-count mismatch in CROSS JOIN. Closes [#37561](https://github.com/ClickHouse/ClickHouse/issues/37561). [#37653](https://github.com/ClickHouse/ClickHouse/pull/37653) ([Vladimir C](https://github.com/vdimir)).
* Fix a segmentation fault in `SHOW CREATE TABLE` for a MySQL database configured with named collections. Closes [#37683](https://github.com/ClickHouse/ClickHouse/issues/37683). [#37690](https://github.com/ClickHouse/ClickHouse/pull/37690) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix RabbitMQ storage not being able to start up after a server restart if the storage was created without a SETTINGS clause. Closes [#37463](https://github.com/ClickHouse/ClickHouse/issues/37463). [#37691](https://github.com/ClickHouse/ClickHouse/pull/37691) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Disable CREATE/DROP of SQL user-defined functions in readonly mode. Closes [#37280](https://github.com/ClickHouse/ClickHouse/issues/37280). [#37699](https://github.com/ClickHouse/ClickHouse/pull/37699) ([Maksim Kita](https://github.com/kitaisreal)).
* Fix formatting of Nullable arguments for executable user-defined functions. Closes [#35897](https://github.com/ClickHouse/ClickHouse/issues/35897). [#37711](https://github.com/ClickHouse/ClickHouse/pull/37711) ([Maksim Kita](https://github.com/kitaisreal)).
* Fix the optimization enabled by the setting `optimize_monotonous_functions_in_order_by` in distributed queries. Fixes [#36037](https://github.com/ClickHouse/ClickHouse/issues/36037). [#37724](https://github.com/ClickHouse/ClickHouse/pull/37724) ([Anton Popov](https://github.com/CurtizJ)).
* Fix a possible logical error `Invalid Field get from type UInt64 to type Float64` in the `values` table function. Closes [#37602](https://github.com/ClickHouse/ClickHouse/issues/37602). [#37754](https://github.com/ClickHouse/ClickHouse/pull/37754) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix a possible segfault in schema inference caused by an exception in the SchemaReader constructor. Closes [#37680](https://github.com/ClickHouse/ClickHouse/issues/37680). [#37760](https://github.com/ClickHouse/ClickHouse/pull/37760) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix the setting `cast_ipv4_ipv6_default_on_conversion_error` for the internal cast function. Closes [#35156](https://github.com/ClickHouse/ClickHouse/issues/35156). [#37761](https://github.com/ClickHouse/ClickHouse/pull/37761) ([Maksim Kita](https://github.com/kitaisreal)).
* Fix a `toString` error on `DataTypeDate32`. [#37775](https://github.com/ClickHouse/ClickHouse/pull/37775) ([LiuNeng](https://github.com/liuneng1994)).
* The clickhouse-keeper setting `dead_session_check_period_ms` was interpreted in microseconds (multiplied by 1000), which led to dead sessions only being cleaned up after several minutes (instead of 500 ms). [#37824](https://github.com/ClickHouse/ClickHouse/pull/37824) ([Michael Lex](https://github.com/mlex)).
* Fix possible "No more packets are available" for distributed queries (when `async_socket_for_remote`/`use_hedged_requests` are disabled). [#37826](https://github.com/ClickHouse/ClickHouse/pull/37826) ([Azat Khuzhin](https://github.com/azat)).
* (experimental WINDOW VIEW) Do not drop the inner target table when executing `ALTER TABLE … MODIFY QUERY` in WindowView. [#37879](https://github.com/ClickHouse/ClickHouse/pull/37879) ([vxider](https://github.com/Vxider)).
* Fix directory ownership of the coordination dir in the clickhouse-keeper Docker image. Fixes [#37914](https://github.com/ClickHouse/ClickHouse/issues/37914). [#37915](https://github.com/ClickHouse/ClickHouse/pull/37915) ([James Maidment](https://github.com/jamesmaidment)).
* Dictionaries: fix custom queries with an update field and `{condition}`. Closes [#33746](https://github.com/ClickHouse/ClickHouse/issues/33746). [#37947](https://github.com/ClickHouse/ClickHouse/pull/37947) ([Maksim Kita](https://github.com/kitaisreal)).
* Fix a possible incorrect result of `SELECT ... WITH FILL` when `ORDER BY` should be applied after the `WITH FILL` result (e.g. for an outer query). The incorrect result was caused by an optimization for `ORDER BY` expressions ([#35623](https://github.com/ClickHouse/ClickHouse/issues/35623)). Closes [#37904](https://github.com/ClickHouse/ClickHouse/issues/37904). [#37959](https://github.com/ClickHouse/ClickHouse/pull/37959) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
* (experimental WINDOW VIEW) Add missing default columns when pushing to the target table in WindowView. Fixes [#37815](https://github.com/ClickHouse/ClickHouse/issues/37815). [#37965](https://github.com/ClickHouse/ClickHouse/pull/37965) ([vxider](https://github.com/Vxider)).
* Fixed a too-large stack frame that would cause compilation to fail. [#37996](https://github.com/ClickHouse/ClickHouse/pull/37996) ([Han Shukai](https://github.com/KinderRiven)).
* When `enable_filesystem_query_cache_limit` is enabled, throw "Reserved cache size exceeds the remaining cache size" where appropriate. [#38004](https://github.com/ClickHouse/ClickHouse/pull/38004) ([xiedeyantu](https://github.com/xiedeyantu)).
* Fix type conversion for UNION queries (could produce a LOGICAL_ERROR). [#34775](https://github.com/ClickHouse/ClickHouse/pull/34775) ([Azat Khuzhin](https://github.com/azat)).
* A TTL merge might not be scheduled again if the BackgroundExecutor was busy: `merges_with_ttl_counter` is increased in `selectPartsToMerge()`, the merge task is ignored while the BackgroundExecutor is busy, and `merges_with_ttl_counter` is then never decreased. [#36387](https://github.com/ClickHouse/ClickHouse/pull/36387) ([lthaooo](https://github.com/lthaooo)).
* Fix the overridden settings value of `normalize_function_names`. [#36937](https://github.com/ClickHouse/ClickHouse/pull/36937) ([李扬](https://github.com/taiyang-li)).
* Fix exponentially-decaying window functions: they now respect the boundaries of the window. [#36944](https://github.com/ClickHouse/ClickHouse/pull/36944) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Fix a possible heap-use-after-free error when reading `system.projection_parts` and `system.projection_parts_columns`. This fixes [#37184](https://github.com/ClickHouse/ClickHouse/issues/37184). [#37185](https://github.com/ClickHouse/ClickHouse/pull/37185) ([Amos Bird](https://github.com/amosbird)).
* Fixed `DateTime64` fractional-second behavior prior to the Unix epoch. [#37697](https://github.com/ClickHouse/ClickHouse/pull/37697) ([Andrey Zvonov](https://github.com/zvonand)). [#37039](https://github.com/ClickHouse/ClickHouse/pull/37039) ([李扬](https://github.com/taiyang-li)).

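A self-contained sketch of the `WITH FILL ... STEP` fix mentioned above, shown with a negative numeric step (the PR also covers negative `INTERVAL` steps):

```sql
-- Descending fill with a negative step; TO is exclusive:
SELECT n
FROM (SELECT 10 AS n)
ORDER BY n DESC WITH FILL FROM 10 TO 4 STEP -2;
-- returns 10, 8, 6
```
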
### <a id="225"></a> ClickHouse release 22.5, 2022-05-19

@@ -172,7 +339,7 @@

 #### Backward Incompatible Change

-* Do not allow SETTINGS after FORMAT for INSERT queries (there is compatibility setting `parser_settings_after_format_compact` to accept such queries, but it is turned OFF by default). [#35883](https://github.com/ClickHouse/ClickHouse/pull/35883) ([Azat Khuzhin](https://github.com/azat)).
+* Do not allow SETTINGS after FORMAT for INSERT queries (there is compatibility setting `allow_settings_after_format_in_insert` to accept such queries, but it is turned OFF by default). [#35883](https://github.com/ClickHouse/ClickHouse/pull/35883) ([Azat Khuzhin](https://github.com/azat)).
 * Function `yandexConsistentHash` (consistent hashing algorithm by Konstantin "kostik" Oblakov) is renamed to `kostikConsistentHash`. The old name is left as an alias for compatibility. Although this change is backward compatible, we may remove the alias in subsequent releases, that's why it's recommended to update the usages of this function in your apps. [#35553](https://github.com/ClickHouse/ClickHouse/pull/35553) ([Alexey Milovidov](https://github.com/alexey-milovidov)).

 #### New Feature

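To illustrate the renamed compatibility setting above; table `t` and the chosen setting value are placeholders:

```sql
-- Recommended form: SETTINGS comes before FORMAT (inline CSV data follows the statement).
INSERT INTO t SETTINGS max_threads = 4 FORMAT CSV

-- The legacy order (SETTINGS after FORMAT) is rejected unless the
-- compatibility setting is enabled:
SET allow_settings_after_format_in_insert = 1;
```
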
@@ -244,16 +244,18 @@ endif ()
 # Add a section with the hash of the compiled machine code for integrity checks.
 # Only for official builds, because adding a section can be time consuming (rewrite of several GB).
 # And cross compiled binaries are not supported (since you cannot execute clickhouse hash-binary)
-if (OBJCOPY_PATH AND CLICKHOUSE_OFFICIAL_BUILD AND (NOT CMAKE_TOOLCHAIN_FILE OR CMAKE_TOOLCHAIN_FILE MATCHES "linux/toolchain-x86_64.cmake$"))
+if (CLICKHOUSE_OFFICIAL_BUILD AND (NOT CMAKE_TOOLCHAIN_FILE OR CMAKE_TOOLCHAIN_FILE MATCHES "linux/toolchain-x86_64.cmake$"))
+    message(STATUS "Official build: A checksum hash will be added to the clickhouse executable")
     set (USE_BINARY_HASH 1 CACHE STRING "Calculate binary hash and store it in the separate section")
+else ()
+    message(STATUS "No official build: A checksum hash will not be added to the clickhouse executable")
 endif ()

-# Allows to build stripped binary in a separate directory
-if (OBJCOPY_PATH AND STRIP_PATH)
-    option(INSTALL_STRIPPED_BINARIES "Build stripped binaries with debug info in separate directory" OFF)
-    if (INSTALL_STRIPPED_BINARIES)
-        set(STRIPPED_BINARIES_OUTPUT "stripped" CACHE STRING "A separate directory for stripped information")
-    endif()
-endif()
+# Optionally split binaries and debug symbols.
+option(INSTALL_STRIPPED_BINARIES "Split binaries and debug symbols" OFF)
+if (INSTALL_STRIPPED_BINARIES)
+    message(STATUS "Will split binaries and debug symbols")
+    set(STRIPPED_BINARIES_OUTPUT "stripped" CACHE STRING "A separate directory for stripped information")
+endif()

 cmake_host_system_information(RESULT AVAILABLE_PHYSICAL_MEMORY QUERY AVAILABLE_PHYSICAL_MEMORY) # Not available under freebsd

@@ -13,7 +13,3 @@ ClickHouse® is an open-source column-oriented database management system that a
 * [Code Browser (Woboq)](https://clickhouse.com/codebrowser/ClickHouse/index.html) with syntax highlight and navigation.
 * [Code Browser (github.dev)](https://github.dev/ClickHouse/ClickHouse) with syntax highlight, powered by github.dev.
 * [Contacts](https://clickhouse.com/company/#contact) can help to get your questions answered if there are any.
-
-## Upcoming Events
-
-* [ClickHouse Meetup Amsterdam (in-person and online)](https://www.meetup.com/clickhouse-netherlands-user-group/events/286017044/) on June 8th, 2022

49  SECURITY.md
@@ -1,3 +1,4 @@
+
 # Security Policy

 ## Security Announcements
@@ -7,29 +8,30 @@ Security fixes will be announced by posting them in the [security changelog](htt

 The following versions of ClickHouse server are currently being supported with security updates:

 | Version | Supported |
-| ------- | ------------------ |
-| 1.x | :x: |
-| 18.x | :x: |
-| 19.x | :x: |
-| 20.x | :x: |
-| 21.1 | :x: |
-| 21.2 | :x: |
-| 21.3 | :x: |
-| 21.4 | :x: |
-| 21.5 | :x: |
-| 21.6 | :x: |
-| 21.7 | :x: |
-| 21.8 | ✅ |
-| 21.9 | :x: |
-| 21.10 | :x: |
-| 21.11 | :x: |
-| 21.12 | :x: |
-| 22.1 | :x: |
-| 22.2 | :x: |
-| 22.3 | ✅ |
-| 22.4 | ✅ |
-| 22.5 | ✅ |
+|:-|:-|
+| 22.6 | ✔️ |
+| 22.5 | ✔️ |
+| 22.4 | ✔️ |
+| 22.3 | ✔️ |
+| 22.2 | ❌ |
+| 22.1 | ❌ |
+| 21.12 | ❌ |
+| 21.11 | ❌ |
+| 21.10 | ❌ |
+| 21.9 | ❌ |
+| 21.8 | ✔️ |
+| 21.7 | ❌ |
+| 21.6 | ❌ |
+| 21.5 | ❌ |
+| 21.4 | ❌ |
+| 21.3 | ❌ |
+| 21.2 | ❌ |
+| 21.1 | ❌ |
+| 20.* | ❌ |
+| 19.* | ❌ |
+| 18.* | ❌ |
+| 1.* | ❌ |

 ## Reporting a Vulnerability
@@ -57,4 +59,3 @@ As the security issue moves from triage, to identified fix, to release planning

 A public disclosure date is negotiated by the ClickHouse maintainers and the bug submitter. We prefer to fully disclose the bug as soon as possible once a user mitigation is available. It is reasonable to delay disclosure when the bug or the fix is not yet fully understood, the solution is not well-tested, or for vendor coordination. The timeframe for disclosure is from immediate (especially if it's already publicly known) to 90 days. For a vulnerability with a straightforward mitigation, we expect report date to disclosure date to be on the order of 7 days.
-

@@ -49,7 +49,7 @@ struct Decimal
     using NativeType = T;

     constexpr Decimal() = default;
-    constexpr Decimal(Decimal<T> &&) = default;
+    constexpr Decimal(Decimal<T> &&) noexcept = default;
     constexpr Decimal(const Decimal<T> &) = default;

     constexpr Decimal(const T & value_): value(value_) {}
@@ -57,7 +57,7 @@ struct Decimal
     template <typename U>
     constexpr Decimal(const Decimal<U> & x): value(x.value) {}

-    constexpr Decimal<T> & operator = (Decimal<T> &&) = default;
+    constexpr Decimal<T> & operator=(Decimal<T> &&) noexcept = default;
     constexpr Decimal<T> & operator = (const Decimal<T> &) = default;

     constexpr operator T () const { return value; }

@@ -260,4 +260,35 @@
 TRAP(wordexp)
 TRAP(wordfree)

+/// C11 threading primitives are not supported by ThreadSanitizer.
+/// Also we should avoid using them for compatibility with old libc.
+TRAP(thrd_create)
+TRAP(thrd_equal)
+TRAP(thrd_current)
+TRAP(thrd_sleep)
+TRAP(thrd_yield)
+TRAP(thrd_exit)
+TRAP(thrd_detach)
+TRAP(thrd_join)
+
+TRAP(mtx_init)
+TRAP(mtx_lock)
+TRAP(mtx_timedlock)
+TRAP(mtx_trylock)
+TRAP(mtx_unlock)
+TRAP(mtx_destroy)
+TRAP(call_once)
+
+TRAP(cnd_init)
+TRAP(cnd_signal)
+TRAP(cnd_broadcast)
+TRAP(cnd_wait)
+TRAP(cnd_timedwait)
+TRAP(cnd_destroy)
+
+TRAP(tss_create)
+TRAP(tss_get)
+TRAP(tss_set)
+TRAP(tss_delete)
+
 #endif

@@ -2,11 +2,11 @@

 # NOTE: has nothing common with DBMS_TCP_PROTOCOL_VERSION,
 # only DBMS_TCP_PROTOCOL_VERSION should be incremented on protocol changes.
-SET(VERSION_REVISION 54463)
+SET(VERSION_REVISION 54464)
 SET(VERSION_MAJOR 22)
-SET(VERSION_MINOR 6)
+SET(VERSION_MINOR 7)
 SET(VERSION_PATCH 1)
-SET(VERSION_GITHASH df0cb0620985eb5ec59760cc76f7736e5b6209bb)
-SET(VERSION_DESCRIBE v22.6.1.1-testing)
-SET(VERSION_STRING 22.6.1.1)
+SET(VERSION_GITHASH 7000c4e0033bb9e69050ab8ef73e8e7465f78059)
+SET(VERSION_DESCRIBE v22.7.1.1-testing)
+SET(VERSION_STRING 22.7.1.1)
 # end of autochange

@@ -29,7 +29,7 @@ if (ARCH_NATIVE)
     set (COMPILER_FLAGS "${COMPILER_FLAGS} -march=native")

 elseif (ARCH_AARCH64)
-    set (COMPILER_FLAGS "${COMPILER_FLAGS} -march=armv8-a+crc")
+    set (COMPILER_FLAGS "${COMPILER_FLAGS} -march=armv8-a+crc+simd+crypto+dotprod+ssbs")

 elseif (ARCH_PPC64LE)
     # Note that gcc and clang have support for x86 SSE2 intrinsics when building for PowerPC

@@ -19,9 +19,12 @@ macro(clickhouse_strip_binary)
     COMMAND mkdir -p "${STRIP_DESTINATION_DIR}/lib/debug/bin"
     COMMAND mkdir -p "${STRIP_DESTINATION_DIR}/bin"
     COMMAND cp "${STRIP_BINARY_PATH}" "${STRIP_DESTINATION_DIR}/bin/${STRIP_TARGET}"
+    # Splits debug symbols into separate file, leaves the binary untouched:
     COMMAND "${OBJCOPY_PATH}" --only-keep-debug --compress-debug-sections "${STRIP_DESTINATION_DIR}/bin/${STRIP_TARGET}" "${STRIP_DESTINATION_DIR}/lib/debug/bin/${STRIP_TARGET}.debug"
     COMMAND chmod 0644 "${STRIP_DESTINATION_DIR}/lib/debug/bin/${STRIP_TARGET}.debug"
-    COMMAND "${STRIP_PATH}" --remove-section=.comment --remove-section=.note "${STRIP_DESTINATION_DIR}/bin/${STRIP_TARGET}"
+    # Strips binary, sections '.note' & '.comment' are removed in line with Debian's stripping policy: www.debian.org/doc/debian-policy/ch-files.html, section '.clickhouse.hash' is needed for integrity check:
+    COMMAND "${STRIP_PATH}" --remove-section=.comment --remove-section=.note --keep-section=.clickhouse.hash "${STRIP_DESTINATION_DIR}/bin/${STRIP_TARGET}"
+    # Associate stripped binary with debug symbols:
     COMMAND "${OBJCOPY_PATH}" --add-gnu-debuglink "${STRIP_DESTINATION_DIR}/lib/debug/bin/${STRIP_TARGET}.debug" "${STRIP_DESTINATION_DIR}/bin/${STRIP_TARGET}"
     COMMENT "Stripping clickhouse binary" VERBATIM
 )

@@ -77,6 +77,7 @@ if (OS_LINUX AND NOT LINKER_NAME)

 if (NOT LINKER_NAME)
     if (GOLD_PATH)
+        message (WARNING "Linking with gold is not recommended. Please use lld.")
         if (COMPILER_GCC)
             set (LINKER_NAME "gold")
         else ()
@@ -111,7 +112,7 @@ endif()
 # Archiver

 if (COMPILER_GCC)
-    find_program (LLVM_AR_PATH NAMES "llvm-ar" "llvm-ar-13" "llvm-ar-12" "llvm-ar-11")
+    find_program (LLVM_AR_PATH NAMES "llvm-ar" "llvm-ar-14" "llvm-ar-13" "llvm-ar-12")
 else ()
     find_program (LLVM_AR_PATH NAMES "llvm-ar-${COMPILER_VERSION_MAJOR}" "llvm-ar")
 endif ()

@@ -125,7 +126,7 @@ message(STATUS "Using archiver: ${CMAKE_AR}")
 # Ranlib

 if (COMPILER_GCC)
-    find_program (LLVM_RANLIB_PATH NAMES "llvm-ranlib" "llvm-ranlib-13" "llvm-ranlib-12" "llvm-ranlib-11")
+    find_program (LLVM_RANLIB_PATH NAMES "llvm-ranlib" "llvm-ranlib-14" "llvm-ranlib-13" "llvm-ranlib-12")
 else ()
     find_program (LLVM_RANLIB_PATH NAMES "llvm-ranlib-${COMPILER_VERSION_MAJOR}" "llvm-ranlib")
 endif ()

@@ -139,7 +140,7 @@ message(STATUS "Using ranlib: ${CMAKE_RANLIB}")
 # Install Name Tool

 if (COMPILER_GCC)
-    find_program (LLVM_INSTALL_NAME_TOOL_PATH NAMES "llvm-install-name-tool" "llvm-install-name-tool-13" "llvm-install-name-tool-12" "llvm-install-name-tool-11")
+    find_program (LLVM_INSTALL_NAME_TOOL_PATH NAMES "llvm-install-name-tool" "llvm-install-name-tool-14" "llvm-install-name-tool-13" "llvm-install-name-tool-12")
 else ()
     find_program (LLVM_INSTALL_NAME_TOOL_PATH NAMES "llvm-install-name-tool-${COMPILER_VERSION_MAJOR}" "llvm-install-name-tool")
 endif ()

@@ -153,7 +154,7 @@ message(STATUS "Using install-name-tool: ${CMAKE_INSTALL_NAME_TOOL}")
 # Objcopy

 if (COMPILER_GCC)
-    find_program (OBJCOPY_PATH NAMES "llvm-objcopy" "llvm-objcopy-13" "llvm-objcopy-12" "llvm-objcopy-11" "objcopy")
+    find_program (OBJCOPY_PATH NAMES "llvm-objcopy" "llvm-objcopy-14" "llvm-objcopy-13" "llvm-objcopy-12" "objcopy")
 else ()
     find_program (OBJCOPY_PATH NAMES "llvm-objcopy-${COMPILER_VERSION_MAJOR}" "llvm-objcopy" "objcopy")
 endif ()

@@ -167,7 +168,7 @@ endif ()
 # Strip

 if (COMPILER_GCC)
-    find_program (STRIP_PATH NAMES "llvm-strip" "llvm-strip-13" "llvm-strip-12" "llvm-strip-11" "strip")
+    find_program (STRIP_PATH NAMES "llvm-strip" "llvm-strip-14" "llvm-strip-13" "llvm-strip-12" "strip")
 else ()
     find_program (STRIP_PATH NAMES "llvm-strip-${COMPILER_VERSION_MAJOR}" "llvm-strip" "strip")
 endif ()
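All five lookups follow the same pattern: a versioned LLVM tool is preferred, with older versions and a generic name as fallbacks. The shell equivalent of one such lookup (a sketch; `14` stands in for `${COMPILER_VERSION_MAJOR}`):

```bash
# Mirrors find_program's ordered-name fallback for objcopy.
OBJCOPY_PATH=$(command -v llvm-objcopy-14 || command -v llvm-objcopy || command -v objcopy)
echo "Using objcopy: ${OBJCOPY_PATH:?no objcopy found}"
```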
contrib/NuRaft vendored
@@ -1 +1 @@
-Subproject commit 24a13f15cf0838b93f3b1beb62ed010dffdb2117
+Subproject commit 1334b9ae72576821a698d657d08838861cf33007

contrib/curl vendored
@@ -1 +1 @@
-Subproject commit 801bd5138ce31aa0d906fa4e2eabfc599d74e793
+Subproject commit 462196e6b4a47f924293a0e26b8e9c23d37ac26f
@@ -84,7 +84,6 @@ set (SRCS
     "${LIBRARY_DIR}/lib/gopher.c"
     "${LIBRARY_DIR}/lib/idn_win32.c"
     "${LIBRARY_DIR}/lib/http_proxy.c"
-    "${LIBRARY_DIR}/lib/non-ascii.c"
     "${LIBRARY_DIR}/lib/asyn-thread.c"
     "${LIBRARY_DIR}/lib/curl_gssapi.c"
     "${LIBRARY_DIR}/lib/http_ntlm.c"
@@ -93,10 +92,8 @@ set (SRCS
     "${LIBRARY_DIR}/lib/curl_sasl.c"
     "${LIBRARY_DIR}/lib/rand.c"
     "${LIBRARY_DIR}/lib/curl_multibyte.c"
-    "${LIBRARY_DIR}/lib/hostcheck.c"
     "${LIBRARY_DIR}/lib/conncache.c"
     "${LIBRARY_DIR}/lib/dotdot.c"
-    "${LIBRARY_DIR}/lib/x509asn1.c"
     "${LIBRARY_DIR}/lib/http2.c"
     "${LIBRARY_DIR}/lib/smb.c"
     "${LIBRARY_DIR}/lib/curl_endian.c"
@@ -120,6 +117,9 @@ set (SRCS
     "${LIBRARY_DIR}/lib/http_aws_sigv4.c"
     "${LIBRARY_DIR}/lib/mqtt.c"
     "${LIBRARY_DIR}/lib/rename.c"
+    "${LIBRARY_DIR}/lib/h2h3.c"
+    "${LIBRARY_DIR}/lib/headers.c"
+    "${LIBRARY_DIR}/lib/timediff.c"
     "${LIBRARY_DIR}/lib/vauth/vauth.c"
     "${LIBRARY_DIR}/lib/vauth/cleartext.c"
     "${LIBRARY_DIR}/lib/vauth/cram.c"
@@ -142,11 +142,13 @@ set (SRCS
     "${LIBRARY_DIR}/lib/vtls/sectransp.c"
     "${LIBRARY_DIR}/lib/vtls/gskit.c"
     "${LIBRARY_DIR}/lib/vtls/mbedtls.c"
-    "${LIBRARY_DIR}/lib/vtls/mesalink.c"
     "${LIBRARY_DIR}/lib/vtls/bearssl.c"
     "${LIBRARY_DIR}/lib/vtls/keylog.c"
+    "${LIBRARY_DIR}/lib/vtls/x509asn1.c"
+    "${LIBRARY_DIR}/lib/vtls/hostcheck.c"
     "${LIBRARY_DIR}/lib/vquic/ngtcp2.c"
     "${LIBRARY_DIR}/lib/vquic/quiche.c"
+    "${LIBRARY_DIR}/lib/vquic/msh3.c"
     "${LIBRARY_DIR}/lib/vssh/libssh2.c"
     "${LIBRARY_DIR}/lib/vssh/libssh.c"
 )
contrib/grpc vendored
@@ -1 +1 @@
-Subproject commit 7eac189a6badddac593580ec2ad1478bd2656fc7
+Subproject commit 5e23e96c0c02e451dbb291cf9f66231d02b6cdb6

contrib/librdkafka vendored
@@ -1 +1 @@
-Subproject commit b8554f1682062c85ba519eb54ef2f90e02b812cb
+Subproject commit 81b413cc1c2a33ad4e96df856b89184efbd6221c

contrib/libunwind vendored
@@ -1 +1 @@
-Subproject commit c4ea9848a697747dfa35325af9b3452f30841685
+Subproject commit 5022f30f3e092a54a7c101c335ce5e08769db366
@@ -76,9 +76,7 @@ message (STATUS "LLVM library Directory: ${LLVM_LIBRARY_DIRS}")
 message (STATUS "LLVM C++ compiler flags: ${LLVM_CXXFLAGS}")

 # ld: unknown option: --color-diagnostics
-if (APPLE)
-    set (LINKER_SUPPORTS_COLOR_DIAGNOSTICS 0 CACHE INTERNAL "")
-endif ()
+set (LINKER_SUPPORTS_COLOR_DIAGNOSTICS 0 CACHE INTERNAL "")

 # Do not adjust RPATH in llvm, since then it will not be able to find libcxx/libcxxabi/libunwind
 set (CMAKE_INSTALL_RPATH "ON")
contrib/simdjson vendored
@@ -1 +1 @@
-Subproject commit 8df32cea3359cb30120795da6020b3b73da01d38
+Subproject commit de196dd7a3a16e4056b0551ffa3b85c2f52581e1

contrib/zstd vendored
@@ -1 +1 @@
-Subproject commit a488ba114ec17ea1054b9057c26a046fc122b3b6
+Subproject commit b944db0c451ba1bc6bbd8e201d5f88f9041bf1f9
@@ -50,7 +50,7 @@ GetLibraryVersion("${HEADER_CONTENT}" LIBVER_MAJOR LIBVER_MINOR LIBVER_RELEASE)
 MESSAGE(STATUS "ZSTD VERSION ${LIBVER_MAJOR}.${LIBVER_MINOR}.${LIBVER_RELEASE}")

 # cd contrib/zstd/lib
-# find . -name '*.c' | grep -vP 'deprecated|legacy' | sort | sed 's/^\./ "${LIBRARY_DIR}/"'
+# find . -name '*.c' -or -name '*.S' | grep -vP 'deprecated|legacy' | sort | sed 's/^\./ "${LIBRARY_DIR}/"'
 SET(Sources
     "${LIBRARY_DIR}/common/debug.c"
     "${LIBRARY_DIR}/common/entropy_common.c"
@@ -73,6 +73,7 @@ SET(Sources
     "${LIBRARY_DIR}/compress/zstd_ldm.c"
     "${LIBRARY_DIR}/compress/zstdmt_compress.c"
     "${LIBRARY_DIR}/compress/zstd_opt.c"
+    "${LIBRARY_DIR}/decompress/huf_decompress_amd64.S"
     "${LIBRARY_DIR}/decompress/huf_decompress.c"
     "${LIBRARY_DIR}/decompress/zstd_ddict.c"
     "${LIBRARY_DIR}/decompress/zstd_decompress_block.c"
@@ -85,6 +86,7 @@ SET(Sources
 # cd contrib/zstd/lib
 # find . -name '*.h' | grep -vP 'deprecated|legacy' | sort | sed 's/^\./ "${LIBRARY_DIR}/"'
 SET(Headers
+    "${LIBRARY_DIR}/common/bits.h"
     "${LIBRARY_DIR}/common/bitstream.h"
     "${LIBRARY_DIR}/common/compiler.h"
     "${LIBRARY_DIR}/common/cpu.h"
@@ -94,11 +96,13 @@ SET(Headers
     "${LIBRARY_DIR}/common/huf.h"
     "${LIBRARY_DIR}/common/mem.h"
     "${LIBRARY_DIR}/common/pool.h"
+    "${LIBRARY_DIR}/common/portability_macros.h"
     "${LIBRARY_DIR}/common/threading.h"
     "${LIBRARY_DIR}/common/xxhash.h"
     "${LIBRARY_DIR}/common/zstd_deps.h"
     "${LIBRARY_DIR}/common/zstd_internal.h"
     "${LIBRARY_DIR}/common/zstd_trace.h"
+    "${LIBRARY_DIR}/compress/clevels.h"
     "${LIBRARY_DIR}/compress/hist.h"
     "${LIBRARY_DIR}/compress/zstd_compress_internal.h"
     "${LIBRARY_DIR}/compress/zstd_compress_literals.h"
@@ -21,7 +21,7 @@ RUN sed -i "s|http://archive.ubuntu.com|${apt_archive}|g" /etc/apt/sources.list

 ARG REPO_CHANNEL="stable"
 ARG REPOSITORY="deb https://packages.clickhouse.com/deb ${REPO_CHANNEL} main"
-ARG VERSION=22.5.1.*
+ARG VERSION=22.6.1.*
 ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static"

 # set non-empty deb_location_url url to create a docker image
@@ -21,7 +21,9 @@ By default, starting above server instance will be run as default user without p

 ### connect to it from a native client
 ```bash
-$ docker run -it --rm --link some-clickhouse-server:clickhouse-server clickhouse/clickhouse-client --host clickhouse-server
+$ docker run -it --rm --link some-clickhouse-server:clickhouse-server --entrypoint clickhouse-client clickhouse/clickhouse-server --host clickhouse-server
+# OR
+$ docker exec -it some-clickhouse-server clickhouse-client
 ```

 More information about [ClickHouse client](https://clickhouse.com/docs/en/interfaces/cli/).
@@ -37,38 +37,13 @@ export FASTTEST_DATA
 export FASTTEST_OUT
 export PATH

-server_pid=none
-
-function stop_server
-{
-    if ! kill -0 -- "$server_pid"
-    then
-        echo "ClickHouse server pid '$server_pid' is not running"
-        return 0
-    fi
-
-    for _ in {1..60}
-    do
-        if ! pkill -f "clickhouse-server" && ! kill -- "$server_pid" ; then break ; fi
-        sleep 1
-    done
-
-    if kill -0 -- "$server_pid"
-    then
-        pstree -apgT
-        jobs
-        echo "Failed to kill the ClickHouse server pid '$server_pid'"
-        return 1
-    fi
-
-    server_pid=none
-}
-
 function start_server
 {
     set -m # Spawn server in its own process groups

     local opts=(
         --config-file "$FASTTEST_DATA/config.xml"
+        --pid-file "$FASTTEST_DATA/clickhouse-server.pid"
         --
         --path "$FASTTEST_DATA"
         --user_files_path "$FASTTEST_DATA/user_files"
@@ -76,40 +51,22 @@ function start_server
         --keeper_server.storage_path "$FASTTEST_DATA/coordination"
     )
     clickhouse-server "${opts[@]}" &>> "$FASTTEST_OUTPUT/server.log" &
-    server_pid=$!
     set +m

-    if [ "$server_pid" == "0" ]
-    then
-        echo "Failed to start ClickHouse server"
-        # Avoid zero PID because `kill` treats it as our process group PID.
-        server_pid="none"
-        return 1
-    fi
-
-    for _ in {1..60}
-    do
-        if clickhouse-client --query "select 1" || ! kill -0 -- "$server_pid"
-        then
+    for _ in {1..60}; do
+        if clickhouse-client --query "select 1"; then
             break
         fi
         sleep 1
     done

-    if ! clickhouse-client --query "select 1"
-    then
+    if ! clickhouse-client --query "select 1"; then
         echo "Failed to wait until ClickHouse server starts."
-        server_pid="none"
         return 1
     fi

-    if ! kill -0 -- "$server_pid"
-    then
-        echo "Wrong clickhouse server started: PID '$server_pid' we started is not running, but '$(pgrep -f clickhouse-server)' is running"
-        server_pid="none"
-        return 1
-    fi
-
+    local server_pid
+    server_pid="$(cat "$FASTTEST_DATA/clickhouse-server.pid")"
     echo "ClickHouse server pid '$server_pid' started and responded"
 }

@@ -254,9 +211,6 @@ function run_tests
     clickhouse-server --version
     clickhouse-test --help

-    # Kill the server in case we are running locally and not in docker
-    stop_server ||:
-
     start_server

     set +e
@@ -284,6 +238,8 @@ function run_tests
         | ts '%Y-%m-%d %H:%M:%S' \
         | tee "$FASTTEST_OUTPUT/test_result.txt"
     set -e
+
+    clickhouse stop --pid-path "$FASTTEST_DATA"
 }

 case "$stage" in
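The refactored script replaces ad-hoc `pkill`/`$!` bookkeeping with a pid file owned by the server itself: start with `--pid-file`, stop with `clickhouse stop --pid-path`. A standalone sketch of the same pattern (the directory name is a placeholder):

```bash
DATA_DIR=/tmp/ch-fasttest   # placeholder working directory
mkdir -p "$DATA_DIR"

# The server records its own pid; nothing to capture from $! later.
clickhouse-server --pid-file "$DATA_DIR/clickhouse-server.pid" -- --path "$DATA_DIR" &

# Wait until it answers, then shut it down via the pid file.
for _ in {1..60}; do clickhouse-client --query "select 1" && break; sleep 1; done
clickhouse stop --pid-path "$DATA_DIR"
```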
@@ -125,16 +125,7 @@ function filter_exists_and_template
 function stop_server
 {
     clickhouse-client --query "select elapsed, query from system.processes" ||:
-    killall clickhouse-server ||:
-    for _ in {1..10}
-    do
-        if ! pgrep -f clickhouse-server
-        then
-            break
-        fi
-        sleep 1
-    done
-    killall -9 clickhouse-server ||:
+    clickhouse stop

     # Debug.
     date
@@ -159,10 +150,12 @@ function fuzz
     NEW_TESTS_OPT="${NEW_TESTS_OPT:-}"
     fi

+    mkdir -p /var/run/clickhouse-server

     # interferes with gdb
     export CLICKHOUSE_WATCHDOG_ENABLE=0
     # NOTE: we use process substitution here to preserve keep $! as a pid of clickhouse-server
-    clickhouse-server --config-file db/config.xml -- --path db > >(tail -100000 > server.log) 2>&1 &
+    clickhouse-server --config-file db/config.xml --pid-file /var/run/clickhouse-server/clickhouse-server.pid -- --path db > >(tail -100000 > server.log) 2>&1 &
     server_pid=$!

     kill -0 $server_pid
@@ -21,7 +21,7 @@ export NUM_QUERIES=1000
 ( java -jar target/sqlancer-*.jar --num-threads 10 --timeout-seconds $TIMEOUT --num-queries $NUM_QUERIES --username default --password "" clickhouse --oracle TLPDistinct | tee /test_output/TLPDistinct.out ) 3>&1 1>&2 2>&3 | tee /test_output/TLPDistinct.err
 ( java -jar target/sqlancer-*.jar --num-threads 10 --timeout-seconds $TIMEOUT --num-queries $NUM_QUERIES --username default --password "" clickhouse --oracle TLPAggregate | tee /test_output/TLPAggregate.out ) 3>&1 1>&2 2>&3 | tee /test_output/TLPAggregate.err

-service clickhouse-server stop && sleep 10
+service clickhouse stop

 ls /var/log/clickhouse-server/
 tar czf /test_output/logs.tar.gz -C /var/log/clickhouse-server/ .
@@ -7,22 +7,12 @@ RUN apt-get update -y \
     && env DEBIAN_FRONTEND=noninteractive \
         apt-get install --yes --no-install-recommends \
             python3-requests \
-            llvm-9
+    && apt-get clean

 COPY s3downloader /s3downloader

 ENV S3_URL="https://clickhouse-datasets.s3.amazonaws.com"
 ENV DATASETS="hits visits"
-ENV EXPORT_S3_STORAGE_POLICIES=1
-
-# Download Minio-related binaries
-RUN arch=${TARGETARCH:-amd64} \
-    && if [ "$arch" = "amd64" ] ; then wget "https://dl.min.io/server/minio/release/linux-${arch}/archive/minio-20220103182258.0.0.x86_64.rpm"; else wget "https://dl.min.io/server/minio/release/linux-${arch}/archive/minio-20220103182258.0.0.aarch64.rpm" ; fi \
-    && wget "https://dl.min.io/client/mc/release/linux-${arch}/mc" \
-    && chmod +x ./mc
-ENV MINIO_ROOT_USER="clickhouse"
-ENV MINIO_ROOT_PASSWORD="clickhouse"
-COPY setup_minio.sh /

 COPY run.sh /
 CMD ["/bin/bash", "/run.sh"]
@@ -17,22 +17,28 @@ ln -s /usr/share/clickhouse-test/clickhouse-test /usr/bin/clickhouse-test
 # install test configs
 /usr/share/clickhouse-test/config/install.sh

-./setup_minio.sh
+./setup_minio.sh stateful

 function start()
 {
     if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]]; then
+        mkdir -p /var/run/clickhouse-server1
+        sudo chown clickhouse:clickhouse /var/run/clickhouse-server1
         # NOTE We run "clickhouse server" instead of "clickhouse-server"
         #  to make "pidof clickhouse-server" return single pid of the main instance.
         #  We wil run main instance using "service clickhouse-server start"
         sudo -E -u clickhouse /usr/bin/clickhouse server --config /etc/clickhouse-server1/config.xml --daemon \
+        --pid-file /var/run/clickhouse-server1/clickhouse-server.pid \
         -- --path /var/lib/clickhouse1/ --logger.stderr /var/log/clickhouse-server/stderr1.log \
         --logger.log /var/log/clickhouse-server/clickhouse-server1.log --logger.errorlog /var/log/clickhouse-server/clickhouse-server1.err.log \
         --tcp_port 19000 --tcp_port_secure 19440 --http_port 18123 --https_port 18443 --interserver_http_port 19009 --tcp_with_proxy_port 19010 \
         --mysql_port 19004 --postgresql_port 19005 \
         --keeper_server.tcp_port 19181 --keeper_server.server_id 2

+        mkdir -p /var/run/clickhouse-server2
+        sudo chown clickhouse:clickhouse /var/run/clickhouse-server2
         sudo -E -u clickhouse /usr/bin/clickhouse server --config /etc/clickhouse-server2/config.xml --daemon \
+        --pid-file /var/run/clickhouse-server2/clickhouse-server.pid \
         -- --path /var/lib/clickhouse2/ --logger.stderr /var/log/clickhouse-server/stderr2.log \
         --logger.log /var/log/clickhouse-server/clickhouse-server2.log --logger.errorlog /var/log/clickhouse-server/clickhouse-server2.err.log \
         --tcp_port 29000 --tcp_port_secure 29440 --http_port 28123 --https_port 28443 --interserver_http_port 29009 --tcp_with_proxy_port 29010 \

@@ -135,6 +141,12 @@ ls -la /
 /process_functional_tests_result.py || echo -e "failure\tCannot parse results" > /test_output/check_status.tsv

+sudo clickhouse stop ||:
+if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]]; then
+    sudo clickhouse stop --pid-path /var/run/clickhouse-server1 ||:
+    sudo clickhouse stop --pid-path /var/run/clickhouse-server2 ||:
+fi

 grep -Fa "Fatal" /var/log/clickhouse-server/clickhouse-server.log ||:

 pigz < /var/log/clickhouse-server/clickhouse-server.log > /test_output/clickhouse-server.log.gz ||:
docker/test/stateful/setup_minio.sh (deleted, recreated below as a symbolic link)
@@ -1,77 +0,0 @@
-#!/bin/bash
-
-# TODO: Make this file shared with stateless tests
-#
-# Usage for local run:
-#
-# ./docker/test/stateful/setup_minio.sh ./tests/
-#
-
-set -e -x -a -u
-
-rpm2cpio ./minio-20220103182258.0.0.*.rpm | cpio -i --make-directories
-find / -name minio
-cp ./usr/local/bin/minio ./
-
-ls -lha
-
-mkdir -p ./minio_data
-
-if [ ! -f ./minio ]; then
-  echo 'MinIO binary not found, downloading...'
-
-  BINARY_TYPE=$(uname -s | tr '[:upper:]' '[:lower:]')
-
-  wget "https://dl.min.io/server/minio/release/${BINARY_TYPE}-amd64/minio" \
-    && chmod +x ./minio \
-    && wget "https://dl.min.io/client/mc/release/${BINARY_TYPE}-amd64/mc" \
-    && chmod +x ./mc
-fi
-
-MINIO_ROOT_USER=${MINIO_ROOT_USER:-clickhouse}
-MINIO_ROOT_PASSWORD=${MINIO_ROOT_PASSWORD:-clickhouse}
-
-./minio --version
-./minio server --address ":11111" ./minio_data &
-
-i=0
-while ! curl -v --silent http://localhost:11111 2>&1 | grep AccessDenied
-do
-  if [[ $i == 60 ]]; then
-    echo "Failed to setup minio"
-    exit 0
-  fi
-  echo "Trying to connect to minio"
-  sleep 1
-  i=$((i + 1))
-done
-
-lsof -i :11111
-
-sleep 5
-
-./mc alias set clickminio http://localhost:11111 clickhouse clickhouse
-./mc admin user add clickminio test testtest
-./mc admin policy set clickminio readwrite user=test
-./mc mb clickminio/test
-
-
-# Upload data to Minio. By default after unpacking all tests will in
-# /usr/share/clickhouse-test/queries
-
-TEST_PATH=${1:-/usr/share/clickhouse-test}
-MINIO_DATA_PATH=${TEST_PATH}/queries/1_stateful/data_minio
-
-# Iterating over globs will cause redudant FILE variale to be a path to a file, not a filename
-# shellcheck disable=SC2045
-for FILE in $(ls "${MINIO_DATA_PATH}"); do
-    echo "$FILE";
-    ./mc cp "${MINIO_DATA_PATH}"/"$FILE" clickminio/test/"$FILE";
-done
-
-mkdir -p ~/.aws
-cat <<EOT >> ~/.aws/credentials
-[default]
-aws_access_key_id=clickhouse
-aws_secret_access_key=clickhouse
-EOT

docker/test/stateful/setup_minio.sh (symbolic link)
@@ -0,0 +1 @@
+../stateless/setup_minio.sh
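With the stateful copy gone, both test images share one script through a relative symlink; recreating it locally is a one-liner:

```bash
# Point the stateful image's script at the shared stateless implementation.
ln -sf ../stateless/setup_minio.sh docker/test/stateful/setup_minio.sh
```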
@@ -5,37 +5,36 @@ FROM clickhouse/test-base:$FROM_TAG

 ARG odbc_driver_url="https://github.com/ClickHouse/clickhouse-odbc/releases/download/v1.1.4.20200302/clickhouse-odbc-1.1.4-Linux.tar.gz"

+# golang version 1.13 on Ubuntu 20 is enough for tests
 RUN apt-get update -y \
     && env DEBIAN_FRONTEND=noninteractive \
         apt-get install --yes --no-install-recommends \
+            awscli \
             brotli \
             expect \
-            zstd \
+            golang \
             lsof \
+            mysql-client=8.0* \
             ncdu \
             netcat-openbsd \
+            openjdk-11-jre-headless \
             openssl \
+            postgresql-client \
             protobuf-compiler \
             python3 \
             python3-lxml \
+            python3-pip \
             python3-requests \
             python3-termcolor \
-            python3-pip \
             qemu-user-static \
+            sqlite3 \
             sudo \
-            # golang version 1.13 on Ubuntu 20 is enough for tests
-            golang \
             telnet \
             tree \
             unixodbc \
             wget \
-            mysql-client=8.0* \
-            postgresql-client \
-            sqlite3 \
-            awscli \
-            openjdk-11-jre-headless \
-            rpm2cpio \
-            cpio
+            zstd \
+    && apt-get clean

 RUN pip3 install numpy scipy pandas Jinja2
@@ -53,13 +52,17 @@ RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
 ENV NUM_TRIES=1
 ENV MAX_RUN_TIME=0

+# Unrelated to vars in setup_minio.sh, but should be the same there
+# to have the same binaries for local running scenario
+ARG MINIO_SERVER_VERSION=2022-01-03T18-22-58Z
+ARG MINIO_CLIENT_VERSION=2022-01-05T23-52-51Z
 ARG TARGETARCH

 # Download Minio-related binaries
 RUN arch=${TARGETARCH:-amd64} \
-    && if [ "$arch" = "amd64" ] ; then wget "https://dl.min.io/server/minio/release/linux-${arch}/archive/minio-20220103182258.0.0.x86_64.rpm"; else wget "https://dl.min.io/server/minio/release/linux-${arch}/archive/minio-20220103182258.0.0.aarch64.rpm" ; fi \
-    && wget "https://dl.min.io/client/mc/release/linux-${arch}/mc" \
-    && chmod +x ./mc
+    && wget "https://dl.min.io/server/minio/release/linux-${arch}/archive/minio.RELEASE.${MINIO_SERVER_VERSION}" -O ./minio \
+    && wget "https://dl.min.io/client/mc/release/linux-${arch}/archive/mc.RELEASE.${MINIO_CLIENT_VERSION}" -O ./mc \
+    && chmod +x ./mc ./minio

 RUN wget 'https://dlcdn.apache.org/hadoop/common/hadoop-3.3.1/hadoop-3.3.1.tar.gz' \
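Pinning `MINIO_SERVER_VERSION`/`MINIO_CLIENT_VERSION` makes the downloads reproducible. For `TARGETARCH=amd64` the `RUN` layer above expands to (illustrative):

```bash
wget "https://dl.min.io/server/minio/release/linux-amd64/archive/minio.RELEASE.2022-01-03T18-22-58Z" -O ./minio
wget "https://dl.min.io/client/mc/release/linux-amd64/archive/mc.RELEASE.2022-01-05T23-52-51Z" -O ./mc
chmod +x ./mc ./minio
```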
@@ -18,7 +18,7 @@ ln -s /usr/share/clickhouse-test/clickhouse-test /usr/bin/clickhouse-test
 # install test configs
 /usr/share/clickhouse-test/config/install.sh

-./setup_minio.sh
+./setup_minio.sh stateless
 ./setup_hdfs_minicluster.sh

 # For flaky check we also enable thread fuzzer
@@ -41,15 +41,18 @@ if [ "$NUM_TRIES" -gt "1" ]; then
     export THREAD_FUZZER_pthread_mutex_unlock_BEFORE_SLEEP_TIME_US=10000
     export THREAD_FUZZER_pthread_mutex_unlock_AFTER_SLEEP_TIME_US=10000

+    mkdir -p /var/run/clickhouse-server
     # simpliest way to forward env variables to server
-    sudo -E -u clickhouse /usr/bin/clickhouse-server --config /etc/clickhouse-server/config.xml --daemon
+    sudo -E -u clickhouse /usr/bin/clickhouse-server --config /etc/clickhouse-server/config.xml --daemon --pid-file /var/run/clickhouse-server/clickhouse-server.pid
 else
     sudo clickhouse start
 fi

 if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]]; then
+    mkdir -p /var/run/clickhouse-server1
+    sudo chown clickhouse:clickhouse /var/run/clickhouse-server1
     sudo -E -u clickhouse /usr/bin/clickhouse server --config /etc/clickhouse-server1/config.xml --daemon \
+    --pid-file /var/run/clickhouse-server1/clickhouse-server.pid \
     -- --path /var/lib/clickhouse1/ --logger.stderr /var/log/clickhouse-server/stderr1.log \
     --logger.log /var/log/clickhouse-server/clickhouse-server1.log --logger.errorlog /var/log/clickhouse-server/clickhouse-server1.err.log \
     --tcp_port 19000 --tcp_port_secure 19440 --http_port 18123 --https_port 18443 --interserver_http_port 19009 --tcp_with_proxy_port 19010 \
@@ -57,7 +60,10 @@ if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]]
     --keeper_server.tcp_port 19181 --keeper_server.server_id 2 \
     --macros.replica r2 # It doesn't work :(

+    mkdir -p /var/run/clickhouse-server2
+    sudo chown clickhouse:clickhouse /var/run/clickhouse-server2
     sudo -E -u clickhouse /usr/bin/clickhouse server --config /etc/clickhouse-server2/config.xml --daemon \
+    --pid-file /var/run/clickhouse-server2/clickhouse-server.pid \
     -- --path /var/lib/clickhouse2/ --logger.stderr /var/log/clickhouse-server/stderr2.log \
     --logger.log /var/log/clickhouse-server/clickhouse-server2.log --logger.errorlog /var/log/clickhouse-server/clickhouse-server2.err.log \
     --tcp_port 29000 --tcp_port_secure 29440 --http_port 28123 --https_port 28443 --interserver_http_port 29009 --tcp_with_proxy_port 29010 \
@@ -133,18 +139,10 @@ clickhouse-client -q "system flush logs" ||:
 # Stop server so we can safely read data with clickhouse-local.
 # Why do we read data with clickhouse-local?
 # Because it's the simplest way to read it when server has crashed.
-if [ "$NUM_TRIES" -gt "1" ]; then
-    clickhouse-client -q "system shutdown" ||:
-    sleep 10
-else
-    sudo clickhouse stop ||:
-fi
+sudo clickhouse stop ||:

 if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]]; then
-    clickhouse-client --port 19000 -q "system shutdown" ||:
-    clickhouse-client --port 29000 -q "system shutdown" ||:
-    sleep 10
+    sudo clickhouse stop --pid-path /var/run/clickhouse-server1 ||:
+    sudo clickhouse stop --pid-path /var/run/clickhouse-server2 ||:
 fi

 grep -Fa "Fatal" /var/log/clickhouse-server/clickhouse-server.log ||:
@@ -1,29 +1,41 @@
 #!/bin/bash

-# Usage for local run:
-#
-# ./docker/test/stateless/setup_minio.sh ./tests/
-#
+USAGE='Usage for local run:
+
+./docker/test/stateless/setup_minio.sh { stateful | stateless } ./tests/
+
+'

 set -e -x -a -u

-rpm2cpio ./minio-20220103182258.0.0.*.rpm | cpio -i --make-directories
-find / -name minio
-cp ./usr/local/bin/minio ./
+TEST_TYPE="$1"
+shift
+
+case $TEST_TYPE in
+  stateless) QUERY_DIR=0_stateless ;;
+  stateful) QUERY_DIR=1_stateful ;;
+  *) echo "unknown test type $TEST_TYPE"; echo "${USAGE}"; exit 1 ;;
+esac

 ls -lha

 mkdir -p ./minio_data

 if [ ! -f ./minio ]; then
+  MINIO_SERVER_VERSION=${MINIO_SERVER_VERSION:-2022-01-03T18-22-58Z}
+  MINIO_CLIENT_VERSION=${MINIO_CLIENT_VERSION:-2022-01-05T23-52-51Z}
+  case $(uname -m) in
+    x86_64) BIN_ARCH=amd64 ;;
+    aarch64) BIN_ARCH=arm64 ;;
+    *) echo "unknown architecture $(uname -m)"; exit 1 ;;
+  esac
   echo 'MinIO binary not found, downloading...'

   BINARY_TYPE=$(uname -s | tr '[:upper:]' '[:lower:]')

-  wget "https://dl.min.io/server/minio/release/${BINARY_TYPE}-amd64/minio" \
-    && chmod +x ./minio \
-    && wget "https://dl.min.io/client/mc/release/${BINARY_TYPE}-amd64/mc" \
-    && chmod +x ./mc
+  wget "https://dl.min.io/server/minio/release/${BINARY_TYPE}-${BIN_ARCH}/archive/minio.RELEASE.${MINIO_SERVER_VERSION}" -O ./minio \
+    && wget "https://dl.min.io/client/mc/release/${BINARY_TYPE}-${BIN_ARCH}/archive/mc.RELEASE.${MINIO_CLIENT_VERSION}" -O ./mc \
+    && chmod +x ./mc ./minio
 fi

 MINIO_ROOT_USER=${MINIO_ROOT_USER:-clickhouse}
@@ -52,14 +64,16 @@ sleep 5
 ./mc admin user add clickminio test testtest
 ./mc admin policy set clickminio readwrite user=test
 ./mc mb clickminio/test
-./mc policy set public clickminio/test
+if [ "$TEST_TYPE" = "stateless" ]; then
+  ./mc policy set public clickminio/test
+fi


 # Upload data to Minio. By default after unpacking all tests will in
 # /usr/share/clickhouse-test/queries

 TEST_PATH=${1:-/usr/share/clickhouse-test}
-MINIO_DATA_PATH=${TEST_PATH}/queries/0_stateless/data_minio
+MINIO_DATA_PATH=${TEST_PATH}/queries/${QUERY_DIR}/data_minio

 # Iterating over globs will cause redudant FILE variale to be a path to a file, not a filename
 # shellcheck disable=SC2045
@@ -71,6 +85,6 @@ done
 mkdir -p ~/.aws
 cat <<EOT >> ~/.aws/credentials
 [default]
-aws_access_key_id=clickhouse
-aws_secret_access_key=clickhouse
+aws_access_key_id=${MINIO_ROOT_USER}
+aws_secret_access_key=${MINIO_ROOT_PASSWORD}
 EOT
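The script now serves both test images and takes the suite type as its first argument, selecting `0_stateless` or `1_stateful` as the data directory. Local usage, per the `USAGE` string above:

```bash
# Start MinIO and upload the stateless suite's data files from ./tests/.
./docker/test/stateless/setup_minio.sh stateless ./tests/
```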
@@ -7,26 +7,27 @@ set -x

 # Thread Fuzzer allows to check more permutations of possible thread scheduling
 # and find more potential issues.
-
-export THREAD_FUZZER_CPU_TIME_PERIOD_US=1000
-export THREAD_FUZZER_SLEEP_PROBABILITY=0.1
-export THREAD_FUZZER_SLEEP_TIME_US=100000
-
-export THREAD_FUZZER_pthread_mutex_lock_BEFORE_MIGRATE_PROBABILITY=1
-export THREAD_FUZZER_pthread_mutex_lock_AFTER_MIGRATE_PROBABILITY=1
-export THREAD_FUZZER_pthread_mutex_unlock_BEFORE_MIGRATE_PROBABILITY=1
-export THREAD_FUZZER_pthread_mutex_unlock_AFTER_MIGRATE_PROBABILITY=1
-
-export THREAD_FUZZER_pthread_mutex_lock_BEFORE_SLEEP_PROBABILITY=0.001
-export THREAD_FUZZER_pthread_mutex_lock_AFTER_SLEEP_PROBABILITY=0.001
-export THREAD_FUZZER_pthread_mutex_unlock_BEFORE_SLEEP_PROBABILITY=0.001
-export THREAD_FUZZER_pthread_mutex_unlock_AFTER_SLEEP_PROBABILITY=0.001
-export THREAD_FUZZER_pthread_mutex_lock_BEFORE_SLEEP_TIME_US=10000
-
-export THREAD_FUZZER_pthread_mutex_lock_AFTER_SLEEP_TIME_US=10000
-export THREAD_FUZZER_pthread_mutex_unlock_BEFORE_SLEEP_TIME_US=10000
-export THREAD_FUZZER_pthread_mutex_unlock_AFTER_SLEEP_TIME_US=10000
+is_tsan_build=$(clickhouse local -q "select value like '% -fsanitize=thread %' from system.build_options where name='CXX_FLAGS'")
+if [ "$is_tsan_build" -eq "0" ]; then
+    export THREAD_FUZZER_CPU_TIME_PERIOD_US=1000
+    export THREAD_FUZZER_SLEEP_PROBABILITY=0.1
+    export THREAD_FUZZER_SLEEP_TIME_US=100000
+
+    export THREAD_FUZZER_pthread_mutex_lock_BEFORE_MIGRATE_PROBABILITY=1
+    export THREAD_FUZZER_pthread_mutex_lock_AFTER_MIGRATE_PROBABILITY=1
+    export THREAD_FUZZER_pthread_mutex_unlock_BEFORE_MIGRATE_PROBABILITY=1
+    export THREAD_FUZZER_pthread_mutex_unlock_AFTER_MIGRATE_PROBABILITY=1
+
+    export THREAD_FUZZER_pthread_mutex_lock_BEFORE_SLEEP_PROBABILITY=0.001
+    export THREAD_FUZZER_pthread_mutex_lock_AFTER_SLEEP_PROBABILITY=0.001
+    export THREAD_FUZZER_pthread_mutex_unlock_BEFORE_SLEEP_PROBABILITY=0.001
+    export THREAD_FUZZER_pthread_mutex_unlock_AFTER_SLEEP_PROBABILITY=0.001
+    export THREAD_FUZZER_pthread_mutex_lock_BEFORE_SLEEP_TIME_US=10000
+    export THREAD_FUZZER_pthread_mutex_lock_AFTER_SLEEP_TIME_US=10000
+    export THREAD_FUZZER_pthread_mutex_unlock_BEFORE_SLEEP_TIME_US=10000
+    export THREAD_FUZZER_pthread_mutex_unlock_AFTER_SLEEP_TIME_US=10000
+fi

 function install_packages()
 {
@@ -174,7 +175,7 @@ install_packages package_folder

 configure

-./setup_minio.sh
+./setup_minio.sh stateful # to have a proper environment

 start

@@ -340,6 +341,9 @@ then
            -e "UNFINISHED" \
            -e "Renaming unexpected part" \
            -e "PART_IS_TEMPORARILY_LOCKED" \
+           -e "and a merge is impossible: we didn't find smaller parts" \
+           -e "found in queue and some source parts for it was lost" \
+           -e "is lost forever." \
            /var/log/clickhouse-server/clickhouse-server.backward.clean.log | zgrep -Fa "<Error>" > /test_output/bc_check_error_messages.txt \
            && echo -e 'Backward compatibility check: Error message in clickhouse-server.log (see bc_check_error_messages.txt)\tFAIL' >> /test_output/test_results.tsv \
            || echo -e 'Backward compatibility check: No Error messages in clickhouse-server.log\tOK' >> /test_output/test_results.tsv
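The TSan guard relies on the binary recording its own compile flags in `system.build_options`; the detection query can be run standalone (prints 1 on a thread-sanitizer build, 0 otherwise):

```bash
clickhouse local -q "select value like '% -fsanitize=thread %' from system.build_options where name='CXX_FLAGS'"
```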
@@ -11,6 +11,7 @@ TIMEOUT_SIGN = "[ Timeout! "
 UNKNOWN_SIGN = "[ UNKNOWN "
 SKIPPED_SIGN = "[ SKIPPED "
 HUNG_SIGN = "Found hung queries in processlist"
+DATABASE_SIGN = "Database: "

 SUCCESS_FINISH_SIGNS = ["All tests have finished", "No tests were run"]

@@ -27,14 +28,19 @@ def process_test_log(log_path):
     retries = False
     success_finish = False
     test_results = []
+    test_end = True
     with open(log_path, "r") as test_file:
         for line in test_file:
             original_line = line
             line = line.strip()

             if any(s in line for s in SUCCESS_FINISH_SIGNS):
                 success_finish = True
+            # Ignore hung check report, since it may be quite large.
+            # (and may break python parser which has limit of 128KiB for each row).
             if HUNG_SIGN in line:
                 hung = True
+                break
             if RETRIES_SIGN in line:
                 retries = True
             if any(
@@ -67,8 +73,17 @@ def process_test_log(log_path):
                 else:
                     success += int(OK_SIGN in line)
                     test_results.append((test_name, "OK", test_time, []))
-            elif len(test_results) > 0 and test_results[-1][1] == "FAIL":
+                test_end = False
+            elif (
+                len(test_results) > 0 and test_results[-1][1] == "FAIL" and not test_end
+            ):
                 test_results[-1][3].append(original_line)
+            # Database printed after everything else in case of failures,
+            # so this is a stop marker for capturing test output.
+            #
+            # And it is handled after everything else to include line with database into the report.
+            if DATABASE_SIGN in line:
+                test_end = True

     test_results = [
         (test[0], test[1], test[2], "".join(test[3])) for test in test_results
@@ -113,7 +128,7 @@ def process_result(result_path):
         test_results,
     ) = process_test_log(result_path)
     is_flacky_check = 1 < int(os.environ.get("NUM_TRIES", 1))
-    logging.info("Is flacky check: %s", is_flacky_check)
+    logging.info("Is flaky check: %s", is_flacky_check)
     # If no tests were run (success == 0) it indicates an error (e.g. server did not start or crashed immediately)
     # But it's Ok for "flaky checks" - they can contain just one test for check which is marked as skipped.
     if failed != 0 or unknown != 0 or (success == 0 and (not is_flacky_check)):
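The new `DATABASE_SIGN` marker acts as a stop sign for output capture: lines after a FAIL are appended to that test's report until the `Database: ` line is seen, which itself still lands in the report. A hypothetical fragment of the log the parser consumes (the exact line layout of real test logs may differ):

```bash
# Hypothetical test_result.txt fragment: the middle line is captured into the
# FAIL entry; the "Database: ..." line closes the capture (test_end = True).
cat <<'EOT'
01234_example_test: [ FAIL ] - result differs from reference
Received exception from server: ...
Database: test_9a81xq
EOT
```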
@ -14,7 +14,7 @@
|
|||||||
* Selects with final are executed in parallel. Added setting `max_final_threads` to limit the number of threads used. [#10463](https://github.com/ClickHouse/ClickHouse/pull/10463) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
|
* Selects with final are executed in parallel. Added setting `max_final_threads` to limit the number of threads used. [#10463](https://github.com/ClickHouse/ClickHouse/pull/10463) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
|
||||||
* Function that extracts from haystack all matching non-overlapping groups with regular expressions, and put those into `Array(Array(String))` column. [#10534](https://github.com/ClickHouse/ClickHouse/pull/10534) ([Vasily Nemkov](https://github.com/Enmk)).
|
* Function that extracts from haystack all matching non-overlapping groups with regular expressions, and put those into `Array(Array(String))` column. [#10534](https://github.com/ClickHouse/ClickHouse/pull/10534) ([Vasily Nemkov](https://github.com/Enmk)).
|
||||||
* Added ability to delete a subset of expired rows, which satisfies the condition in WHERE clause. Added ability to replace expired rows with aggregates of them specified in GROUP BY clause. [#10537](https://github.com/ClickHouse/ClickHouse/pull/10537) ([expl0si0nn](https://github.com/expl0si0nn)).
|
* Added ability to delete a subset of expired rows, which satisfies the condition in WHERE clause. Added ability to replace expired rows with aggregates of them specified in GROUP BY clause. [#10537](https://github.com/ClickHouse/ClickHouse/pull/10537) ([expl0si0nn](https://github.com/expl0si0nn)).
|
||||||
* (Only Linux) Clickhouse server now tries to fallback to ProcfsMetricsProvider when clickhouse binary is not attributed with CAP_NET_ADMIN capability to collect per-query system metrics (for CPU and I/O). [#10544](https://github.com/ClickHouse/ClickHouse/pull/10544) ([Alexander Kazakov](https://github.com/Akazz)).
|
* (Only Linux) ClickHouse server now tries to fallback to ProcfsMetricsProvider when clickhouse binary is not attributed with CAP_NET_ADMIN capability to collect per-query system metrics (for CPU and I/O). [#10544](https://github.com/ClickHouse/ClickHouse/pull/10544) ([Alexander Kazakov](https://github.com/Akazz)).
|
||||||
* - Add Arrow IPC File format (Input and Output) - Fix incorrect work of resetParser() for Parquet Input Format - Add zero-copy optimization for ORC for RandomAccessFiles - Add missing halffloat type for input parquet and ORC formats ... [#10580](https://github.com/ClickHouse/ClickHouse/pull/10580) ([Zhanna](https://github.com/FawnD2)).
|
* - Add Arrow IPC File format (Input and Output) - Fix incorrect work of resetParser() for Parquet Input Format - Add zero-copy optimization for ORC for RandomAccessFiles - Add missing halffloat type for input parquet and ORC formats ... [#10580](https://github.com/ClickHouse/ClickHouse/pull/10580) ([Zhanna](https://github.com/FawnD2)).
|
||||||
* Allowed to profile memory with finer granularity steps than 4 MiB. Added sampling memory profiler to capture random allocations/deallocations. [#10598](https://github.com/ClickHouse/ClickHouse/pull/10598) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
* Allowed to profile memory with finer granularity steps than 4 MiB. Added sampling memory profiler to capture random allocations/deallocations. [#10598](https://github.com/ClickHouse/ClickHouse/pull/10598) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||||
* Add new input format `JSONAsString` that accepts a sequence of JSON objects separated by newlines, spaces and/or commas. [#10607](https://github.com/ClickHouse/ClickHouse/pull/10607) ([Kruglov Pavel](https://github.com/Avogar)).
|
* Add new input format `JSONAsString` that accepts a sequence of JSON objects separated by newlines, spaces and/or commas. [#10607](https://github.com/ClickHouse/ClickHouse/pull/10607) ([Kruglov Pavel](https://github.com/Avogar)).
|
||||||
|
@ -13,7 +13,7 @@
|
|||||||
* Users now can set comments to database in `CREATE DATABASE` statement ... [#29429](https://github.com/ClickHouse/ClickHouse/pull/29429) ([Vasily Nemkov](https://github.com/Enmk)).
|
* Users now can set comments to database in `CREATE DATABASE` statement ... [#29429](https://github.com/ClickHouse/ClickHouse/pull/29429) ([Vasily Nemkov](https://github.com/Enmk)).
|
||||||
* New function` mapContainsKeyLike` to get the map that key matches a simple regular expression. [#29471](https://github.com/ClickHouse/ClickHouse/pull/29471) ([凌涛](https://github.com/lingtaolf)).
|
* New function` mapContainsKeyLike` to get the map that key matches a simple regular expression. [#29471](https://github.com/ClickHouse/ClickHouse/pull/29471) ([凌涛](https://github.com/lingtaolf)).
|
||||||
* Huawei OBS Storage support. Closes [#24294](https://github.com/ClickHouse/ClickHouse/issues/24294). [#29511](https://github.com/ClickHouse/ClickHouse/pull/29511) ([kevin wan](https://github.com/MaxWk)).
|
* Huawei OBS Storage support. Closes [#24294](https://github.com/ClickHouse/ClickHouse/issues/24294). [#29511](https://github.com/ClickHouse/ClickHouse/pull/29511) ([kevin wan](https://github.com/MaxWk)).
|
||||||
* Clickhouse HTTP Server can enable HSTS by set `hsts_max_age` in config.xml with a positive number. [#29516](https://github.com/ClickHouse/ClickHouse/pull/29516) ([凌涛](https://github.com/lingtaolf)).
|
* ClickHouse HTTP Server can enable HSTS by set `hsts_max_age` in config.xml with a positive number. [#29516](https://github.com/ClickHouse/ClickHouse/pull/29516) ([凌涛](https://github.com/lingtaolf)).
|
||||||
* - Added MD4 and SHA384 functions. [#29602](https://github.com/ClickHouse/ClickHouse/pull/29602) ([Nikita Tikhomirov](https://github.com/NSTikhomirov)).
|
* - Added MD4 and SHA384 functions. [#29602](https://github.com/ClickHouse/ClickHouse/pull/29602) ([Nikita Tikhomirov](https://github.com/NSTikhomirov)).
|
||||||
* Support EXISTS(subquery). Closes [#6852](https://github.com/ClickHouse/ClickHouse/issues/6852). [#29731](https://github.com/ClickHouse/ClickHouse/pull/29731) ([Kseniia Sumarokova](https://github.com/kssenii)).
|
* Support EXISTS(subquery). Closes [#6852](https://github.com/ClickHouse/ClickHouse/issues/6852). [#29731](https://github.com/ClickHouse/ClickHouse/pull/29731) ([Kseniia Sumarokova](https://github.com/kssenii)).
|
||||||
* Added function `ngram`. Closes [#29699](https://github.com/ClickHouse/ClickHouse/issues/29699). [#29738](https://github.com/ClickHouse/ClickHouse/pull/29738) ([Maksim Kita](https://github.com/kitaisreal)).
|
* Added function `ngram`. Closes [#29699](https://github.com/ClickHouse/ClickHouse/issues/29699). [#29738](https://github.com/ClickHouse/ClickHouse/pull/29738) ([Maksim Kita](https://github.com/kitaisreal)).
|
||||||
|
@ -54,7 +54,7 @@
|
|||||||
* Add settings `merge_tree_min_rows_for_concurrent_read_for_remote_filesystem` and `merge_tree_min_bytes_for_concurrent_read_for_remote_filesystem`. [#30970](https://github.com/ClickHouse/ClickHouse/pull/30970) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Do not allow to drop a table or dictionary if some tables or dictionaries depend on it. [#30977](https://github.com/ClickHouse/ClickHouse/pull/30977) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Only grab AlterLock when we do alter command. Let's see if the assumption is correct. [#31010](https://github.com/ClickHouse/ClickHouse/pull/31010) ([Amos Bird](https://github.com/amosbird)).
- * The local session inside a Clickhouse dictionary source won't send its events to the session log anymore. This fixes a possible deadlock (tsan alert) on shutdown. Also this PR fixes flaky `test_dictionaries_dependency_xml/`. [#31013](https://github.com/ClickHouse/ClickHouse/pull/31013) ([Vitaly Baranov](https://github.com/vitlibar)).
+ * The local session inside a ClickHouse dictionary source won't send its events to the session log anymore. This fixes a possible deadlock (tsan alert) on shutdown. Also this PR fixes flaky `test_dictionaries_dependency_xml/`. [#31013](https://github.com/ClickHouse/ClickHouse/pull/31013) ([Vitaly Baranov](https://github.com/vitlibar)).
* Cancel vertical merges when partition is dropped. This is a follow-up of https://github.com/ClickHouse/ClickHouse/pull/25684 and https://github.com/ClickHouse/ClickHouse/pull/30996. [#31057](https://github.com/ClickHouse/ClickHouse/pull/31057) ([Amos Bird](https://github.com/amosbird)).
* Support the `IF EXISTS` modifier for the `RENAME DATABASE`/`TABLE`/`DICTIONARY` query. If this modifier is used, no error is thrown if the DATABASE/TABLE/DICTIONARY to be renamed doesn't exist. [#31081](https://github.com/ClickHouse/ClickHouse/pull/31081) ([victorgao](https://github.com/kafka1991)).
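For illustration, with hypothetical table names:

```sql
-- No error is thrown if db.t_old does not exist
RENAME TABLE IF EXISTS db.t_old TO db.t_new;
```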
* Function name normalization for ALTER queries. This helps avoid metadata mismatch between creating a table with indices/projections and adding indices/projections via ALTER commands. This is a follow-up PR of https://github.com/ClickHouse/ClickHouse/pull/20174. Marked as an improvement as there are no bug reports and the scenario is somewhat rare. [#31095](https://github.com/ClickHouse/ClickHouse/pull/31095) ([Amos Bird](https://github.com/amosbird)).
@ -1,7 +1,7 @@
### ClickHouse release v21.12.3.32-stable FIXME as compared to v21.12.2.17-stable
#### Bug Fix
- * Backported in [#33018](https://github.com/ClickHouse/ClickHouse/issues/33018): - Clickhouse Keeper handler should remove operation when response sent. [#32988](https://github.com/ClickHouse/ClickHouse/pull/32988) ([JackyWoo](https://github.com/JackyWoo)).
+ * Backported in [#33018](https://github.com/ClickHouse/ClickHouse/issues/33018): ClickHouse Keeper handler should remove the operation when the response is sent. [#32988](https://github.com/ClickHouse/ClickHouse/pull/32988) ([JackyWoo](https://github.com/JackyWoo)).
#### Bug Fix (user-visible misbehaviour in official stable or prestable release)
@ -68,7 +68,7 @@
* Add separate pool for message brokers (RabbitMQ and Kafka). [#19722](https://github.com/ClickHouse/ClickHouse/pull/19722) ([Azat Khuzhin](https://github.com/azat)).
* In distributed queries if the setting `async_socket_for_remote` is enabled, it was possible to get stack overflow at least in debug build configuration if very deeply nested data type is used in table (e.g. `Array(Array(Array(...more...)))`). This fixes [#19108](https://github.com/ClickHouse/ClickHouse/issues/19108). This change introduces minor backward incompatibility: excessive parenthesis in type definitions no longer supported, example: `Array((UInt8))`. [#19736](https://github.com/ClickHouse/ClickHouse/pull/19736) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Table function `S3` will use global region if the region can't be determined exactly. This closes [#10998](https://github.com/ClickHouse/ClickHouse/issues/10998). [#19750](https://github.com/ClickHouse/ClickHouse/pull/19750) ([Vladimir Chebotarev](https://github.com/excitoon)).
- * Clickhouse client query param CTE added test. [#19762](https://github.com/ClickHouse/ClickHouse/pull/19762) ([Maksim Kita](https://github.com/kitaisreal)).
+ * ClickHouse client: added a test for query parameters in CTE. [#19762](https://github.com/ClickHouse/ClickHouse/pull/19762) ([Maksim Kita](https://github.com/kitaisreal)).
* Correctly output infinite arguments for `formatReadableTimeDelta` function. In previous versions, there was implicit conversion to implementation specific integer value. [#19791](https://github.com/ClickHouse/ClickHouse/pull/19791) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* `S3` table function now supports `auto` compression mode (autodetect). This closes [#18754](https://github.com/ClickHouse/ClickHouse/issues/18754). [#19793](https://github.com/ClickHouse/ClickHouse/pull/19793) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Set charset to utf8mb4 when interacting with remote MySQL servers. Fixes [#19795](https://github.com/ClickHouse/ClickHouse/issues/19795). [#19800](https://github.com/ClickHouse/ClickHouse/pull/19800) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
@ -34,7 +34,7 @@
* Allow to use CTE in VIEW definition. This closes [#22491](https://github.com/ClickHouse/ClickHouse/issues/22491). [#22657](https://github.com/ClickHouse/ClickHouse/pull/22657) ([Amos Bird](https://github.com/amosbird)).
* Add a metric to track how much time is spent waiting for the Buffer layer lock. [#22725](https://github.com/ClickHouse/ClickHouse/pull/22725) ([Azat Khuzhin](https://github.com/azat)).
* Allow RBAC row policy via postgresql protocol. Closes [#22658](https://github.com/ClickHouse/ClickHouse/issues/22658). PostgreSQL protocol is enabled in configuration by default. [#22755](https://github.com/ClickHouse/ClickHouse/pull/22755) ([Kseniia Sumarokova](https://github.com/kssenii)).
- * MaterializeMySQL (experimental feature). Make Clickhouse to be able to replicate MySQL databases containing views without failing. This is accomplished by ignoring the views. ... [#22760](https://github.com/ClickHouse/ClickHouse/pull/22760) ([Christian Frøystad](https://github.com/cfroystad)).
+ * MaterializeMySQL (experimental feature). Make ClickHouse able to replicate MySQL databases containing views without failing. This is accomplished by ignoring the views. ... [#22760](https://github.com/ClickHouse/ClickHouse/pull/22760) ([Christian Frøystad](https://github.com/cfroystad)).
* `dateDiff` now works with `DateTime64` arguments (even for values outside of `DateTime` range) ... [#22931](https://github.com/ClickHouse/ClickHouse/pull/22931) ([Vasily Nemkov](https://github.com/Enmk)).
* Set `background_fetches_pool_size` to 8 that is better for production usage with frequent small insertions or slow ZooKeeper cluster. [#22945](https://github.com/ClickHouse/ClickHouse/pull/22945) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix inactive_parts_to_throw_insert=0 with inactive_parts_to_delay_insert>0. [#22947](https://github.com/ClickHouse/ClickHouse/pull/22947) ([Azat Khuzhin](https://github.com/azat)).
@ -84,7 +84,7 @@
#### Bug Fix
* Quota limit was not reached, but the limit was exceeded. This PR fixes [#31174](https://github.com/ClickHouse/ClickHouse/issues/31174). [#31656](https://github.com/ClickHouse/ClickHouse/pull/31656) ([sunny](https://github.com/sunny19930321)).
- * - Clickhouse Keeper handler should remove operation when response sent. [#32988](https://github.com/ClickHouse/ClickHouse/pull/32988) ([JackyWoo](https://github.com/JackyWoo)).
+ * ClickHouse Keeper handler should remove the operation when the response is sent. [#32988](https://github.com/ClickHouse/ClickHouse/pull/32988) ([JackyWoo](https://github.com/JackyWoo)).
* Fix null pointer dereference in low cardinality data when deserializing LowCardinality data in the Native format. [#33021](https://github.com/ClickHouse/ClickHouse/pull/33021) ([Harry Lee](https://github.com/HarryLeeIBM)).
* Specifically crafted input data for `Native` format may lead to reading uninitialized memory or crash. This is relevant if `clickhouse-server` is open for write access to adversary. [#33050](https://github.com/ClickHouse/ClickHouse/pull/33050) ([Heena Bansal](https://github.com/HeenaBansal2009)).
@ -196,7 +196,7 @@
* NO CL ENTRY: 'Update CHANGELOG.md'. [#32472](https://github.com/ClickHouse/ClickHouse/pull/32472) ([Rich Raposa](https://github.com/rfraposa)).
* NO CL ENTRY: 'Revert "Split long tests into multiple checks"'. [#32514](https://github.com/ClickHouse/ClickHouse/pull/32514) ([alesapin](https://github.com/alesapin)).
* NO CL ENTRY: 'Revert "Revert "Split long tests into multiple checks""'. [#32515](https://github.com/ClickHouse/ClickHouse/pull/32515) ([alesapin](https://github.com/alesapin)).
- * NO CL ENTRY: 'blog post how to enable predictive capabilities in Clickhouse'. [#32768](https://github.com/ClickHouse/ClickHouse/pull/32768) ([Tom Risse](https://github.com/flickerbox-tom)).
+ * NO CL ENTRY: 'blog post how to enable predictive capabilities in ClickHouse'. [#32768](https://github.com/ClickHouse/ClickHouse/pull/32768) ([Tom Risse](https://github.com/flickerbox-tom)).
* NO CL ENTRY: 'Revert "Fix build issue related to azure blob storage"'. [#32845](https://github.com/ClickHouse/ClickHouse/pull/32845) ([alesapin](https://github.com/alesapin)).
* NO CL ENTRY: 'Revert "Dictionaries added Date32 type support"'. [#33053](https://github.com/ClickHouse/ClickHouse/pull/33053) ([Alexander Tokmakov](https://github.com/tavplubix)).
* NO CL ENTRY: 'Updated Lawrence Berkeley National Lab stats'. [#33066](https://github.com/ClickHouse/ClickHouse/pull/33066) ([Michael Smitasin](https://github.com/michaelsmitasin)).
@ -65,7 +65,7 @@
* Add setting to lower column case when reading parquet/ORC file. [#35145](https://github.com/ClickHouse/ClickHouse/pull/35145) ([shuchaome](https://github.com/shuchaome)).
* Do not retry non-retriable errors. Closes [#35161](https://github.com/ClickHouse/ClickHouse/issues/35161). [#35172](https://github.com/ClickHouse/ClickHouse/pull/35172) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Added disk_name to system.part_log. [#35178](https://github.com/ClickHouse/ClickHouse/pull/35178) ([Artyom Yurkov](https://github.com/Varinara)).
- * Currently,Clickhouse validates hosts defined under <remote_url_allow_hosts> for URL and Remote Table functions. This PR extends the RemoteHostFilter to Mysql and PostgreSQL table functions. [#35191](https://github.com/ClickHouse/ClickHouse/pull/35191) ([Heena Bansal](https://github.com/HeenaBansal2009)).
+ * Currently, ClickHouse validates hosts defined under `<remote_url_allow_hosts>` for the URL and Remote table functions. This PR extends the RemoteHostFilter to the MySQL and PostgreSQL table functions. [#35191](https://github.com/ClickHouse/ClickHouse/pull/35191) ([Heena Bansal](https://github.com/HeenaBansal2009)).
* Sometimes it is not enough to distinguish the query hierarchy only by `is_initial_query` in `system.query_log` and `system.processes`, so `distributed_depth` is needed. [#35207](https://github.com/ClickHouse/ClickHouse/pull/35207) ([李扬](https://github.com/taiyang-li)).
* Support test mode for clickhouse-local. [#35264](https://github.com/ClickHouse/ClickHouse/pull/35264) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Return const for function getMacro if not in distributed query. Close [#34727](https://github.com/ClickHouse/ClickHouse/issues/34727). [#35289](https://github.com/ClickHouse/ClickHouse/pull/35289) ([李扬](https://github.com/taiyang-li)).
192
docs/changelogs/v22.6.1.1985-stable.md
Normal file
@ -0,0 +1,192 @@
### ClickHouse release v22.6.1.1985-stable FIXME as compared to v22.5.1.2079-stable
#### Backward Incompatible Change
* Changes how settings using `seconds` as type are parsed to support floating point values (for example: `max_execution_time=0.5`). Infinity or NaN values will throw an exception. [#37187](https://github.com/ClickHouse/ClickHouse/pull/37187) ([Raúl Marín](https://github.com/Algunenano)).
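A short sketch of the new behaviour (the values are illustrative):

```sql
SET max_execution_time = 0.5;  -- half a second; previously only integer values were accepted
SET max_execution_time = inf;  -- Infinity (and NaN) now throw an exception
```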
* Changed format of binary serialization of columns of experimental type `Object`. New format is more convenient to implement by third-party clients. [#37482](https://github.com/ClickHouse/ClickHouse/pull/37482) ([Anton Popov](https://github.com/CurtizJ)).
* Turn on setting `output_format_json_named_tuples_as_objects` by default. It allows to serialize named tuples as JSON objects in JSON formats. [#37756](https://github.com/ClickHouse/ClickHouse/pull/37756) ([Anton Popov](https://github.com/CurtizJ)).
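A sketch of the effect of the new default:

```sql
-- With output_format_json_named_tuples_as_objects = 1 (now the default),
-- a named tuple is serialized as a JSON object rather than an array:
SELECT CAST((1, 'a'), 'Tuple(x UInt8, s String)') AS t FORMAT JSONEachRow;
-- {"t":{"x":1,"s":"a"}}
```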
#### New Feature
* Added `SYSTEM UNFREEZE` query that deletes the whole backup regardless if the corresponding table is deleted or not. [#36424](https://github.com/ClickHouse/ClickHouse/pull/36424) ([Vadim Volodin](https://github.com/PolyProgrammist)).
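A sketch, assuming a backup previously created with `ALTER TABLE ... FREEZE WITH NAME 'backup1'`:

```sql
-- Removes the frozen backup named 'backup1' even if the source table was dropped
SYSTEM UNFREEZE WITH NAME 'backup1';
```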
* Adds H3 unidirectional edge functions. [#36843](https://github.com/ClickHouse/ClickHouse/pull/36843) ([Bharat Nallan](https://github.com/bharatnc)).
* Add merge_reason column to system.part_log table. [#36912](https://github.com/ClickHouse/ClickHouse/pull/36912) ([Sema Checherinda](https://github.com/CheSema)).
* This PR enables `POPULATE` for WindowView. [#36945](https://github.com/ClickHouse/ClickHouse/pull/36945) ([vxider](https://github.com/Vxider)).
* Add new columnar JSON formats: JSONColumns, JSONCompactColumns, JSONColumnsWithMetadata. Closes [#36338](https://github.com/ClickHouse/ClickHouse/issues/36338) Closes [#34509](https://github.com/ClickHouse/ClickHouse/issues/34509). [#36975](https://github.com/ClickHouse/ClickHouse/pull/36975) ([Kruglov Pavel](https://github.com/Avogar)).
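A minimal sketch of the columnar layout (output shown approximately):

```sql
SELECT number FROM numbers(3) FORMAT JSONColumns;
-- {"number": ["0", "1", "2"]}
```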
* Add support for calculating [hashids](https://hashids.org/) from unsigned integers. [#37013](https://github.com/ClickHouse/ClickHouse/pull/37013) ([Michael Nutt](https://github.com/mnutt)).
* Add GROUPING function. Closes [#19426](https://github.com/ClickHouse/ClickHouse/issues/19426). [#37163](https://github.com/ClickHouse/ClickHouse/pull/37163) ([Dmitry Novik](https://github.com/novikd)).
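A sketch, assuming `GROUPING` here follows the usual SQL behaviour of marking super-aggregate rows produced by `ROLLUP`:

```sql
SELECT
    number % 2 AS parity,
    GROUPING(parity) AS g,  -- distinguishes the ROLLUP total row from ordinary groups
    count() AS c
FROM numbers(10)
GROUP BY ROLLUP(parity);
```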
* `ALTER TABLE ... MODIFY QUERY` support for WindowView. [#37188](https://github.com/ClickHouse/ClickHouse/pull/37188) ([vxider](https://github.com/Vxider)).
* This PR changes the behavior of the `ENGINE` syntax in WindowView, to make it like in MaterializedView. [#37214](https://github.com/ClickHouse/ClickHouse/pull/37214) ([vxider](https://github.com/Vxider)).
* SALT is allowed for CREATE USER <user> IDENTIFIED WITH sha256_hash. [#37377](https://github.com/ClickHouse/ClickHouse/pull/37377) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
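A sketch with placeholder values; the assumption is that the hash is the hex of SHA256 of the password concatenated with the salt:

```sql
CREATE USER u1 IDENTIFIED WITH sha256_hash
    BY '7A37B85C8918EAC19A9089C0FA5A2AB4DCE3F90528DCDEEC108B23DDF3607B99'  -- placeholder hash
    SALT 'AB12CD34';                                                       -- placeholder salt
```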
* Implemented changing comment to a ReplicatedMergeTree table. [#37416](https://github.com/ClickHouse/ClickHouse/pull/37416) ([Vasily Nemkov](https://github.com/Enmk)).
* Add support for Maps and Records in Avro format. Add new setting `input_format_avro_null_as_default` that allows inserting null as default in Avro format. Closes [#18925](https://github.com/ClickHouse/ClickHouse/issues/18925) Closes [#37378](https://github.com/ClickHouse/ClickHouse/issues/37378) Closes [#32899](https://github.com/ClickHouse/ClickHouse/issues/32899). [#37525](https://github.com/ClickHouse/ClickHouse/pull/37525) ([Kruglov Pavel](https://github.com/Avogar)).
* Add two new settings `input_format_csv_skip_first_lines/input_format_tsv_skip_first_lines` to allow skipping specified number of lines in the beginning of the file in CSV/TSV formats. [#37537](https://github.com/ClickHouse/ClickHouse/pull/37537) ([Kruglov Pavel](https://github.com/Avogar)).
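A sketch, assuming a local `data.csv` whose first two lines are a free-form preamble:

```sql
SELECT *
FROM file('data.csv', CSV, 'id UInt32, name String')
SETTINGS input_format_csv_skip_first_lines = 2;
```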
* `showCertificate()` function shows the current server's SSL certificate. [#37540](https://github.com/ClickHouse/ClickHouse/pull/37540) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
* Implementation of [FPC](https://userweb.cs.txstate.edu/~burtscher/papers/dcc07a.pdf) algorithm for floating point data compression. [#37553](https://github.com/ClickHouse/ClickHouse/pull/37553) ([Mikhail Guzov](https://github.com/koloshmet)).
* HTTP source for Data Dictionaries in Named Collections is supported. [#37581](https://github.com/ClickHouse/ClickHouse/pull/37581) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
* This PR resolves [#22130](https://github.com/ClickHouse/ClickHouse/issues/22130), which asks to allow inserting into `system.zookeeper`. [#37596](https://github.com/ClickHouse/ClickHouse/pull/37596) ([Han Fei](https://github.com/hanfei1991)).
* Added a new window function `nonNegativeDerivative(metric_column, timestamp_column[, INTERVAL x SECOND])`. [#37628](https://github.com/ClickHouse/ClickHouse/pull/37628) ([Andrey Zvonov](https://github.com/zvonand)).
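A sketch following the signature above; the table and columns are hypothetical:

```sql
SELECT
    ts,
    nonNegativeDerivative(bytes_received, ts, INTERVAL 1 SECOND)
        OVER (ORDER BY ts) AS bytes_per_second
FROM metrics;
```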
* Executable user defined functions support parameters. Example: `SELECT test_function(parameters)(arguments)`. Closes [#37578](https://github.com/ClickHouse/ClickHouse/issues/37578). [#37720](https://github.com/ClickHouse/ClickHouse/pull/37720) ([Maksim Kita](https://github.com/kitaisreal)).
* Added open telemetry traces visualizing tool based on d3js. [#37810](https://github.com/ClickHouse/ClickHouse/pull/37810) ([Sergei Trifonov](https://github.com/serxa)).
#### Performance Improvement
* Improve performance of insert into MergeTree if there are multiple columns in ORDER BY. [#35762](https://github.com/ClickHouse/ClickHouse/pull/35762) ([Maksim Kita](https://github.com/kitaisreal)).
* Apply read method 'threadpool' for StorageHive. [#36328](https://github.com/ClickHouse/ClickHouse/pull/36328) ([李扬](https://github.com/taiyang-li)).
* Now we split data parts into layers and distribute them among threads instead of whole parts to make the execution of queries with `FINAL` more data-parallel. [#36396](https://github.com/ClickHouse/ClickHouse/pull/36396) ([Nikita Taranov](https://github.com/nickitat)).
* Load marks for only necessary columns when reading wide parts. [#36879](https://github.com/ClickHouse/ClickHouse/pull/36879) ([Anton Kozlov](https://github.com/tonickkozlov)).
* When all the columns to read are partition keys, construct columns by the file's row number without actually reading the Hive file. [#37103](https://github.com/ClickHouse/ClickHouse/pull/37103) ([lgbo](https://github.com/lgbo-ustc)).
* Fix performance of the `dictGetDescendants` and `dictGetChildren` functions: create the temporary parent-to-children hierarchical index once per query, not per function call during a query. Allow specifying `BIDIRECTIONAL` for `HIERARCHICAL` attributes; the dictionary will then maintain a parent-to-children index in memory, so that `dictGetDescendants` and `dictGetChildren` do not create a temporary index per query. Closes [#32481](https://github.com/ClickHouse/ClickHouse/issues/32481). [#37148](https://github.com/ClickHouse/ClickHouse/pull/37148) ([Maksim Kita](https://github.com/kitaisreal)).
* Improve performance and memory usage for select of subset of columns for formats Native, Protobuf, CapnProto, JSONEachRow, TSKV, all formats with suffixes WithNames/WithNamesAndTypes. Previously while selecting only subset of columns from files in these formats all columns were read and stored in memory. Now only required columns are read. This PR enables setting `input_format_skip_unknown_fields` by default, because otherwise in case of select of subset of columns exception will be thrown. [#37192](https://github.com/ClickHouse/ClickHouse/pull/37192) ([Kruglov Pavel](https://github.com/Avogar)).
* Improve performance of sorting by a single column. [#37195](https://github.com/ClickHouse/ClickHouse/pull/37195) ([Maksim Kita](https://github.com/kitaisreal)).
* Support multi disks for caching hive files. [#37279](https://github.com/ClickHouse/ClickHouse/pull/37279) ([lgbo](https://github.com/lgbo-ustc)).
* Improved performance of array norm and distance functions by 2x-4x. [#37394](https://github.com/ClickHouse/ClickHouse/pull/37394) ([Alexander Gololobov](https://github.com/davenger)).
* Improve performance of number comparison functions using dynamic dispatch. [#37399](https://github.com/ClickHouse/ClickHouse/pull/37399) ([Maksim Kita](https://github.com/kitaisreal)).
* Improve performance of ORDER BY with LIMIT. [#37481](https://github.com/ClickHouse/ClickHouse/pull/37481) ([Maksim Kita](https://github.com/kitaisreal)).
* Improve performance of `hasAll` function using dynamic dispatch infrastructure. [#37484](https://github.com/ClickHouse/ClickHouse/pull/37484) ([Maksim Kita](https://github.com/kitaisreal)).
* Improve performance of `greatCircleAngle`, `greatCircleDistance`, `geoDistance` functions. [#37524](https://github.com/ClickHouse/ClickHouse/pull/37524) ([Maksim Kita](https://github.com/kitaisreal)).
* Optimized the internal caching of re2 patterns which occur e.g. in LIKE and MATCH functions. [#37544](https://github.com/ClickHouse/ClickHouse/pull/37544) ([Robert Schulze](https://github.com/rschu1ze)).
* Improve the all-in-one filter bitmask generator function with AVX512 instructions. [#37588](https://github.com/ClickHouse/ClickHouse/pull/37588) ([yaqi-zhao](https://github.com/yaqi-zhao)).
* Improved performance of aggregation in case, when sparse columns (can be enabled by experimental setting `ratio_of_defaults_for_sparse_serialization` in `MergeTree` tables) are used as arguments in aggregate functions. [#37617](https://github.com/ClickHouse/ClickHouse/pull/37617) ([Anton Popov](https://github.com/CurtizJ)).
* Optimize function `COALESCE` with two arguments. [#37666](https://github.com/ClickHouse/ClickHouse/pull/37666) ([Anton Popov](https://github.com/CurtizJ)).
* Replace `multiIf` with `if` when `multiIf` has only one condition, because the function `if` is more performant. [#37695](https://github.com/ClickHouse/ClickHouse/pull/37695) ([Anton Popov](https://github.com/CurtizJ)).
* Aggregates state destruction now may be posted on a thread pool. For queries with LIMIT and big state it provides significant speedup, e.g. `select uniq(number) from numbers_mt(1e7) group by number limit 100` became around 2.5x faster. [#37855](https://github.com/ClickHouse/ClickHouse/pull/37855) ([Nikita Taranov](https://github.com/nickitat)).
* Improve performance of single column sorting using sorting queue specializations. [#37990](https://github.com/ClickHouse/ClickHouse/pull/37990) ([Maksim Kita](https://github.com/kitaisreal)).
* Fix excessive CPU usage in background when there are a lot of tables. [#38028](https://github.com/ClickHouse/ClickHouse/pull/38028) ([Maksim Kita](https://github.com/kitaisreal)).
* Improve performance of `not` function using dynamic dispatch. [#38058](https://github.com/ClickHouse/ClickHouse/pull/38058) ([Maksim Kita](https://github.com/kitaisreal)).
* Added numerous NEON accelerated paths for main libraries. [#38093](https://github.com/ClickHouse/ClickHouse/pull/38093) ([Daniel Kutenin](https://github.com/danlark1)).
#### Improvement
* Add separate CLUSTER grant (and `access_control_improvements.on_cluster_queries_require_cluster_grant` configuration directive, for backward compatibility, default to `false`). [#35767](https://github.com/ClickHouse/ClickHouse/pull/35767) ([Azat Khuzhin](https://github.com/azat)).
* Add self extracting executable [#34755](https://github.com/ClickHouse/ClickHouse/issues/34755). [#35775](https://github.com/ClickHouse/ClickHouse/pull/35775) ([Filatenkov Artur](https://github.com/FArthur-cmd)).
* Added support for schema inference for `hdfsCluster`. [#35812](https://github.com/ClickHouse/ClickHouse/pull/35812) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Add a `disks` feature with the commands: `ls` (list files on disk), `C` (set config file), `list-disks` (list disk names), `disk` (set the disk to work with by name), `help` (produce a help message), `copy` (copy data on disk from `from_path` to `to_path`), `link` (create a hardlink on disk from `from_path` to `to_path`), `list` (list files on disk), `move` (move a file or directory on disk from `from_path` to `to_path`), `read` (read a file on disk from `from_path` to `to_path` or to stdout), `remove` (remove a file or directory on disk with all children), `write` (write a file on disk from `from_path` or stdin to `to_path`). [#36060](https://github.com/ClickHouse/ClickHouse/pull/36060) ([Artyom Yurkov](https://github.com/Varinara)).
* Implement `least_used` load balancing algorithm for disks inside volume (multi disk configuration). [#36686](https://github.com/ClickHouse/ClickHouse/pull/36686) ([Azat Khuzhin](https://github.com/azat)).
* Modify the HTTP endpoint to return the full stats under the `X-ClickHouse-Summary` header when `send_progress_in_http_headers=0` (before it would return all zeros); return the `X-ClickHouse-Exception-Code` header when progress has been sent before (`send_progress_in_http_headers=1`); return `HTTP_REQUEST_TIMEOUT` (408) instead of `HTTP_INTERNAL_SERVER_ERROR` (500) on `TIMEOUT_EXCEEDED` errors. [#36884](https://github.com/ClickHouse/ClickHouse/pull/36884) ([Raúl Marín](https://github.com/Algunenano)).
* Allow a user to inspect grants from granted roles. [#36941](https://github.com/ClickHouse/ClickHouse/pull/36941) ([nvartolomei](https://github.com/nvartolomei)).
* Do not calculate an integral numerically but use CDF functions instead. This will speed up execution and will increase the precision. This fixes [#36714](https://github.com/ClickHouse/ClickHouse/issues/36714). [#36953](https://github.com/ClickHouse/ClickHouse/pull/36953) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Add a default implementation for Nothing in functions. Now most of the functions will return a column with type Nothing in case one of its arguments is Nothing. It also solves a problem with functions like arrayMap/arrayFilter and similar when they have an empty array as an argument. Previously queries like `select arrayMap(x -> 2 * x, []);` failed because the function inside the lambda cannot work with type `Nothing`; now such queries return an empty array with type `Array(Nothing)`. Also add support for arrays of nullable types in functions like arrayFilter/arrayFill. Previously, queries like `select arrayFilter(x -> x % 2, [1, NULL])` failed; now they work (if the result of the lambda is NULL, then this value won't be included in the result). Closes [#37000](https://github.com/ClickHouse/ClickHouse/issues/37000). [#37048](https://github.com/ClickHouse/ClickHouse/pull/37048) ([Kruglov Pavel](https://github.com/Avogar)).
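The two examples from the entry above, as runnable statements:

```sql
SELECT arrayMap(x -> 2 * x, []);           -- now returns [] with type Array(Nothing)
SELECT arrayFilter(x -> x % 2, [1, NULL]); -- now returns [1]; NULL lambda results are dropped
```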
* Now if a shard has local replica we create a local plan and a plan to read from all remote replicas. They have shared initiator which coordinates reading. [#37204](https://github.com/ClickHouse/ClickHouse/pull/37204) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* In `CompressedWriteBuffer::nextImpl()` there was an unnecessary write-copy step that happened frequently during data insertion. Before this patch, the working buffer was compressed into an intermediate compressed buffer and then copied into the output; after it, the working buffer is compressed directly into the output. [#37242](https://github.com/ClickHouse/ClickHouse/pull/37242) ([jasperzhu](https://github.com/jinjunzh)).
* Support non-constant SQL functions (NOT) (I)LIKE and MATCH. [#37251](https://github.com/ClickHouse/ClickHouse/pull/37251) ([Robert Schulze](https://github.com/rschu1ze)).
* Client will try every IP address returned by DNS resolution until successful connection. [#37273](https://github.com/ClickHouse/ClickHouse/pull/37273) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
* Do no longer abort server startup if the configuration option "mark_cache_size" is not explicitly set. [#37326](https://github.com/ClickHouse/ClickHouse/pull/37326) ([Robert Schulze](https://github.com/rschu1ze)).
* Allow to use String type instead of Binary in Arrow/Parquet/ORC formats. This PR introduces 3 new settings for it: `output_format_arrow_string_as_string`, `output_format_parquet_string_as_string`, `output_format_orc_string_as_string`. Default value for all settings is `false`. [#37327](https://github.com/ClickHouse/ClickHouse/pull/37327) ([Kruglov Pavel](https://github.com/Avogar)).
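A sketch of one of the new settings in use (the output file name is illustrative):

```sql
SET output_format_parquet_string_as_string = 1;
SELECT 'hello' AS s INTO OUTFILE 'out.parquet' FORMAT Parquet;
```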
* Apply setting `input_format_max_rows_to_read_for_schema_inference` for all read rows in total from all files in globs. Previously setting `input_format_max_rows_to_read_for_schema_inference` was applied for each file in glob separately and in case of huge number of nulls we could read first `input_format_max_rows_to_read_for_schema_inference` rows from each file and get nothing. Also increase default value for this setting to 25000. [#37332](https://github.com/ClickHouse/ClickHouse/pull/37332) ([Kruglov Pavel](https://github.com/Avogar)).
* Allow providing `NULL`/`NOT NULL` right after the type in a column declaration. [#37337](https://github.com/ClickHouse/ClickHouse/pull/37337) ([Igor Nikonov](https://github.com/devcrafter)).
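A sketch of the new column-declaration syntax:

```sql
CREATE TABLE t
(
    a Int32 NULL,      -- equivalent to Nullable(Int32)
    b String NOT NULL  -- plain non-nullable String
)
ENGINE = Memory;
```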
* Optimize getting a read buffer for file segments in the `PARTIALLY_DOWNLOADED` state. [#37338](https://github.com/ClickHouse/ClickHouse/pull/37338) ([xiedeyantu](https://github.com/xiedeyantu)).
* Allow to prune the list of files via virtual columns such as `_file` and `_path` when reading from S3. This is for [#37174](https://github.com/ClickHouse/ClickHouse/issues/37174) , [#23494](https://github.com/ClickHouse/ClickHouse/issues/23494). [#37356](https://github.com/ClickHouse/ClickHouse/pull/37356) ([Amos Bird](https://github.com/amosbird)).
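A sketch with a placeholder bucket URL; files whose `_file` value does not match the predicate can now be skipped before reading:

```sql
SELECT count()
FROM s3('https://my-bucket.s3.amazonaws.com/logs/*.csv', 'CSVWithNames')
WHERE _file LIKE '2022-06-%.csv';
```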
* Try to improve short circuit functions processing to fix problems with stress tests. [#37384](https://github.com/ClickHouse/ClickHouse/pull/37384) ([Kruglov Pavel](https://github.com/Avogar)).
* Closes [#37395](https://github.com/ClickHouse/ClickHouse/issues/37395). [#37415](https://github.com/ClickHouse/ClickHouse/pull/37415) ([Memo](https://github.com/Joeywzr)).
* Fix extremely rare deadlock during part fetch in zero-copy replication. Fixes [#37423](https://github.com/ClickHouse/ClickHouse/issues/37423). [#37424](https://github.com/ClickHouse/ClickHouse/pull/37424) ([metahys](https://github.com/metahys)).
* Don't allow to create storage with unknown data format. [#37450](https://github.com/ClickHouse/ClickHouse/pull/37450) ([Kruglov Pavel](https://github.com/Avogar)).
* Set `global_memory_usage_overcommit_max_wait_microseconds` default value to 5 seconds. Add info about `OvercommitTracker` to OOM exception message. Add `MemoryOvercommitWaitTimeMicroseconds` profile event. [#37460](https://github.com/ClickHouse/ClickHouse/pull/37460) ([Dmitry Novik](https://github.com/novikd)).
* Play UI: Keep controls in place when the page is scrolled horizontally. This makes edits comfortable even if the table is wide and it was scrolled far to the right. The feature proposed by Maksym Tereshchenko from CaspianDB. [#37470](https://github.com/ClickHouse/ClickHouse/pull/37470) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Now more filters can be pushed down for join. [#37472](https://github.com/ClickHouse/ClickHouse/pull/37472) ([Amos Bird](https://github.com/amosbird)).
* Modify the query div in play.html to be extendable beyond 20% height. In the case of very long queries it is helpful to extend the textarea element; previously, since the div had a fixed height, the extended textarea hid the data div underneath. With this fix, extending the textarea element pushes the data div down/up so the extended textarea won't hide it. Also, keeps the query box width at 100% even when the user adjusts the size of the query textarea. [#37488](https://github.com/ClickHouse/ClickHouse/pull/37488) ([guyco87](https://github.com/guyco87)).
* The same improvement to the query div in play.html as above. [#37504](https://github.com/ClickHouse/ClickHouse/pull/37504) ([guyco87](https://github.com/guyco87)).
* Currently ClickHouse directly downloads all remote files to the local cache (even if they are only read once), which frequently causes IO on the local hard disk. In some scenarios these IOs may be unnecessary and may easily cause negative optimization; for example, the cache caused a performance regression when running SSB Q1-Q4. [#37516](https://github.com/ClickHouse/ClickHouse/pull/37516) ([Han Shukai](https://github.com/KinderRiven)).
* Added `ProfileEvents` for introspection of the type of written (inserted or merged) parts (`Inserted{Wide/Compact/InMemory}Parts`, `MergedInto{Wide/Compact/InMemory}Parts`). Added column `part_type` to `system.part_log`. Resolves [#37495](https://github.com/ClickHouse/ClickHouse/issues/37495). [#37536](https://github.com/ClickHouse/ClickHouse/pull/37536) ([Anton Popov](https://github.com/CurtizJ)).
* clickhouse-keeper improvement: move broken logs to a timestamped folder. [#37565](https://github.com/ClickHouse/ClickHouse/pull/37565) ([Antonio Andelic](https://github.com/antonio2368)).
* Do not write expired columns by TTL after subsequent merges (before only first merge/optimize of the part will not write expired by TTL columns, all other will do). [#37570](https://github.com/ClickHouse/ClickHouse/pull/37570) ([Azat Khuzhin](https://github.com/azat)).
* More precise result of the `dumpColumnStructure` miscellaneous function in presence of LowCardinality or Sparse columns. In previous versions, these functions were converting the argument to a full column before returning the result. This is needed to provide an answer in [#6935](https://github.com/ClickHouse/ClickHouse/issues/6935). [#37633](https://github.com/ClickHouse/ClickHouse/pull/37633) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* keeper: store only unique session IDs for watches. [#37641](https://github.com/ClickHouse/ClickHouse/pull/37641) ([Azat Khuzhin](https://github.com/azat)).
* Fix possible "Cannot write to finalized buffer". [#37645](https://github.com/ClickHouse/ClickHouse/pull/37645) ([Azat Khuzhin](https://github.com/azat)).
* Add setting `support_batch_delete` for `DiskS3` to disable multiobject delete calls, which Google Cloud Storage doesn't support. [#37659](https://github.com/ClickHouse/ClickHouse/pull/37659) ([Fred Wulff](https://github.com/frew)).
* Support types with non-standard defaults in ROLLUP, CUBE, GROUPING SETS. Closes [#37360](https://github.com/ClickHouse/ClickHouse/issues/37360). [#37667](https://github.com/ClickHouse/ClickHouse/pull/37667) ([Dmitry Novik](https://github.com/novikd)).
* Add an option to disable connection pooling in ODBC bridge. [#37705](https://github.com/ClickHouse/ClickHouse/pull/37705) ([Anton Kozlov](https://github.com/tonickkozlov)).
* LIKE patterns with trailing escape symbol ('\\') are now disallowed (as mandated by the SQL standard). [#37764](https://github.com/ClickHouse/ClickHouse/pull/37764) ([Robert Schulze](https://github.com/rschu1ze)).
* Fix stacktraces collection on ARM. Closes [#37044](https://github.com/ClickHouse/ClickHouse/issues/37044). Closes [#15638](https://github.com/ClickHouse/ClickHouse/issues/15638). [#37797](https://github.com/ClickHouse/ClickHouse/pull/37797) ([Maksim Kita](https://github.com/kitaisreal)).
* The functions `dictGetHierarchy`, `dictIsIn`, `dictGetChildren`, `dictGetDescendants` added support for a nullable `HIERARCHICAL` attribute in dictionaries. Closes [#35521](https://github.com/ClickHouse/ClickHouse/issues/35521). [#37805](https://github.com/ClickHouse/ClickHouse/pull/37805) ([Maksim Kita](https://github.com/kitaisreal)).
* Expose BoringSSL version related info in the `system.build_options` table. [#37850](https://github.com/ClickHouse/ClickHouse/pull/37850) ([Bharat Nallan](https://github.com/bharatnc)).
* Limiting the maximum cache usage per query can effectively prevent cache pool contamination. [Related Issues](https://github.com/ClickHouse/ClickHouse/issues/28961). [#37859](https://github.com/ClickHouse/ClickHouse/pull/37859) ([Han Shukai](https://github.com/KinderRiven)).
* Now clickhouse-server removes `delete_tmp` directories on server start. Fixes [#26503](https://github.com/ClickHouse/ClickHouse/issues/26503). [#37906](https://github.com/ClickHouse/ClickHouse/pull/37906) ([alesapin](https://github.com/alesapin)).
* Clean up broken detached parts after timeout. Closes [#25195](https://github.com/ClickHouse/ClickHouse/issues/25195). [#37975](https://github.com/ClickHouse/ClickHouse/pull/37975) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Now in MergeTree table engines family failed-to-move parts will be removed instantly. [#37994](https://github.com/ClickHouse/ClickHouse/pull/37994) ([alesapin](https://github.com/alesapin)).
* Now if the setting `always_fetch_merged_part` is enabled for ReplicatedMergeTree, merges will look for parts on other replicas less often, with a smaller load on [Zoo]Keeper. [#37995](https://github.com/ClickHouse/ClickHouse/pull/37995) ([alesapin](https://github.com/alesapin)).
* Add implicit grants with grant option too. For example `GRANT CREATE TABLE ON test.* TO A WITH GRANT OPTION` now allows `A` to execute `GRANT CREATE VIEW ON test.* TO B`. [#38017](https://github.com/ClickHouse/ClickHouse/pull/38017) ([Vitaly Baranov](https://github.com/vitlibar)).
* Do not display `-0.0` CPU time in clickhouse-client. It can appear due to rounding errors. This closes [#38003](https://github.com/ClickHouse/ClickHouse/issues/38003). This closes [#38038](https://github.com/ClickHouse/ClickHouse/issues/38038). [#38064](https://github.com/ClickHouse/ClickHouse/pull/38064) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
#### Build/Testing/Packaging Improvement
* Use clang-14 and LLVM infrastructure version 14 for builds. This closes [#34681](https://github.com/ClickHouse/ClickHouse/issues/34681). [#34754](https://github.com/ClickHouse/ClickHouse/pull/34754) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Allow to drop privileges at startup. This simplifies Docker images. Closes [#36293](https://github.com/ClickHouse/ClickHouse/issues/36293). [#36341](https://github.com/ClickHouse/ClickHouse/pull/36341) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Remove recursive submodules, because we don't need them and they can be confusing. Add style check to prevent recursive submodules. This closes [#32821](https://github.com/ClickHouse/ClickHouse/issues/32821). [#37616](https://github.com/ClickHouse/ClickHouse/pull/37616) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Add docs spellcheck to CI. [#37790](https://github.com/ClickHouse/ClickHouse/pull/37790) ([Vladimir C](https://github.com/vdimir)).
* Fix overly aggressive stripping which removed the embedded hash required for checking the consistency of the executable. [#37993](https://github.com/ClickHouse/ClickHouse/pull/37993) ([Robert Schulze](https://github.com/rschu1ze)).
* Fix the compressor build failing on macOS. [#38007](https://github.com/ClickHouse/ClickHouse/pull/38007) ([xiedeyantu](https://github.com/xiedeyantu)).
#### Bug Fix (user-visible misbehavior in official stable or prestable release)
* Fix `GROUP BY` `AggregateFunction` (i.e. you `GROUP BY` by the column that has `AggregateFunction` type). [#37093](https://github.com/ClickHouse/ClickHouse/pull/37093) ([Azat Khuzhin](https://github.com/azat)).
* Fix possible heap-use-after-free error when reading `system.projection_parts` and `system.projection_parts_columns`. This fixes [#37184](https://github.com/ClickHouse/ClickHouse/issues/37184). [#37185](https://github.com/ClickHouse/ClickHouse/pull/37185) ([Amos Bird](https://github.com/amosbird)).
* Fix `addDependency` in WindowView. This bug can be reproduced like [#37237](https://github.com/ClickHouse/ClickHouse/issues/37237). [#37224](https://github.com/ClickHouse/ClickHouse/pull/37224) ([vxider](https://github.com/Vxider)).
* Move `addDependency` from the constructor to `startup()` to avoid adding a dependency on a **dropped** table; fixes [#37237](https://github.com/ClickHouse/ClickHouse/issues/37237). [#37243](https://github.com/ClickHouse/ClickHouse/pull/37243) ([vxider](https://github.com/Vxider)).
* Fix inserting defaults for missing values in columnar formats. Previously missing columns were filled with defaults for types, not for columns. [#37253](https://github.com/ClickHouse/ClickHouse/pull/37253) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix insertion of nested arrays into columns of type `Object` in some cases. [#37305](https://github.com/ClickHouse/ClickHouse/pull/37305) ([Anton Popov](https://github.com/CurtizJ)).
* Fix unexpected errors with a clash of constant strings in aggregate function, prewhere and join. Close [#36891](https://github.com/ClickHouse/ClickHouse/issues/36891). [#37336](https://github.com/ClickHouse/ClickHouse/pull/37336) ([Vladimir C](https://github.com/vdimir)).
* Fix projections with GROUP/ORDER BY in query and optimize_aggregation_in_order (before the result was incorrect since only finish sorting was performed). [#37342](https://github.com/ClickHouse/ClickHouse/pull/37342) ([Azat Khuzhin](https://github.com/azat)).
* Fixed error with symbols in key name in S3. Fixes [#33009](https://github.com/ClickHouse/ClickHouse/issues/33009). [#37344](https://github.com/ClickHouse/ClickHouse/pull/37344) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Throw an exception when GROUPING SETS used with ROLLUP or CUBE. [#37367](https://github.com/ClickHouse/ClickHouse/pull/37367) ([Dmitry Novik](https://github.com/novikd)).
* Fix LOGICAL_ERROR in getMaxSourcePartsSizeForMerge during merges (in case of non standard, greater, values of `background_pool_size`/`background_merges_mutations_concurrency_ratio` has been specified in `config.xml` (new way) not in `users.xml` (deprecated way)). [#37413](https://github.com/ClickHouse/ClickHouse/pull/37413) ([Azat Khuzhin](https://github.com/azat)).
* Stop removing UTF-8 BOM in RowBinary format. [#37428](https://github.com/ClickHouse/ClickHouse/pull/37428) ([Paul Loyd](https://github.com/loyd)).
* clickhouse-keeper bugfix: fix force recovery for single node cluster. [#37440](https://github.com/ClickHouse/ClickHouse/pull/37440) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix logical error in normalizeUTF8 functions. Closes [#37298](https://github.com/ClickHouse/ClickHouse/issues/37298). [#37443](https://github.com/ClickHouse/ClickHouse/pull/37443) ([Maksim Kita](https://github.com/kitaisreal)).
* Fix cast of LowCardinality of Nullable in JoinSwitcher, close [#37385](https://github.com/ClickHouse/ClickHouse/issues/37385). [#37453](https://github.com/ClickHouse/ClickHouse/pull/37453) ([Vladimir C](https://github.com/vdimir)).
* Fix named tuples output in ORC/Arrow/Parquet formats. [#37458](https://github.com/ClickHouse/ClickHouse/pull/37458) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix optimization of monotonous functions in ORDER BY clause in presence of GROUPING SETS. Fixes [#37401](https://github.com/ClickHouse/ClickHouse/issues/37401). [#37493](https://github.com/ClickHouse/ClickHouse/pull/37493) ([Dmitry Novik](https://github.com/novikd)).
* Fix error on joining with dictionary on some conditions. Close [#37386](https://github.com/ClickHouse/ClickHouse/issues/37386). [#37530](https://github.com/ClickHouse/ClickHouse/pull/37530) ([Vladimir C](https://github.com/vdimir)).
* Prohibit `optimize_aggregation_in_order` with `GROUPING SETS` (fixes `LOGICAL_ERROR`). [#37542](https://github.com/ClickHouse/ClickHouse/pull/37542) ([Azat Khuzhin](https://github.com/azat)).
* Fix wrong dump information of ActionsDAG. [#37587](https://github.com/ClickHouse/ClickHouse/pull/37587) ([zhanglistar](https://github.com/zhanglistar)).
* Fix converting types for UNION queries (may produce LOGICAL_ERROR). [#37593](https://github.com/ClickHouse/ClickHouse/pull/37593) ([Azat Khuzhin](https://github.com/azat)).
* Fix `WITH FILL` modifier with negative intervals in `STEP` clause. Fixes [#37514](https://github.com/ClickHouse/ClickHouse/issues/37514). [#37600](https://github.com/ClickHouse/ClickHouse/pull/37600) ([Anton Popov](https://github.com/CurtizJ)).
* Fix illegal joinGet array usage when `join_use_nulls = 1`. This fixes [#37562](https://github.com/ClickHouse/ClickHouse/issues/37562). [#37650](https://github.com/ClickHouse/ClickHouse/pull/37650) ([Amos Bird](https://github.com/amosbird)).
* Fix columns number mismatch in cross join, close [#37561](https://github.com/ClickHouse/ClickHouse/issues/37561). [#37653](https://github.com/ClickHouse/ClickHouse/pull/37653) ([Vladimir C](https://github.com/vdimir)).
* Fix segmentation fault in `show create table` from mysql database when it is configured with named collections. Closes [#37683](https://github.com/ClickHouse/ClickHouse/issues/37683). [#37690](https://github.com/ClickHouse/ClickHouse/pull/37690) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix RabbitMQ Storage not being able to start up on server restart if the storage was created without a SETTINGS clause. Closes [#37463](https://github.com/ClickHouse/ClickHouse/issues/37463). [#37691](https://github.com/ClickHouse/ClickHouse/pull/37691) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fixed DateTime64 fractional seconds behavior prior to Unix epoch. [#37697](https://github.com/ClickHouse/ClickHouse/pull/37697) ([Andrey Zvonov](https://github.com/zvonand)).
* Disable CREATE/DROP of SQL user defined functions in readonly mode. Closes [#37280](https://github.com/ClickHouse/ClickHouse/issues/37280). [#37699](https://github.com/ClickHouse/ClickHouse/pull/37699) ([Maksim Kita](https://github.com/kitaisreal)).
* Fix formatting of Nullable arguments for executable user defined functions. Closes [#35897](https://github.com/ClickHouse/ClickHouse/issues/35897). [#37711](https://github.com/ClickHouse/ClickHouse/pull/37711) ([Maksim Kita](https://github.com/kitaisreal)).
* Fix optimization enabled by setting `optimize_monotonous_functions_in_order_by` in distributed queries. Fixes [#36037](https://github.com/ClickHouse/ClickHouse/issues/36037). [#37724](https://github.com/ClickHouse/ClickHouse/pull/37724) ([Anton Popov](https://github.com/CurtizJ)).
* Fix `SELECT ... INTERSECT` and `EXCEPT SELECT` statements with constant string types. [#37738](https://github.com/ClickHouse/ClickHouse/pull/37738) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix crash of FunctionHashID, closes [#37735](https://github.com/ClickHouse/ClickHouse/issues/37735). [#37742](https://github.com/ClickHouse/ClickHouse/pull/37742) ([flynn](https://github.com/ucasfl)).
|
||||||
|
* Fix possible logical error: `Invalid Field get from type UInt64 to type Float64` in `values` table function. Closes [#37602](https://github.com/ClickHouse/ClickHouse/issues/37602). [#37754](https://github.com/ClickHouse/ClickHouse/pull/37754) ([Kruglov Pavel](https://github.com/Avogar)).
|
||||||
|
* Fix possible segfault in schema inference in case of exception in SchemaReader constructor. Closes [#37680](https://github.com/ClickHouse/ClickHouse/issues/37680). [#37760](https://github.com/ClickHouse/ClickHouse/pull/37760) ([Kruglov Pavel](https://github.com/Avogar)).
|
||||||
|
* Fix setting cast_ipv4_ipv6_default_on_conversion_error for internal cast function. Closes [#35156](https://github.com/ClickHouse/ClickHouse/issues/35156). [#37761](https://github.com/ClickHouse/ClickHouse/pull/37761) ([Maksim Kita](https://github.com/kitaisreal)).
|
||||||
|
* Octal literals are not supported. [#37765](https://github.com/ClickHouse/ClickHouse/pull/37765) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
|
||||||
|
* fix toString error on DatatypeDate32. [#37775](https://github.com/ClickHouse/ClickHouse/pull/37775) ([LiuNeng](https://github.com/liuneng1994)).
|
||||||
|
* The clickhouse-keeper setting `dead_session_check_period_ms` was transformed into microseconds (multiplied by 1000), which lead to dead sessions only being cleaned up after several minutes (instead of 500ms). [#37824](https://github.com/ClickHouse/ClickHouse/pull/37824) ([Michael Lex](https://github.com/mlex)).
|
||||||
|
* Fix possible "No more packets are available" for distributed queries (in case of `async_socket_for_remote`/`use_hedged_requests` is disabled). [#37826](https://github.com/ClickHouse/ClickHouse/pull/37826) ([Azat Khuzhin](https://github.com/azat)).
|
||||||
|
* Do not drop the inner target table when executing `ALTER TABLE … MODIFY QUERY` in WindowView. [#37879](https://github.com/ClickHouse/ClickHouse/pull/37879) ([vxider](https://github.com/Vxider)).
|
||||||
|
* Fix directory ownership of coordination dir in clickhouse-keeper Docker image. Fixes [#37914](https://github.com/ClickHouse/ClickHouse/issues/37914). [#37915](https://github.com/ClickHouse/ClickHouse/pull/37915) ([James Maidment](https://github.com/jamesmaidment)).
|
||||||
|
* Dictionaries fix custom query with update field and `{condition}`. Closes [#33746](https://github.com/ClickHouse/ClickHouse/issues/33746). [#37947](https://github.com/ClickHouse/ClickHouse/pull/37947) ([Maksim Kita](https://github.com/kitaisreal)).
|
||||||
|
* Fix possible incorrect result of `SELECT ... WITH FILL` in the case when `ORDER BY` should be applied after `WITH FILL` result (e.g. for outer query). Incorrect result was caused by optimization for `ORDER BY` expressions ([#35623](https://github.com/ClickHouse/ClickHouse/issues/35623)). Closes [#37904](https://github.com/ClickHouse/ClickHouse/issues/37904). [#37959](https://github.com/ClickHouse/ClickHouse/pull/37959) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
|
||||||
|
* Add missing default columns when pushing to the target table in WindowView, fix [#37815](https://github.com/ClickHouse/ClickHouse/issues/37815). [#37965](https://github.com/ClickHouse/ClickHouse/pull/37965) ([vxider](https://github.com/Vxider)).
|
||||||
|
* Fixed a stack overflow issue that would cause compilation to fail. [#37996](https://github.com/ClickHouse/ClickHouse/pull/37996) ([Han Shukai](https://github.com/KinderRiven)).
|
||||||
|
* when open enable_filesystem_query_cache_limit, throw Reserved cache size exceeds the remaining cache size. [#38004](https://github.com/ClickHouse/ClickHouse/pull/38004) ([xiedeyantu](https://github.com/xiedeyantu)).
|
||||||
|
* Query, containing ORDER BY ... WITH FILL, can generate extra rows when multiple WITH FILL columns are present. [#38074](https://github.com/ClickHouse/ClickHouse/pull/38074) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
|
||||||
|
|
||||||
|
#### Bug Fix (user-visible misbehaviour in official stable or prestable release)

* Fix converting types for UNION queries (may produce LOGICAL_ERROR). [#34775](https://github.com/ClickHouse/ClickHouse/pull/34775) ([Azat Khuzhin](https://github.com/azat)).
* Fix TTL merges not being scheduled again when the BackgroundExecutor is busy: `merges_with_ttl_counter` is incremented in `selectPartsToMerge()`, but when the merge task is then ignored because the BackgroundExecutor is busy, the counter was never decremented. [#36387](https://github.com/ClickHouse/ClickHouse/pull/36387) ([lthaooo](https://github.com/lthaooo)).
* Fix overridden settings value of `normalize_function_names`. [#36937](https://github.com/ClickHouse/ClickHouse/pull/36937) ([李扬](https://github.com/taiyang-li)).
* Fix exponential time decaying window functions so that they now respect the boundaries of the window. [#36944](https://github.com/ClickHouse/ClickHouse/pull/36944) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Fix DateTime64 parsing from strings such as '1969-12-31 23:59:59.123'. Close [#36994](https://github.com/ClickHouse/ClickHouse/issues/36994). [#37039](https://github.com/ClickHouse/ClickHouse/pull/37039) ([李扬](https://github.com/taiyang-li)).

#### NO CL ENTRY

* NO CL ENTRY: 'Revert "Fix mutations in tables with columns of type `Object`"'. [#37355](https://github.com/ClickHouse/ClickHouse/pull/37355) ([Alexander Tokmakov](https://github.com/tavplubix)).
* NO CL ENTRY: 'Revert "Remove height restrictions from the query div in play web tool, and m…"'. [#37501](https://github.com/ClickHouse/ClickHouse/pull/37501) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* NO CL ENTRY: 'Revert "Add support for preprocessing ZooKeeper operations in `clickhouse-keeper`"'. [#37534](https://github.com/ClickHouse/ClickHouse/pull/37534) ([Antonio Andelic](https://github.com/antonio2368)).
* NO CL ENTRY: 'Revert "(only with zero-copy replication, non-production experimental feature not recommended to use) fix possible deadlock during fetching part"'. [#37545](https://github.com/ClickHouse/ClickHouse/pull/37545) ([Alexander Tokmakov](https://github.com/tavplubix)).
* NO CL ENTRY: 'Revert "RFC: Fix converting types for UNION queries (may produce LOGICAL_ERROR)"'. [#37582](https://github.com/ClickHouse/ClickHouse/pull/37582) ([Dmitry Novik](https://github.com/novikd)).
* NO CL ENTRY: 'Revert "Revert "(only with zero-copy replication, non-production experimental feature not recommended to use) fix possible deadlock during fetching part""'. [#37598](https://github.com/ClickHouse/ClickHouse/pull/37598) ([alesapin](https://github.com/alesapin)).
* NO CL ENTRY: 'Revert "Implemented changing comment to a ReplicatedMergeTree table"'. [#37627](https://github.com/ClickHouse/ClickHouse/pull/37627) ([Alexander Tokmakov](https://github.com/tavplubix)).
* NO CL ENTRY: 'Revert "Remove resursive submodules"'. [#37774](https://github.com/ClickHouse/ClickHouse/pull/37774) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* NO CL ENTRY: 'Revert "Fix possible segfault in schema inference"'. [#37785](https://github.com/ClickHouse/ClickHouse/pull/37785) ([Alexander Tokmakov](https://github.com/tavplubix)).
* NO CL ENTRY: 'Revert "Revert "Fix possible segfault in schema inference""'. [#37787](https://github.com/ClickHouse/ClickHouse/pull/37787) ([Kruglov Pavel](https://github.com/Avogar)).
* NO CL ENTRY: 'Add more Rust client libraries to documentation'. [#37880](https://github.com/ClickHouse/ClickHouse/pull/37880) ([Paul Loyd](https://github.com/loyd)).
* NO CL ENTRY: 'Revert "Fix errors of CheckTriviallyCopyableMove type"'. [#37902](https://github.com/ClickHouse/ClickHouse/pull/37902) ([Anton Popov](https://github.com/CurtizJ)).
* NO CL ENTRY: 'Revert "Don't try to kill empty list of containers in `integration/runner`"'. [#38001](https://github.com/ClickHouse/ClickHouse/pull/38001) ([Alexander Tokmakov](https://github.com/tavplubix)).
* NO CL ENTRY: 'Revert "add d3js based trace visualizer as gantt chart"'. [#38043](https://github.com/ClickHouse/ClickHouse/pull/38043) ([Alexander Tokmakov](https://github.com/tavplubix)).
* NO CL ENTRY: 'Revert "Add backoff to merges in replicated queue if `always_fetch_merged_part` is enabled"'. [#38082](https://github.com/ClickHouse/ClickHouse/pull/38082) ([Alexander Tokmakov](https://github.com/tavplubix)).
* NO CL ENTRY: 'Revert "More parallel execution for queries with `FINAL`"'. [#38094](https://github.com/ClickHouse/ClickHouse/pull/38094) ([Alexander Tokmakov](https://github.com/tavplubix)).
* NO CL ENTRY: 'Revert "Revert "add d3js based trace visualizer as gantt chart""'. [#38129](https://github.com/ClickHouse/ClickHouse/pull/38129) ([Alexey Milovidov](https://github.com/alexey-milovidov)).

@ -19,7 +19,7 @@ The following tutorial is based on the Ubuntu Linux system. With appropriate cha
### Install Git, CMake, Python and Ninja {#install-git-cmake-python-and-ninja}

``` bash
sudo apt-get install git cmake ccache python3 ninja-build
```

Or cmake3 instead of cmake on older systems.

@ -97,7 +97,7 @@ SELECT library_name, license_type, license_path FROM system.licenses ORDER BY li
## Adding new third-party libraries and maintaining patches in third-party libraries {#adding-third-party-libraries}

1. Each third-party library must reside in a dedicated directory under the `contrib/` directory of the ClickHouse repository. Avoid dumps/copies of external code; instead, use the Git submodule feature to pull third-party code from an external upstream repository.
2. Submodules are listed in `.gitmodules`. If the external library can be used as-is, you may reference the upstream repository directly. Otherwise, i.e. if the external library requires patching/customization, create a fork of the official repository in the [ClickHouse organization in GitHub](https://github.com/ClickHouse).
3. In the latter case, create a branch with `clickhouse/` prefix from the branch you want to integrate, e.g. `clickhouse/master` (for `master`) or `clickhouse/release/vX.Y.Z` (for a `release/vX.Y.Z` tag). The purpose of this branch is to isolate customization of the library from upstream work. For example, pulls from the upstream repository into the fork will leave all `clickhouse/` branches unaffected. Submodules in `contrib/` must only track `clickhouse/` branches of forked third-party repositories.
4. To patch a fork of a third-party library, create a dedicated branch with `clickhouse/` prefix in the fork, e.g. `clickhouse/fix-some-desaster`. Finally, merge the patch branch into the custom tracking branch (e.g. `clickhouse/master` or `clickhouse/release/vX.Y.Z`) using a PR.
5. Always create patches of third-party libraries with the official repository in mind. Once a PR of a patch branch to the `clickhouse/` branch in the fork repository is done and the submodule version in the ClickHouse official repository is bumped, consider opening another PR from the patch branch to the upstream library repository. This ensures that 1) the contribution has more than a single use case and importance, 2) others will also benefit from it, 3) the change will not remain a maintenance burden solely on ClickHouse developers.

@ -52,7 +52,7 @@ Database in ClickHouse, exchanging data with the PostgreSQL server:
``` sql
CREATE DATABASE test_database
ENGINE = PostgreSQL('postgres1:5432', 'test_database', 'postgres', 'mysecretpassword', 'schema_name', 1);
```

``` sql
@ -27,7 +27,7 @@ Compressed data for `INSERT` and `ALTER` queries is replicated (for more informa
- The `DROP TABLE` query deletes the replica located on the server where the query is run.
- The `RENAME` query renames the table on one of the replicas. In other words, replicated tables can have different names on different replicas.

ClickHouse uses [ClickHouse Keeper](../../../guides/sre/keeper/clickhouse-keeper.md) for storing replicas meta information. It is possible to use ZooKeeper version 3.4.5 or newer, but ClickHouse Keeper is recommended.

To use replication, set parameters in the [zookeeper](../../../operations/server-configuration-parameters/settings.md#server-settings_zookeeper) server configuration section.

@ -35,7 +35,7 @@ To use replication, set parameters in the [zookeeper](../../../operations/server
Don’t neglect the security setting. ClickHouse supports the `digest` [ACL scheme](https://zookeeper.apache.org/doc/current/zookeeperProgrammers.html#sc_ZooKeeperAccessControl) of the ZooKeeper security subsystem.
:::

Example of setting the addresses of the ClickHouse Keeper cluster:

``` xml
<zookeeper>
@ -54,8 +54,8 @@ Example of setting the addresses of the ZooKeeper cluster:
</zookeeper>
```

ClickHouse also supports storing replicas meta information in an auxiliary ZooKeeper cluster. Do this by providing the ZooKeeper cluster name and path as engine arguments.
In other words, it supports storing the metadata of different tables in different ZooKeeper clusters.

Example of setting the addresses of the auxiliary ZooKeeper cluster:

@ -122,8 +122,8 @@ The `Replicated` prefix is added to the table engine name. For example:`Replicat
**Replicated\*MergeTree parameters**

- `zoo_path` — The path to the table in ClickHouse Keeper.
- `replica_name` — The replica name in ClickHouse Keeper.
- `other_parameters` — Parameters of an engine which is used for creating the replicated version, for example, version in `ReplacingMergeTree`.

Example:

@ -168,18 +168,18 @@ Example:
</macros>
```

The path to the table in ClickHouse Keeper should be unique for each replicated table. Tables on different shards should have different paths.
In this case, the path consists of the following parts:

`/clickhouse/tables/` is the common prefix. We recommend using exactly this one.

`{layer}-{shard}` is the shard identifier. In this example it consists of two parts, since the example cluster uses bi-level sharding. For most tasks, you can leave just the `{shard}` substitution, which will be expanded to the shard identifier.

`table_name` is the name of the node for the table in ClickHouse Keeper. It is a good idea to make it the same as the table name. It is defined explicitly, because in contrast to the table name, it does not change after a RENAME query.
*HINT*: you could add a database name in front of `table_name` as well. E.g. `db_name.table_name`

The two built-in substitutions `{database}` and `{table}` can be used; they expand into the table name and the database name respectively (unless these macros are defined in the `macros` section). So the zookeeper path can be specified as `'/clickhouse/tables/{layer}-{shard}/{database}/{table}'`.
Be careful with table renames when using these built-in substitutions. The path in ClickHouse Keeper cannot be changed, and when the table is renamed, the macros will expand into a different path, the table will refer to a path that does not exist in ClickHouse Keeper, and will go into read-only mode.

The replica name identifies different replicas of the same table. You can use the server name for this, as in the example. The name only needs to be unique within each shard.

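Putting the pieces together, here is a minimal sketch of a replicated table definition, assuming `{layer}`, `{shard}` and `{replica}` are defined in the `macros` section of the server configuration (the columns are illustrative):

``` sql
CREATE TABLE table_name
(
    EventDate DateTime,
    CounterID UInt32,
    UserID UInt32
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{layer}-{shard}/table_name', '{replica}')
PARTITION BY toYYYYMM(EventDate)
ORDER BY (CounterID, EventDate, intHash32(UserID));
```
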
@ -220,21 +220,21 @@ To delete a replica, run `DROP TABLE`. However, only one replica is deleted –
## Recovery After Failures {#recovery-after-failures}

If ClickHouse Keeper is unavailable when a server starts, replicated tables switch to read-only mode. The system periodically attempts to connect to ClickHouse Keeper.

If ClickHouse Keeper is unavailable during an `INSERT`, or an error occurs when interacting with ClickHouse Keeper, an exception is thrown.

After connecting to ClickHouse Keeper, the system checks whether the set of data in the local file system matches the expected set of data (ClickHouse Keeper stores this information). If there are minor inconsistencies, the system resolves them by syncing data with the replicas.

If the system detects broken data parts (with the wrong size of files) or unrecognized parts (parts written to the file system but not recorded in ClickHouse Keeper), it moves them to the `detached` subdirectory (they are not deleted). Any missing parts are copied from the replicas.

Note that ClickHouse does not perform any destructive actions such as automatically deleting a large amount of data.

When the server starts (or establishes a new session with ClickHouse Keeper), it only checks the quantity and sizes of all files. If the file sizes match but bytes have been changed somewhere in the middle, this is not detected immediately, but only when attempting to read the data for a `SELECT` query. The query throws an exception about a non-matching checksum or size of a compressed block. In this case, data parts are added to the verification queue and copied from the replicas if necessary.

If the local set of data differs too much from the expected one, a safety mechanism is triggered. The server enters this in the log and refuses to launch. The reason for this is that this case may indicate a configuration error, such as if a replica on a shard was accidentally configured like a replica on a different shard. However, the thresholds for this mechanism are set fairly low, and this situation might occur during normal failure recovery. In this case, data is restored semi-automatically - by “pushing a button”.

To start recovery, create the node `/path_to_table/replica_name/flags/force_restore_data` in ClickHouse Keeper with any content, or run the command to restore all replicated tables:

``` bash
sudo -u clickhouse touch /var/lib/clickhouse/flags/force_restore_data
@ -249,11 +249,11 @@ If all data and metadata disappeared from one of the servers, follow these steps
1. Install ClickHouse on the server. Define substitutions correctly in the config file that contains the shard identifier and replicas, if you use them.
2. If you had unreplicated tables that must be manually duplicated on the servers, copy their data from a replica (in the directory `/var/lib/clickhouse/data/db_name/table_name/`).
3. Copy table definitions located in `/var/lib/clickhouse/metadata/` from a replica. If a shard or replica identifier is defined explicitly in the table definitions, correct it so that it corresponds to this replica. (Alternatively, start the server and make all the `ATTACH TABLE` queries that should have been in the .sql files in `/var/lib/clickhouse/metadata/`.)
4. To start recovery, create the ClickHouse Keeper node `/path_to_table/replica_name/flags/force_restore_data` with any content, or run the command to restore all replicated tables: `sudo -u clickhouse touch /var/lib/clickhouse/flags/force_restore_data`

Then start the server (restart, if it is already running). Data will be downloaded from replicas.

An alternative recovery option is to delete information about the lost replica from ClickHouse Keeper (`/path_to_table/replica_name`), then create the replica again as described in “[Creating replicated tables](#creating-replicated-tables)”.

There is no restriction on network bandwidth during recovery. Keep this in mind if you are restoring many replicas at once.

@ -276,13 +276,13 @@ Create a MergeTree table with a different name. Move all the data from the direc
If you want to get rid of a `ReplicatedMergeTree` table without launching the server:

- Delete the corresponding `.sql` file in the metadata directory (`/var/lib/clickhouse/metadata/`).
- Delete the corresponding path in ClickHouse Keeper (`/path_to_table/replica_name`).

After this, you can launch the server, create a `MergeTree` table, move the data to its directory, and then restart the server.

## Recovery When Metadata in the ClickHouse Keeper Cluster Is Lost or Damaged {#recovery-when-metadata-in-the-zookeeper-cluster-is-lost-or-damaged}

If the data in ClickHouse Keeper was lost or damaged, you can save data by moving it to an unreplicated table as described above.

**See Also**

@ -87,7 +87,7 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.visits_v1"
The hits and visits dataset is used in the ClickHouse test routines; this is one of the queries from the test suite. The rest of the tests are referenced in the *Next Steps* section at the end of this page.

```sql
@ -7,4 +7,6 @@ sidebar_label: C++ Client Library
See README at [clickhouse-cpp](https://github.com/ClickHouse/clickhouse-cpp) repository.

# userver Asynchronous Framework

[userver (beta)](https://github.com/userver-framework/userver) has built-in support for ClickHouse.

@ -47,6 +47,8 @@ ClickHouse Inc does **not** maintain the libraries listed below and hasn’t don
- [ClickHouse (Ruby)](https://github.com/shlima/click_house)
- [clickhouse-activerecord](https://github.com/PNixx/clickhouse-activerecord)
- Rust
    - [clickhouse.rs](https://github.com/loyd/clickhouse.rs)
    - [clickhouse-rs](https://github.com/suharev7/clickhouse-rs)
    - [Klickhouse](https://github.com/Protryon/klickhouse)
- R
    - [clickhouse-r](https://github.com/hannesmuehleisen/clickhouse-r)

@ -28,6 +28,9 @@ ClickHouse, Inc. does **not** maintain the tools and libraries listed below and
- [Kafka](https://kafka.apache.org)
    - [clickhouse_sinker](https://github.com/housepower/clickhouse_sinker) (uses [Go client](https://github.com/ClickHouse/clickhouse-go/))
    - [stream-loader-clickhouse](https://github.com/adform/stream-loader)
- Batch processing
    - [Spark](https://spark.apache.org)
        - [spark-clickhouse-connector](https://github.com/housepower/spark-clickhouse-connector)
- Stream processing
    - [Flink](https://flink.apache.org)
        - [flink-clickhouse-sink](https://github.com/ivi-ru/flink-clickhouse-sink)

@ -325,14 +325,14 @@ clickhouse-keeper-converter --zookeeper-logs-dir /var/lib/zookeeper/version-2 --
## Recovering after losing quorum

Because ClickHouse Keeper uses Raft, it can tolerate a certain number of node crashes depending on the cluster size. \
E.g. for a 3-node cluster, it will continue working correctly if only 1 node crashes.

Cluster configuration can be changed dynamically, but there are some limitations. Reconfiguration also relies on Raft,
so to add/remove a node from the cluster you need to have a quorum. If you lose too many nodes in your cluster at the same time without any chance
of starting them again, Raft will stop working and will not allow you to reconfigure your cluster in the conventional way.

Nevertheless, ClickHouse Keeper has a recovery mode which allows you to forcefully reconfigure your cluster with only 1 node.
This should be done only as your last resort if you cannot start your nodes again, or start a new instance on the same endpoint.

Important things to note before continuing:

@ -2,6 +2,18 @@
The values of `merge_tree` settings (for all MergeTree tables) can be viewed in the table `system.merge_tree_settings`; they can be overridden in `config.xml` in the `merge_tree` section, or set in the `SETTINGS` section of each table.

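For instance, the current value of a single setting can be checked like this (an illustrative query):

``` sql
SELECT name, value, changed
FROM system.merge_tree_settings
WHERE name = 'max_suspicious_broken_parts';
```
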
These are example overrides for `max_suspicious_broken_parts`:

## max_suspicious_broken_parts

If the number of broken parts in a single partition exceeds the `max_suspicious_broken_parts` value, automatic deletion is denied.

Possible values:

- Any positive integer.

Default value: 10.

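For example, a per-table override in the `SETTINGS` clause of `CREATE TABLE` might look like this (a sketch; the table and columns are illustrative):

``` sql
CREATE TABLE broken_parts_example
(
    id UInt64,
    value String
)
ENGINE = MergeTree
ORDER BY id
SETTINGS max_suspicious_broken_parts = 500;
```
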
Override example in `config.xml`:

``` text
@ -152,6 +164,187 @@ Default value: 604800 (1 week).
Similar to [replicated_deduplication_window](#replicated-deduplication-window), `replicated_deduplication_window_seconds` specifies how long to store hash sums of blocks for insert deduplication. Hash sums older than `replicated_deduplication_window_seconds` are removed from ClickHouse Keeper, even if they are less than `replicated_deduplication_window`.

## max_replicated_logs_to_keep

How many records may be kept in the ClickHouse Keeper log if there is an inactive replica. An inactive replica becomes lost when this number is exceeded.

Possible values:

- Any positive integer.

Default value: 1000

## min_replicated_logs_to_keep

Keep about this number of last records in the ZooKeeper log, even if they are obsolete. It does not affect the operation of tables: it is used only to diagnose the ZooKeeper log before cleaning.

Possible values:

- Any positive integer.

Default value: 10

## prefer_fetch_merged_part_time_threshold

If the time passed since a replication log (ClickHouse Keeper or ZooKeeper) entry creation exceeds this threshold, and the sum of the size of parts is greater than `prefer_fetch_merged_part_size_threshold`, then prefer fetching the merged part from a replica instead of doing the merge locally. This is to speed up very long merges.

Possible values:

- Any positive integer.

Default value: 3600

## prefer_fetch_merged_part_size_threshold

If the sum of the size of parts exceeds this threshold and the time since a replication log entry creation is greater than `prefer_fetch_merged_part_time_threshold`, then prefer fetching the merged part from a replica instead of doing the merge locally. This is to speed up very long merges. Both thresholds can be tuned per table, as sketched below.

Possible values:

- Any positive integer.

Default value: 10,737,418,240

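As an illustration, both fetch-preference thresholds can be tuned per table; the table name below is hypothetical:

``` sql
-- Prefer fetching merged parts from replicas once a pending merge is
-- older than 30 minutes and its parts sum to more than ~5 GiB.
ALTER TABLE replicated_table
    MODIFY SETTING prefer_fetch_merged_part_time_threshold = 1800,
                   prefer_fetch_merged_part_size_threshold = 5368709120;
```
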
## execute_merges_on_single_replica_time_threshold

When this setting has a value greater than zero, only a single replica starts the merge immediately, and other replicas wait up to that amount of time to download the result instead of doing merges locally. If the chosen replica doesn't finish the merge during that amount of time, fallback to standard behavior happens.

Possible values:

- Any positive integer.

Default value: 0 (seconds)

## remote_fs_execute_merges_on_single_replica_time_threshold

When this setting has a value greater than zero, only a single replica starts the merge immediately if the merged part is on shared storage and `allow_remote_fs_zero_copy_replication` is enabled.

Possible values:

- Any positive integer.

Default value: 1800

## try_fetch_recompressed_part_timeout

Recompression is slow in most cases, so we do not start a merge with recompression until this timeout expires, and instead try to fetch the recompressed part from the replica that was assigned the merge with recompression.

Possible values:

- Any positive integer.

Default value: 7200

## always_fetch_merged_part

If true, this replica never merges parts and always downloads merged parts from other replicas.

Possible values:

- true, false

Default value: false

## max_suspicious_broken_parts

Maximum number of broken parts; if exceeded, automatic deletion is denied.

Possible values:

- Any positive integer.

Default value: 10

## max_suspicious_broken_parts_bytes

Maximum total size of all broken parts; if exceeded, automatic deletion is denied.

Possible values:

- Any positive integer.

Default value: 1,073,741,824

## max_files_to_modify_in_alter_columns

Do not apply ALTER if the number of files for modification (deletion, addition) is greater than this setting.

Possible values:

- Any positive integer.

Default value: 75

## max_files_to_remove_in_alter_columns

Do not apply ALTER if the number of files for deletion is greater than this setting.

Possible values:

- Any positive integer.

Default value: 50

## replicated_max_ratio_of_wrong_parts

If the ratio of wrong parts to the total number of parts is less than this value, the server is allowed to start.

Possible values:

- Float, 0.0 - 1.0

Default value: 0.5

## replicated_max_parallel_fetches_for_host

Limit parallel fetches from one endpoint (actually, the pool size).

Possible values:

- Any positive integer.

Default value: 15

## replicated_fetches_http_connection_timeout

HTTP connection timeout for part fetch requests. Inherited from the default profile `http_connection_timeout` if not set explicitly.

Possible values:

- Any positive integer.

Default value: Inherited from the default profile `http_connection_timeout` if not set explicitly.

## replicated_can_become_leader

If true, replicas of replicated tables on this node will try to acquire leadership.

Possible values:

- true, false

Default value: true

## zookeeper_session_expiration_check_period

ZooKeeper session expiration check period, in seconds.

Possible values:

- Any positive integer.

Default value: 60

## detach_old_local_parts_when_cloning_replica

Do not remove old local parts when repairing a lost replica.

Possible values:

- true, false

Default value: true

## replicated_fetches_http_connection_timeout {#replicated_fetches_http_connection_timeout}

HTTP connection timeout (in seconds) for part fetch requests. Inherited from the default profile [http_connection_timeout](./settings.md#http_connection_timeout) if not set explicitly.

@ -1,6 +1,6 @@
# replication_queue

Contains information about tasks from replication queues stored in ClickHouse Keeper, or ZooKeeper, for tables in the `ReplicatedMergeTree` family.

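For instance, pending tasks can be inspected with a query along these lines (an illustrative sketch):

``` sql
-- Show the oldest pending replication tasks and their last errors.
SELECT database, table, type, create_time, num_tries, last_exception
FROM system.replication_queue
ORDER BY create_time
LIMIT 10;
```
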
Columns:

@ -274,6 +274,6 @@ end script
## Antivirus software {#antivirus-software}

If you use antivirus software, configure it to skip folders with ClickHouse data files (`/var/lib/clickhouse`); otherwise performance may be reduced and you may experience unexpected errors during data ingestion and background merges.

[Original article](https://clickhouse.com/docs/en/operations/tips/)

@ -6,7 +6,7 @@ sidebar_position: 33
Calculates the amount `Σ((x - x̅)^2) / (n - 1)`, where `n` is the sample size and `x̅` is the average value of `x`.

It represents an unbiased estimate of the variance of a random variable if the passed values form its sample.

Returns `Float64`. When `n <= 1`, returns `+∞`.

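A quick sanity check of the formula (an illustrative query): for the values 0 through 4 the mean is 2 and `Σ((x - x̅)^2) = 10`, so the sample variance is `10 / 4 = 2.5`.

``` sql
SELECT varSamp(number) FROM numbers(5); -- returns 2.5
```
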
@ -19,11 +19,10 @@ This function encrypts data using these modes:
- aes-128-ecb, aes-192-ecb, aes-256-ecb
- aes-128-cbc, aes-192-cbc, aes-256-cbc
- aes-128-cfb128
- aes-128-ofb, aes-192-ofb, aes-256-ofb
- aes-128-gcm, aes-192-gcm, aes-256-gcm
- aes-128-ctr, aes-192-ctr, aes-256-ctr

**Syntax**

|
Query:

``` sql
INSERT INTO encryption_test VALUES('aes-256-ofb no IV', encrypt('aes-256-ofb', 'Secret', '12345678910121314151617181920212')),\
('aes-256-ofb no IV, different key', encrypt('aes-256-ofb', 'Secret', 'keykeykeykeykeykeykeykeykeykeyke')),\
('aes-256-ofb with IV', encrypt('aes-256-ofb', 'Secret', '12345678910121314151617181920212', 'iviviviviviviviv')),\
('aes-256-cbc no IV', encrypt('aes-256-cbc', 'Secret', '12345678910121314151617181920212'));
```

|
Result:

``` text
┌─comment──────────────────────────┬─hex(secret)──────────────────────┐
│ aes-256-ofb no IV                │ B4972BDC4459                     │
│ aes-256-ofb no IV, different key │ 2FF57C092DC9                     │
│ aes-256-ofb with IV              │ 5E6CB398F653                     │
│ aes-256-cbc no IV                │ 1BC0629A92450D9E73A00E7D02CF4142 │
└──────────────────────────────────┴──────────────────────────────────┘
```

Example with `-gcm`:

|
- aes-128-ecb, aes-192-ecb, aes-256-ecb
- aes-128-cbc, aes-192-cbc, aes-256-cbc
- aes-128-cfb128
- aes-128-ofb, aes-192-ofb, aes-256-ofb

**Syntax**

|
Query:

``` sql
SELECT encrypt('aes-256-ofb', 'Secret', '12345678910121314151617181920212', 'iviviviviviviviv') = aes_encrypt_mysql('aes-256-ofb', 'Secret', '12345678910121314151617181920212', 'iviviviviviviviv') AS ciphertexts_equal;
```

Result:

|
Query:

``` sql
SELECT encrypt('aes-256-ofb', 'Secret', '123456789101213141516171819202122', 'iviviviviviviviv123');
```

Result:

``` text
Received exception from server (version 22.6.1):
Code: 36. DB::Exception: Received from localhost:9000. DB::Exception: Invalid key size: 33 expected 32: While processing encrypt('aes-256-ofb', 'Secret', '123456789101213141516171819202122', 'iviviviviviviviv123').
```

While `aes_encrypt_mysql` produces MySQL-compatible output:

|
Query:

``` sql
SELECT hex(aes_encrypt_mysql('aes-256-ofb', 'Secret', '123456789101213141516171819202122', 'iviviviviviviviv123')) AS ciphertext;
```

Result:

|
Query:

``` sql
SELECT hex(aes_encrypt_mysql('aes-256-ofb', 'Secret', '123456789101213141516171819202122', 'iviviviviviviviv123456')) AS ciphertext
```

Result:

|
Which is binary equal to what MySQL produces on the same inputs:

``` sql
mysql> SET block_encryption_mode='aes-256-ofb';
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT aes_encrypt('Secret', '123456789101213141516171819202122', 'iviviviviviviviv123456') as ciphertext;
|
- aes-128-ecb, aes-192-ecb, aes-256-ecb
- aes-128-cbc, aes-192-cbc, aes-256-cbc
- aes-128-cfb128
- aes-128-ofb, aes-192-ofb, aes-256-ofb
- aes-128-gcm, aes-192-gcm, aes-256-gcm
- aes-128-ctr, aes-192-ctr, aes-256-ctr

**Syntax**

|
│ aes-256-gcm          │ A8A3CCBC6426CFEEB60E4EAE03D3E94204C1B09E0254 │
│ aes-256-gcm with AAD │ A8A3CCBC6426D9A1017A0A932322F1852260A4AD6837 │
└──────────────────────┴──────────────────────────────────────────────┘
┌─comment──────────────────────────┬─hex(secret)──────────────────────┐
│ aes-256-ofb no IV                │ B4972BDC4459                     │
│ aes-256-ofb no IV, different key │ 2FF57C092DC9                     │
│ aes-256-ofb with IV              │ 5E6CB398F653                     │
│ aes-256-cbc no IV                │ 1BC0629A92450D9E73A00E7D02CF4142 │
└──────────────────────────────────┴──────────────────────────────────┘
```

Now let's try to decrypt all that data.

|
Result:

``` text
┌─comment──────────────┬─plaintext──┐
│ aes-256-gcm          │ OQ�E
�t�7T�\���\�             │
│ aes-256-gcm with AAD │ OQ�E
�\��si����;�o��          │
└──────────────────────┴────────────┘
┌─comment──────────────────────────┬─plaintext─┐
│ aes-256-ofb no IV                │ Secret    │
│ aes-256-ofb no IV, different key │ �4�
�                                  │
│ aes-256-ofb with IV              │ ���6�~    │
│aes-256-cbc no IV                 │ �2*4�h3c�4w��@
└──────────────────────────────────┴───────────┘
```

Notice how only a portion of the data was properly decrypted, and the rest is gibberish since either `mode`, `key`, or `iv` were different upon encryption.

|
|||||||
|
|
||||||
- aes-128-ecb, aes-192-ecb, aes-256-ecb
|
- aes-128-ecb, aes-192-ecb, aes-256-ecb
|
||||||
- aes-128-cbc, aes-192-cbc, aes-256-cbc
|
- aes-128-cbc, aes-192-cbc, aes-256-cbc
|
||||||
- aes-128-cfb1, aes-192-cfb1, aes-256-cfb1
|
- aes-128-cfb128
|
||||||
- aes-128-cfb8, aes-192-cfb8, aes-256-cfb8
|
|
||||||
- aes-128-cfb128, aes-192-cfb128, aes-256-cfb128
|
|
||||||
- aes-128-ofb, aes-192-ofb, aes-256-ofb
|
- aes-128-ofb, aes-192-ofb, aes-256-ofb
|
||||||
|
|
||||||
**Syntax**
|
**Syntax**
|
||||||
@ -332,7 +332,7 @@ aes_decrypt_mysql('mode', 'ciphertext', 'key' [, iv])
|
|||||||
Let's decrypt data we've previously encrypted with MySQL:
|
Let's decrypt data we've previously encrypted with MySQL:
|
||||||
|
|
||||||
``` sql
|
``` sql
|
||||||
mysql> SET block_encryption_mode='aes-256-cfb128';
|
mysql> SET block_encryption_mode='aes-256-ofb';
|
||||||
Query OK, 0 rows affected (0.00 sec)
|
Query OK, 0 rows affected (0.00 sec)
|
||||||
|
|
||||||
mysql> SELECT aes_encrypt('Secret', '123456789101213141516171819202122', 'iviviviviviviviv123456') as ciphertext;
|
mysql> SELECT aes_encrypt('Secret', '123456789101213141516171819202122', 'iviviviviviviviv123456') as ciphertext;
|
||||||
@ -347,7 +347,7 @@ mysql> SELECT aes_encrypt('Secret', '123456789101213141516171819202122', 'iviviv
|
|||||||
Query:
|
Query:
|
||||||
|
|
||||||
``` sql
|
``` sql
|
||||||
SELECT aes_decrypt_mysql('aes-256-cfb128', unhex('24E9E4966469'), '123456789101213141516171819202122', 'iviviviviviviviv123456') AS plaintext
|
SELECT aes_decrypt_mysql('aes-256-ofb', unhex('24E9E4966469'), '123456789101213141516171819202122', 'iviviviviviviviv123456') AS plaintext
|
||||||
```
|
```
|
||||||
|
|
||||||
Result:
|
Result:
|
||||||
|
@ -273,16 +273,16 @@ Converts ASCII Latin symbols in a string to uppercase.
|
|||||||
## lowerUTF8
|
## lowerUTF8
|
||||||
|
|
||||||
Converts a string to lowercase, assuming the string contains a set of bytes that make up a UTF-8 encoded text.
|
Converts a string to lowercase, assuming the string contains a set of bytes that make up a UTF-8 encoded text.
|
||||||
It does not detect the language. So for Turkish the result might not be exactly correct.
|
It does not detect the language. E.g. for Turkish the result might not be exactly correct (i/İ vs. i/I).
|
||||||
If the length of the UTF-8 byte sequence is different for upper and lower case of a code point, the result may be incorrect for this code point.
|
If the length of the UTF-8 byte sequence is different for upper and lower case of a code point, the result may be incorrect for this code point.
|
||||||
If the string contains a set of bytes that is not UTF-8, then the behavior is undefined.
|
If the string contains a sequence of bytes that are not valid UTF-8, then the behavior is undefined.
|
||||||
|
|
||||||
## upperUTF8
|
## upperUTF8
|
||||||
|
|
||||||
Converts a string to uppercase, assuming the string contains a set of bytes that make up a UTF-8 encoded text.
|
Converts a string to uppercase, assuming the string contains a set of bytes that make up a UTF-8 encoded text.
|
||||||
It does not detect the language. So for Turkish the result might not be exactly correct.
|
It does not detect the language. E.g. for Turkish the result might not be exactly correct (i/İ vs. i/I).
|
||||||
If the length of the UTF-8 byte sequence is different for upper and lower case of a code point, the result may be incorrect for this code point.
|
If the length of the UTF-8 byte sequence is different for upper and lower case of a code point, the result may be incorrect for this code point.
|
||||||
If the string contains a set of bytes that is not UTF-8, then the behavior is undefined.
|
If the string contains a sequence of bytes that are not valid UTF-8, then the behavior is undefined.
|
||||||
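As a quick sketch of how these functions are used (Cyrillic is a safe illustration here, since its upper- and lowercase forms have the same UTF-8 byte length):

```sql
SELECT lowerUTF8('ПРИВЕТ') AS lower, upperUTF8('привет') AS upper;
-- ┌─lower──┬─upper──┐
-- │ привет │ ПРИВЕТ │
-- └────────┴────────┘
```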
|
|
||||||
## isValidUTF8
|
## isValidUTF8
|
||||||
|
|
||||||
|
@ -350,11 +350,14 @@ In all `multiSearch*` functions the number of needles should be less than 2<sup>
|
|||||||
|
|
||||||
## match(haystack, pattern)
|
## match(haystack, pattern)
|
||||||
|
|
||||||
Checks whether the string matches the `pattern` regular expression. A `re2` regular expression. The [syntax](https://github.com/google/re2/wiki/Syntax) of the `re2` regular expressions is more limited than the syntax of the Perl regular expressions.
|
Checks whether the string matches the regular expression `pattern` in `re2` syntax. `Re2` has a more limited [syntax](https://github.com/google/re2/wiki/Syntax) than Perl regular expressions.
|
||||||
|
|
||||||
Returns 0 if it does not match, or 1 if it matches.
|
Returns 0 if it does not match, or 1 if it matches.
|
||||||
|
|
||||||
The regular expression works with the string as if it is a set of bytes. The regular expression can’t contain null bytes.
|
Matching is based on UTF-8, e.g. `.` matches the Unicode code point `¥` which is represented in UTF-8 using two bytes. The regular expression must not contain null bytes.
|
||||||
|
If the haystack or pattern contain a sequence of bytes that are not valid UTF-8, then the behavior is undefined.
|
||||||
|
No automatic Unicode normalization is performed; if you need it, you can use the [normalizeUTF8*()](https://clickhouse.com/docs/en/sql-reference/functions/string-functions/) functions for that.
|
||||||
|
|
||||||
For patterns to search for substrings in a string, it is better to use LIKE or ‘position’, since they work much faster.
|
For patterns to search for substrings in a string, it is better to use LIKE or ‘position’, since they work much faster.
|
||||||
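A minimal sketch of the UTF-8 behavior described above (expected results follow from the semantics stated here; note the doubled backslash inside the SQL string literal):

```sql
-- `.` matches one code point, even when it is encoded as multiple bytes:
SELECT match('a¥c', '^a.c$') AS utf8_dot,      -- 1: '¥' is one code point (two bytes)
       match('abc', '^a\\.c$') AS literal_dot; -- 0: the escaped dot matches only '.'
```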
|
|
||||||
## multiMatchAny(haystack, \[pattern<sub>1</sub>, pattern<sub>2</sub>, …, pattern<sub>n</sub>\])
|
## multiMatchAny(haystack, \[pattern<sub>1</sub>, pattern<sub>2</sub>, …, pattern<sub>n</sub>\])
|
||||||
@ -498,6 +501,10 @@ The regular expression can contain the metasymbols `%` and `_`.
|
|||||||
|
|
||||||
Use the backslash (`\`) for escaping metasymbols. See the note on escaping in the description of the ‘match’ function.
|
Use the backslash (`\`) for escaping metasymbols. See the note on escaping in the description of the ‘match’ function.
|
||||||
|
|
||||||
|
Matching is based on UTF-8, e.g. `_` matches the Unicode code point `¥` which is represented in UTF-8 using two bytes.
|
||||||
|
If the haystack or pattern contain a sequence of bytes that are not valid UTF-8, then the behavior is undefined.
|
||||||
|
No automatic Unicode normalization is performed; if you need it, you can use the [normalizeUTF8*()](https://clickhouse.com/docs/en/sql-reference/functions/string-functions/) functions for that.
|
||||||
|
|
||||||
For regular expressions like `%needle%`, the code is more optimal and works as fast as the `position` function.
|
For regular expressions like `%needle%`, the code is more optimal and works as fast as the `position` function.
|
||||||
For other regular expressions, the code is the same as for the ‘match’ function.
|
For other regular expressions, the code is the same as for the ‘match’ function.
|
||||||
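A small sketch of the escaping and UTF-8 rules above (again, the backslash is doubled inside the SQL string literal):

```sql
SELECT like('100%', '100\\%') AS literal_percent, -- 1: escaped `%` matches a literal percent sign
       like('a¥c', 'a_c')     AS one_code_point;  -- 1: `_` matches one code point, not one byte
```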
|
|
||||||
@ -509,6 +516,8 @@ The same thing as ‘like’, but negative.
|
|||||||
|
|
||||||
Case insensitive variant of [like](https://clickhouse.com/docs/en/sql-reference/functions/string-search-functions/#function-like) function. You can use `ILIKE` operator instead of the `ilike` function.
|
Case insensitive variant of [like](https://clickhouse.com/docs/en/sql-reference/functions/string-search-functions/#function-like) function. You can use `ILIKE` operator instead of the `ilike` function.
|
||||||
|
|
||||||
|
The function ignores the language, e.g. for Turkish (i/İ), the result might be incorrect.
|
||||||
|
|
||||||
**Syntax**
|
**Syntax**
|
||||||
|
|
||||||
``` sql
|
``` sql
|
||||||
|
@ -43,28 +43,38 @@ For tuple subtraction: [tupleMinus](../../sql-reference/functions/tuple-function
|
|||||||
|
|
||||||
## Comparison Operators
|
## Comparison Operators
|
||||||
|
|
||||||
|
### equals function
|
||||||
`a = b` – The `equals(a, b)` function.
|
`a = b` – The `equals(a, b)` function.
|
||||||
|
|
||||||
`a == b` – The `equals(a, b)` function.
|
`a == b` – The `equals(a, b)` function.
|
||||||
|
|
||||||
|
### notEquals function
|
||||||
`a != b` – The `notEquals(a, b)` function.
|
`a != b` – The `notEquals(a, b)` function.
|
||||||
|
|
||||||
`a <> b` – The `notEquals(a, b)` function.
|
`a <> b` – The `notEquals(a, b)` function.
|
||||||
|
|
||||||
|
### lessOrEquals function
|
||||||
`a <= b` – The `lessOrEquals(a, b)` function.
|
`a <= b` – The `lessOrEquals(a, b)` function.
|
||||||
|
|
||||||
|
### greaterOrEquals function
|
||||||
`a >= b` – The `greaterOrEquals(a, b)` function.
|
`a >= b` – The `greaterOrEquals(a, b)` function.
|
||||||
|
|
||||||
|
### less function
|
||||||
`a < b` – The `less(a, b)` function.
|
`a < b` – The `less(a, b)` function.
|
||||||
|
|
||||||
|
### greater function
|
||||||
`a > b` – The `greater(a, b)` function.
|
`a > b` – The `greater(a, b)` function.
|
||||||
|
|
||||||
|
### like function
|
||||||
`a LIKE s` – The `like(a, b)` function.
|
`a LIKE s` – The `like(a, b)` function.
|
||||||
|
|
||||||
|
### notLike function
|
||||||
`a NOT LIKE s` – The `notLike(a, b)` function.
|
`a NOT LIKE s` – The `notLike(a, b)` function.
|
||||||
|
|
||||||
|
### ilike function
|
||||||
`a ILIKE s` – The `ilike(a, b)` function.
|
`a ILIKE s` – The `ilike(a, b)` function.
|
||||||
|
|
||||||
|
### BETWEEN function
|
||||||
`a BETWEEN b AND c` – The same as `a >= b AND a <= c`.
|
`a BETWEEN b AND c` – The same as `a >= b AND a <= c`.
|
||||||
|
|
||||||
`a NOT BETWEEN b AND c` – The same as `a < b OR a > c`.
|
`a NOT BETWEEN b AND c` – The same as `a < b OR a > c`.
|
||||||
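A brief sketch showing that each operator above is sugar for the corresponding function:

```sql
SELECT 2 < 3 AS op_form,          -- 1
       less(2, 3) AS fn_form,     -- 1, identical to the operator form
       5 BETWEEN 1 AND 10 AS btw; -- 1, same as 5 >= 1 AND 5 <= 10
```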
@ -73,20 +83,28 @@ For tuple subtraction: [tupleMinus](../../sql-reference/functions/tuple-function
|
|||||||
|
|
||||||
See [IN operators](../../sql-reference/operators/in.md) and [EXISTS](../../sql-reference/operators/exists.md) operator.
|
See [IN operators](../../sql-reference/operators/in.md) and [EXISTS](../../sql-reference/operators/exists.md) operator.
|
||||||
|
|
||||||
|
### in function
|
||||||
`a IN ...` – The `in(a, b)` function.
|
`a IN ...` – The `in(a, b)` function.
|
||||||
|
|
||||||
|
### notIn function
|
||||||
`a NOT IN ...` – The `notIn(a, b)` function.
|
`a NOT IN ...` – The `notIn(a, b)` function.
|
||||||
|
|
||||||
|
### globalIn function
|
||||||
`a GLOBAL IN ...` – The `globalIn(a, b)` function.
|
`a GLOBAL IN ...` – The `globalIn(a, b)` function.
|
||||||
|
|
||||||
|
### globalNotIn function
|
||||||
`a GLOBAL NOT IN ...` – The `globalNotIn(a, b)` function.
|
`a GLOBAL NOT IN ...` – The `globalNotIn(a, b)` function.
|
||||||
|
|
||||||
|
### in subquery function
|
||||||
`a = ANY (subquery)` – The `in(a, subquery)` function.
|
`a = ANY (subquery)` – The `in(a, subquery)` function.
|
||||||
|
|
||||||
|
### notIn subquery function
|
||||||
`a != ANY (subquery)` – The same as `a NOT IN (SELECT singleValueOrNull(*) FROM subquery)`.
|
`a != ANY (subquery)` – The same as `a NOT IN (SELECT singleValueOrNull(*) FROM subquery)`.
|
||||||
|
|
||||||
|
### in subquery function
|
||||||
`a = ALL (subquery)` – The same as `a IN (SELECT singleValueOrNull(*) FROM subquery)`.
|
`a = ALL (subquery)` – The same as `a IN (SELECT singleValueOrNull(*) FROM subquery)`.
|
||||||
|
|
||||||
|
### notIn subquery function
|
||||||
`a != ALL (subquery)` – The `notIn(a, subquery)` function.
|
`a != ALL (subquery)` – The `notIn(a, subquery)` function.
|
||||||
|
|
||||||
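A short sketch of the tuple and subquery forms (using the built-in `numbers` table function):

```sql
SELECT 1 IN (1, 2, 3)                           AS in_tuple,   -- 1
       4 NOT IN (SELECT number FROM numbers(3)) AS not_in_sub; -- 1: numbers(3) yields 0, 1, 2
```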
|
|
||||||
|
@ -248,6 +248,7 @@ Specialized codecs:
|
|||||||
- `Delta(delta_bytes)` — Compression approach in which raw values are replaced by the difference of two neighboring values, except for the first value that stays unchanged. Up to `delta_bytes` are used for storing delta values, so `delta_bytes` is the maximum size of raw values. Possible `delta_bytes` values: 1, 2, 4, 8. The default value for `delta_bytes` is `sizeof(type)` if equal to 1, 2, 4, or 8. In all other cases, it’s 1.
|
- `Delta(delta_bytes)` — Compression approach in which raw values are replaced by the difference of two neighboring values, except for the first value that stays unchanged. Up to `delta_bytes` are used for storing delta values, so `delta_bytes` is the maximum size of raw values. Possible `delta_bytes` values: 1, 2, 4, 8. The default value for `delta_bytes` is `sizeof(type)` if equal to 1, 2, 4, or 8. In all other cases, it’s 1.
|
||||||
- `DoubleDelta` — Calculates delta of deltas and writes it in compact binary form. Optimal compression rates are achieved for monotonic sequences with a constant stride, such as time series data. Can be used with any fixed-width type. Implements the algorithm used in Gorilla TSDB, extending it to support 64-bit types. Uses 1 extra bit for 32-byte deltas: 5-bit prefixes instead of 4-bit prefixes. For additional information, see Compressing Time Stamps in [Gorilla: A Fast, Scalable, In-Memory Time Series Database](http://www.vldb.org/pvldb/vol8/p1816-teller.pdf).
|
- `DoubleDelta` — Calculates delta of deltas and writes it in compact binary form. Optimal compression rates are achieved for monotonic sequences with a constant stride, such as time series data. Can be used with any fixed-width type. Implements the algorithm used in Gorilla TSDB, extending it to support 64-bit types. Uses 1 extra bit for 32-byte deltas: 5-bit prefixes instead of 4-bit prefixes. For additional information, see Compressing Time Stamps in [Gorilla: A Fast, Scalable, In-Memory Time Series Database](http://www.vldb.org/pvldb/vol8/p1816-teller.pdf).
|
||||||
- `Gorilla` — Calculates XOR between current and previous value and writes it in compact binary form. Efficient when storing a series of floating point values that change slowly, because the best compression rate is achieved when neighboring values are binary equal. Implements the algorithm used in Gorilla TSDB, extending it to support 64-bit types. For additional information, see Compressing Values in [Gorilla: A Fast, Scalable, In-Memory Time Series Database](http://www.vldb.org/pvldb/vol8/p1816-teller.pdf).
|
- `Gorilla` — Calculates XOR between current and previous value and writes it in compact binary form. Efficient when storing a series of floating point values that change slowly, because the best compression rate is achieved when neighboring values are binary equal. Implements the algorithm used in Gorilla TSDB, extending it to support 64-bit types. For additional information, see Compressing Values in [Gorilla: A Fast, Scalable, In-Memory Time Series Database](http://www.vldb.org/pvldb/vol8/p1816-teller.pdf).
|
||||||
|
- `FPC` - Repeatedly predicts the next floating point value in the sequence using the better of two predictors, then XORs the actual with the predicted value, and leading-zero compresses the result. Similar to Gorilla, this is efficient when storing a series of floating point values that change slowly. For 64-bit values (double), FPC is faster than Gorilla, for 32-bit values your mileage may vary. For a detailed description of the algorithm see [High Throughput Compression of Double-Precision Floating-Point Data](https://userweb.cs.txstate.edu/~burtscher/papers/dcc07a.pdf).
|
||||||
- `T64` — Compression approach that crops unused high bits of values in integer data types (including `Enum`, `Date` and `DateTime`). At each step of its algorithm, the codec takes a block of 64 values, puts them into a 64x64 bit matrix, transposes it, crops the unused bits of the values and returns the rest as a sequence. Unused bits are the bits that do not differ between the maximum and minimum values in the whole data part for which the compression is used.
|
- `T64` — Compression approach that crops unused high bits of values in integer data types (including `Enum`, `Date` and `DateTime`). At each step of its algorithm, the codec takes a block of 64 values, puts them into a 64x64 bit matrix, transposes it, crops the unused bits of the values and returns the rest as a sequence. Unused bits are the bits that do not differ between the maximum and minimum values in the whole data part for which the compression is used.
|
||||||
|
|
||||||
`DoubleDelta` and `Gorilla` codecs are used in Gorilla TSDB as the components of its compressing algorithm. Gorilla approach is effective in scenarios when there is a sequence of slowly changing values with their timestamps. Timestamps are effectively compressed by the `DoubleDelta` codec, and values are effectively compressed by the `Gorilla` codec. For example, to get an effectively stored table, you can create it in the following configuration:
|
`DoubleDelta` and `Gorilla` codecs are used in Gorilla TSDB as the components of its compressing algorithm. Gorilla approach is effective in scenarios when there is a sequence of slowly changing values with their timestamps. Timestamps are effectively compressed by the `DoubleDelta` codec, and values are effectively compressed by the `Gorilla` codec. For example, to get an effectively stored table, you can create it in the following configuration:
|
||||||
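The diff cuts off before the configuration itself; a minimal sketch of what such a table could look like (table and column names are hypothetical):

```sql
CREATE TABLE timeseries
(
    ts    DateTime CODEC(DoubleDelta), -- timestamps: near-constant stride compresses well
    value Float64  CODEC(Gorilla)      -- slowly changing values: XOR-based compression
)
ENGINE = MergeTree
ORDER BY ts;
```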
|
@ -252,12 +252,14 @@ This is an experimental feature that may change in backwards-incompatible ways i
|
|||||||
:::
|
:::
|
||||||
|
|
||||||
``` sql
|
``` sql
|
||||||
CREATE WINDOW VIEW [IF NOT EXISTS] [db.]table_name [TO [db.]table_name] [ENGINE = engine] [WATERMARK = strategy] [ALLOWED_LATENESS = interval_function] AS SELECT ... GROUP BY time_window_function
|
CREATE WINDOW VIEW [IF NOT EXISTS] [db.]table_name [TO [db.]table_name] [INNER ENGINE engine] [ENGINE engine] [WATERMARK strategy] [ALLOWED_LATENESS interval_function] [POPULATE] AS SELECT ... GROUP BY time_window_function
|
||||||
```
|
```
|
||||||
|
|
||||||
Window view can aggregate data by time window and output the results when the window is ready to fire. It stores the partial aggregation results in an inner (or specified) table to reduce latency and can push the processing result to a specified table or push notifications using the WATCH query.
|
Window view can aggregate data by time window and output the results when the window is ready to fire. It stores the partial aggregation results in an inner (or specified) table to reduce latency and can push the processing result to a specified table or push notifications using the WATCH query.
|
||||||
|
|
||||||
Creating a window view is similar to creating `MATERIALIZED VIEW`. Window view needs an inner storage engine to store intermediate data. The inner storage will use `AggregatingMergeTree` as the default engine.
|
Creating a window view is similar to creating `MATERIALIZED VIEW`. Window view needs an inner storage engine to store intermediate data. The inner storage can be specified by using the `INNER ENGINE` clause; the window view will use `AggregatingMergeTree` as the default inner engine.
|
||||||
|
|
||||||
|
When creating a window view without `TO [db].[table]`, you must specify `ENGINE` – the table engine for storing data.
|
||||||
|
|
||||||
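For orientation, a minimal sketch of both variants (names are hypothetical, and since the feature is experimental the exact syntax may change):

```sql
-- With TO: results are pushed into an existing target table
CREATE WINDOW VIEW wv TO dst
AS SELECT count(number) AS cnt, tumbleStart(w_id) AS w_start
FROM data GROUP BY tumble(timestamp, INTERVAL '10' SECOND) AS w_id;

-- Without TO: ENGINE is required for storing the results
CREATE WINDOW VIEW wv2 ENGINE = Memory
AS SELECT count(number) AS cnt, tumbleStart(w_id) AS w_start
FROM data GROUP BY tumble(timestamp, INTERVAL '10' SECOND) AS w_id;
```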
### Time Window Functions
|
### Time Window Functions
|
||||||
|
|
||||||
@ -297,6 +299,8 @@ CREATE WINDOW VIEW test.wv TO test.dst WATERMARK=ASCENDING ALLOWED_LATENESS=INTE
|
|||||||
|
|
||||||
Note that elements emitted by a late firing should be treated as updated results of a previous computation. Instead of firing at the end of windows, the window view will fire immediately when the late event arrives. Thus, it will result in multiple outputs for the same window. Users need to take these duplicated results into account or deduplicate them.
|
Note that elements emitted by a late firing should be treated as updated results of a previous computation. Instead of firing at the end of windows, the window view will fire immediately when the late event arrives. Thus, it will result in multiple outputs for the same window. Users need to take these duplicated results into account or deduplicate them.
|
||||||
|
|
||||||
|
You can modify the `SELECT` query that was specified in the window view by using the `ALTER TABLE … MODIFY QUERY` statement. The data structure resulting from the new `SELECT` query should be the same as the original `SELECT` query, with or without the `TO [db.]name` clause. Note that the data in the current window will be lost because the intermediate state cannot be reused.
|
||||||
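A sketch of such a modification (view and table names are hypothetical):

```sql
-- Replaces the view's SELECT; the intermediate window state is discarded
ALTER TABLE wv MODIFY QUERY
SELECT count(number) AS cnt, tumbleStart(w_id) AS w_start
FROM data
GROUP BY tumble(timestamp, INTERVAL '10' SECOND) AS w_id;
```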
|
|
||||||
### Monitoring New Windows
|
### Monitoring New Windows
|
||||||
|
|
||||||
Window view supports the [WATCH](../../../sql-reference/statements/watch.md) query to monitor changes, or use the `TO` syntax to output the results to a table.
|
Window view supports the [WATCH](../../../sql-reference/statements/watch.md) query to monitor changes, or use the `TO` syntax to output the results to a table.
|
||||||
@ -314,6 +318,7 @@ WATCH [db.]window_view
|
|||||||
|
|
||||||
- `window_view_clean_interval`: The clean interval of window view in seconds to free outdated data. The system will retain the windows that have not been fully triggered according to the system time or `WATERMARK` configuration, and the other data will be deleted.
|
- `window_view_clean_interval`: The clean interval of window view in seconds to free outdated data. The system will retain the windows that have not been fully triggered according to the system time or `WATERMARK` configuration, and the other data will be deleted.
|
||||||
- `window_view_heartbeat_interval`: The heartbeat interval in seconds to indicate the watch query is alive.
|
- `window_view_heartbeat_interval`: The heartbeat interval in seconds to indicate the watch query is alive.
|
||||||
|
- `wait_for_window_view_fire_signal_timeout`: Timeout for waiting for window view fire signal in event time processing.
|
||||||
|
|
||||||
### Example
|
### Example
|
||||||
|
|
||||||
|
@ -48,9 +48,9 @@ You can see that `GROUP BY` for `y = NULL` summed up `x`, as if `NULL` is this v
|
|||||||
|
|
||||||
If you pass several keys to `GROUP BY`, the result will give you all the combinations of the selection, as if `NULL` were a specific value.
|
If you pass several keys to `GROUP BY`, the result will give you all the combinations of the selection, as if `NULL` were a specific value.
|
||||||
|
|
||||||
## WITH ROLLUP Modifier
|
## ROLLUP Modifier
|
||||||
|
|
||||||
`WITH ROLLUP` modifier is used to calculate subtotals for the key expressions, based on their order in the `GROUP BY` list. The subtotals rows are added after the result table.
|
`ROLLUP` modifier is used to calculate subtotals for the key expressions, based on their order in the `GROUP BY` list. The subtotals rows are added after the result table.
|
||||||
|
|
||||||
The subtotals are calculated in the reverse order: at first subtotals are calculated for the last key expression in the list, then for the previous one, and so on up to the first key expression.
|
The subtotals are calculated in the reverse order: at first subtotals are calculated for the last key expression in the list, then for the previous one, and so on up to the first key expression.
|
||||||
|
|
||||||
@ -78,7 +78,7 @@ Consider the table t:
|
|||||||
Query:
|
Query:
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
SELECT year, month, day, count(*) FROM t GROUP BY year, month, day WITH ROLLUP;
|
SELECT year, month, day, count(*) FROM t GROUP BY ROLLUP(year, month, day);
|
||||||
```
|
```
|
||||||
As `GROUP BY` section has three key expressions, the result contains four tables with subtotals "rolled up" from right to left:
|
As `GROUP BY` section has three key expressions, the result contains four tables with subtotals "rolled up" from right to left:
|
||||||
|
|
||||||
@ -109,10 +109,14 @@ As `GROUP BY` section has three key expressions, the result contains four tables
|
|||||||
│ 0 │ 0 │ 0 │ 6 │
|
│ 0 │ 0 │ 0 │ 6 │
|
||||||
└──────┴───────┴─────┴─────────┘
|
└──────┴───────┴─────┴─────────┘
|
||||||
```
|
```
|
||||||
|
The same query can also be written using the `WITH` keyword.
|
||||||
|
```sql
|
||||||
|
SELECT year, month, day, count(*) FROM t GROUP BY year, month, day WITH ROLLUP;
|
||||||
|
```
|
||||||
|
|
||||||
## WITH CUBE Modifier
|
## CUBE Modifier
|
||||||
|
|
||||||
`WITH CUBE` modifier is used to calculate subtotals for every combination of the key expressions in the `GROUP BY` list. The subtotals rows are added after the result table.
|
`CUBE` modifier is used to calculate subtotals for every combination of the key expressions in the `GROUP BY` list. The subtotals rows are added after the result table.
|
||||||
|
|
||||||
In the subtotals rows the values of all "grouped" key expressions are set to `0` or an empty string.
|
In the subtotals rows the values of all "grouped" key expressions are set to `0` or an empty string.
|
||||||
|
|
||||||
@ -138,7 +142,7 @@ Consider the table t:
|
|||||||
Query:
|
Query:
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
SELECT year, month, day, count(*) FROM t GROUP BY year, month, day WITH CUBE;
|
SELECT year, month, day, count(*) FROM t GROUP BY CUBE(year, month, day);
|
||||||
```
|
```
|
||||||
|
|
||||||
As `GROUP BY` section has three key expressions, the result contains eight tables with subtotals for all key expression combinations:
|
As `GROUP BY` section has three key expressions, the result contains eight tables with subtotals for all key expression combinations:
|
||||||
@ -196,6 +200,10 @@ Columns, excluded from `GROUP BY`, are filled with zeros.
|
|||||||
│ 0 │ 0 │ 0 │ 6 │
|
│ 0 │ 0 │ 0 │ 6 │
|
||||||
└──────┴───────┴─────┴─────────┘
|
└──────┴───────┴─────┴─────────┘
|
||||||
```
|
```
|
||||||
|
The same query can also be written using the `WITH` keyword.
|
||||||
|
```sql
|
||||||
|
SELECT year, month, day, count(*) FROM t GROUP BY year, month, day WITH CUBE;
|
||||||
|
```
|
||||||
|
|
||||||
|
|
||||||
## WITH TOTALS Modifier
|
## WITH TOTALS Modifier
|
||||||
@ -260,6 +268,39 @@ GROUP BY domain
|
|||||||
|
|
||||||
For every different key value encountered, `GROUP BY` calculates a set of aggregate function values.
|
For every different key value encountered, `GROUP BY` calculates a set of aggregate function values.
|
||||||
|
|
||||||
|
## GROUPING SETS modifier
|
||||||
|
|
||||||
|
This is the most general modifier.
|
||||||
|
This modifier allows you to manually specify several aggregation key sets (grouping sets).
|
||||||
|
Aggregation is performed separately for each grouping set; after that, all results are combined.
|
||||||
|
If a column is not present in a grouping set, it's filled with a default value.
|
||||||
|
|
||||||
|
In other words, the modifiers described above can all be expressed via `GROUPING SETS`.
|
||||||
|
Although queries with the `ROLLUP`, `CUBE`, and `GROUPING SETS` modifiers may be syntactically equivalent, they can perform differently.
|
||||||
|
While `GROUPING SETS` tries to execute everything in parallel, `ROLLUP` and `CUBE` execute the final merging of the aggregates in a single thread.
|
||||||
|
|
||||||
|
When source columns contain default values, it might be hard to tell whether a row is part of the aggregation that uses those columns as keys.
|
||||||
|
To solve this problem, use the `GROUPING` function, as sketched below.
|
||||||
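A brief sketch, reusing the table `t` from the examples above; the exact return convention of `GROUPING` is described in its own documentation:

```sql
SELECT year, month,
       grouping(year)  AS g_year,  -- indicates whether `year` is a key of the current grouping set
       grouping(month) AS g_month,
       count(*)
FROM t
GROUP BY GROUPING SETS ((year, month), (year), ());
```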
|
|
||||||
|
**Example**
|
||||||
|
|
||||||
|
The following two queries are equivalent.
|
||||||
|
|
||||||
|
```sql
|
||||||
|
-- Query 1
|
||||||
|
SELECT year, month, day, count(*) FROM t GROUP BY year, month, day WITH ROLLUP;
|
||||||
|
|
||||||
|
-- Query 2
|
||||||
|
SELECT year, month, day, count(*) FROM t GROUP BY
|
||||||
|
GROUPING SETS
|
||||||
|
(
|
||||||
|
(year, month, day),
|
||||||
|
(year, month),
|
||||||
|
(year),
|
||||||
|
()
|
||||||
|
);
|
||||||
|
```
|
||||||
|
|
||||||
## Implementation Details
|
## Implementation Details
|
||||||
|
|
||||||
Aggregation is one of the most important features of a column-oriented DBMS, and thus its implementation is one of the most heavily optimized parts of ClickHouse. By default, aggregation is done in memory using a hash-table. It has 40+ specializations that are chosen automatically depending on “grouping key” data types.
|
Aggregation is one of the most important features of a column-oriented DBMS, and thus its implementation is one of the most heavily optimized parts of ClickHouse. By default, aggregation is done in memory using a hash-table. It has 40+ specializations that are chosen automatically depending on “grouping key” data types.
|
||||||
|
@ -32,6 +32,7 @@ The list of available `SYSTEM` statements:
|
|||||||
- [START TTL MERGES](#query_language-start-ttl-merges)
|
- [START TTL MERGES](#query_language-start-ttl-merges)
|
||||||
- [STOP MOVES](#query_language-stop-moves)
|
- [STOP MOVES](#query_language-stop-moves)
|
||||||
- [START MOVES](#query_language-start-moves)
|
- [START MOVES](#query_language-start-moves)
|
||||||
|
- [SYSTEM UNFREEZE](#query_language-system-unfreeze)
|
||||||
- [STOP FETCHES](#query_language-system-stop-fetches)
|
- [STOP FETCHES](#query_language-system-stop-fetches)
|
||||||
- [START FETCHES](#query_language-system-start-fetches)
|
- [START FETCHES](#query_language-system-start-fetches)
|
||||||
- [STOP REPLICATED SENDS](#query_language-system-start-replicated-sends)
|
- [STOP REPLICATED SENDS](#query_language-system-start-replicated-sends)
|
||||||
@ -239,6 +240,14 @@ Returns `Ok.` even if table does not exist. Returns error when database does not
|
|||||||
SYSTEM START MOVES [[db.]merge_tree_family_table_name]
|
SYSTEM START MOVES [[db.]merge_tree_family_table_name]
|
||||||
```
|
```
|
||||||
|
|
||||||
|
### SYSTEM UNFREEZE {#query_language-system-unfreeze}
|
||||||
|
|
||||||
|
Clears a frozen backup with the specified name from all disks. See more about unfreezing separate parts in [ALTER TABLE table_name UNFREEZE WITH NAME](alter/partition.md#alter_unfreeze-partition).
|
||||||
|
|
||||||
|
``` sql
|
||||||
|
SYSTEM UNFREEZE WITH NAME <backup_name>
|
||||||
|
```
|
||||||
|
|
||||||
## Managing ReplicatedMergeTree Tables
|
## Managing ReplicatedMergeTree Tables
|
||||||
|
|
||||||
ClickHouse can manage background replication related processes in [ReplicatedMergeTree](../../engines/table-engines/mergetree-family/replication.md#table_engines-replication) tables.
|
ClickHouse can manage background replication related processes in [ReplicatedMergeTree](../../engines/table-engines/mergetree-family/replication.md#table_engines-replication) tables.
|
||||||
|
@ -55,3 +55,372 @@ https://dev.mysql.com/doc/refman/8.0/en/window-function-descriptions.html
|
|||||||
https://dev.mysql.com/doc/refman/8.0/en/window-functions-usage.html
|
https://dev.mysql.com/doc/refman/8.0/en/window-functions-usage.html
|
||||||
|
|
||||||
https://dev.mysql.com/doc/refman/8.0/en/window-functions-frames.html
|
https://dev.mysql.com/doc/refman/8.0/en/window-functions-frames.html
|
||||||
|
|
||||||
|
## Syntax
|
||||||
|
|
||||||
|
```text
|
||||||
|
aggregate_function (column_name)
|
||||||
|
OVER ([PARTITION BY grouping_column] [ORDER BY sorting_column]
|
||||||
|
[ROWS or RANGE expression_to_bounds_of_frame])
|
||||||
|
```
|
||||||
|
|
||||||
|
- `PARTITION BY` - defines how to break a result set into groups.
|
||||||
|
- `ORDER BY` - defines how to order rows inside the group during calculation of aggregate_function.
|
||||||
|
- `ROWS or RANGE` - defines the bounds of a frame; aggregate_function is calculated within the frame.
|
||||||
|
|
||||||
|
```text
|
||||||
|
PARTITION
|
||||||
|
┌─────────────────┐ <-- UNBOUNDED PRECEDING (BEGINNING of the PARTITION)
|
||||||
|
│ │
|
||||||
|
│ │
|
||||||
|
│=================│ <-- N PRECEDING <─┐
|
||||||
|
│ N ROWS │ │ F
|
||||||
|
│ Before CURRENT │ │ R
|
||||||
|
│~~~~~~~~~~~~~~~~~│ <-- CURRENT ROW │ A
|
||||||
|
│ M ROWS │ │ M
|
||||||
|
│ After CURRENT │ │ E
|
||||||
|
│=================│ <-- M FOLLOWING <─┘
|
||||||
|
│ │
|
||||||
|
│ │
|
||||||
|
└─────────────────┘ <--- UNBOUNDED FOLLOWING (END of the PARTITION)
|
||||||
|
```
|
||||||
|
|
||||||
|
## Examples
|
||||||
|
|
||||||
|
```sql
|
||||||
|
CREATE TABLE wf_partition
|
||||||
|
(
|
||||||
|
`part_key` UInt64,
|
||||||
|
`value` UInt64,
`order` UInt64
|
||||||
|
)
|
||||||
|
ENGINE = Memory;
|
||||||
|
|
||||||
|
INSERT INTO wf_partition FORMAT Values
|
||||||
|
(1,1,1), (1,2,2), (1,3,3), (2,0,0), (3,0,0);
|
||||||
|
|
||||||
|
SELECT
|
||||||
|
part_key,
|
||||||
|
value,
|
||||||
|
order,
|
||||||
|
groupArray(value) OVER (PARTITION BY part_key) AS frame_values
|
||||||
|
FROM wf_partition
|
||||||
|
ORDER BY
|
||||||
|
part_key ASC,
|
||||||
|
value ASC;
|
||||||
|
|
||||||
|
┌─part_key─┬─value─┬─order─┬─frame_values─┐
|
||||||
|
│ 1 │ 1 │ 1 │ [1,2,3] │ <┐
|
||||||
|
│ 1 │ 2 │ 2 │ [1,2,3] │ │ 1st group
|
||||||
|
│ 1 │ 3 │ 3 │ [1,2,3] │ <┘
|
||||||
|
│ 2 │ 0 │ 0 │ [0] │ <- 2nd group
|
||||||
|
│ 3 │ 0 │ 0 │ [0] │ <- 3rd group
|
||||||
|
└──────────┴───────┴───────┴──────────────┘
|
||||||
|
```
|
||||||
|
|
||||||
|
```sql
|
||||||
|
CREATE TABLE wf_frame
|
||||||
|
(
|
||||||
|
`part_key` UInt64,
|
||||||
|
`value` UInt64,
|
||||||
|
`order` UInt64
|
||||||
|
)
|
||||||
|
ENGINE = Memory;
|
||||||
|
|
||||||
|
INSERT INTO wf_frame FORMAT Values
|
||||||
|
(1,1,1), (1,2,2), (1,3,3), (1,4,4), (1,5,5);
|
||||||
|
|
||||||
|
-- frame is bounded by the bounds of the partition (BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
|
||||||
|
SELECT
|
||||||
|
part_key,
|
||||||
|
value,
|
||||||
|
order,
|
||||||
|
groupArray(value) OVER (PARTITION BY part_key ORDER BY order ASC
|
||||||
|
Rows BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS frame_values
|
||||||
|
FROM wf_frame
|
||||||
|
ORDER BY
|
||||||
|
part_key ASC,
|
||||||
|
value ASC;
|
||||||
|
|
||||||
|
┌─part_key─┬─value─┬─order─┬─frame_values─┐
|
||||||
|
│ 1 │ 1 │ 1 │ [1,2,3,4,5] │
|
||||||
|
│ 1 │ 2 │ 2 │ [1,2,3,4,5] │
|
||||||
|
│ 1 │ 3 │ 3 │ [1,2,3,4,5] │
|
||||||
|
│ 1 │ 4 │ 4 │ [1,2,3,4,5] │
|
||||||
|
│ 1 │ 5 │ 5 │ [1,2,3,4,5] │
|
||||||
|
└──────────┴───────┴───────┴──────────────┘
|
||||||
|
|
||||||
|
-- short form - no bound expression, no order by
|
||||||
|
SELECT
|
||||||
|
part_key,
|
||||||
|
value,
|
||||||
|
order,
|
||||||
|
groupArray(value) OVER (PARTITION BY part_key) AS frame_values
|
||||||
|
FROM wf_frame
|
||||||
|
ORDER BY
|
||||||
|
part_key ASC,
|
||||||
|
value ASC;
|
||||||
|
┌─part_key─┬─value─┬─order─┬─frame_values─┐
|
||||||
|
│ 1 │ 1 │ 1 │ [1,2,3,4,5] │
|
||||||
|
│ 1 │ 2 │ 2 │ [1,2,3,4,5] │
|
||||||
|
│ 1 │ 3 │ 3 │ [1,2,3,4,5] │
|
||||||
|
│ 1 │ 4 │ 4 │ [1,2,3,4,5] │
|
||||||
|
│ 1 │ 5 │ 5 │ [1,2,3,4,5] │
|
||||||
|
└──────────┴───────┴───────┴──────────────┘
|
||||||
|
|
||||||
|
-- frame is bounded by the beginning of a partition and the current row
|
||||||
|
SELECT
|
||||||
|
part_key,
|
||||||
|
value,
|
||||||
|
order,
|
||||||
|
groupArray(value) OVER (PARTITION BY part_key ORDER BY order ASC
|
||||||
|
Rows BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS frame_values
|
||||||
|
FROM wf_frame
|
||||||
|
ORDER BY
|
||||||
|
part_key ASC,
|
||||||
|
value ASC;
|
||||||
|
|
||||||
|
┌─part_key─┬─value─┬─order─┬─frame_values─┐
|
||||||
|
│ 1 │ 1 │ 1 │ [1] │
|
||||||
|
│ 1 │ 2 │ 2 │ [1,2] │
|
||||||
|
│ 1 │ 3 │ 3 │ [1,2,3] │
|
||||||
|
│ 1 │ 4 │ 4 │ [1,2,3,4] │
|
||||||
|
│ 1 │ 5 │ 5 │ [1,2,3,4,5] │
|
||||||
|
└──────────┴───────┴───────┴──────────────┘
|
||||||
|
|
||||||
|
-- short form (frame is bounded by the beginning of a partition and the current row)
|
||||||
|
SELECT
|
||||||
|
part_key,
|
||||||
|
value,
|
||||||
|
order,
|
||||||
|
groupArray(value) OVER (PARTITION BY part_key ORDER BY order ASC) AS frame_values
|
||||||
|
FROM wf_frame
|
||||||
|
ORDER BY
|
||||||
|
part_key ASC,
|
||||||
|
value ASC;
|
||||||
|
┌─part_key─┬─value─┬─order─┬─frame_values─┐
|
||||||
|
│ 1 │ 1 │ 1 │ [1] │
|
||||||
|
│ 1 │ 2 │ 2 │ [1,2] │
|
||||||
|
│ 1 │ 3 │ 3 │ [1,2,3] │
|
||||||
|
│ 1 │ 4 │ 4 │ [1,2,3,4] │
|
||||||
|
│ 1 │ 5 │ 5 │ [1,2,3,4,5] │
|
||||||
|
└──────────┴───────┴───────┴──────────────┘
|
||||||
|
|
||||||
|
-- frame is bounded by the beginning of a partition and the current row, but order is backward
|
||||||
|
SELECT
|
||||||
|
part_key,
|
||||||
|
value,
|
||||||
|
order,
|
||||||
|
groupArray(value) OVER (PARTITION BY part_key ORDER BY order DESC) AS frame_values
|
||||||
|
FROM wf_frame
|
||||||
|
ORDER BY
|
||||||
|
part_key ASC,
|
||||||
|
value ASC;
|
||||||
|
┌─part_key─┬─value─┬─order─┬─frame_values─┐
|
||||||
|
│ 1 │ 1 │ 1 │ [5,4,3,2,1] │
|
||||||
|
│ 1 │ 2 │ 2 │ [5,4,3,2] │
|
||||||
|
│ 1 │ 3 │ 3 │ [5,4,3] │
|
||||||
|
│ 1 │ 4 │ 4 │ [5,4] │
|
||||||
|
│ 1 │ 5 │ 5 │ [5] │
|
||||||
|
└──────────┴───────┴───────┴──────────────┘
|
||||||
|
|
||||||
|
-- sliding frame - Rows BETWEEN 1 PRECEDING AND CURRENT ROW
|
||||||
|
SELECT
|
||||||
|
part_key,
|
||||||
|
value,
|
||||||
|
order,
|
||||||
|
groupArray(value) OVER (PARTITION BY part_key ORDER BY order ASC
|
||||||
|
Rows BETWEEN 1 PRECEDING AND CURRENT ROW) AS frame_values
|
||||||
|
FROM wf_frame
|
||||||
|
ORDER BY
|
||||||
|
part_key ASC,
|
||||||
|
value ASC;
|
||||||
|
|
||||||
|
┌─part_key─┬─value─┬─order─┬─frame_values─┐
|
||||||
|
│ 1 │ 1 │ 1 │ [1] │
|
||||||
|
│ 1 │ 2 │ 2 │ [1,2] │
|
||||||
|
│ 1 │ 3 │ 3 │ [2,3] │
|
||||||
|
│ 1 │ 4 │ 4 │ [3,4] │
|
||||||
|
│ 1 │ 5 │ 5 │ [4,5] │
|
||||||
|
└──────────┴───────┴───────┴──────────────┘
|
||||||
|
|
||||||
|
-- sliding frame - Rows BETWEEN 1 PRECEDING AND UNBOUNDED FOLLOWING
|
||||||
|
SELECT
|
||||||
|
part_key,
|
||||||
|
value,
|
||||||
|
order,
|
||||||
|
groupArray(value) OVER (PARTITION BY part_key ORDER BY order ASC
|
||||||
|
Rows BETWEEN 1 PRECEDING AND UNBOUNDED FOLLOWING) AS frame_values
|
||||||
|
FROM wf_frame
|
||||||
|
ORDER BY
|
||||||
|
part_key ASC,
|
||||||
|
value ASC;
|
||||||
|
┌─part_key─┬─value─┬─order─┬─frame_values─┐
|
||||||
|
│ 1 │ 1 │ 1 │ [1,2,3,4,5] │
|
||||||
|
│ 1 │ 2 │ 2 │ [1,2,3,4,5] │
|
||||||
|
│ 1 │ 3 │ 3 │ [2,3,4,5] │
|
||||||
|
│ 1 │ 4 │ 4 │ [3,4,5] │
|
||||||
|
│ 1 │ 5 │ 5 │ [4,5] │
|
||||||
|
└──────────┴───────┴───────┴──────────────┘
|
||||||
|
```
|
||||||
|
|
||||||
|
## Real world examples
|
||||||
|
|
||||||
|
### Maximum/total salary per department.
|
||||||
|
|
||||||
|
```sql
|
||||||
|
CREATE TABLE employees
|
||||||
|
(
|
||||||
|
`department` String,
|
||||||
|
`employee_name` String,
|
||||||
|
`salary` Float
|
||||||
|
)
|
||||||
|
ENGINE = Memory;
|
||||||
|
|
||||||
|
INSERT INTO employees FORMAT Values
|
||||||
|
('Finance', 'John', 200),
|
||||||
|
('Finance', 'Joan', 210),
|
||||||
|
('Finance', 'Jean', 505),
|
||||||
|
('IT', 'Tim', 200),
|
||||||
|
('IT', 'Anna', 300),
|
||||||
|
('IT', 'Elen', 500);
|
||||||
|
|
||||||
|
SELECT
|
||||||
|
department,
|
||||||
|
employee_name AS emp,
|
||||||
|
salary,
|
||||||
|
max_salary_per_dep,
|
||||||
|
total_salary_per_dep,
|
||||||
|
round((salary / total_salary_per_dep) * 100, 2) AS `share_per_dep(%)`
|
||||||
|
FROM
|
||||||
|
(
|
||||||
|
SELECT
|
||||||
|
department,
|
||||||
|
employee_name,
|
||||||
|
salary,
|
||||||
|
max(salary) OVER wndw AS max_salary_per_dep,
|
||||||
|
sum(salary) OVER wndw AS total_salary_per_dep
|
||||||
|
FROM employees
|
||||||
|
WINDOW wndw AS (PARTITION BY department
|
||||||
|
rows BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
|
||||||
|
ORDER BY
|
||||||
|
department ASC,
|
||||||
|
employee_name ASC
|
||||||
|
);
|
||||||
|
|
||||||
|
┌─department─┬─emp──┬─salary─┬─max_salary_per_dep─┬─total_salary_per_dep─┬─share_per_dep(%)─┐
|
||||||
|
│ Finance │ Jean │ 505 │ 505 │ 915 │ 55.19 │
|
||||||
|
│ Finance │ Joan │ 210 │ 505 │ 915 │ 22.95 │
|
||||||
|
│ Finance │ John │ 200 │ 505 │ 915 │ 21.86 │
|
||||||
|
│ IT │ Anna │ 300 │ 500 │ 1000 │ 30 │
|
||||||
|
│ IT │ Elen │ 500 │ 500 │ 1000 │ 50 │
|
||||||
|
│ IT │ Tim │ 200 │ 500 │ 1000 │ 20 │
|
||||||
|
└────────────┴──────┴────────┴────────────────────┴──────────────────────┴──────────────────┘
|
||||||
|
```
|
||||||
|
|
||||||
|
### Cumulative sum.
|
||||||
|
|
||||||
|
```sql
|
||||||
|
CREATE TABLE warehouse
|
||||||
|
(
|
||||||
|
`item` String,
|
||||||
|
`ts` DateTime,
|
||||||
|
`value` Float
|
||||||
|
)
|
||||||
|
ENGINE = Memory;
|
||||||
|
|
||||||
|
INSERT INTO warehouse VALUES
|
||||||
|
('sku38', '2020-01-01', 9),
|
||||||
|
('sku38', '2020-02-01', 1),
|
||||||
|
('sku38', '2020-03-01', -4),
|
||||||
|
('sku1', '2020-01-01', 1),
|
||||||
|
('sku1', '2020-02-01', 1),
|
||||||
|
('sku1', '2020-03-01', 1);
|
||||||
|
|
||||||
|
SELECT
|
||||||
|
item,
|
||||||
|
ts,
|
||||||
|
value,
|
||||||
|
sum(value) OVER (PARTITION BY item ORDER BY ts ASC) AS stock_balance
|
||||||
|
FROM warehouse
|
||||||
|
ORDER BY
|
||||||
|
item ASC,
|
||||||
|
ts ASC;
|
||||||
|
|
||||||
|
┌─item──┬──────────────────ts─┬─value─┬─stock_balance─┐
|
||||||
|
│ sku1 │ 2020-01-01 00:00:00 │ 1 │ 1 │
|
||||||
|
│ sku1 │ 2020-02-01 00:00:00 │ 1 │ 2 │
|
||||||
|
│ sku1 │ 2020-03-01 00:00:00 │ 1 │ 3 │
|
||||||
|
│ sku38 │ 2020-01-01 00:00:00 │ 9 │ 9 │
|
||||||
|
│ sku38 │ 2020-02-01 00:00:00 │ 1 │ 10 │
|
||||||
|
│ sku38 │ 2020-03-01 00:00:00 │ -4 │ 6 │
|
||||||
|
└───────┴─────────────────────┴───────┴───────────────┘
|
||||||
|
```
|
||||||
|
|
||||||
|
### Moving / Sliding Average (per 3 rows)
|
||||||
|
|
||||||
|
```sql
|
||||||
|
CREATE TABLE sensors
|
||||||
|
(
|
||||||
|
`metric` String,
|
||||||
|
`ts` DateTime,
|
||||||
|
`value` Float
|
||||||
|
)
|
||||||
|
ENGINE = Memory;
|
||||||
|
|
||||||
|
INSERT INTO sensors VALUES ('cpu_temp', '2020-01-01 00:00:00', 87),
|
||||||
|
('cpu_temp', '2020-01-01 00:00:01', 77),
|
||||||
|
('cpu_temp', '2020-01-01 00:00:02', 93),
|
||||||
|
('cpu_temp', '2020-01-01 00:00:03', 87),
|
||||||
|
('cpu_temp', '2020-01-01 00:00:04', 87),
|
||||||
|
('cpu_temp', '2020-01-01 00:00:05', 87),
|
||||||
|
('cpu_temp', '2020-01-01 00:00:06', 87),
|
||||||
|
('cpu_temp', '2020-01-01 00:00:07', 87);
|
||||||
|
SELECT
|
||||||
|
metric,
|
||||||
|
ts,
|
||||||
|
value,
|
||||||
|
avg(value) OVER
|
||||||
|
(PARTITION BY metric ORDER BY ts ASC Rows BETWEEN 2 PRECEDING AND CURRENT ROW)
|
||||||
|
AS moving_avg_temp
|
||||||
|
FROM sensors
|
||||||
|
ORDER BY
|
||||||
|
metric ASC,
|
||||||
|
ts ASC;
|
||||||
|
|
||||||
|
┌─metric───┬──────────────────ts─┬─value─┬───moving_avg_temp─┐
|
||||||
|
│ cpu_temp │ 2020-01-01 00:00:00 │ 87 │ 87 │
|
||||||
|
│ cpu_temp │ 2020-01-01 00:00:01 │ 77 │ 82 │
|
||||||
|
│ cpu_temp │ 2020-01-01 00:00:02 │ 93 │ 85.66666666666667 │
|
||||||
|
│ cpu_temp │ 2020-01-01 00:00:03 │ 87 │ 85.66666666666667 │
|
||||||
|
│ cpu_temp │ 2020-01-01 00:00:04 │ 87 │ 89 │
|
||||||
|
│ cpu_temp │ 2020-01-01 00:00:05 │ 87 │ 87 │
|
||||||
|
│ cpu_temp │ 2020-01-01 00:00:06 │ 87 │ 87 │
|
||||||
|
│ cpu_temp │ 2020-01-01 00:00:07 │ 87 │ 87 │
|
||||||
|
└──────────┴─────────────────────┴───────┴───────────────────┘
|
||||||
|
```
|
||||||
|
|
||||||
|
### Moving / Sliding Average (per 10 seconds)
|
||||||
|
|
||||||
|
```sql
|
||||||
|
SELECT
|
||||||
|
metric,
|
||||||
|
ts,
|
||||||
|
value,
|
||||||
|
avg(value) OVER (PARTITION BY metric ORDER BY ts
|
||||||
|
Range BETWEEN 10 PRECEDING AND CURRENT ROW) AS moving_avg_10_seconds_temp
|
||||||
|
FROM sensors
|
||||||
|
ORDER BY
|
||||||
|
metric ASC,
|
||||||
|
ts ASC;
|
||||||
|
|
||||||
|
┌─metric───┬──────────────────ts─┬─value─┬─moving_avg_10_seconds_temp─┐
|
||||||
|
│ cpu_temp │ 2020-01-01 00:00:00 │ 87 │ 87 │
|
||||||
|
│ cpu_temp │ 2020-01-01 00:01:10 │ 77 │ 77 │
|
||||||
|
│ cpu_temp │ 2020-01-01 00:02:20 │ 93 │ 93 │
|
||||||
|
│ cpu_temp │ 2020-01-01 00:03:30 │ 87 │ 87 │
|
||||||
|
│ cpu_temp │ 2020-01-01 00:04:40 │ 87 │ 87 │
|
||||||
|
│ cpu_temp │ 2020-01-01 00:05:50 │ 87 │ 87 │
|
||||||
|
│ cpu_temp │ 2020-01-01 00:06:00 │ 87 │ 87 │
|
||||||
|
│ cpu_temp │ 2020-01-01 00:07:10 │ 87 │ 87 │
|
||||||
|
└──────────┴─────────────────────┴───────┴────────────────────────────┘
|
||||||
|
```
|
||||||
|
@ -39,6 +39,6 @@ Question candidates:
|
|||||||
- How to kill a process (query) in ClickHouse?
|
- How to kill a process (query) in ClickHouse?
|
||||||
- How to implement pivot (like in pandas)?
|
- How to implement pivot (like in pandas)?
|
||||||
- How to remove the default ClickHouse user through users.d?
|
- How to remove the default ClickHouse user through users.d?
|
||||||
- Importing MySQL dump to Clickhouse
|
- Importing MySQL dump to ClickHouse
|
||||||
- Window function workarounds (row\_number, lag/lead, running diff/sum/average)
|
- Window function workarounds (row\_number, lag/lead, running diff/sum/average)
|
||||||
##}
|
##}
|
||||||
|
@ -22,9 +22,9 @@ ClickHouse позволяет автоматически удалять данн
|
|||||||
|
|
||||||
ClickHouse does not delete data in real time the way an [OLTP](https://en.wikipedia.org/wiki/Online_transaction_processing) DBMS does. The closest thing to such deletion is mutations. They are executed with `ALTER ... DELETE` or `ALTER ... UPDATE` queries. Unlike ordinary `DELETE` and `UPDATE` queries, mutations are executed asynchronously, in batches, not in real time. Otherwise, after the words `ALTER TABLE`, the syntax of ordinary queries and mutations is the same.
|
ClickHouse does not delete data in real time the way an [OLTP](https://en.wikipedia.org/wiki/Online_transaction_processing) DBMS does. The closest thing to such deletion is mutations. They are executed with `ALTER ... DELETE` or `ALTER ... UPDATE` queries. Unlike ordinary `DELETE` and `UPDATE` queries, mutations are executed asynchronously, in batches, not in real time. Otherwise, after the words `ALTER TABLE`, the syntax of ordinary queries and mutations is the same.
|
||||||
|
|
||||||
`ALTER DELETE` can be used to flexibly delete outdated data. If you need to do this regularly, the only drawback of this approach is that you will need an external system to trigger the query. There may also be some performance concerns, because mutations rewrite whole data parts if they contain even a single row that needs to be deleted.
|
`ALTER DELETE` can be used to flexibly delete outdated data. If you need to do this regularly, the main drawback of this approach is that you will need an external system to trigger the query. There may also be some performance concerns, because mutations rewrite whole data parts if they contain even a single row that needs to be deleted.
|
||||||
|
|
||||||
This is the most common approach to making your ClickHouse-based system comply with the [GDPR](https://gdpr-info.eu) principles.
|
This is the most common approach to making your ClickHouse-based system comply with the [GDPR](https://gdpr-info.eu) principles.
|
||||||
|
|
||||||
For details, see the [Mutations](../../sql-reference/statements/alter/index.md#alter-mutations) section.
|
For details, see the [Mutations](../../sql-reference/statements/alter/index.md#alter-mutations) section.
|
||||||
|
|
||||||
|
@ -41,6 +41,8 @@ sidebar_label: "Клиентские библиотеки от сторонни
|
|||||||
- [ClickHouse (Ruby)](https://github.com/shlima/click_house)
|
- [ClickHouse (Ruby)](https://github.com/shlima/click_house)
|
||||||
- [clickhouse-activerecord](https://github.com/PNixx/clickhouse-activerecord)
|
- [clickhouse-activerecord](https://github.com/PNixx/clickhouse-activerecord)
|
||||||
- Rust
|
- Rust
|
||||||
|
- [clickhouse.rs](https://github.com/loyd/clickhouse.rs)
|
||||||
|
- [clickhouse-rs](https://github.com/suharev7/clickhouse-rs)
|
||||||
- [Klickhouse](https://github.com/Protryon/klickhouse)
|
- [Klickhouse](https://github.com/Protryon/klickhouse)
|
||||||
- R
|
- R
|
||||||
- [clickhouse-r](https://github.com/hannesmuehleisen/clickhouse-r)
|
- [clickhouse-r](https://github.com/hannesmuehleisen/clickhouse-r)
|
||||||
|
@ -55,5 +55,5 @@ ORDER BY id
|
|||||||
|
|
||||||
## See also
|
## See also
|
||||||
|
|
||||||
- [Reducing Clickhouse Storage Cost with the Low Cardinality Type – Lessons from an Instana Engineer](https://www.instana.com/blog/reducing-clickhouse-storage-cost-with-the-low-cardinality-type-lessons-from-an-instana-engineer/).
|
- [Reducing ClickHouse Storage Cost with the Low Cardinality Type – Lessons from an Instana Engineer](https://www.instana.com/blog/reducing-clickhouse-storage-cost-with-the-low-cardinality-type-lessons-from-an-instana-engineer/).
|
||||||
- [String Optimization (video presentation in Russian)](https://youtu.be/rqf-ILRgBdY?list=PL0Z2YDlm0b3iwXCpEFiOOYmwXzVmjJfEt). [Slides in English](https://github.com/ClickHouse/clickhouse-presentations/raw/master/meetup19/string_optimization.pdf).
|
- [String Optimization (video presentation in Russian)](https://youtu.be/rqf-ILRgBdY?list=PL0Z2YDlm0b3iwXCpEFiOOYmwXzVmjJfEt). [Slides in English](https://github.com/ClickHouse/clickhouse-presentations/raw/master/meetup19/string_optimization.pdf).
|
||||||
|
@ -11,7 +11,7 @@ sidebar_label: "Encryption functions"
|
|||||||
|
|
||||||
The initialization vector is always 16 bytes (extra bytes are ignored).
|
The initialization vector is always 16 bytes (extra bytes are ignored).
|
||||||
|
|
||||||
Note that these functions were slow before Clickhouse version 21.1.
|
Note that these functions were slow before ClickHouse version 21.1.
|
||||||
|
|
||||||
## encrypt {#encrypt}
|
## encrypt {#encrypt}
|
||||||
|
|
||||||
@ -19,11 +19,10 @@ sidebar_label: "Encryption functions"
|
|||||||
|
|
||||||
- aes-128-ecb, aes-192-ecb, aes-256-ecb
|
- aes-128-ecb, aes-192-ecb, aes-256-ecb
|
||||||
- aes-128-cbc, aes-192-cbc, aes-256-cbc
|
- aes-128-cbc, aes-192-cbc, aes-256-cbc
|
||||||
- aes-128-cfb1, aes-192-cfb1, aes-256-cfb1
|
- aes-128-cfb128
|
||||||
- aes-128-cfb8, aes-192-cfb8, aes-256-cfb8
|
|
||||||
- aes-128-cfb128, aes-192-cfb128, aes-256-cfb128
|
|
||||||
- aes-128-ofb, aes-192-ofb, aes-256-ofb
|
- aes-128-ofb, aes-192-ofb, aes-256-ofb
|
||||||
- aes-128-gcm, aes-192-gcm, aes-256-gcm
|
- aes-128-gcm, aes-192-gcm, aes-256-gcm
|
||||||
|
- aes-128-ctr, aes-192-ctr, aes-256-ctr
|
||||||
|
|
||||||
**Syntax**
|
**Syntax**
|
||||||
|
|
||||||
@ -63,9 +62,9 @@ ENGINE = Memory;
|
|||||||
Query:
|
Query:
|
||||||
|
|
||||||
``` sql
|
``` sql
|
||||||
INSERT INTO encryption_test VALUES('aes-256-cfb128 no IV', encrypt('aes-256-cfb128', 'Secret', '12345678910121314151617181920212')),\
|
INSERT INTO encryption_test VALUES('aes-256-ofb no IV', encrypt('aes-256-ofb', 'Secret', '12345678910121314151617181920212')),\
|
||||||
('aes-256-cfb128 no IV, different key', encrypt('aes-256-cfb128', 'Secret', 'keykeykeykeykeykeykeykeykeykeyke')),\
|
('aes-256-ofb no IV, different key', encrypt('aes-256-ofb', 'Secret', 'keykeykeykeykeykeykeykeykeykeyke')),\
|
||||||
('aes-256-cfb128 with IV', encrypt('aes-256-cfb128', 'Secret', '12345678910121314151617181920212', 'iviviviviviviviv')),\
|
('aes-256-ofb with IV', encrypt('aes-256-ofb', 'Secret', '12345678910121314151617181920212', 'iviviviviviviviv')),\
|
||||||
('aes-256-cbc no IV', encrypt('aes-256-cbc', 'Secret', '12345678910121314151617181920212'));
|
('aes-256-cbc no IV', encrypt('aes-256-cbc', 'Secret', '12345678910121314151617181920212'));
|
||||||
```
|
```
|
||||||
|
|
||||||
@ -78,12 +77,12 @@ SELECT comment, hex(secret) FROM encryption_test;
|
|||||||
Result:
|
Result:
|
||||||
|
|
||||||
``` text
|
``` text
|
||||||
┌─comment─────────────────────────────┬─hex(secret)──────────────────────┐
|
┌─comment──────────────────────────┬─hex(secret)──────────────────────┐
|
||||||
│ aes-256-cfb128 no IV │ B4972BDC4459 │
|
│ aes-256-ofb no IV │ B4972BDC4459 │
|
||||||
│ aes-256-cfb128 no IV, different key │ 2FF57C092DC9 │
|
│ aes-256-ofb no IV, different key │ 2FF57C092DC9 │
|
||||||
│ aes-256-cfb128 with IV │ 5E6CB398F653 │
|
│ aes-256-ofb with IV │ 5E6CB398F653 │
|
||||||
│ aes-256-cbc no IV │ 1BC0629A92450D9E73A00E7D02CF4142 │
|
│ aes-256-cbc no IV │ 1BC0629A92450D9E73A00E7D02CF4142 │
|
||||||
└─────────────────────────────────────┴──────────────────────────────────┘
|
└──────────────────────────────────┴──────────────────────────────────┘
|
||||||
```
|
```
|
||||||
|
|
||||||
Example in `-gcm` mode:
|
Example in `-gcm` mode:
|
||||||
@ -116,9 +115,7 @@ SELECT comment, hex(secret) FROM encryption_test WHERE comment LIKE '%gcm%';
|
|||||||
|
|
||||||
- aes-128-ecb, aes-192-ecb, aes-256-ecb
|
- aes-128-ecb, aes-192-ecb, aes-256-ecb
|
||||||
- aes-128-cbc, aes-192-cbc, aes-256-cbc
|
- aes-128-cbc, aes-192-cbc, aes-256-cbc
|
||||||
- aes-128-cfb1, aes-192-cfb1, aes-256-cfb1
|
- aes-128-cfb128
|
||||||
- aes-128-cfb8, aes-192-cfb8, aes-256-cfb8
|
|
||||||
- aes-128-cfb128, aes-192-cfb128, aes-256-cfb128
|
|
||||||
- aes-128-ofb, aes-192-ofb, aes-256-ofb
|
- aes-128-ofb, aes-192-ofb, aes-256-ofb
|
||||||
|
|
||||||
**Syntax**
|
**Syntax**
|
||||||
@ -145,7 +142,7 @@ aes_encrypt_mysql('mode', 'plaintext', 'key' [, iv])
|
|||||||
Query:
|
Query:
|
||||||
|
|
||||||
``` sql
|
``` sql
|
||||||
SELECT encrypt('aes-256-cfb128', 'Secret', '12345678910121314151617181920212', 'iviviviviviviviv') = aes_encrypt_mysql('aes-256-cfb128', 'Secret', '12345678910121314151617181920212', 'iviviviviviviviv') AS ciphertexts_equal;
|
SELECT encrypt('aes-256-ofb', 'Secret', '12345678910121314151617181920212', 'iviviviviviviviv') = aes_encrypt_mysql('aes-256-ofb', 'Secret', '12345678910121314151617181920212', 'iviviviviviviviv') AS ciphertexts_equal;
|
||||||
```
|
```
|
||||||
|
|
||||||
Result:
|
Result:
|
||||||
@ -161,14 +158,14 @@ SELECT encrypt('aes-256-cfb128', 'Secret', '12345678910121314151617181920212', '
|
|||||||
Query:
|
Query:
|
||||||
|
|
||||||
``` sql
|
``` sql
|
||||||
SELECT encrypt('aes-256-cfb128', 'Secret', '123456789101213141516171819202122', 'iviviviviviviviv123');
|
SELECT encrypt('aes-256-ofb', 'Secret', '123456789101213141516171819202122', 'iviviviviviviviv123');
|
||||||
```
|
```
|
||||||
|
|
||||||
Result:
|
Result:
|
||||||
|
|
||||||
``` text
|
``` text
|
||||||
Received exception from server (version 21.1.2):
|
Received exception from server (version 21.1.2):
|
||||||
Code: 36. DB::Exception: Received from localhost:9000. DB::Exception: Invalid key size: 33 expected 32: While processing encrypt('aes-256-cfb128', 'Secret', '123456789101213141516171819202122', 'iviviviviviviviv123').
|
Code: 36. DB::Exception: Received from localhost:9000. DB::Exception: Invalid key size: 33 expected 32: While processing encrypt('aes-256-ofb', 'Secret', '123456789101213141516171819202122', 'iviviviviviviviv123').
|
||||||
```
|
```
|
||||||
|
|
||||||
However, in the same situation the `aes_encrypt_mysql` function returns a result that MySQL can process:
|
However, in the same situation the `aes_encrypt_mysql` function returns a result that MySQL can process:
|
||||||
@ -176,7 +173,7 @@ Code: 36. DB::Exception: Received from localhost:9000. DB::Exception: Invalid ke
|
|||||||
Query:
|
Query:
|
||||||
|
|
||||||
``` sql
|
``` sql
|
||||||
SELECT hex(aes_encrypt_mysql('aes-256-cfb128', 'Secret', '123456789101213141516171819202122', 'iviviviviviviviv123')) AS ciphertext;
|
SELECT hex(aes_encrypt_mysql('aes-256-ofb', 'Secret', '123456789101213141516171819202122', 'iviviviviviviviv123')) AS ciphertext;
|
||||||
```
|
```
|
||||||
|
|
||||||
Result:
|
Result:
|
||||||
@ -192,7 +189,7 @@ SELECT hex(aes_encrypt_mysql('aes-256-cfb128', 'Secret', '1234567891012131415161
|
|||||||
Query:
|
Query:
|
||||||
|
|
||||||
``` sql
|
``` sql
|
||||||
SELECT hex(aes_encrypt_mysql('aes-256-cfb128', 'Secret', '123456789101213141516171819202122', 'iviviviviviviviv123456')) AS ciphertext
|
SELECT hex(aes_encrypt_mysql('aes-256-ofb', 'Secret', '123456789101213141516171819202122', 'iviviviviviviviv123456')) AS ciphertext
|
||||||
```
|
```
|
||||||
|
|
||||||
Result:
|
Result:
|
||||||
@ -206,7 +203,7 @@ SELECT hex(aes_encrypt_mysql('aes-256-cfb128', 'Secret', '1234567891012131415161
|
|||||||
This matches the result MySQL returns for the same input values:
|
This matches the result MySQL returns for the same input values:
|
||||||
|
|
||||||
``` sql
|
``` sql
|
||||||
mysql> SET block_encryption_mode='aes-256-cfb128';
|
mysql> SET block_encryption_mode='aes-256-ofb';
|
||||||
Query OK, 0 rows affected (0.00 sec)
|
Query OK, 0 rows affected (0.00 sec)
|
||||||
|
|
||||||
mysql> SELECT aes_encrypt('Secret', '123456789101213141516171819202122', 'iviviviviviviviv123456') as ciphertext;
|
mysql> SELECT aes_encrypt('Secret', '123456789101213141516171819202122', 'iviviviviviviviv123456') as ciphertext;
|
||||||
@ -224,11 +221,10 @@ mysql> SELECT aes_encrypt('Secret', '123456789101213141516171819202122', 'iviviv
|
|||||||
|
|
||||||
- aes-128-ecb, aes-192-ecb, aes-256-ecb
|
- aes-128-ecb, aes-192-ecb, aes-256-ecb
|
||||||
- aes-128-cbc, aes-192-cbc, aes-256-cbc
|
- aes-128-cbc, aes-192-cbc, aes-256-cbc
|
||||||
- aes-128-cfb1, aes-192-cfb1, aes-256-cfb1
|
- aes-128-cfb128
|
||||||
- aes-128-cfb8, aes-192-cfb8, aes-256-cfb8
|
|
||||||
- aes-128-cfb128, aes-192-cfb128, aes-256-cfb128
|
|
||||||
- aes-128-ofb, aes-192-ofb, aes-256-ofb
|
- aes-128-ofb, aes-192-ofb, aes-256-ofb
|
||||||
- aes-128-gcm, aes-192-gcm, aes-256-gcm
|
- aes-128-gcm, aes-192-gcm, aes-256-gcm
|
||||||
|
- aes-128-ctr, aes-192-ctr, aes-256-ctr
|
||||||
|
|
||||||
**Синтаксис**
|
**Синтаксис**
|
||||||
|
|
||||||
@ -265,12 +261,12 @@ SELECT comment, hex(secret) FROM encryption_test;
|
|||||||
│ aes-256-gcm │ A8A3CCBC6426CFEEB60E4EAE03D3E94204C1B09E0254 │
|
│ aes-256-gcm │ A8A3CCBC6426CFEEB60E4EAE03D3E94204C1B09E0254 │
|
||||||
│ aes-256-gcm with AAD │ A8A3CCBC6426D9A1017A0A932322F1852260A4AD6837 │
|
│ aes-256-gcm with AAD │ A8A3CCBC6426D9A1017A0A932322F1852260A4AD6837 │
|
||||||
└──────────────────────┴──────────────────────────────────────────────┘
|
└──────────────────────┴──────────────────────────────────────────────┘
|
||||||
┌─comment─────────────────────────────┬─hex(secret)──────────────────────┐
|
┌─comment──────────────────────────┬─hex(secret)──────────────────────┐
|
||||||
│ aes-256-cfb128 no IV │ B4972BDC4459 │
|
│ aes-256-ofb no IV │ B4972BDC4459 │
|
||||||
│ aes-256-cfb128 no IV, different key │ 2FF57C092DC9 │
|
│ aes-256-ofb no IV, different key │ 2FF57C092DC9 │
|
||||||
│ aes-256-cfb128 with IV │ 5E6CB398F653 │
|
│ aes-256-ofb with IV │ 5E6CB398F653 │
|
||||||
│ aes-256-cbc no IV │ 1BC0629A92450D9E73A00E7D02CF4142 │
|
│ aes-256-cbc no IV │ 1BC0629A92450D9E73A00E7D02CF4142 │
|
||||||
└─────────────────────────────────────┴──────────────────────────────────┘
|
└──────────────────────────────────┴──────────────────────────────────┘
|
||||||
```
|
```
|
||||||
|
|
||||||
Теперь попытаемся расшифровать эти данные:
|
Теперь попытаемся расшифровать эти данные:
|
||||||
@ -278,19 +274,25 @@ SELECT comment, hex(secret) FROM encryption_test;
|
|||||||
Запрос:
|
Запрос:
|
||||||
|
|
||||||
``` sql
|
``` sql
|
||||||
SELECT comment, decrypt('aes-256-cfb128', secret, '12345678910121314151617181920212') as plaintext FROM encryption_test;
|
SELECT comment, decrypt('aes-256-ofb', secret, '12345678910121314151617181920212') as plaintext FROM encryption_test;
|
||||||
```
|
```
|
||||||
|
|
||||||
Результат:
|
Результат:
|
||||||
|
|
||||||
``` text
|
``` text
|
||||||
┌─comment─────────────────────────────┬─plaintext─┐
|
┌─comment──────────────┬─plaintext──┐
|
||||||
│ aes-256-cfb128 no IV │ Secret │
|
│ aes-256-gcm │ OQ<4F>E
|
||||||
│ aes-256-cfb128 no IV, different key │ <20>4<EFBFBD>
|
<20>t<EFBFBD>7T<37>\<5C><><EFBFBD>\<5C> │
|
||||||
<20> │
|
│ aes-256-gcm with AAD │ OQ<4F>E
|
||||||
│ aes-256-cfb128 with IV │ <20><><EFBFBD>6<EFBFBD>~ │
|
<20>\<5C><>si<73><69><EFBFBD><EFBFBD>;<3B>o<EFBFBD><6F> │
|
||||||
│aes-256-cbc no IV │ <20>2*4<>h3c<33>4w<34><77>@
|
└──────────────────────┴────────────┘
|
||||||
└─────────────────────────────────────┴───────────┘
|
┌─comment──────────────────────────┬─plaintext─┐
|
||||||
|
│ aes-256-ofb no IV │ Secret │
|
||||||
|
│ aes-256-ofb no IV, different key │ <20>4<EFBFBD>
|
||||||
|
<20> │
|
||||||
|
│ aes-256-ofb with IV │ <20><><EFBFBD>6<EFBFBD>~ │
|
||||||
|
│aes-256-cbc no IV │ <20>2*4<>h3c<33>4w<34><77>@
|
||||||
|
└──────────────────────────────────┴───────────┘
|
||||||
```
|
```
|
||||||
|
|
||||||
Обратите внимание, что только часть данных была расшифрована верно. Оставшаяся часть расшифрована некорректно, так как при шифровании использовались другие значения `mode`, `key`, или `iv`.
|
Обратите внимание, что только часть данных была расшифрована верно. Оставшаяся часть расшифрована некорректно, так как при шифровании использовались другие значения `mode`, `key`, или `iv`.
|
||||||
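For contrast, a minimal sketch (not part of the diff) of decrypting one of the rows above with the mode, key, and IV it was actually encrypted with, assuming the `encryption_test` table from the surrounding examples and that the row was inserted with the IV `'iviviviviviviviv'`:

``` sql
-- Decrypting with the matching mode/key/IV recovers the plaintext;
-- the gcm and cbc rows would still come out as gibberish with these parameters.
SELECT comment,
       decrypt('aes-256-ofb', secret, '12345678910121314151617181920212', 'iviviviviviviviv') AS plaintext
FROM encryption_test
WHERE comment = 'aes-256-ofb with IV';
```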
@@ -305,9 +307,7 @@ SELECT comment, decrypt('aes-256-cfb128', secret, '12345678910121314151617181920

 - aes-128-ecb, aes-192-ecb, aes-256-ecb
 - aes-128-cbc, aes-192-cbc, aes-256-cbc
-- aes-128-cfb1, aes-192-cfb1, aes-256-cfb1
-- aes-128-cfb8, aes-192-cfb8, aes-256-cfb8
-- aes-128-cfb128, aes-192-cfb128, aes-256-cfb128
+- aes-128-cfb128
 - aes-128-ofb, aes-192-ofb, aes-256-ofb

 **Syntax**

@@ -333,7 +333,7 @@ aes_decrypt_mysql('mode', 'ciphertext', 'key' [, iv])


 ``` sql
-mysql> SET block_encryption_mode='aes-256-cfb128';
+mysql> SET block_encryption_mode='aes-256-ofb';
 Query OK, 0 rows affected (0.00 sec)

 mysql> SELECT aes_encrypt('Secret', '123456789101213141516171819202122', 'iviviviviviviviv123456') as ciphertext;

@@ -348,7 +348,7 @@ mysql> SELECT aes_encrypt('Secret', '123456789101213141516171819202122', 'iviviv

 Query:

 ``` sql
-SELECT aes_decrypt_mysql('aes-256-cfb128', unhex('24E9E4966469'), '123456789101213141516171819202122', 'iviviviviviviviv123456') AS plaintext;
+SELECT aes_decrypt_mysql('aes-256-ofb', unhex('24E9E4966469'), '123456789101213141516171819202122', 'iviviviviviviviv123456') AS plaintext;
 ```

 Result:
@@ -30,6 +30,7 @@ sidebar_label: SYSTEM

 - [START TTL MERGES](#query_language-start-ttl-merges)
 - [STOP MOVES](#query_language-stop-moves)
 - [START MOVES](#query_language-start-moves)
+- [SYSTEM UNFREEZE](#query_language-system-unfreeze)
 - [STOP FETCHES](#query_language-system-stop-fetches)
 - [START FETCHES](#query_language-system-start-fetches)
 - [STOP REPLICATED SENDS](#query_language-system-start-replicated-sends)

@@ -235,6 +236,14 @@ SYSTEM STOP MOVES [[db.]merge_tree_family_table_name]

 SYSTEM START MOVES [[db.]merge_tree_family_table_name]
 ```

+### SYSTEM UNFREEZE {#query_language-system-unfreeze}
+
+Removes all "frozen" partitions of the given backup from disk. For removing frozen partitions individually, see the [ALTER TABLE table_name UNFREEZE WITH NAME](alter/partition.md#alter_unfreeze-partition) query.
+
+``` sql
+SYSTEM UNFREEZE WITH NAME <backup_name>
+```
+
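A hypothetical invocation of the new statement (the backup name below is illustrative, not from the diff, and assumes backup names are passed as string literals as in `ALTER TABLE ... FREEZE WITH NAME`):

``` sql
-- 'backup_2022_06' is a made-up backup name.
SYSTEM UNFREEZE WITH NAME 'backup_2022_06';
```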
 ## Managing ReplicatedMergeTree Tables {#query-language-system-replicated}

 ClickHouse can manage background replication-related processes for tables in the [ReplicatedMergeTree](../../engines/table-engines/mergetree-family/replacingmergetree.md) family.
@@ -5,4 +5,4 @@ sidebar_position: 82

 # What's new in ClickHouse?

-The development plans are briefly outlined [here](https://github.com/ClickHouse/ClickHouse/issues/17623), and news about previous releases is described in detail in the [changelog](./changelog/).
+The development plans are briefly outlined [here](https://github.com/ClickHouse/ClickHouse/issues/32513), and news about previous releases is described in detail in the [changelog](./changelog/).
@@ -41,6 +41,10 @@ Yandex does **not** maintain the libraries listed below and has not done any extensive testing of them.

 - Ruby
     - [ClickHouse (Ruby)](https://github.com/shlima/click_house)
     - [clickhouse-activerecord](https://github.com/PNixx/clickhouse-activerecord)
+- Rust
+    - [clickhouse.rs](https://github.com/loyd/clickhouse.rs)
+    - [clickhouse-rs](https://github.com/suharev7/clickhouse-rs)
+    - [Klickhouse](https://github.com/Protryon/klickhouse)
 - R
     - [clickhouse-r](https://github.com/hannesmuehleisen/clickhouse-r)
     - [RClickHouse](https://github.com/IMSMWU/RClickHouse)
@@ -250,12 +250,14 @@ Code: 60. DB::Exception: Received from localhost:9000. DB::Exception: Table defa

 `set allow_experimental_window_view = 1`.

 ``` sql
-CREATE WINDOW VIEW [IF NOT EXISTS] [db.]table_name [TO [db.]table_name] [ENGINE = engine] [WATERMARK = strategy] [ALLOWED_LATENESS = interval_function] AS SELECT ... GROUP BY time_window_function
+CREATE WINDOW VIEW [IF NOT EXISTS] [db.]table_name [TO [db.]table_name] [INNER ENGINE engine] [ENGINE engine] [WATERMARK strategy] [ALLOWED_LATENESS interval_function] [POPULATE] AS SELECT ... GROUP BY time_window_function
 ```

 A window view aggregates data over time windows and automatically fires the computation for a window once its trigger condition is met. It keeps intermediate computation state to lower processing latency, and it can push results to a target table or to the terminal via a `WATCH` statement.

-Creating a window view is similar to creating a materialized view. A window view stores intermediate computation state in an internal storage engine that defaults to `AggregatingMergeTree`.
+Creating a window view is similar to creating a materialized view. A window view stores the intermediate state of the window computation in an internal storage engine specified with `INNER ENGINE`; `AggregatingMergeTree` is used as the internal intermediate-state engine by default.
+
+When creating a window view without `TO [db].[table]`, `ENGINE` – the table engine for storing the data – must be specified.

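To make the new syntax concrete, a minimal sketch under stated assumptions: the source table `data` and its columns are hypothetical, and `allow_experimental_window_view` is assumed to be enabled.

``` sql
-- Hypothetical source table.
CREATE TABLE data (id UInt64, timestamp DateTime) ENGINE = MergeTree ORDER BY timestamp;

-- Without TO, an ENGINE for the result data must be given; the inner
-- intermediate-state engine defaults to AggregatingMergeTree.
CREATE WINDOW VIEW wv ENGINE = Memory
AS SELECT count(id) AS cnt, tumbleStart(w_id) AS w_start
FROM data
GROUP BY tumble(timestamp, INTERVAL '10' SECOND) AS w_id;
```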
 ### Time window functions {#window-view-shi-jian-chuang-kou-han-shu}

@@ -295,6 +297,10 @@ CREATE WINDOW VIEW test.wv TO test.dst WATERMARK=ASCENDING ALLOWED_LATENESS=INTE

 Note that late messages have to update earlier results: instead of firing at the end of a window, a window view fires immediately when a late message arrives. The same window may therefore emit results more than once, and users need to expect this and deduplicate the results.

+### Modifying the query {#window-view-cha-xun-yu-ju-xiu-gai}
+
+The `SELECT` query of a window view can be modified with an `ALTER TABLE ... MODIFY QUERY` statement, whether or not the view was created with `TO [db.]name`; the data structure produced by the new `SELECT` must match that of the old one. Note that the data of the current window is lost, because the intermediate window state cannot be reused (see the sketch below).

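A sketch of such a modification, reusing the hypothetical `wv` view from the sketch above; the new `SELECT` keeps the same column structure and only the window size changes:

``` sql
-- The intermediate state of the current window is discarded by this statement.
ALTER TABLE wv MODIFY QUERY
SELECT count(id) AS cnt, tumbleStart(w_id) AS w_start
FROM data
GROUP BY tumble(timestamp, INTERVAL '20' SECOND) AS w_id;
```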
 ### Monitoring new windows {#window-view-xin-chuang-kou-jian-kong}

 A window view can push its results to the terminal with a `WATCH` statement, or push them into a table with `TO`.

@@ -309,6 +315,7 @@ WATCH [db.]name [LIMIT n]

 - `window_view_clean_interval`: the interval, in seconds, at which the window view purges expired data. The system periodically removes expired data; data of windows that have not fired yet is not cleaned up.
 - `window_view_heartbeat_interval`: the heartbeat interval used to determine whether a watch query is alive.
+- `wait_for_window_view_fire_signal_timeout`: timeout for waiting for the window-fire signal in event-time processing mode.
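For instance, a watch query against the hypothetical `wv` from the earlier sketch; the settings above govern how such a query is cleaned up and kept alive:

``` sql
-- Streams fired window results to the client; ends after three windows.
WATCH wv LIMIT 3;
```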
 ### Example {#window-view-shi-li}

@@ -26,6 +26,7 @@ sidebar_label: SYSTEM

 - [START TTL MERGES](#query_language-start-ttl-merges)
 - [STOP MOVES](#query_language-stop-moves)
 - [START MOVES](#query_language-start-moves)
+- [SYSTEM UNFREEZE](#query_language-system-unfreeze)
 - [STOP FETCHES](#query_language-system-stop-fetches)
 - [START FETCHES](#query_language-system-start-fetches)
 - [STOP REPLICATED SENDS](#query_language-system-start-replicated-sends)

@@ -203,6 +204,14 @@ SYSTEM STOP MOVES [[db.]merge_tree_family_table_name]

 SYSTEM STOP MOVES [[db.]merge_tree_family_table_name]
 ```

+### SYSTEM UNFREEZE {#query_language-system-unfreeze}
+
+Clears frozen backups with the specified name from all disks. See [ALTER TABLE table_name UNFREEZE WITH NAME](alter/partition.md#alter_unfreeze-partition) for more on unfreezing individual parts.
+
+``` sql
+SYSTEM UNFREEZE WITH NAME <backup_name>
+```
+
 ## Managing ReplicatedMergeTree Tables {#query-language-system-replicated}

 Manages background replication-related processes for [ReplicatedMergeTree](../../engines/table-engines/mergetree-family/replacingmergetree.md) tables.
@@ -508,7 +508,7 @@ else ()

 endif()

 if (USE_BINARY_HASH)
-    add_custom_command(TARGET clickhouse POST_BUILD COMMAND ./clickhouse hash-binary > hash && ${OBJCOPY_PATH} --add-section .note.ClickHouse.hash=hash clickhouse COMMENT "Adding .note.ClickHouse.hash to clickhouse" VERBATIM)
+    add_custom_command(TARGET clickhouse POST_BUILD COMMAND ./clickhouse hash-binary > hash && ${OBJCOPY_PATH} --add-section .clickhouse.hash=hash clickhouse COMMENT "Adding section '.clickhouse.hash' to clickhouse binary" VERBATIM)
 endif()

 if (INSTALL_STRIPPED_BINARIES)
@@ -39,7 +39,7 @@ int mainEntryClickHouseKeeperConverter(int argc, char ** argv)

     try
     {
-        DB::KeeperStorage storage(500, "");
+        DB::KeeperStorage storage(500, "", true);

         DB::deserializeKeeperStorageFromSnapshotsDir(storage, options["zookeeper-snapshots-dir"].as<std::string>(), logger);
         DB::deserializeLogsAndApplyToStorage(storage, options["zookeeper-logs-dir"].as<std::string>(), logger);
@@ -82,7 +82,7 @@ int mainEntryClickHouseDisks(int argc, char ** argv);

 int mainEntryClickHouseHashBinary(int, char **)
 {
     /// Intentionally without newline. So you can run:
-    /// objcopy --add-section .note.ClickHouse.hash=<(./clickhouse hash-binary) clickhouse
+    /// objcopy --add-section .clickhouse.hash=<(./clickhouse hash-binary) clickhouse
     std::cout << getHashOfLoadedBinaryHex();
     return 0;
 }
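A sketch of the mechanism these two hunks rename, assuming a freshly built `clickhouse` binary in the current directory (the exact build integration is what the CMake hunk above automates):

``` bash
# Embed the binary's own hash, then verify the renamed section is present.
./clickhouse hash-binary > hash
objcopy --add-section .clickhouse.hash=hash clickhouse
readelf -S clickhouse | grep -F .clickhouse.hash
```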
@@ -744,16 +744,18 @@ int Server::main(const std::vector<std::string> & /*args*/)

     /// But there are other sections of the binary (e.g. exception handling tables)
     /// that are interpreted (not executed) but can alter the behaviour of the program as well.

+    /// Please keep the below log messages in-sync with the ones in daemon/BaseDaemon.cpp

     String calculated_binary_hash = getHashOfLoadedBinaryHex();

     if (stored_binary_hash.empty())
     {
-        LOG_WARNING(log, "Calculated checksum of the binary: {}."
-            " There is no information about the reference checksum.", calculated_binary_hash);
+        LOG_WARNING(log, "Integrity check of the executable skipped because the reference checksum could not be read."
+            " (calculated checksum: {})", calculated_binary_hash);
     }
     else if (calculated_binary_hash == stored_binary_hash)
     {
-        LOG_INFO(log, "Calculated checksum of the binary: {}, integrity check passed.", calculated_binary_hash);
+        LOG_INFO(log, "Integrity check of the executable successfully passed (checksum: {})", calculated_binary_hash);
     }
     else
     {

@@ -769,14 +771,14 @@ int Server::main(const std::vector<std::string> & /*args*/)

     else
     {
         throw Exception(ErrorCodes::CORRUPTED_DATA,
-            "Calculated checksum of the ClickHouse binary ({0}) does not correspond"
-            " to the reference checksum stored in the binary ({1})."
-            " It may indicate one of the following:"
-            " - the file {2} was changed just after startup;"
-            " - the file {2} is damaged on disk due to faulty hardware;"
-            " - the loaded executable is damaged in memory due to faulty hardware;"
+            "Calculated checksum of the executable ({0}) does not correspond"
+            " to the reference checksum stored in the executable ({1})."
+            " This may indicate one of the following:"
+            " - the executable {2} was changed just after startup;"
+            " - the executable {2} was corrupted on disk due to faulty hardware;"
+            " - the loaded executable was corrupted in memory due to faulty hardware;"
             " - the file {2} was intentionally modified;"
-            " - logical error in code."
+            " - a logical error in the code."
             , calculated_binary_hash, stored_binary_hash, executable_path);
     }
 }
@ -1513,7 +1515,7 @@ int Server::main(const std::vector<std::string> & /*args*/)
|
|||||||
|
|
||||||
/// Init trace collector only after trace_log system table was created
|
/// Init trace collector only after trace_log system table was created
|
||||||
/// Disable it if we collect test coverage information, because it will work extremely slow.
|
/// Disable it if we collect test coverage information, because it will work extremely slow.
|
||||||
#if USE_UNWIND && !WITH_COVERAGE && defined(__x86_64__)
|
#if USE_UNWIND && !WITH_COVERAGE
|
||||||
/// Profilers cannot work reliably with any other libunwind or without PHDR cache.
|
/// Profilers cannot work reliably with any other libunwind or without PHDR cache.
|
||||||
if (hasPHDRCache())
|
if (hasPHDRCache())
|
||||||
{
|
{
|
||||||
@@ -164,6 +164,7 @@ enum class AccessType

     M(SYSTEM_FLUSH_LOGS, "FLUSH LOGS", GLOBAL, SYSTEM_FLUSH) \
     M(SYSTEM_FLUSH, "", GROUP, SYSTEM) \
     M(SYSTEM_THREAD_FUZZER, "SYSTEM START THREAD FUZZER, SYSTEM STOP THREAD FUZZER, START THREAD FUZZER, STOP THREAD FUZZER", GLOBAL, SYSTEM) \
+    M(SYSTEM_UNFREEZE, "SYSTEM UNFREEZE", GLOBAL, SYSTEM) \
     M(SYSTEM, "", GROUP, ALL) /* allows to execute SYSTEM {SHUTDOWN|RELOAD CONFIG|...} */ \
     \
     M(dictGet, "dictHas, dictGetHierarchy, dictIsIn", DICTIONARY, ALL) /* allows to execute functions dictGet(), dictHas(), dictGetHierarchy(), dictIsIn() */\
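Since the new access type is declared with the alias "SYSTEM UNFREEZE" at GLOBAL level, it should be grantable like other SYSTEM privileges; a hypothetical grant (the user name is illustrative, not from the diff):

``` sql
-- backup_admin is a made-up user name.
GRANT SYSTEM UNFREEZE ON *.* TO backup_admin;
```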
@@ -120,6 +120,7 @@ namespace

     AccessRights res = access;
     res.modifyFlags(modifier);
+    res.modifyFlagsWithGrantOption(modifier);

     /// Anyone has access to the "system" and "information_schema" database.
     res.grant(AccessType::SELECT, DatabaseCatalog::SYSTEM_DATABASE);
@@ -76,27 +76,27 @@ public:

         data(place).~Data();
     }

-    void add(AggregateDataPtr, const IColumn **, size_t, Arena *) const override
+    void add(AggregateDataPtr __restrict, const IColumn **, size_t, Arena *) const override
     {
     }

-    void merge(AggregateDataPtr, ConstAggregateDataPtr, Arena *) const override
+    void merge(AggregateDataPtr __restrict, ConstAggregateDataPtr, Arena *) const override
     {
     }

-    void serialize(ConstAggregateDataPtr, WriteBuffer & buf, std::optional<size_t> /* version */) const override
+    void serialize(ConstAggregateDataPtr __restrict, WriteBuffer & buf, std::optional<size_t> /* version */) const override
     {
         char c = 0;
         buf.write(c);
     }

-    void deserialize(AggregateDataPtr /* place */, ReadBuffer & buf, std::optional<size_t> /* version */, Arena *) const override
+    void deserialize(AggregateDataPtr __restrict /* place */, ReadBuffer & buf, std::optional<size_t> /* version */, Arena *) const override
     {
         char c = 0;
         buf.read(c);
     }

-    void insertResultInto(AggregateDataPtr, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict, IColumn & to, Arena *) const override
     {
         to.insertDefault();
     }
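The recurring change in the aggregate-function hunks above and below adds the `__restrict` qualifier to aggregation-state pointers. It is a compiler extension (supported by GCC, Clang, and MSVC) promising that the qualified pointer does not alias the other arguments, which lets the optimizer keep the state in registers and vectorize batch loops. A minimal standalone C++ sketch of the idea, not ClickHouse code:

``` cpp
#include <cstddef>

/// With __restrict the compiler may assume dst and src never overlap,
/// so it does not need to reload *dst after every store and can vectorize freely.
static void accumulate(long * __restrict dst, const long * __restrict src, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        *dst += src[i];
}
```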
@@ -236,7 +236,7 @@ public:
     void addBatchSinglePlace(
         size_t row_begin,
         size_t row_end,
-        AggregateDataPtr place,
+        AggregateDataPtr __restrict place,
         const IColumn ** columns,
         Arena *,
         ssize_t if_argument_pos) const final

@@ -260,7 +260,7 @@ public:
     void addBatchSinglePlaceNotNull(
         size_t row_begin,
         size_t row_end,
-        AggregateDataPtr place,
+        AggregateDataPtr __restrict place,
         const IColumn ** columns,
         const UInt8 * null_map,
         Arena *,

@@ -41,7 +41,7 @@ public:
         memset(place, 0, sizeOfData());
     }

-    void destroy(AggregateDataPtr) const noexcept override
+    void destroy(AggregateDataPtr __restrict) const noexcept override
     {
         // nothing
     }

@@ -61,7 +61,7 @@ public:
         return alignof(T);
     }

-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena *) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena *) const override
     {
         const auto * y_col = static_cast<const ColumnUInt8 *>(columns[category_count]);
         bool y = y_col->getData()[row_num];

@@ -78,7 +78,7 @@ public:
         reinterpret_cast<T *>(place)[category_count * 2 + size_t(y)] += 1;
     }

-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena *) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena *) const override
     {
         for (size_t i : collections::range(0, category_count + 1))
         {

@@ -87,12 +87,12 @@ public:
         }
     }

-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf, std::optional<size_t> /* version */) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf, std::optional<size_t> /* version */) const override
     {
         buf.write(place, sizeOfData());
     }

-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, std::optional<size_t> /* version */, Arena *) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, std::optional<size_t> /* version */, Arena *) const override
     {
         buf.read(place, sizeOfData());
     }
@@ -65,7 +65,7 @@ public:
     void addBatchSinglePlace(
         size_t row_begin,
         size_t row_end,
-        AggregateDataPtr place,
+        AggregateDataPtr __restrict place,
         const IColumn ** columns,
         Arena *,
         ssize_t if_argument_pos) const override

@@ -84,7 +84,7 @@ public:
     void addBatchSinglePlaceNotNull(
         size_t row_begin,
         size_t row_end,
-        AggregateDataPtr place,
+        AggregateDataPtr __restrict place,
         const IColumn ** columns,
         const UInt8 * null_map,
         Arena *,

@@ -222,7 +222,7 @@ public:
     void addBatchSinglePlace(
         size_t row_begin,
         size_t row_end,
-        AggregateDataPtr place,
+        AggregateDataPtr __restrict place,
         const IColumn ** columns,
         Arena *,
         ssize_t if_argument_pos) const override

@@ -122,7 +122,7 @@ public:
     void addBatchSinglePlace(
         size_t row_begin,
         size_t row_end,
-        AggregateDataPtr place,
+        AggregateDataPtr __restrict place,
         const IColumn ** columns,
         Arena * arena,
         ssize_t) const override

@@ -100,7 +100,7 @@ public:
     void addBatch(
         size_t row_begin,
         size_t row_end,
-        AggregateDataPtr * places,
+        AggregateDataPtr * __restrict places,
         size_t place_offset,
         const IColumn ** columns,
         Arena * arena,

@@ -112,7 +112,7 @@ public:
     void addBatchSinglePlace(
         size_t row_begin,
         size_t row_end,
-        AggregateDataPtr place,
+        AggregateDataPtr __restrict place,
         const IColumn ** columns,
         Arena * arena,
         ssize_t) const override

@@ -123,7 +123,7 @@ public:
     void addBatchSinglePlaceNotNull(
         size_t row_begin,
         size_t row_end,
-        AggregateDataPtr place,
+        AggregateDataPtr __restrict place,
         const IColumn ** columns,
         const UInt8 * null_map,
         Arena * arena,

@@ -362,7 +362,7 @@ public:
     void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, std::optional<size_t> /* version */, Arena *) const override { this->data(place).read(buf); }

     void predictValues(
-        ConstAggregateDataPtr place,
+        ConstAggregateDataPtr __restrict place,
         IColumn & to,
         const ColumnsWithTypeAndName & arguments,
         size_t offset,

@@ -105,7 +105,7 @@ public:

     DataTypePtr getReturnType() const override { return std::make_shared<DataTypeMap>(DataTypes{key_type, nested_func->getReturnType()}); }

-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena * arena) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena * arena) const override
     {
         const auto & map_column = assert_cast<const ColumnMap &>(*columns[0]);
         const auto & map_nested_tuple = map_column.getNestedData();

@@ -160,7 +160,7 @@ public:
         }
     }

-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena * arena) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena * arena) const override
     {
         auto & merged_maps = this->data(place).merged_maps;
         const auto & rhs_maps = this->data(rhs).merged_maps;

@@ -178,7 +178,7 @@ public:
         }
     }

-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf, std::optional<size_t> /* version */) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf, std::optional<size_t> /* version */) const override
     {
         auto & merged_maps = this->data(place).merged_maps;
         writeVarUInt(merged_maps.size(), buf);

@@ -190,7 +190,7 @@ public:
         }
     }

-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, std::optional<size_t> /* version */, Arena * arena) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, std::optional<size_t> /* version */, Arena * arena) const override
     {
         auto & merged_maps = this->data(place).merged_maps;
         UInt64 size;

@@ -209,7 +209,7 @@ public:
         }
     }

-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena * arena) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena * arena) const override
     {
         auto & map_column = assert_cast<ColumnMap &>(to);
         auto & nested_column = map_column.getNestedColumn();
@@ -33,11 +33,11 @@ public:

     bool allocatesMemoryInArena() const override { return false; }

-    void create(AggregateDataPtr) const override
+    void create(AggregateDataPtr __restrict) const override
     {
     }

-    void destroy(AggregateDataPtr) const noexcept override
+    void destroy(AggregateDataPtr __restrict) const noexcept override
     {
     }

@@ -56,11 +56,11 @@ public:
         return 1;
     }

-    void add(AggregateDataPtr, const IColumn **, size_t, Arena *) const override
+    void add(AggregateDataPtr __restrict, const IColumn **, size_t, Arena *) const override
     {
     }

-    void merge(AggregateDataPtr, ConstAggregateDataPtr, Arena *) const override
+    void merge(AggregateDataPtr __restrict, ConstAggregateDataPtr, Arena *) const override
     {
     }

@@ -69,14 +69,14 @@ public:
         writeChar('\0', buf);
     }

-    void deserialize(AggregateDataPtr, ReadBuffer & buf, std::optional<size_t>, Arena *) const override
+    void deserialize(AggregateDataPtr __restrict, ReadBuffer & buf, std::optional<size_t>, Arena *) const override
     {
         [[maybe_unused]] char symbol;
         readChar(symbol, buf);
         assert(symbol == '\0');
     }

-    void insertResultInto(AggregateDataPtr, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict, IColumn & to, Arena *) const override
     {
         to.insertDefault();
     }

@@ -309,7 +309,7 @@ public:
     void addBatchSinglePlace( /// NOLINT
         size_t row_begin,
         size_t row_end,
-        AggregateDataPtr place,
+        AggregateDataPtr __restrict place,
         const IColumn ** columns,
         Arena * arena,
         ssize_t if_argument_pos = -1) const override

@@ -99,7 +99,7 @@ public:
     }

     void add(
-        AggregateDataPtr place,
+        AggregateDataPtr __restrict place,
         const IColumn ** columns,
         size_t row_num,
         Arena * arena) const override

@@ -138,7 +138,7 @@ public:
     void addBatchSinglePlace( /// NOLINT
         size_t row_begin,
         size_t row_end,
-        AggregateDataPtr place,
+        AggregateDataPtr __restrict place,
         const IColumn ** columns,
         Arena * arena,
         ssize_t if_argument_pos = -1) const override

@@ -169,7 +169,7 @@ public:
     void addBatchSinglePlaceNotNull( /// NOLINT
         size_t row_begin,
         size_t row_end,
-        AggregateDataPtr place,
+        AggregateDataPtr __restrict place,
         const IColumn ** columns,
         const UInt8 * null_map,
         Arena * arena,

@@ -206,7 +206,7 @@ public:
     }

     void merge(
-        AggregateDataPtr place,
+        AggregateDataPtr __restrict place,
         ConstAggregateDataPtr rhs,
         Arena * arena) const override
     {

@@ -227,14 +227,14 @@ public:
         (places[i] + place_offset)[size_of_data] |= rhs[i][size_of_data];
     }

-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf, std::optional<size_t> version) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf, std::optional<size_t> version) const override
     {
         nested_function->serialize(place, buf, version);

         writeChar(place[size_of_data], buf);
     }

-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, std::optional<size_t> version, Arena * arena) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, std::optional<size_t> version, Arena * arena) const override
     {
         nested_function->deserialize(place, buf, version, arena);

@@ -261,7 +261,7 @@ public:
     }

     void insertResultInto(
-        AggregateDataPtr place,
+        AggregateDataPtr __restrict place,
         IColumn & to,
         Arena * arena) const override
     {

@@ -134,7 +134,7 @@ public:
         nested_function->destroy(place + i * size_of_data);
     }

-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena * arena) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena * arena) const override
     {
         Key key;

@@ -151,19 +151,19 @@ public:
         nested_function->add(place + pos * size_of_data, columns, row_num, arena);
     }

-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena * arena) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena * arena) const override
     {
         for (size_t i = 0; i < total; ++i)
             nested_function->merge(place + i * size_of_data, rhs + i * size_of_data, arena);
     }

-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf, std::optional<size_t> version) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf, std::optional<size_t> version) const override
     {
         for (size_t i = 0; i < total; ++i)
             nested_function->serialize(place + i * size_of_data, buf, version);
     }

-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, std::optional<size_t> version, Arena * arena) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, std::optional<size_t> version, Arena * arena) const override
     {
         for (size_t i = 0; i < total; ++i)
             nested_function->deserialize(place + i * size_of_data, buf, version, arena);

@@ -174,7 +174,7 @@ public:
         return std::make_shared<DataTypeArray>(nested_function->getReturnType());
     }

-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena * arena) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena * arena) const override
     {
         auto & col = assert_cast<ColumnArray &>(to);
         auto & col_offsets = assert_cast<ColumnArray::ColumnOffsets &>(col.getOffsetsColumn());
@@ -158,8 +158,8 @@ class SequenceNextNodeImpl final
     using Self = SequenceNextNodeImpl<T, Node>;

     using Data = SequenceNextNodeGeneralData<Node>;
-    static Data & data(AggregateDataPtr place) { return *reinterpret_cast<Data *>(place); }
-    static const Data & data(ConstAggregateDataPtr place) { return *reinterpret_cast<const Data *>(place); }
+    static Data & data(AggregateDataPtr __restrict place) { return *reinterpret_cast<Data *>(place); }
+    static const Data & data(ConstAggregateDataPtr __restrict place) { return *reinterpret_cast<const Data *>(place); }

     static constexpr size_t base_cond_column_idx = 2;
     static constexpr size_t event_column_idx = 1;

@@ -216,7 +216,7 @@ public:
         a.value.push_back(v->clone(arena), arena);
     }

-    void create(AggregateDataPtr place) const override /// NOLINT
+    void create(AggregateDataPtr __restrict place) const override /// NOLINT
     {
         new (place) Data;
     }

@@ -110,7 +110,7 @@ public:
     }

     void add(
-        AggregateDataPtr place,
+        AggregateDataPtr __restrict place,
         const IColumn ** columns,
         size_t row_num,
         Arena *

@@ -125,17 +125,17 @@ public:
         this->data(place).add(x, y);
     }

-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena *) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena *) const override
     {
         this->data(place).merge(this->data(rhs));
     }

-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf, std::optional<size_t> /* version */) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf, std::optional<size_t> /* version */) const override
     {
         this->data(place).serialize(buf);
     }

-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, std::optional<size_t> /* version */, Arena *) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, std::optional<size_t> /* version */, Arena *) const override
     {
         this->data(place).deserialize(buf);
     }

@@ -163,7 +163,7 @@ public:
     bool allocatesMemoryInArena() const override { return false; }

     void insertResultInto(
-        AggregateDataPtr place,
+        AggregateDataPtr __restrict place,
         IColumn & to,
         Arena *) const override
     {

@@ -298,7 +298,7 @@ public:
         }
     }

-    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena * /*arena*/) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr __restrict rhs, Arena * /*arena*/) const override
     {
         this->data(place).merge(this->data(rhs));
     }

@@ -445,7 +445,7 @@ public:
     void addBatchSinglePlace(
         size_t row_begin,
         size_t row_end,
-        AggregateDataPtr place,
+        AggregateDataPtr __restrict place,
         const IColumn ** columns,
         Arena *,
         ssize_t if_argument_pos) const override

@@ -465,7 +465,7 @@ public:
     void addBatchSinglePlaceNotNull(
         size_t row_begin,
         size_t row_end,
-        AggregateDataPtr place,
+        AggregateDataPtr __restrict place,
         const IColumn ** columns,
         const UInt8 * null_map,
         Arena *,
@@ -150,7 +150,7 @@ public:
     /// Used for machine learning methods. Predict result from trained model.
     /// Will insert result into `to` column for rows in range [offset, offset + limit).
     virtual void predictValues(
-        ConstAggregateDataPtr /* place */,
+        ConstAggregateDataPtr __restrict /* place */,
         IColumn & /*to*/,
         const ColumnsWithTypeAndName & /*arguments*/,
         size_t /*offset*/,

@@ -209,7 +209,7 @@ public:
     virtual void addBatchSinglePlace( /// NOLINT
         size_t row_begin,
         size_t row_end,
-        AggregateDataPtr place,
+        AggregateDataPtr __restrict place,
         const IColumn ** columns,
         Arena * arena,
         ssize_t if_argument_pos = -1) const = 0;

@@ -218,7 +218,7 @@ public:
     virtual void addBatchSparseSinglePlace(
         size_t row_begin,
         size_t row_end,
-        AggregateDataPtr place,
+        AggregateDataPtr __restrict place,
         const IColumn ** columns,
         Arena * arena) const = 0;

@@ -228,7 +228,7 @@ public:
     virtual void addBatchSinglePlaceNotNull( /// NOLINT
         size_t row_begin,
         size_t row_end,
-        AggregateDataPtr place,
+        AggregateDataPtr __restrict place,
         const IColumn ** columns,
         const UInt8 * null_map,
         Arena * arena,

@@ -237,7 +237,7 @@ public:
     virtual void addBatchSinglePlaceFromInterval( /// NOLINT
         size_t row_begin,
         size_t row_end,
-        AggregateDataPtr place,
+        AggregateDataPtr __restrict place,
         const IColumn ** columns,
         Arena * arena,
         ssize_t if_argument_pos = -1)

@@ -370,7 +370,7 @@ template <typename Derived>
 class IAggregateFunctionHelper : public IAggregateFunction
 {
 private:
-    static void addFree(const IAggregateFunction * that, AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena * arena)
+    static void addFree(const IAggregateFunction * that, AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena * arena)
     {
         static_cast<const Derived &>(*that).add(place, columns, row_num, arena);
     }

@@ -450,7 +450,7 @@ public:
     void addBatchSinglePlace( /// NOLINT
         size_t row_begin,
         size_t row_end,
-        AggregateDataPtr place,
+        AggregateDataPtr __restrict place,
         const IColumn ** columns,
         Arena * arena,
         ssize_t if_argument_pos = -1) const override

@@ -474,7 +474,7 @@ public:
     void addBatchSparseSinglePlace(
         size_t row_begin,
         size_t row_end,
-        AggregateDataPtr place,
+        AggregateDataPtr __restrict place,
         const IColumn ** columns,
         Arena * arena) const override
     {

@@ -493,7 +493,7 @@ public:
     void addBatchSinglePlaceNotNull( /// NOLINT
         size_t row_begin,
         size_t row_end,
-        AggregateDataPtr place,
+        AggregateDataPtr __restrict place,
         const IColumn ** columns,
         const UInt8 * null_map,
         Arena * arena,

@@ -517,7 +517,7 @@ public:
     void addBatchSinglePlaceFromInterval( /// NOLINT
         size_t row_begin,
         size_t row_end,
-        AggregateDataPtr place,
+        AggregateDataPtr __restrict place,
         const IColumn ** columns,
         Arena * arena,
         ssize_t if_argument_pos = -1)

@@ -661,7 +661,7 @@ public:
     IAggregateFunctionDataHelper(const DataTypes & argument_types_, const Array & parameters_)
         : IAggregateFunctionHelper<Derived>(argument_types_, parameters_) {}

-    void create(AggregateDataPtr place) const override /// NOLINT
+    void create(AggregateDataPtr __restrict place) const override /// NOLINT
     {
         new (place) Data;
     }
@@ -326,7 +326,7 @@ Strings BackupCoordinationDistributed::listFiles(const String & prefix, const St
         elements.push_back(String{new_element});
     }

-    std::sort(elements.begin(), elements.end());
+    ::sort(elements.begin(), elements.end());
     return elements;
 }

Some files were not shown because too many files have changed in this diff.