Mirror of https://github.com/ClickHouse/ClickHouse.git (synced 2024-11-21 15:12:02 +00:00)

Commit 2523f8f0c6: Merge remote-tracking branch 'origin/master' into dev_replace
@@ -2,8 +2,7 @@
 name: Documentation issue
 about: Report something incorrect or missing in documentation
 title: ''
-labels: documentation
-assignees: BayoNet
+labels: comp-documentation
 
 ---
 
.gitignore (vendored): 2 changes

@@ -124,3 +124,5 @@ website/package-lock.json
 
 # Toolchains
 /cmake/toolchain/*
+
+*.iml
.gitmodules (vendored): 16 changes

@@ -44,6 +44,7 @@
 [submodule "contrib/protobuf"]
 	path = contrib/protobuf
 	url = https://github.com/ClickHouse-Extras/protobuf.git
+	branch = v3.13.0.1
 [submodule "contrib/boost"]
 	path = contrib/boost
 	url = https://github.com/ClickHouse-Extras/boost.git
@@ -107,6 +108,7 @@
 [submodule "contrib/grpc"]
 	path = contrib/grpc
 	url = https://github.com/ClickHouse-Extras/grpc.git
+	branch = v1.33.2
 [submodule "contrib/aws"]
 	path = contrib/aws
 	url = https://github.com/ClickHouse-Extras/aws-sdk-cpp.git
@@ -190,3 +192,17 @@
 	path = contrib/croaring
 	url = https://github.com/RoaringBitmap/CRoaring
 	branch = v0.2.66
+[submodule "contrib/miniselect"]
+	path = contrib/miniselect
+	url = https://github.com/danlark1/miniselect
+[submodule "contrib/rocksdb"]
+	path = contrib/rocksdb
+	url = https://github.com/facebook/rocksdb
+	branch = v6.14.5
+[submodule "contrib/xz"]
+	path = contrib/xz
+	url = https://github.com/xz-mirror/xz
+[submodule "contrib/abseil-cpp"]
+	path = contrib/abseil-cpp
+	url = https://github.com/ClickHouse-Extras/abseil-cpp.git
+	branch = lts_2020_02_25
CHANGELOG.md: 612 changes

@@ -1,6 +1,442 @@
## ClickHouse release 20.11
### ClickHouse release v20.11.3.3-stable, 2020-11-13
#### Bug Fix
* Fix rare silent crashes when query profiler is on and ClickHouse is installed on OS with glibc version that has (supposedly) broken asynchronous unwind tables for some functions. This fixes [#15301](https://github.com/ClickHouse/ClickHouse/issues/15301). This fixes [#13098](https://github.com/ClickHouse/ClickHouse/issues/13098). [#16846](https://github.com/ClickHouse/ClickHouse/pull/16846) ([alexey-milovidov](https://github.com/alexey-milovidov)).
### ClickHouse release v20.11.2.1, 2020-11-11
#### Backward Incompatible Change
* If some `profile` was specified in `distributed_ddl` config section, then this profile could overwrite settings of `default` profile on server startup. It's fixed, now settings of distributed DDL queries should not affect global server settings. [#16635](https://github.com/ClickHouse/ClickHouse/pull/16635) ([tavplubix](https://github.com/tavplubix)).
* Restrict the use of non-comparable data types (like `AggregateFunction`) in keys (sorting key, primary key, partition key, and so on). [#16601](https://github.com/ClickHouse/ClickHouse/pull/16601) ([alesapin](https://github.com/alesapin)).
* Remove `ANALYZE` and `AST` queries, and make the setting `enable_debug_queries` obsolete since now it is the part of full featured `EXPLAIN` query. [#16536](https://github.com/ClickHouse/ClickHouse/pull/16536) ([Ivan](https://github.com/abyss7)).
* Aggregate functions `boundingRatio`, `rankCorr`, `retention`, `timeSeriesGroupSum`, `timeSeriesGroupRateSum`, `windowFunnel` were erroneously made case-insensitive. Their names are now case-sensitive, as designed. Only functions that are specified in the SQL standard, made for compatibility with other DBMS, or similar to those should be case-insensitive. [#16407](https://github.com/ClickHouse/ClickHouse/pull/16407) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Make `rankCorr` function return nan on insufficient data https://github.com/ClickHouse/ClickHouse/issues/16124. [#16135](https://github.com/ClickHouse/ClickHouse/pull/16135) ([hexiaoting](https://github.com/hexiaoting)).
#### New Feature
* Added support of LDAP as a user directory for locally non-existent users. [#12736](https://github.com/ClickHouse/ClickHouse/pull/12736) ([Denis Glazachev](https://github.com/traceon)).
* Add `system.replicated_fetches` table which shows currently running background fetches. [#16428](https://github.com/ClickHouse/ClickHouse/pull/16428) ([alesapin](https://github.com/alesapin)).
* Added setting `date_time_output_format`. [#15845](https://github.com/ClickHouse/ClickHouse/pull/15845) ([Maksim Kita](https://github.com/kitaisreal)).
* Added minimal web UI to ClickHouse. [#16158](https://github.com/ClickHouse/ClickHouse/pull/16158) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Allow reading/writing a single protobuf message at once (without length delimiters). [#15199](https://github.com/ClickHouse/ClickHouse/pull/15199) ([filimonov](https://github.com/filimonov)).
* Added initial OpenTelemetry support. ClickHouse now accepts OpenTelemetry traceparent headers over Native and HTTP protocols, and passes them downstream in some cases. The trace spans for executed queries are saved into the `system.opentelemetry_span_log` table. [#14195](https://github.com/ClickHouse/ClickHouse/pull/14195) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Allow specifying the primary key in the column list of a `CREATE TABLE` query. This is needed for compatibility with other SQL dialects. [#15823](https://github.com/ClickHouse/ClickHouse/pull/15823) ([Maksim Kita](https://github.com/kitaisreal)).
* Implement `OFFSET offset_row_count {ROW | ROWS} FETCH {FIRST | NEXT} fetch_row_count {ROW | ROWS} {ONLY | WITH TIES}` in SELECT query with ORDER BY. This is the SQL-standard way to specify `LIMIT`. [#15855](https://github.com/ClickHouse/ClickHouse/pull/15855) ([hexiaoting](https://github.com/hexiaoting)).
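  As an illustrative sketch (not taken from the changelog), the standard-SQL clause can replace an equivalent `LIMIT ... OFFSET ...`:

  ```sql
  -- Skip the first 5 rows of the ordered result, then return the next 5
  -- (equivalent to LIMIT 5 OFFSET 5).
  SELECT number
  FROM numbers(100)
  ORDER BY number
  OFFSET 5 ROWS FETCH NEXT 5 ROWS ONLY;
  ```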
* The `errorCodeToName` function returns the variable name of the error (useful for analyzing `query_log` and similar). The `system.errors` table shows how many times each error has occurred (respects `system_events_show_zero_values`). [#16438](https://github.com/ClickHouse/ClickHouse/pull/16438) ([Azat Khuzhin](https://github.com/azat)).
* Added function `untuple` which is a special function which can introduce new columns to the SELECT list by expanding a named tuple. [#16242](https://github.com/ClickHouse/ClickHouse/pull/16242) ([Nikolai Kochetov](https://github.com/KochetovNicolai), [Amos Bird](https://github.com/amosbird)).
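  A minimal sketch of how `untuple` expands a named tuple into separate columns (illustrative query, not from the source):

  ```sql
  -- Each element of the named tuple becomes its own column in the result.
  SELECT untuple(CAST(('a', 1), 'Tuple(s String, n UInt8)'));
  ```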
* Now we can provide identifiers via query parameters, and these parameters can be used as table names or column names. [#16594](https://github.com/ClickHouse/ClickHouse/pull/16594) ([Amos Bird](https://github.com/amosbird)).
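  A sketch of the idea, assuming the `{name:Identifier}` parameter-substitution syntax and a parameter supplied by the client:

  ```sql
  -- Run via: clickhouse-client --param_tbl="numbers" --query="..."
  -- The {tbl:Identifier} placeholder is substituted with the table name.
  SELECT * FROM {tbl:Identifier} LIMIT 3;
  ```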
* Added big integers (UInt256, Int128, Int256) and UUID data types support for MergeTree BloomFilter index. Big integers is an experimental feature. [#16642](https://github.com/ClickHouse/ClickHouse/pull/16642) ([Maksim Kita](https://github.com/kitaisreal)).
* Add `farmFingerprint64` function (non-cryptographic string hashing). [#16570](https://github.com/ClickHouse/ClickHouse/pull/16570) ([Jacob Hayes](https://github.com/JacobHayes)).
* Add `log_queries_min_query_duration_ms`; only queries slower than the value of this setting will go to `query_log`/`query_thread_log` (i.e. something like `slow_query_log` in MySQL). [#16529](https://github.com/ClickHouse/ClickHouse/pull/16529) ([Azat Khuzhin](https://github.com/azat)).
* Ability to create a docker image on the top of `Alpine`. Uses precompiled binary and glibc components from ubuntu 20.04. [#16479](https://github.com/ClickHouse/ClickHouse/pull/16479) ([filimonov](https://github.com/filimonov)).
* Added `toUUIDOrNull`, `toUUIDOrZero` cast functions. [#16337](https://github.com/ClickHouse/ClickHouse/pull/16337) ([Maksim Kita](https://github.com/kitaisreal)).
* Add `max_concurrent_queries_for_all_users` setting, see [#6636](https://github.com/ClickHouse/ClickHouse/issues/6636) for use cases. [#16154](https://github.com/ClickHouse/ClickHouse/pull/16154) ([nvartolomei](https://github.com/nvartolomei)).
* Add a new option `print_query_id` to clickhouse-client. It helps generate arbitrary strings with the current query id generated by the client. Also print query id in clickhouse-client by default. [#15809](https://github.com/ClickHouse/ClickHouse/pull/15809) ([Amos Bird](https://github.com/amosbird)).
* Add `tid` and `logTrace` functions. This closes [#9434](https://github.com/ClickHouse/ClickHouse/issues/9434). [#15803](https://github.com/ClickHouse/ClickHouse/pull/15803) ([flynn](https://github.com/ucasFL)).
* Add function `formatReadableTimeDelta` that formats a time delta into a human-readable string ... [#15497](https://github.com/ClickHouse/ClickHouse/pull/15497) ([Filipe Caixeta](https://github.com/filipecaixeta)).
* Added `disable_merges` option for volumes in multi-disk configuration. [#13956](https://github.com/ClickHouse/ClickHouse/pull/13956) ([Vladimir Chebotarev](https://github.com/excitoon)).
#### Experimental Feature
* New functions `encrypt`, `aes_encrypt_mysql`, `decrypt`, `aes_decrypt_mysql`. These functions are working slowly, so we consider it as an experimental feature. [#11844](https://github.com/ClickHouse/ClickHouse/pull/11844) ([Vasily Nemkov](https://github.com/Enmk)).
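  A round-trip sketch of these functions, assuming a 32-byte key and a 16-byte IV for the `aes-256-cbc` mode (illustrative values, not from the source):

  ```sql
  WITH '12345678901234567890123456789012' AS key,
       '1234567890123456' AS iv
  SELECT decrypt('aes-256-cbc',
                 encrypt('aes-256-cbc', 'secret', key, iv),
                 key, iv) AS roundtrip;  -- round-trips back to 'secret'
  ```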
#### Bug Fix
* Mask password in data_path in the `system.distribution_queue`. [#16727](https://github.com/ClickHouse/ClickHouse/pull/16727) ([Azat Khuzhin](https://github.com/azat)).
* Fix `IN` operator over several columns and tuples with enabled `transform_null_in` setting. Fixes [#15310](https://github.com/ClickHouse/ClickHouse/issues/15310). [#16722](https://github.com/ClickHouse/ClickHouse/pull/16722) ([Anton Popov](https://github.com/CurtizJ)).
* The setting `max_parallel_replicas` worked incorrectly if the queried table has no sampling. This fixes [#5733](https://github.com/ClickHouse/ClickHouse/issues/5733). [#16675](https://github.com/ClickHouse/ClickHouse/pull/16675) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix optimize_read_in_order/optimize_aggregation_in_order with max_threads > 0 and expression in ORDER BY. [#16637](https://github.com/ClickHouse/ClickHouse/pull/16637) ([Azat Khuzhin](https://github.com/azat)).
* Calculation of `DEFAULT` expressions was involving possible name collisions (that was very unlikely to encounter). This fixes [#9359](https://github.com/ClickHouse/ClickHouse/issues/9359). [#16612](https://github.com/ClickHouse/ClickHouse/pull/16612) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix `query_thread_log.query_duration_ms` unit. [#16563](https://github.com/ClickHouse/ClickHouse/pull/16563) ([Azat Khuzhin](https://github.com/azat)).
* Fix a bug when using MySQL Master -> MySQL Slave -> ClickHouse MaterializeMySQL Engine. `MaterializeMySQL` is an experimental feature. [#16504](https://github.com/ClickHouse/ClickHouse/pull/16504) ([TCeason](https://github.com/TCeason)).
* Specifically crafted argument of `round` function with `Decimal` was leading to integer division by zero. This fixes [#13338](https://github.com/ClickHouse/ClickHouse/issues/13338). [#16451](https://github.com/ClickHouse/ClickHouse/pull/16451) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix DROP TABLE for Distributed (racy with INSERT). [#16409](https://github.com/ClickHouse/ClickHouse/pull/16409) ([Azat Khuzhin](https://github.com/azat)).
* Fix processing of very large entries in replication queue. Very large entries may appear in ALTER queries if table structure is extremely large (near 1 MB). This fixes [#16307](https://github.com/ClickHouse/ClickHouse/issues/16307). [#16332](https://github.com/ClickHouse/ClickHouse/pull/16332) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed the inconsistent behaviour when a part of return data could be dropped because the set for its filtration wasn't created. [#16308](https://github.com/ClickHouse/ClickHouse/pull/16308) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix dictGet in sharding_key (and similar places, i.e. when the function context is stored permanently). [#16205](https://github.com/ClickHouse/ClickHouse/pull/16205) ([Azat Khuzhin](https://github.com/azat)).
* Fix the exception thrown in `clickhouse-local` when trying to execute `OPTIMIZE` command. Fixes [#16076](https://github.com/ClickHouse/ClickHouse/issues/16076). [#16192](https://github.com/ClickHouse/ClickHouse/pull/16192) ([filimonov](https://github.com/filimonov)).
* Fixes [#15780](https://github.com/ClickHouse/ClickHouse/issues/15780) regression, e.g. `indexOf([1, 2, 3], toLowCardinality(1))` now is prohibited but it should not be. [#16038](https://github.com/ClickHouse/ClickHouse/pull/16038) ([Mike](https://github.com/myrrc)).
* Fix a bug with the MySQL database engine. When the MySQL server used as a database engine is down, some queries raised an exception because they tried to get tables from the disabled server even though it was unnecessary. For example, the query `SELECT ... FROM system.parts` should work only with MergeTree tables and not touch the MySQL database at all. [#16032](https://github.com/ClickHouse/ClickHouse/pull/16032) ([Kruglov Pavel](https://github.com/Avogar)).
* Now exception will be thrown when `ALTER MODIFY COLUMN ... DEFAULT ...` has incompatible default with column type. Fixes [#15854](https://github.com/ClickHouse/ClickHouse/issues/15854). [#15858](https://github.com/ClickHouse/ClickHouse/pull/15858) ([alesapin](https://github.com/alesapin)).
* Fixed IPv4CIDRToRange/IPv6CIDRToRange functions to accept const IP-column values. [#15856](https://github.com/ClickHouse/ClickHouse/pull/15856) ([vladimir-golovchenko](https://github.com/vladimir-golovchenko)).
#### Improvement
* Treat `INTERVAL '1 hour'` as equivalent to `INTERVAL 1 HOUR`, to be compatible with Postgres and similar. This fixes [#15637](https://github.com/ClickHouse/ClickHouse/issues/15637). [#15978](https://github.com/ClickHouse/ClickHouse/pull/15978) ([flynn](https://github.com/ucasFL)).
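  Both spellings now parse to the same interval (illustrative):

  ```sql
  -- The Postgres-style quoted form and the native form are equivalent.
  SELECT now() + INTERVAL '1 hour' AS pg_style,
         now() + INTERVAL 1 HOUR   AS native_style;
  ```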
* Enable parsing enum values by their numeric ids for CSV, TSV and JSON input formats. [#15685](https://github.com/ClickHouse/ClickHouse/pull/15685) ([vivarum](https://github.com/vivarum)).
* Better read task scheduling for JBOD architecture and `MergeTree` storage. New setting `read_backoff_min_concurrency` which serves as the lower limit to the number of reading threads. [#16423](https://github.com/ClickHouse/ClickHouse/pull/16423) ([Amos Bird](https://github.com/amosbird)).
* Add missing support for `LowCardinality` in `Avro` format. [#16521](https://github.com/ClickHouse/ClickHouse/pull/16521) ([Mike](https://github.com/myrrc)).
* Workaround for using `S3` with an nginx server as proxy. Nginx currently does not accept URLs with an empty path like `http://domain.com?delete`, but vanilla aws-sdk-cpp produces this kind of URL. This commit uses a patched aws-sdk-cpp version, which produces URLs with "/" as the path in these cases, like `http://domain.com/?delete`. [#16814](https://github.com/ClickHouse/ClickHouse/pull/16814) ([ianton-ru](https://github.com/ianton-ru)).
* Better diagnostics on parse errors in input data. Provide row number on `Cannot read all data` errors. [#16644](https://github.com/ClickHouse/ClickHouse/pull/16644) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Make the behaviour of `minMap` and `maxMap` more desirable. It will not skip zero values in the result. Fixes [#16087](https://github.com/ClickHouse/ClickHouse/issues/16087). [#16631](https://github.com/ClickHouse/ClickHouse/pull/16631) ([Ildus Kurbangaliev](https://github.com/ildus)).
* Better update of ZooKeeper configuration in runtime. [#16630](https://github.com/ClickHouse/ClickHouse/pull/16630) ([sundyli](https://github.com/sundy-li)).
* Apply SETTINGS clause as early as possible. It allows to modify more settings in the query. This closes [#3178](https://github.com/ClickHouse/ClickHouse/issues/3178). [#16619](https://github.com/ClickHouse/ClickHouse/pull/16619) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Now `event_time_microseconds` field stores in Decimal64, not UInt64. [#16617](https://github.com/ClickHouse/ClickHouse/pull/16617) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Now parameterized functions can be used in the `APPLY` column transformer. [#16589](https://github.com/ClickHouse/ClickHouse/pull/16589) ([Amos Bird](https://github.com/amosbird)).
* Improve scheduling of background task which removes data of dropped tables in `Atomic` databases. `Atomic` databases do not create broken symlink to table data directory if table actually has no data directory. [#16584](https://github.com/ClickHouse/ClickHouse/pull/16584) ([tavplubix](https://github.com/tavplubix)).
* Subqueries in `WITH` section (CTE) can reference previous subqueries in `WITH` section by their name. [#16575](https://github.com/ClickHouse/ClickHouse/pull/16575) ([Amos Bird](https://github.com/amosbird)).
* Add current_database into `system.query_thread_log`. [#16558](https://github.com/ClickHouse/ClickHouse/pull/16558) ([Azat Khuzhin](https://github.com/azat)).
* Allow to fetch parts that are already committed or outdated in the current instance into the detached directory. It's useful when migrating tables from another cluster and having N to 1 shards mapping. It's also consistent with the current fetchPartition implementation. [#16538](https://github.com/ClickHouse/ClickHouse/pull/16538) ([Amos Bird](https://github.com/amosbird)).
* Multiple improvements for `RabbitMQ`: Fixed bug for [#16263](https://github.com/ClickHouse/ClickHouse/issues/16263). Also minimized event loop lifetime. Added more efficient queues setup. [#16426](https://github.com/ClickHouse/ClickHouse/pull/16426) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix debug assertion in `quantileDeterministic` function. In previous version it may also transfer up to two times more data over the network. Although no bug existed. This fixes [#15683](https://github.com/ClickHouse/ClickHouse/issues/15683). [#16410](https://github.com/ClickHouse/ClickHouse/pull/16410) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add `TablesToDropQueueSize` metric. It's equal to number of dropped tables, that are waiting for background data removal. [#16364](https://github.com/ClickHouse/ClickHouse/pull/16364) ([tavplubix](https://github.com/tavplubix)).
* Better diagnostics when client has dropped connection. In previous versions, `Attempt to read after EOF` and `Broken pipe` exceptions were logged in server. In new version, it's information message `Client has dropped the connection, cancel the query.`. [#16329](https://github.com/ClickHouse/ClickHouse/pull/16329) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add total_rows/total_bytes (from system.tables) support for Set/Join table engines. [#16306](https://github.com/ClickHouse/ClickHouse/pull/16306) ([Azat Khuzhin](https://github.com/azat)).
* Now it's possible to specify `PRIMARY KEY` without `ORDER BY` for MergeTree table engines family. Closes [#15591](https://github.com/ClickHouse/ClickHouse/issues/15591). [#16284](https://github.com/ClickHouse/ClickHouse/pull/16284) ([alesapin](https://github.com/alesapin)).
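  A sketch of the now-valid form on a hypothetical table, where the primary key also serves as the sorting key:

  ```sql
  CREATE TABLE user_actions
  (
      user_id UInt64,
      action  String
  )
  ENGINE = MergeTree
  PRIMARY KEY user_id;  -- no explicit ORDER BY; the sorting key matches the primary key
  ```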
* If there is no tmp folder in the system (chroot, misconfiguration, etc.), `clickhouse-local` will create a temporary subfolder in the current directory. [#16280](https://github.com/ClickHouse/ClickHouse/pull/16280) ([filimonov](https://github.com/filimonov)).
* Add support for nested data types (like named tuple) as sub-types. Fixes [#15587](https://github.com/ClickHouse/ClickHouse/issues/15587). [#16262](https://github.com/ClickHouse/ClickHouse/pull/16262) ([Ivan](https://github.com/abyss7)).
* Support for `database_atomic_wait_for_drop_and_detach_synchronously`/`NO DELAY`/`SYNC` for `DROP DATABASE`. [#16127](https://github.com/ClickHouse/ClickHouse/pull/16127) ([Azat Khuzhin](https://github.com/azat)).
* Add `allow_nondeterministic_optimize_skip_unused_shards` (to allow non deterministic like `rand()` or `dictGet()` in sharding key). [#16105](https://github.com/ClickHouse/ClickHouse/pull/16105) ([Azat Khuzhin](https://github.com/azat)).
* Fix `memory_profiler_step`/`max_untracked_memory` for queries via HTTP (test included). Fix the issue that adjusting this value globally in xml config does not help either, since those settings are not applied anyway, only default (4MB) value is [used](https://github.com/ClickHouse/ClickHouse/blob/17731245336d8c84f75e4c0894c5797ed7732190/src/Common/ThreadStatus.h#L104). Fix `query_id` for the most root ThreadStatus of the http query (by initializing QueryScope after reading query_id). [#16101](https://github.com/ClickHouse/ClickHouse/pull/16101) ([Azat Khuzhin](https://github.com/azat)).
* Now it's allowed to execute `ALTER ... ON CLUSTER` queries regardless of the `<internal_replication>` setting in cluster config. [#16075](https://github.com/ClickHouse/ClickHouse/pull/16075) ([alesapin](https://github.com/alesapin)).
* Fix rare issue when `clickhouse-client` may abort on exit due to loading of suggestions. This fixes [#16035](https://github.com/ClickHouse/ClickHouse/issues/16035). [#16047](https://github.com/ClickHouse/ClickHouse/pull/16047) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add support of `cache` layout for `Redis` dictionaries with complex key. [#15985](https://github.com/ClickHouse/ClickHouse/pull/15985) ([Anton Popov](https://github.com/CurtizJ)).
* Fix query hang (endless loop) in case of misconfiguration (`connections_with_failover_max_tries` set to 0). [#15876](https://github.com/ClickHouse/ClickHouse/pull/15876) ([Azat Khuzhin](https://github.com/azat)).
* Change level of some log messages from information to debug, so information messages will not appear for every query. This closes [#5293](https://github.com/ClickHouse/ClickHouse/issues/5293). [#15816](https://github.com/ClickHouse/ClickHouse/pull/15816) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Remove `MemoryTrackingInBackground*` metrics to avoid potentially misleading results. This fixes [#15684](https://github.com/ClickHouse/ClickHouse/issues/15684). [#15813](https://github.com/ClickHouse/ClickHouse/pull/15813) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add reconnects to `zookeeper-dump-tree` tool. [#15711](https://github.com/ClickHouse/ClickHouse/pull/15711) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Allow explicitly specifying the column list in a `CREATE TABLE table AS table_function(...)` query. Fixes [#9249](https://github.com/ClickHouse/ClickHouse/issues/9249). Fixes [#14214](https://github.com/ClickHouse/ClickHouse/issues/14214). [#14295](https://github.com/ClickHouse/ClickHouse/pull/14295) ([tavplubix](https://github.com/tavplubix)).
#### Performance Improvement
* Do not merge parts across partitions in SELECT FINAL. [#15938](https://github.com/ClickHouse/ClickHouse/pull/15938) ([Kruglov Pavel](https://github.com/Avogar)).
* Improve performance of `-OrNull` and `-OrDefault` aggregate functions. [#16661](https://github.com/ClickHouse/ClickHouse/pull/16661) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Improve performance of `quantileMerge`. In previous versions it was obnoxiously slow. This closes [#1463](https://github.com/ClickHouse/ClickHouse/issues/1463). [#16643](https://github.com/ClickHouse/ClickHouse/pull/16643) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Improve performance of logical functions a little. [#16347](https://github.com/ClickHouse/ClickHouse/pull/16347) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Improved performance of merges assignment in MergeTree table engines. Shouldn't be visible for the user. [#16191](https://github.com/ClickHouse/ClickHouse/pull/16191) ([alesapin](https://github.com/alesapin)).
* Speedup hashed/sparse_hashed dictionary loading by preallocating the hash table. [#15454](https://github.com/ClickHouse/ClickHouse/pull/15454) ([Azat Khuzhin](https://github.com/azat)).
* Now trivial count optimization becomes slightly non-trivial. Predicates that contain exact partition expr can be optimized too. This also fixes [#11092](https://github.com/ClickHouse/ClickHouse/issues/11092) which returns wrong count when `max_parallel_replicas > 1`. [#15074](https://github.com/ClickHouse/ClickHouse/pull/15074) ([Amos Bird](https://github.com/amosbird)).
#### Build/Testing/Packaging Improvement
* Add flaky check for stateless tests. It will detect potentially flaky functional tests in advance, before they are merged. [#16238](https://github.com/ClickHouse/ClickHouse/pull/16238) ([alesapin](https://github.com/alesapin)).
* Use proper version for `croaring` instead of amalgamation. [#16285](https://github.com/ClickHouse/ClickHouse/pull/16285) ([sundyli](https://github.com/sundy-li)).
* Improve generation of build files for `ya.make` build system (Arcadia). [#16700](https://github.com/ClickHouse/ClickHouse/pull/16700) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add MySQL BinLog file check tool for `MaterializeMySQL` database engine. `MaterializeMySQL` is an experimental feature. [#16223](https://github.com/ClickHouse/ClickHouse/pull/16223) ([Winter Zhang](https://github.com/zhang2014)).
* Check for the executable bit on non-executable files. People often accidentally commit executable files from Windows. [#15843](https://github.com/ClickHouse/ClickHouse/pull/15843) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Check for `#pragma once` in headers. [#15818](https://github.com/ClickHouse/ClickHouse/pull/15818) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix illegal code style `&vector[idx]` in libhdfs3. This fixes libcxx debug build. See also https://github.com/ClickHouse-Extras/libhdfs3/pull/8 . [#15815](https://github.com/ClickHouse/ClickHouse/pull/15815) ([Amos Bird](https://github.com/amosbird)).
* Fix build of one miscellaneous example tool on Mac OS. Note that we don't build examples on Mac OS in our CI (we build only ClickHouse binary), so there is zero chance it will not break again. This fixes [#15804](https://github.com/ClickHouse/ClickHouse/issues/15804). [#15808](https://github.com/ClickHouse/ClickHouse/pull/15808) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Simplify Sys/V init script. [#14135](https://github.com/ClickHouse/ClickHouse/pull/14135) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Added `boost::program_options` to `db_generator` in order to increase its usability. This closes [#15940](https://github.com/ClickHouse/ClickHouse/issues/15940). [#15973](https://github.com/ClickHouse/ClickHouse/pull/15973) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
## ClickHouse release 20.10
### ClickHouse release v20.10.4.1-stable, 2020-11-13
#### Bug Fix
* Fix rare silent crashes when query profiler is on and ClickHouse is installed on OS with glibc version that has (supposedly) broken asynchronous unwind tables for some functions. This fixes [#15301](https://github.com/ClickHouse/ClickHouse/issues/15301). This fixes [#13098](https://github.com/ClickHouse/ClickHouse/issues/13098). [#16846](https://github.com/ClickHouse/ClickHouse/pull/16846) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix `IN` operator over several columns and tuples with enabled `transform_null_in` setting. Fixes [#15310](https://github.com/ClickHouse/ClickHouse/issues/15310). [#16722](https://github.com/ClickHouse/ClickHouse/pull/16722) ([Anton Popov](https://github.com/CurtizJ)).
* Fix optimize_read_in_order/optimize_aggregation_in_order with max_threads > 0 and an expression in ORDER BY. [#16637](https://github.com/ClickHouse/ClickHouse/pull/16637) ([Azat Khuzhin](https://github.com/azat)).
* Now when parsing AVRO from input the LowCardinality is removed from type. Fixes [#16188](https://github.com/ClickHouse/ClickHouse/issues/16188). [#16521](https://github.com/ClickHouse/ClickHouse/pull/16521) ([Mike](https://github.com/myrrc)).
* Fix rapid growth of metadata when using MySQL Master -> MySQL Slave -> ClickHouse MaterializeMySQL Engine, and `slave_parallel_worker` enabled on MySQL Slave, by properly shrinking GTID sets. This fixes [#15951](https://github.com/ClickHouse/ClickHouse/issues/15951). [#16504](https://github.com/ClickHouse/ClickHouse/pull/16504) ([TCeason](https://github.com/TCeason)).
* Fix DROP TABLE for Distributed (racy with INSERT). [#16409](https://github.com/ClickHouse/ClickHouse/pull/16409) ([Azat Khuzhin](https://github.com/azat)).
* Fix processing of very large entries in replication queue. Very large entries may appear in ALTER queries if table structure is extremely large (near 1 MB). This fixes [#16307](https://github.com/ClickHouse/ClickHouse/issues/16307). [#16332](https://github.com/ClickHouse/ClickHouse/pull/16332) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix a bug with the MySQL database engine. When the MySQL server used as a database engine is down, some queries raised an exception because they tried to get tables from the disabled server even though it was unnecessary. For example, the query `SELECT ... FROM system.parts` should work only with MergeTree tables and not touch the MySQL database at all. [#16032](https://github.com/ClickHouse/ClickHouse/pull/16032) ([Kruglov Pavel](https://github.com/Avogar)).
#### Improvement
* Workaround for using S3 with an nginx server as proxy. Nginx currently does not accept URLs with an empty path like http://domain.com?delete, but vanilla aws-sdk-cpp produces this kind of URL. This commit uses a patched aws-sdk-cpp version, which produces URLs with "/" as the path in these cases, like http://domain.com/?delete. [#16813](https://github.com/ClickHouse/ClickHouse/pull/16813) ([ianton-ru](https://github.com/ianton-ru)).
### ClickHouse release v20.10.3.30, 2020-10-28
#### Backward Incompatible Change
* Make `multiple_joins_rewriter_version` obsolete. Remove first version of joins rewriter. [#15472](https://github.com/ClickHouse/ClickHouse/pull/15472) ([Artem Zuikov](https://github.com/4ertus2)).
* Change default value of `format_regexp_escaping_rule` setting (it's related to `Regexp` format) to `Raw` (meaning: read the whole subpattern as a value) to make the behaviour more like what users expect. [#15426](https://github.com/ClickHouse/ClickHouse/pull/15426) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add support for nested multiline comments `/* comment /* comment */ */` in SQL. This conforms to the SQL standard. [#14655](https://github.com/ClickHouse/ClickHouse/pull/14655) ([alexey-milovidov](https://github.com/alexey-milovidov)).
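  A comment like the following now parses correctly:

  ```sql
  SELECT 1 /* outer /* nested */ comment */;
  ```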
* Added MergeTree settings (`max_replicated_merges_with_ttl_in_queue` and `max_number_of_merges_with_ttl_in_pool`) to control the number of merges with TTL in the background pool and replicated queue. This change breaks compatibility with older versions only if you use delete TTL. Otherwise, replication will stay compatible. You can avoid incompatibility issues if you update all shard replicas at once or execute `SYSTEM STOP TTL MERGES` until you finish the update of all replicas. If you'll get an incompatible entry in the replication queue, first of all, execute `SYSTEM STOP TTL MERGES` and after `ALTER TABLE ... DETACH PARTITION ...` the partition where incompatible TTL merge was assigned. Attach it back on a single replica. [#14490](https://github.com/ClickHouse/ClickHouse/pull/14490) ([alesapin](https://github.com/alesapin)).
#### New Feature
* Background data recompression. Add the ability to specify `TTL ... RECOMPRESS codec_name` for MergeTree table engines family. [#14494](https://github.com/ClickHouse/ClickHouse/pull/14494) ([alesapin](https://github.com/alesapin)).
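  A minimal sketch of the new syntax (table and column names are hypothetical):

  ```sql
  CREATE TABLE hits
  (
      event_date Date,
      url String
  )
  ENGINE = MergeTree
  ORDER BY event_date
  -- Recompress parts older than 30 days with a heavier codec in the background.
  TTL event_date + INTERVAL 30 DAY RECOMPRESS CODEC(ZSTD(10));
  ```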
* Add parallel quorum inserts. This closes [#15601](https://github.com/ClickHouse/ClickHouse/issues/15601). [#15601](https://github.com/ClickHouse/ClickHouse/pull/15601) ([Latysheva Alexandra](https://github.com/alexelex)).
* Settings for additional enforcement of data durability. Useful for non-replicated setups. [#11948](https://github.com/ClickHouse/ClickHouse/pull/11948) ([Anton Popov](https://github.com/CurtizJ)).
* When a duplicate block is written to a replica where it does not exist locally (has not been fetched from replicas), don't ignore it and write it locally to achieve the same effect as if it was successfully replicated. [#11684](https://github.com/ClickHouse/ClickHouse/pull/11684) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Now we support `WITH <identifier> AS (subquery) ... ` to introduce named subqueries in the query context. This closes [#2416](https://github.com/ClickHouse/ClickHouse/issues/2416). This closes [#4967](https://github.com/ClickHouse/ClickHouse/issues/4967). [#14771](https://github.com/ClickHouse/ClickHouse/pull/14771) ([Amos Bird](https://github.com/amosbird)).
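  For example:

  ```sql
  WITH top AS (SELECT number FROM numbers(10) WHERE number > 7)
  SELECT * FROM top;
  ```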
* Introduce `enable_global_with_statement` setting which propagates the first select's `WITH` statements to other select queries at the same level, and makes aliases in `WITH` statements visible to subqueries. [#15451](https://github.com/ClickHouse/ClickHouse/pull/15451) ([Amos Bird](https://github.com/amosbird)).
* Secure inter-cluster query execution (with initial_user as current query user). [#13156](https://github.com/ClickHouse/ClickHouse/pull/13156) ([Azat Khuzhin](https://github.com/azat)). [#15551](https://github.com/ClickHouse/ClickHouse/pull/15551) ([Azat Khuzhin](https://github.com/azat)).
* Add the ability to remove column properties and table TTLs. Introduced queries `ALTER TABLE MODIFY COLUMN col_name REMOVE what_to_remove` and `ALTER TABLE REMOVE TTL`. Both operations are lightweight and executed at the metadata level. [#14742](https://github.com/ClickHouse/ClickHouse/pull/14742) ([alesapin](https://github.com/alesapin)).
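  A sketch of the new queries (table and column names are hypothetical):

  ```sql
  ALTER TABLE hits MODIFY COLUMN browser REMOVE DEFAULT;
  ALTER TABLE hits REMOVE TTL;
  ```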
* Added format `RawBLOB`. It is intended for inputting or outputting a single value without any escaping and delimiters. This closes [#15349](https://github.com/ClickHouse/ClickHouse/issues/15349). [#15364](https://github.com/ClickHouse/ClickHouse/pull/15364) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add the `reinterpretAsUUID` function that allows to convert a big-endian byte string to UUID. [#15480](https://github.com/ClickHouse/ClickHouse/pull/15480) ([Alexander Kuzmenkov](https://github.com/akuzm)).
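  A sketch of the usage, assuming a 16-byte byte string:

  ```sql
  SELECT reinterpretAsUUID(unhex('000102030405060708090a0b0c0d0e0f'));
  ```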
* Implement `force_data_skipping_indices` setting. [#15642](https://github.com/ClickHouse/ClickHouse/pull/15642) ([Azat Khuzhin](https://github.com/azat)).
* Add a setting `output_format_pretty_row_numbers` to number the rows of the result in Pretty formats. This closes [#15350](https://github.com/ClickHouse/ClickHouse/issues/15350). [#15443](https://github.com/ClickHouse/ClickHouse/pull/15443) ([flynn](https://github.com/ucasFL)).
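  For example:

  ```sql
  SET output_format_pretty_row_numbers = 1;
  SELECT number FROM numbers(3) FORMAT Pretty;
  ```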
* Added query obfuscation tool. It allows to share more queries for better testing. This closes [#15268](https://github.com/ClickHouse/ClickHouse/issues/15268). [#15321](https://github.com/ClickHouse/ClickHouse/pull/15321) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add table function `null('structure')`. [#14797](https://github.com/ClickHouse/ClickHouse/pull/14797) ([vxider](https://github.com/Vxider)).
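  A sketch of the usage — the table behaves like the `Null` engine, so selecting from it returns an empty result:

  ```sql
  SELECT * FROM null('x UInt64, s String');
  ```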
* Added `formatReadableQuantity` function. It is useful for reading big numbers by humans. [#14725](https://github.com/ClickHouse/ClickHouse/pull/14725) ([Artem Hnilov](https://github.com/BooBSD)).
* Add format `LineAsString` that accepts a sequence of lines separated by newlines, every line is parsed as a whole as a single String field. [#14703](https://github.com/ClickHouse/ClickHouse/pull/14703) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)), [#13846](https://github.com/ClickHouse/ClickHouse/pull/13846) ([hexiaoting](https://github.com/hexiaoting)).
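  A sketch of reading a plain-text file (the file name and column name are hypothetical):

  ```sql
  SELECT * FROM file('data.txt', 'LineAsString', 'line String');
  ```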
* Add `JSONStrings` format which outputs data in arrays of strings. [#14333](https://github.com/ClickHouse/ClickHouse/pull/14333) ([hcz](https://github.com/hczhcz)).
* Add support for "Raw" column format for `Regexp` format. It allows to simply extract subpatterns as a whole without any escaping rules. [#15363](https://github.com/ClickHouse/ClickHouse/pull/15363) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Allow configurable `NULL` representation for `TSV` output format. It is controlled by the setting `output_format_tsv_null_representation` which is `\N` by default. This closes [#9375](https://github.com/ClickHouse/ClickHouse/issues/9375). Note that the setting only controls output format and `\N` is the only supported `NULL` representation for `TSV` input format. [#14586](https://github.com/ClickHouse/ClickHouse/pull/14586) ([Kruglov Pavel](https://github.com/Avogar)).
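  For example:

  ```sql
  SET output_format_tsv_null_representation = 'NULL';
  SELECT NULL FORMAT TSV; -- NULL is now rendered as NULL instead of \N
  ```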
* Support Decimal data type for `MaterializedMySQL`. `MaterializedMySQL` is an experimental feature. [#14535](https://github.com/ClickHouse/ClickHouse/pull/14535) ([Winter Zhang](https://github.com/zhang2014)).
* Add new feature: `SHOW DATABASES LIKE 'xxx'`. [#14521](https://github.com/ClickHouse/ClickHouse/pull/14521) ([hexiaoting](https://github.com/hexiaoting)).
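  For example:

  ```sql
  SHOW DATABASES LIKE '%test%';
  ```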
* Added a script to import (arbitrary) git repository to ClickHouse as a sample dataset. [#14471](https://github.com/ClickHouse/ClickHouse/pull/14471) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Now insert statements can have asterisk (or variants) with column transformers in the column list. [#14453](https://github.com/ClickHouse/ClickHouse/pull/14453) ([Amos Bird](https://github.com/amosbird)).
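  A sketch, assuming tables `dst` and `src` with an `id` column:

  ```sql
  INSERT INTO dst (* EXCEPT (id)) SELECT * EXCEPT (id) FROM src;
  ```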
* New query complexity limit settings `max_rows_to_read_leaf`, `max_bytes_to_read_leaf` for distributed queries to limit max rows/bytes read on the leaf nodes. Limit is applied for local reads only, *excluding* the final merge stage on the root node. [#14221](https://github.com/ClickHouse/ClickHouse/pull/14221) ([Roman Khavronenko](https://github.com/hagen1778)).
* Allow user to specify settings for `ReplicatedMergeTree*` storage in `<replicated_merge_tree>` section of config file. It works similarly to `<merge_tree>` section. For `ReplicatedMergeTree*` storages settings from `<merge_tree>` and `<replicated_merge_tree>` are applied together, but settings from `<replicated_merge_tree>` has higher priority. Added `system.replicated_merge_tree_settings` table. [#13573](https://github.com/ClickHouse/ClickHouse/pull/13573) ([Amos Bird](https://github.com/amosbird)).
* Add `mapPopulateSeries` function. [#13166](https://github.com/ClickHouse/ClickHouse/pull/13166) ([Ildus Kurbangaliev](https://github.com/ildus)).
* Support MySQL types: `decimal` (as ClickHouse `Decimal`) and `datetime` with sub-second precision (as `DateTime64`). [#11512](https://github.com/ClickHouse/ClickHouse/pull/11512) ([Vasily Nemkov](https://github.com/Enmk)).
* Introduce `event_time_microseconds` field to `system.text_log`, `system.trace_log`, `system.query_log` and `system.query_thread_log` tables. [#14760](https://github.com/ClickHouse/ClickHouse/pull/14760) ([Bharat Nallan](https://github.com/bharatnc)).
* Add `event_time_microseconds` to `system.asynchronous_metric_log` & `system.metric_log` tables. [#14514](https://github.com/ClickHouse/ClickHouse/pull/14514) ([Bharat Nallan](https://github.com/bharatnc)).
* Add `query_start_time_microseconds` field to `system.query_log` & `system.query_thread_log` tables. [#14252](https://github.com/ClickHouse/ClickHouse/pull/14252) ([Bharat Nallan](https://github.com/bharatnc)).
#### Bug Fix
* Fix the case when memory can be overallocated regardless to the limit. This closes [#14560](https://github.com/ClickHouse/ClickHouse/issues/14560). [#16206](https://github.com/ClickHouse/ClickHouse/pull/16206) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix `executable` dictionary source hang. In previous versions, when using some formats (e.g. `JSONEachRow`) data was not fed to a child process before it output at least something. This closes [#1697](https://github.com/ClickHouse/ClickHouse/issues/1697). This closes [#2455](https://github.com/ClickHouse/ClickHouse/issues/2455). [#14525](https://github.com/ClickHouse/ClickHouse/pull/14525) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix double free in case of exception in function `dictGet`. It could have happened if dictionary was loaded with error. [#16429](https://github.com/ClickHouse/ClickHouse/pull/16429) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix GROUP BY with totals/rollup/cube modifiers and min/max functions over group by keys. Fixes [#16393](https://github.com/ClickHouse/ClickHouse/issues/16393). [#16397](https://github.com/ClickHouse/ClickHouse/pull/16397) ([Anton Popov](https://github.com/CurtizJ)).
* Fix async Distributed INSERT with prefer_localhost_replica=0 and internal_replication. [#16358](https://github.com/ClickHouse/ClickHouse/pull/16358) ([Azat Khuzhin](https://github.com/azat)).
* Fix a very wrong code in TwoLevelStringHashTable implementation, which might lead to memory leak. [#16264](https://github.com/ClickHouse/ClickHouse/pull/16264) ([Amos Bird](https://github.com/amosbird)).
* Fix segfault in some cases of wrong aggregation in lambdas. [#16082](https://github.com/ClickHouse/ClickHouse/pull/16082) ([Anton Popov](https://github.com/CurtizJ)).
* Fix `ALTER MODIFY ... ORDER BY` query hang for `ReplicatedVersionedCollapsingMergeTree`. This fixes [#15980](https://github.com/ClickHouse/ClickHouse/issues/15980). [#16011](https://github.com/ClickHouse/ClickHouse/pull/16011) ([alesapin](https://github.com/alesapin)).
* `MaterializedMySQL` (experimental feature): Fix collate name & charset name parser and support `length = 0` for string type. [#16008](https://github.com/ClickHouse/ClickHouse/pull/16008) ([Winter Zhang](https://github.com/zhang2014)).
* Allow to use `direct` layout for dictionaries with complex keys. [#16007](https://github.com/ClickHouse/ClickHouse/pull/16007) ([Anton Popov](https://github.com/CurtizJ)).
* Prevent replica hang for 5-10 mins when replication error happens after a period of inactivity. [#15987](https://github.com/ClickHouse/ClickHouse/pull/15987) ([filimonov](https://github.com/filimonov)).
* Fix rare segfaults when inserting into or selecting from MaterializedView and concurrently dropping target table (for Atomic database engine). [#15984](https://github.com/ClickHouse/ClickHouse/pull/15984) ([tavplubix](https://github.com/tavplubix)).
* Fix ambiguity in parsing of settings profiles: `CREATE USER ... SETTINGS profile readonly` is now considered as using a profile named `readonly`, not a setting named `profile` with the readonly constraint. This fixes [#15628](https://github.com/ClickHouse/ClickHouse/issues/15628). [#15982](https://github.com/ClickHouse/ClickHouse/pull/15982) ([Vitaly Baranov](https://github.com/vitlibar)).
* `MaterializedMySQL` (experimental feature): Fix crash on create database failure. [#15954](https://github.com/ClickHouse/ClickHouse/pull/15954) ([Winter Zhang](https://github.com/zhang2014)).
* Fixed `DROP TABLE IF EXISTS` failure with `Table ... doesn't exist` error when table is concurrently renamed (for Atomic database engine). Fixed rare deadlock when concurrently executing some DDL queries with multiple tables (like `DROP DATABASE` and `RENAME TABLE`). Fixed `DROP/DETACH DATABASE` failure with `Table ... doesn't exist` when concurrently executing `DROP/DETACH TABLE`. [#15934](https://github.com/ClickHouse/ClickHouse/pull/15934) ([tavplubix](https://github.com/tavplubix)).
* Fix incorrect empty result for query from `Distributed` table if query has `WHERE`, `PREWHERE` and `GLOBAL IN`. Fixes [#15792](https://github.com/ClickHouse/ClickHouse/issues/15792). [#15933](https://github.com/ClickHouse/ClickHouse/pull/15933) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixes [#12513](https://github.com/ClickHouse/ClickHouse/issues/12513): different expressions with the same alias when the query is reanalyzed. [#15886](https://github.com/ClickHouse/ClickHouse/pull/15886) ([Winter Zhang](https://github.com/zhang2014)).
* Fix possible very rare deadlocks in RBAC implementation. [#15875](https://github.com/ClickHouse/ClickHouse/pull/15875) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix exception `Block structure mismatch` in `SELECT ... ORDER BY DESC` queries which were executed after `ALTER MODIFY COLUMN` query. Fixes [#15800](https://github.com/ClickHouse/ClickHouse/issues/15800). [#15852](https://github.com/ClickHouse/ClickHouse/pull/15852) ([alesapin](https://github.com/alesapin)).
* `MaterializedMySQL` (experimental feature): Fix `select count()` inaccuracy. [#15767](https://github.com/ClickHouse/ClickHouse/pull/15767) ([tavplubix](https://github.com/tavplubix)).
* Fix some cases of queries, in which only virtual columns are selected. Previously `Not found column _nothing in block` exception may be thrown. Fixes [#12298](https://github.com/ClickHouse/ClickHouse/issues/12298). [#15756](https://github.com/ClickHouse/ClickHouse/pull/15756) ([Anton Popov](https://github.com/CurtizJ)).
* Fix drop of materialized view with inner table in Atomic database (hangs all subsequent DROP TABLE due to hang of the worker thread, due to recursive DROP TABLE for inner table of MV). [#15743](https://github.com/ClickHouse/ClickHouse/pull/15743) ([Azat Khuzhin](https://github.com/azat)).
* Possibility to move a part to another disk/volume if the first attempt failed. [#15723](https://github.com/ClickHouse/ClickHouse/pull/15723) ([Pavel Kovalenko](https://github.com/Jokser)).
* Fix error `Cannot find column` which may happen at insertion into `MATERIALIZED VIEW` in case the query for the `MV` contains `ARRAY JOIN`. [#15717](https://github.com/ClickHouse/ClickHouse/pull/15717) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed too low default value of `max_replicated_logs_to_keep` setting, which might cause replicas to become lost too often. Improve lost replica recovery process by choosing the most up-to-date replica to clone. Also do not remove old parts from lost replica, detach them instead. [#15701](https://github.com/ClickHouse/ClickHouse/pull/15701) ([tavplubix](https://github.com/tavplubix)).
* Fix rare race condition in dictionaries and tables from MySQL. [#15686](https://github.com/ClickHouse/ClickHouse/pull/15686) ([alesapin](https://github.com/alesapin)).
* Fix (benign) race condition in AMQP-CPP. [#15667](https://github.com/ClickHouse/ClickHouse/pull/15667) ([alesapin](https://github.com/alesapin)).
* Fix error `Cannot add simple transform to empty Pipe` which happened while reading from `Buffer` table which has different structure than destination table. It was possible if destination table returned empty result for query. Fixes [#15529](https://github.com/ClickHouse/ClickHouse/issues/15529). [#15662](https://github.com/ClickHouse/ClickHouse/pull/15662) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Proper error handling during insert into MergeTree with S3. MergeTree over S3 is an experimental feature. [#15657](https://github.com/ClickHouse/ClickHouse/pull/15657) ([Pavel Kovalenko](https://github.com/Jokser)).
* Fixed bug with S3 table function: region from URL was not applied to S3 client configuration. [#15646](https://github.com/ClickHouse/ClickHouse/pull/15646) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Fix the order of destruction for resources in `ReadFromStorage` step of query plan. It might cause crashes in rare cases. Possibly connected with [#15610](https://github.com/ClickHouse/ClickHouse/issues/15610). [#15645](https://github.com/ClickHouse/ClickHouse/pull/15645) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Subtract the `ReadonlyReplica` metric when detaching read-only tables. [#15592](https://github.com/ClickHouse/ClickHouse/pull/15592) ([sundyli](https://github.com/sundy-li)).
* Fixed `Element ... is not a constant expression` error when using `JSON*` function result in `VALUES`, `LIMIT` or right side of `IN` operator. [#15589](https://github.com/ClickHouse/ClickHouse/pull/15589) ([tavplubix](https://github.com/tavplubix)).
* Query will finish faster in case of exception. Cancel execution on remote replicas if exception happens. [#15578](https://github.com/ClickHouse/ClickHouse/pull/15578) ([Azat Khuzhin](https://github.com/azat)).
* Prevent the possibility of error message `Could not calculate available disk space (statvfs), errno: 4, strerror: Interrupted system call`. This fixes [#15541](https://github.com/ClickHouse/ClickHouse/issues/15541). [#15557](https://github.com/ClickHouse/ClickHouse/pull/15557) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix `Database <db> doesn't exist.` in queries with IN and Distributed table when there's no database on initiator. [#15538](https://github.com/ClickHouse/ClickHouse/pull/15538) ([Artem Zuikov](https://github.com/4ertus2)).
* Mutation might hang waiting for some non-existent part after `MOVE` or `REPLACE PARTITION` or, in rare cases, after `DETACH` or `DROP PARTITION`. It's fixed. [#15537](https://github.com/ClickHouse/ClickHouse/pull/15537) ([tavplubix](https://github.com/tavplubix)).
* Fix bug when `ILIKE` operator stops being case insensitive if `LIKE` with the same pattern was executed. [#15536](https://github.com/ClickHouse/ClickHouse/pull/15536) ([alesapin](https://github.com/alesapin)).
* Fix `Missing columns` errors when selecting columns which absent in data, but depend on other columns which also absent in data. Fixes [#15530](https://github.com/ClickHouse/ClickHouse/issues/15530). [#15532](https://github.com/ClickHouse/ClickHouse/pull/15532) ([alesapin](https://github.com/alesapin)).
* Throw an error when a single parameter is passed to ReplicatedMergeTree instead of ignoring it. [#15516](https://github.com/ClickHouse/ClickHouse/pull/15516) ([nvartolomei](https://github.com/nvartolomei)).
* Fix bug with event subscription in DDLWorker which rarely may lead to query hangs in `ON CLUSTER`. Introduced in [#13450](https://github.com/ClickHouse/ClickHouse/issues/13450). [#15477](https://github.com/ClickHouse/ClickHouse/pull/15477) ([alesapin](https://github.com/alesapin)).
* Report proper error when the second argument of `boundingRatio` aggregate function has a wrong type. [#15407](https://github.com/ClickHouse/ClickHouse/pull/15407) ([detailyang](https://github.com/detailyang)).
* Fixes [#15365](https://github.com/ClickHouse/ClickHouse/issues/15365): attach a database with MySQL engine throws exception (no query context). [#15384](https://github.com/ClickHouse/ClickHouse/pull/15384) ([Winter Zhang](https://github.com/zhang2014)).
* Fix the case of multiple occurrences of column transformers in a select query. [#15378](https://github.com/ClickHouse/ClickHouse/pull/15378) ([Amos Bird](https://github.com/amosbird)).
* Fixed compression in `S3` storage. [#15376](https://github.com/ClickHouse/ClickHouse/pull/15376) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Fix bug where queries like `SELECT toStartOfDay(today())` fail complaining about empty time_zone argument. [#15319](https://github.com/ClickHouse/ClickHouse/pull/15319) ([Bharat Nallan](https://github.com/bharatnc)).
* Fix race condition during MergeTree table rename and background cleanup. [#15304](https://github.com/ClickHouse/ClickHouse/pull/15304) ([alesapin](https://github.com/alesapin)).
* Fix rare race condition on server startup when system logs are enabled. [#15300](https://github.com/ClickHouse/ClickHouse/pull/15300) ([alesapin](https://github.com/alesapin)).
* Fix hang of queries with a lot of subqueries to the same table of `MySQL` engine. Previously, if there were more than 16 subqueries to the same `MySQL` table in a query, it hung forever. [#15299](https://github.com/ClickHouse/ClickHouse/pull/15299) ([Anton Popov](https://github.com/CurtizJ)).
* Fix MSan report in QueryLog. Uninitialized memory can be used for the field `memory_usage`. [#15258](https://github.com/ClickHouse/ClickHouse/pull/15258) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix 'Unknown identifier' in GROUP BY when query has JOIN over Merge table. [#15242](https://github.com/ClickHouse/ClickHouse/pull/15242) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix instance crash when using `joinGet` with `LowCardinality` types. This fixes https://github.com/ClickHouse/ClickHouse/issues/15214. [#15220](https://github.com/ClickHouse/ClickHouse/pull/15220) ([Amos Bird](https://github.com/amosbird)).
* Fix bug in table engine `Buffer` which doesn't allow to insert data of new structure into `Buffer` after `ALTER` query. Fixes [#15117](https://github.com/ClickHouse/ClickHouse/issues/15117). [#15192](https://github.com/ClickHouse/ClickHouse/pull/15192) ([alesapin](https://github.com/alesapin)).
* Adjust Decimal field size in MySQL column definition packet. [#15152](https://github.com/ClickHouse/ClickHouse/pull/15152) ([maqroll](https://github.com/maqroll)).
* Fixes `Data compressed with different methods` in `join_algorithm='auto'`. Keep LowCardinality as type for left table join key in `join_algorithm='partial_merge'`. [#15088](https://github.com/ClickHouse/ClickHouse/pull/15088) ([Artem Zuikov](https://github.com/4ertus2)).
* Update `jemalloc` to fix `percpu_arena` with affinity mask. [#15035](https://github.com/ClickHouse/ClickHouse/pull/15035) ([Azat Khuzhin](https://github.com/azat)). [#14957](https://github.com/ClickHouse/ClickHouse/pull/14957) ([Azat Khuzhin](https://github.com/azat)).
* We already use padded comparison between String and FixedString (https://github.com/ClickHouse/ClickHouse/blob/master/src/Functions/FunctionsComparison.h#L333). This PR applies the same logic to field comparison which corrects the usage of FixedString as primary keys. This fixes https://github.com/ClickHouse/ClickHouse/issues/14908. [#15033](https://github.com/ClickHouse/ClickHouse/pull/15033) ([Amos Bird](https://github.com/amosbird)).
* If function `bar` was called with specifically crafted arguments, buffer overflow was possible. This closes [#13926](https://github.com/ClickHouse/ClickHouse/issues/13926). [#15028](https://github.com/ClickHouse/ClickHouse/pull/15028) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed `Cannot rename ... errno: 22, strerror: Invalid argument` error on DDL query execution in Atomic database when running clickhouse-server in Docker on Mac OS. [#15024](https://github.com/ClickHouse/ClickHouse/pull/15024) ([tavplubix](https://github.com/tavplubix)).
* Fix crash in RIGHT or FULL JOIN with `join_algorithm='auto'` when the memory limit is exceeded and HashJoin should be switched to MergeJoin. [#15002](https://github.com/ClickHouse/ClickHouse/pull/15002) ([Artem Zuikov](https://github.com/4ertus2)).
* Now settings `number_of_free_entries_in_pool_to_execute_mutation` and `number_of_free_entries_in_pool_to_lower_max_size_of_merge` can be equal to `background_pool_size`. [#14975](https://github.com/ClickHouse/ClickHouse/pull/14975) ([alesapin](https://github.com/alesapin)).
* Fix to make predicate push down work when subquery contains `finalizeAggregation` function. Fixes [#14847](https://github.com/ClickHouse/ClickHouse/issues/14847). [#14937](https://github.com/ClickHouse/ClickHouse/pull/14937) ([filimonov](https://github.com/filimonov)).
* Publish CPU frequencies per logical core in `system.asynchronous_metrics`. This fixes https://github.com/ClickHouse/ClickHouse/issues/14923. [#14924](https://github.com/ClickHouse/ClickHouse/pull/14924) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* `MaterializedMySQL` (experimental feature): Fixed `.metadata.tmp File exists` error. [#14898](https://github.com/ClickHouse/ClickHouse/pull/14898) ([Winter Zhang](https://github.com/zhang2014)).
* Fix the issue when some invocations of `extractAllGroups` function may trigger "Memory limit exceeded" error. This fixes [#13383](https://github.com/ClickHouse/ClickHouse/issues/13383). [#14889](https://github.com/ClickHouse/ClickHouse/pull/14889) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix SIGSEGV for an attempt to INSERT into StorageFile with file descriptor. [#14887](https://github.com/ClickHouse/ClickHouse/pull/14887) ([Azat Khuzhin](https://github.com/azat)).
* Fixed segfault in `cache` dictionary [#14837](https://github.com/ClickHouse/ClickHouse/issues/14837). [#14879](https://github.com/ClickHouse/ClickHouse/pull/14879) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* `MaterializedMySQL` (experimental feature): Fixed bug in parsing MySQL binlog events, which causes `Attempt to read after eof` and `Packet payload is not fully read` in `MaterializeMySQL` database engine. [#14852](https://github.com/ClickHouse/ClickHouse/pull/14852) ([Winter Zhang](https://github.com/zhang2014)).
* Fix rare error in `SELECT` queries when the queried column has a `DEFAULT` expression which depends on another column that also has `DEFAULT` and is neither present in the select query nor exists on disk. Partially fixes [#14531](https://github.com/ClickHouse/ClickHouse/issues/14531). [#14845](https://github.com/ClickHouse/ClickHouse/pull/14845) ([alesapin](https://github.com/alesapin)).
* Fix a problem where the server may get stuck on startup while talking to ZooKeeper, if the configuration files have to be fetched from ZK (using the `from_zk` include option). This fixes [#14814](https://github.com/ClickHouse/ClickHouse/issues/14814). [#14843](https://github.com/ClickHouse/ClickHouse/pull/14843) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Fix wrong monotonicity detection for shrunk `Int -> Int` cast of signed types. It might lead to incorrect query result. This bug is unveiled in [#14513](https://github.com/ClickHouse/ClickHouse/issues/14513). [#14783](https://github.com/ClickHouse/ClickHouse/pull/14783) ([Amos Bird](https://github.com/amosbird)).
* `Replace` column transformer should replace identifiers with cloned ASTs. This fixes https://github.com/ClickHouse/ClickHouse/issues/14695 . [#14734](https://github.com/ClickHouse/ClickHouse/pull/14734) ([Amos Bird](https://github.com/amosbird)).
* Fixed missed default database name in metadata of materialized view when executing `ALTER ... MODIFY QUERY`. [#14664](https://github.com/ClickHouse/ClickHouse/pull/14664) ([tavplubix](https://github.com/tavplubix)).
* Fix bug when `ALTER UPDATE` mutation with `Nullable` column in assignment expression and constant value (like `UPDATE x = 42`) leads to incorrect value in column or segfault. Fixes [#13634](https://github.com/ClickHouse/ClickHouse/issues/13634), [#14045](https://github.com/ClickHouse/ClickHouse/issues/14045). [#14646](https://github.com/ClickHouse/ClickHouse/pull/14646) ([alesapin](https://github.com/alesapin)).
* Fix wrong Decimal multiplication result caused wrong decimal scale of result column. [#14603](https://github.com/ClickHouse/ClickHouse/pull/14603) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix function `has` with `LowCardinality` of `Nullable`. [#14591](https://github.com/ClickHouse/ClickHouse/pull/14591) ([Mike](https://github.com/myrrc)).
* Cleanup data directory after Zookeeper exceptions during CreateQuery for StorageReplicatedMergeTree Engine. [#14563](https://github.com/ClickHouse/ClickHouse/pull/14563) ([Bharat Nallan](https://github.com/bharatnc)).
* Fix rare segfaults in functions with combinator `-Resample`, which could appear in result of overflow with very large parameters. [#14562](https://github.com/ClickHouse/ClickHouse/pull/14562) ([Anton Popov](https://github.com/CurtizJ)).
* Fix a bug when converting `Nullable(String)` to Enum. Introduced by https://github.com/ClickHouse/ClickHouse/pull/12745. This fixes https://github.com/ClickHouse/ClickHouse/issues/14435. [#14530](https://github.com/ClickHouse/ClickHouse/pull/14530) ([Amos Bird](https://github.com/amosbird)).
* Fixed the incorrect sorting order of `Nullable` column. This fixes [#14344](https://github.com/ClickHouse/ClickHouse/issues/14344). [#14495](https://github.com/ClickHouse/ClickHouse/pull/14495) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix the issue that the `currentDatabase()` function could not be used in `ON CLUSTER` DDL queries. [#14211](https://github.com/ClickHouse/ClickHouse/pull/14211) ([Winter Zhang](https://github.com/zhang2014)).
* `MaterializedMySQL` (experimental feature): Fixed `Packet payload is not fully read` error in `MaterializeMySQL` database engine. [#14696](https://github.com/ClickHouse/ClickHouse/pull/14696) ([BohuTANG](https://github.com/BohuTANG)).
|
||||||
|
|
||||||
|
#### Improvement
* Enable `Atomic` database engine by default for newly created databases. [#15003](https://github.com/ClickHouse/ClickHouse/pull/15003) ([tavplubix](https://github.com/tavplubix)).
* Add the ability to specify specialized codecs like `Delta`, `T64`, etc. for columns with subtypes. Implements [#12551](https://github.com/ClickHouse/ClickHouse/issues/12551), fixes [#11397](https://github.com/ClickHouse/ClickHouse/issues/11397), fixes [#4609](https://github.com/ClickHouse/ClickHouse/issues/4609). [#15089](https://github.com/ClickHouse/ClickHouse/pull/15089) ([alesapin](https://github.com/alesapin)).
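
As a sketch of the codec change above (the table and column names here are hypothetical, not from the PR): a specialized codec such as `Delta` can now be attached to a column with a subtype like `Nullable`, where it was previously rejected:

```sql
-- Hypothetical example: specialized codec on a Nullable subtype column.
CREATE TABLE codec_example
(
    ts Nullable(UInt64) CODEC(Delta, ZSTD)
)
ENGINE = MergeTree
ORDER BY tuple();
```
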
* Support dynamic reloading of the ZooKeeper config. [#14678](https://github.com/ClickHouse/ClickHouse/pull/14678) ([sundyli](https://github.com/sundy-li)).
* Now it's allowed to execute `ALTER ... ON CLUSTER` queries regardless of the `<internal_replication>` setting in cluster config. [#16075](https://github.com/ClickHouse/ClickHouse/pull/16075) ([alesapin](https://github.com/alesapin)).
* Now `joinGet` supports multi-key lookup. Continuation of [#12418](https://github.com/ClickHouse/ClickHouse/issues/12418). [#13015](https://github.com/ClickHouse/ClickHouse/pull/13015) ([Amos Bird](https://github.com/amosbird)).
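
A minimal sketch of the multi-key `joinGet` lookup mentioned above (table and column names are illustrative, not from the PR):

```sql
-- Hypothetical Join table with a composite key (k1, k2).
CREATE TABLE join_tbl (k1 UInt32, k2 String, v String)
ENGINE = Join(ANY, LEFT, k1, k2);

INSERT INTO join_tbl VALUES (1, 'a', 'first');

-- joinGet now accepts several key arguments for multi-key lookup.
SELECT joinGet('join_tbl', 'v', toUInt32(1), 'a');
```
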
* Wait for `DROP/DETACH TABLE` to actually finish if `NO DELAY` or `SYNC` is specified for `Atomic` database. [#15448](https://github.com/ClickHouse/ClickHouse/pull/15448) ([tavplubix](https://github.com/tavplubix)).
* Now it's possible to change the type of version column for `VersionedCollapsingMergeTree` with `ALTER` query. [#15442](https://github.com/ClickHouse/ClickHouse/pull/15442) ([alesapin](https://github.com/alesapin)).
* Unfold `{database}`, `{table}` and `{uuid}` macros in `zookeeper_path` on replicated table creation. Do not allow `RENAME TABLE` if it may break `zookeeper_path` after server restart. Fixes [#6917](https://github.com/ClickHouse/ClickHouse/issues/6917). [#15348](https://github.com/ClickHouse/ClickHouse/pull/15348) ([tavplubix](https://github.com/tavplubix)).
* The function `now` allows an argument with a time zone. This closes [#15264](https://github.com/ClickHouse/ClickHouse/issues/15264). [#15285](https://github.com/ClickHouse/ClickHouse/pull/15285) ([flynn](https://github.com/ucasFL)).
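
For example, the timezone-aware form of `now` can be used like this (the time zone name is illustrative):

```sql
SELECT now('Europe/Moscow');  -- current time in the given time zone
```
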
* Do not allow connections to ClickHouse server until all scripts in `/docker-entrypoint-initdb.d/` are executed. [#15244](https://github.com/ClickHouse/ClickHouse/pull/15244) ([Aleksei Kozharin](https://github.com/alekseik1)).
* Added `optimize` setting to `EXPLAIN PLAN` query. If enabled, query plan level optimisations are applied. Enabled by default. [#15201](https://github.com/ClickHouse/ClickHouse/pull/15201) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
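
The new `optimize` setting of `EXPLAIN PLAN` can be toggled like this (a minimal sketch):

```sql
-- Show the raw plan without plan-level optimizations (the default is optimize = 1).
EXPLAIN PLAN optimize = 0
SELECT number FROM system.numbers LIMIT 10;
```
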
* Proper exception message for wrong number of arguments of `CAST`. This closes [#13992](https://github.com/ClickHouse/ClickHouse/issues/13992). [#15029](https://github.com/ClickHouse/ClickHouse/pull/15029) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add option to disable TTL move on data part insert. [#15000](https://github.com/ClickHouse/ClickHouse/pull/15000) ([Pavel Kovalenko](https://github.com/Jokser)).
* Ignore key constraints when doing mutations. Without this pull request, it's not possible to do mutations when `force_index_by_date = 1` or `force_primary_key = 1`. [#14973](https://github.com/ClickHouse/ClickHouse/pull/14973) ([Amos Bird](https://github.com/amosbird)).
* Allow to drop Replicated table if a previous drop attempt failed due to ZooKeeper session expiration. This fixes [#11891](https://github.com/ClickHouse/ClickHouse/issues/11891). [#14926](https://github.com/ClickHouse/ClickHouse/pull/14926) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed excessive settings constraint violation when running SELECT with SETTINGS from a distributed table. [#14876](https://github.com/ClickHouse/ClickHouse/pull/14876) ([Amos Bird](https://github.com/amosbird)).
* Provide a `load_balancing_first_offset` query setting to explicitly state what the first replica is. It's used together with the `FIRST_OR_RANDOM` load balancing strategy, which allows controlling the replicas' workload. [#14867](https://github.com/ClickHouse/ClickHouse/pull/14867) ([Amos Bird](https://github.com/amosbird)).
* Show subqueries for `SET` and `JOIN` in `EXPLAIN` result. [#14856](https://github.com/ClickHouse/ClickHouse/pull/14856) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Allow using multi-volume storage configuration in storage `Distributed`. [#14839](https://github.com/ClickHouse/ClickHouse/pull/14839) ([Pavel Kovalenko](https://github.com/Jokser)).
* Construct `query_start_time` and `query_start_time_microseconds` from the same timespec. [#14831](https://github.com/ClickHouse/ClickHouse/pull/14831) ([Bharat Nallan](https://github.com/bharatnc)).
* Support disabling persistency for `StorageJoin` and `StorageSet`, controlled by the setting `disable_set_and_join_persistency`. This solves issue [#6318](https://github.com/ClickHouse/ClickHouse/issues/6318). [#14776](https://github.com/ClickHouse/ClickHouse/pull/14776) ([vxider](https://github.com/Vxider)).
* Now `COLUMNS` can be used to wrap over a list of columns and apply column transformers afterwards. [#14775](https://github.com/ClickHouse/ClickHouse/pull/14775) ([Amos Bird](https://github.com/amosbird)).
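
A sketch of the `COLUMNS` wrapping mentioned above (the table name and column regexp are hypothetical):

```sql
-- Select every column matching the regexp and apply a transformer to each.
SELECT COLUMNS('^metric_') APPLY(sum) FROM my_table;
```
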
* Add `merge_algorithm` to `system.merges` table to improve merge inspection. [#14705](https://github.com/ClickHouse/ClickHouse/pull/14705) ([Amos Bird](https://github.com/amosbird)).
* Fix potential memory leak caused by ZooKeeper `exists` watch. [#14693](https://github.com/ClickHouse/ClickHouse/pull/14693) ([hustnn](https://github.com/hustnn)).
* Allow parallel execution of distributed DDL. [#14684](https://github.com/ClickHouse/ClickHouse/pull/14684) ([Azat Khuzhin](https://github.com/azat)).
* Add `QueryMemoryLimitExceeded` event counter. This closes [#14589](https://github.com/ClickHouse/ClickHouse/issues/14589). [#14647](https://github.com/ClickHouse/ClickHouse/pull/14647) ([fastio](https://github.com/fastio)).
* Fix some trailing whitespaces in query formatting. [#14595](https://github.com/ClickHouse/ClickHouse/pull/14595) ([Azat Khuzhin](https://github.com/azat)).
* ClickHouse treats partition expr and key expr differently. Partition expr is used to construct a minmax index containing related columns, while primary key expr is stored as an expr. Sometimes users might partition a table at coarser levels, such as `partition by i / 1000`. However, binary operators are not monotonic, and this PR tries to fix that. It might also benefit other use cases. [#14513](https://github.com/ClickHouse/ClickHouse/pull/14513) ([Amos Bird](https://github.com/amosbird)).
* Add an option to skip access checks for `DiskS3`. `s3` disk is an experimental feature. [#14497](https://github.com/ClickHouse/ClickHouse/pull/14497) ([Pavel Kovalenko](https://github.com/Jokser)).
* Speed up server shutdown process if there are ongoing S3 requests. [#14496](https://github.com/ClickHouse/ClickHouse/pull/14496) ([Pavel Kovalenko](https://github.com/Jokser)).
* `SYSTEM RELOAD CONFIG` now throws an exception if it fails to reload and continues using the previous users.xml. The background periodic reloading also continues using the previous users.xml if it fails to reload. [#14492](https://github.com/ClickHouse/ClickHouse/pull/14492) ([Vitaly Baranov](https://github.com/vitlibar)).
* For INSERTs with inline data in VALUES format in the script mode of `clickhouse-client`, support semicolon as the data terminator, in addition to the new line. Closes https://github.com/ClickHouse/ClickHouse/issues/12288. [#13192](https://github.com/ClickHouse/ClickHouse/pull/13192) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Support custom codecs in compact parts. [#12183](https://github.com/ClickHouse/ClickHouse/pull/12183) ([Anton Popov](https://github.com/CurtizJ)).

#### Performance Improvement
* Enable compact parts by default for small parts. This will allow to process frequent inserts slightly more efficiently (4..100 times). [#11913](https://github.com/ClickHouse/ClickHouse/pull/11913) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Improve `quantileTDigest` performance. This fixes [#2668](https://github.com/ClickHouse/ClickHouse/issues/2668). [#15542](https://github.com/ClickHouse/ClickHouse/pull/15542) ([Kruglov Pavel](https://github.com/Avogar)).
* Significantly reduce memory usage in AggregatingInOrderTransform/optimize_aggregation_in_order. [#15543](https://github.com/ClickHouse/ClickHouse/pull/15543) ([Azat Khuzhin](https://github.com/azat)).
* Faster 256-bit multiplication. [#15418](https://github.com/ClickHouse/ClickHouse/pull/15418) ([Artem Zuikov](https://github.com/4ertus2)).
* Improve performance of 256-bit types using (u)int64_t as base type for wide integers. Original wide integers use 8-bit types as base. [#14859](https://github.com/ClickHouse/ClickHouse/pull/14859) ([Artem Zuikov](https://github.com/4ertus2)).
* Explicitly use a temporary disk to store vertical merge temporary data. [#15639](https://github.com/ClickHouse/ClickHouse/pull/15639) ([Grigory Pervakov](https://github.com/GrigoryPervakov)).
* Use one S3 DeleteObjects request instead of multiple DeleteObject requests in a loop. No functionality changes, so it is covered by existing tests like integration/test_log_family_s3. [#15238](https://github.com/ClickHouse/ClickHouse/pull/15238) ([ianton-ru](https://github.com/ianton-ru)).
* Fix `DateTime <op> DateTime` mistakenly choosing the slow generic implementation. This fixes https://github.com/ClickHouse/ClickHouse/issues/15153. [#15178](https://github.com/ClickHouse/ClickHouse/pull/15178) ([Amos Bird](https://github.com/amosbird)).
* Improve performance of GROUP BY key of type `FixedString`. [#15034](https://github.com/ClickHouse/ClickHouse/pull/15034) ([Amos Bird](https://github.com/amosbird)).
* Only `mlock` the code segment when starting clickhouse-server. In previous versions, all mapped regions were locked in memory, including debug info. Debug info is usually split into a separate file, but if it isn't, this led to +2..3 GiB memory usage. [#14929](https://github.com/ClickHouse/ClickHouse/pull/14929) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* The ClickHouse binary became smaller due to link-time optimization.

#### Build/Testing/Packaging Improvement
* Now we use clang-11 for production ClickHouse builds. [#15239](https://github.com/ClickHouse/ClickHouse/pull/15239) ([alesapin](https://github.com/alesapin)).
* Now we use clang-11 to build ClickHouse in CI. [#14846](https://github.com/ClickHouse/ClickHouse/pull/14846) ([alesapin](https://github.com/alesapin)).
* Switch binary builds (Linux, Darwin, AArch64, FreeBSD) to clang-11. [#15622](https://github.com/ClickHouse/ClickHouse/pull/15622) ([Ilya Yatsishin](https://github.com/qoega)).
* Now all test images use `llvm-symbolizer-11`. [#15069](https://github.com/ClickHouse/ClickHouse/pull/15069) ([alesapin](https://github.com/alesapin)).
* Allow to build with llvm-11. [#15366](https://github.com/ClickHouse/ClickHouse/pull/15366) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Switch from `clang-tidy-10` to `clang-tidy-11`. [#14922](https://github.com/ClickHouse/ClickHouse/pull/14922) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Use LLVM's experimental pass manager by default. [#15608](https://github.com/ClickHouse/ClickHouse/pull/15608) ([Danila Kutenin](https://github.com/danlark1)).
* Don't allow any C++ translation unit to take more than 10 minutes to build or to use more than 10 GB of memory. This fixes [#14925](https://github.com/ClickHouse/ClickHouse/issues/14925). [#15060](https://github.com/ClickHouse/ClickHouse/pull/15060) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Make performance test more stable and representative by splitting test runs and profile runs. [#15027](https://github.com/ClickHouse/ClickHouse/pull/15027) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Attempt to make performance test more reliable. It is done by remapping the executable memory of the process on the fly with `madvise` to use transparent huge pages - it can lower the number of iTLB misses which is the main source of instabilities in performance tests. [#14685](https://github.com/ClickHouse/ClickHouse/pull/14685) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Convert to python3. This closes [#14886](https://github.com/ClickHouse/ClickHouse/issues/14886). [#15007](https://github.com/ClickHouse/ClickHouse/pull/15007) ([Azat Khuzhin](https://github.com/azat)).
* Fail early in functional tests if the server failed to respond. This closes [#15262](https://github.com/ClickHouse/ClickHouse/issues/15262). [#15267](https://github.com/ClickHouse/ClickHouse/pull/15267) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Allow to run AArch64 version of clickhouse-server without configs. This facilitates [#15174](https://github.com/ClickHouse/ClickHouse/issues/15174). [#15266](https://github.com/ClickHouse/ClickHouse/pull/15266) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Improvements in CI docker images: get rid of ZooKeeper and use a single script for test configs installation. [#15215](https://github.com/ClickHouse/ClickHouse/pull/15215) ([alesapin](https://github.com/alesapin)).
* Fix CMake options forwarding in fast test script. Fixes error in [#14711](https://github.com/ClickHouse/ClickHouse/issues/14711). [#15155](https://github.com/ClickHouse/ClickHouse/pull/15155) ([alesapin](https://github.com/alesapin)).
* Added a script to perform hardware benchmark in a single command. [#15115](https://github.com/ClickHouse/ClickHouse/pull/15115) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Split the huge test `test_dictionaries_all_layouts_and_sources` into smaller ones. [#15110](https://github.com/ClickHouse/ClickHouse/pull/15110) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Maybe fix MSan report in base64 (on servers with AVX-512). This fixes [#14006](https://github.com/ClickHouse/ClickHouse/issues/14006). [#15030](https://github.com/ClickHouse/ClickHouse/pull/15030) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Reformat and cleanup code in all integration test *.py files. [#14864](https://github.com/ClickHouse/ClickHouse/pull/14864) ([Bharat Nallan](https://github.com/bharatnc)).
* Fix MaterializeMySQL empty transaction unstable test case found in CI. [#14854](https://github.com/ClickHouse/ClickHouse/pull/14854) ([Winter Zhang](https://github.com/zhang2014)).
* Attempt to speed up build a little. [#14808](https://github.com/ClickHouse/ClickHouse/pull/14808) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Speed up build a little by removing unused headers. [#14714](https://github.com/ClickHouse/ClickHouse/pull/14714) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix build failure on OS X. [#14761](https://github.com/ClickHouse/ClickHouse/pull/14761) ([Winter Zhang](https://github.com/zhang2014)).
* Enable ccache by default in CMake if it is found in the OS. [#14575](https://github.com/ClickHouse/ClickHouse/pull/14575) ([alesapin](https://github.com/alesapin)).
* Control CI builds configuration from the ClickHouse repository. [#14547](https://github.com/ClickHouse/ClickHouse/pull/14547) ([alesapin](https://github.com/alesapin)).
* In CMake files: moved some options' description parts to comments above; replaced 0 → `OFF` and 1 → `ON` in `option` default values; added some descriptions and links to docs for the options; removed the `FUZZER` option (there is another option, `ENABLE_FUZZING`, which enables the same functionality); removed the `ENABLE_GTEST_LIBRARY` option as there is `ENABLE_TESTS`. See the full description in PR: [#14711](https://github.com/ClickHouse/ClickHouse/pull/14711) ([Mike](https://github.com/myrrc)).
* Make binary a bit smaller (~50 Mb for debug version). [#14555](https://github.com/ClickHouse/ClickHouse/pull/14555) ([Artem Zuikov](https://github.com/4ertus2)).
* Use std::filesystem::path in ConfigProcessor for concatenating file paths. [#14558](https://github.com/ClickHouse/ClickHouse/pull/14558) ([Bharat Nallan](https://github.com/bharatnc)).
* Fix debug assertion in `bitShiftLeft()` when called with negative big integer. [#14697](https://github.com/ClickHouse/ClickHouse/pull/14697) ([Artem Zuikov](https://github.com/4ertus2)).

## ClickHouse release 20.9

### ClickHouse release v20.9.5.5-stable, 2020-11-13

#### Bug Fix

* Fix rare silent crashes when query profiler is on and ClickHouse is installed on OS with a glibc version that has (supposedly) broken asynchronous unwind tables for some functions. This fixes [#15301](https://github.com/ClickHouse/ClickHouse/issues/15301). This fixes [#13098](https://github.com/ClickHouse/ClickHouse/issues/13098). [#16846](https://github.com/ClickHouse/ClickHouse/pull/16846) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Now when parsing AVRO from input the LowCardinality is removed from the type. Fixes [#16188](https://github.com/ClickHouse/ClickHouse/issues/16188). [#16521](https://github.com/ClickHouse/ClickHouse/pull/16521) ([Mike](https://github.com/myrrc)).
* Fix rapid growth of metadata when using MySQL Master -> MySQL Slave -> ClickHouse MaterializeMySQL Engine, and `slave_parallel_worker` enabled on MySQL Slave, by properly shrinking GTID sets. This fixes [#15951](https://github.com/ClickHouse/ClickHouse/issues/15951). [#16504](https://github.com/ClickHouse/ClickHouse/pull/16504) ([TCeason](https://github.com/TCeason)).
* Fix DROP TABLE for Distributed (racy with INSERT). [#16409](https://github.com/ClickHouse/ClickHouse/pull/16409) ([Azat Khuzhin](https://github.com/azat)).
* Fix processing of very large entries in replication queue. Very large entries may appear in ALTER queries if table structure is extremely large (near 1 MB). This fixes [#16307](https://github.com/ClickHouse/ClickHouse/issues/16307). [#16332](https://github.com/ClickHouse/ClickHouse/pull/16332) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed the inconsistent behaviour when a part of the returned data could be dropped because the set for its filtering wasn't created. [#16308](https://github.com/ClickHouse/ClickHouse/pull/16308) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix a bug with the MySQL database engine: when the MySQL server used as a database engine is down, some queries raised an exception because they tried to get tables from the disabled server, although this was unnecessary. For example, the query `SELECT ... FROM system.parts` should work only with MergeTree tables and not touch the MySQL database at all. [#16032](https://github.com/ClickHouse/ClickHouse/pull/16032) ([Kruglov Pavel](https://github.com/Avogar)).

### ClickHouse release v20.9.4.76-stable (2020-10-29)

#### Bug Fix

* Fix double free in case of exception in function `dictGet`. It could have happened if the dictionary was loaded with an error. [#16429](https://github.com/ClickHouse/ClickHouse/pull/16429) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix group by with totals/rollup/cube modifiers and min/max functions over group by keys. Fixes [#16393](https://github.com/ClickHouse/ClickHouse/issues/16393). [#16397](https://github.com/ClickHouse/ClickHouse/pull/16397) ([Anton Popov](https://github.com/CurtizJ)).
* Fix async Distributed INSERT with prefer_localhost_replica=0 and internal_replication. [#16358](https://github.com/ClickHouse/ClickHouse/pull/16358) ([Azat Khuzhin](https://github.com/azat)).
* Fix very wrong code in the TwoLevelStringHashTable implementation, which might lead to a memory leak. I'm surprised how this bug could lurk for so long. [#16264](https://github.com/ClickHouse/ClickHouse/pull/16264) ([Amos Bird](https://github.com/amosbird)).
* Fix the case when memory can be overallocated regardless of the limit. This closes [#14560](https://github.com/ClickHouse/ClickHouse/issues/14560). [#16206](https://github.com/ClickHouse/ClickHouse/pull/16206) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix `ALTER MODIFY ... ORDER BY` query hang for `ReplicatedVersionedCollapsingMergeTree`. This fixes [#15980](https://github.com/ClickHouse/ClickHouse/issues/15980). [#16011](https://github.com/ClickHouse/ClickHouse/pull/16011) ([alesapin](https://github.com/alesapin)).
* Fix collate name & charset name parser and support `length = 0` for string type. [#16008](https://github.com/ClickHouse/ClickHouse/pull/16008) ([Winter Zhang](https://github.com/zhang2014)).
* Allow to use direct layout for dictionaries with complex keys. [#16007](https://github.com/ClickHouse/ClickHouse/pull/16007) ([Anton Popov](https://github.com/CurtizJ)).
* Prevent replica hang for 5-10 mins when replication error happens after a period of inactivity. [#15987](https://github.com/ClickHouse/ClickHouse/pull/15987) ([filimonov](https://github.com/filimonov)).
* Fix rare segfaults when inserting into or selecting from MaterializedView and concurrently dropping target table (for Atomic database engine). [#15984](https://github.com/ClickHouse/ClickHouse/pull/15984) ([tavplubix](https://github.com/tavplubix)).
* Fix ambiguity in parsing of settings profiles: `CREATE USER ... SETTINGS profile readonly` is now considered as using a profile named `readonly`, not a setting named `profile` with the readonly constraint. This fixes https://github.com/ClickHouse/ClickHouse/issues/15628. [#15982](https://github.com/ClickHouse/ClickHouse/pull/15982) ([Vitaly Baranov](https://github.com/vitlibar)).
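
The parsing fix above can be illustrated with a short sketch (the user names are hypothetical); the unambiguous forms spell out each intent:

```sql
-- Now parsed as: assign the settings profile named 'readonly' to the user.
CREATE USER u1 SETTINGS PROFILE 'readonly';

-- A setting with a read-only constraint is still written explicitly:
CREATE USER u2 SETTINGS max_memory_usage = 10000000000 READONLY;
```
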
* Fix a crash when database creation fails. [#15954](https://github.com/ClickHouse/ClickHouse/pull/15954) ([Winter Zhang](https://github.com/zhang2014)).
* Fixed `DROP TABLE IF EXISTS` failure with `Table ... doesn't exist` error when table is concurrently renamed (for Atomic database engine). Fixed rare deadlock when concurrently executing some DDL queries with multiple tables (like `DROP DATABASE` and `RENAME TABLE`). Fixed `DROP/DETACH DATABASE` failure with `Table ... doesn't exist` when concurrently executing `DROP/DETACH TABLE`. [#15934](https://github.com/ClickHouse/ClickHouse/pull/15934) ([tavplubix](https://github.com/tavplubix)).
* Fix incorrect empty result for query from `Distributed` table if query has `WHERE`, `PREWHERE` and `GLOBAL IN`. Fixes [#15792](https://github.com/ClickHouse/ClickHouse/issues/15792). [#15933](https://github.com/ClickHouse/ClickHouse/pull/15933) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix possible deadlocks in RBAC. [#15875](https://github.com/ClickHouse/ClickHouse/pull/15875) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix exception `Block structure mismatch` in `SELECT ... ORDER BY DESC` queries which were executed after `ALTER MODIFY COLUMN` query. Fixes [#15800](https://github.com/ClickHouse/ClickHouse/issues/15800). [#15852](https://github.com/ClickHouse/ClickHouse/pull/15852) ([alesapin](https://github.com/alesapin)).
* Fix `select count()` inaccuracy for MaterializeMySQL. [#15767](https://github.com/ClickHouse/ClickHouse/pull/15767) ([tavplubix](https://github.com/tavplubix)).
* Fix some cases of queries, in which only virtual columns are selected. Previously a `Not found column _nothing in block` exception may be thrown. Fixes [#12298](https://github.com/ClickHouse/ClickHouse/issues/12298). [#15756](https://github.com/ClickHouse/ClickHouse/pull/15756) ([Anton Popov](https://github.com/CurtizJ)).
* Fixed too low default value of `max_replicated_logs_to_keep` setting, which might cause replicas to become lost too often. Improve lost replica recovery process by choosing the most up-to-date replica to clone. Also do not remove old parts from lost replica, detach them instead. [#15701](https://github.com/ClickHouse/ClickHouse/pull/15701) ([tavplubix](https://github.com/tavplubix)).
* Fix error `Cannot add simple transform to empty Pipe` which happened while reading from a `Buffer` table which has a different structure than the destination table. It was possible if the destination table returned an empty result for the query. Fixes [#15529](https://github.com/ClickHouse/ClickHouse/issues/15529). [#15662](https://github.com/ClickHouse/ClickHouse/pull/15662) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed bug with globs in S3 table function: the region from the URL was not applied to the S3 client configuration. [#15646](https://github.com/ClickHouse/ClickHouse/pull/15646) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Decrement the `ReadonlyReplica` metric when detaching read-only tables. This fixes https://github.com/ClickHouse/ClickHouse/issues/15598. [#15592](https://github.com/ClickHouse/ClickHouse/pull/15592) ([sundyli](https://github.com/sundy-li)).
* Throw an error when a single parameter is passed to ReplicatedMergeTree instead of ignoring it. [#15516](https://github.com/ClickHouse/ClickHouse/pull/15516) ([nvartolomei](https://github.com/nvartolomei)).

#### Improvement
* Now it's allowed to execute `ALTER ... ON CLUSTER` queries regardless of the `<internal_replication>` setting in cluster config. [#16075](https://github.com/ClickHouse/ClickHouse/pull/16075) ([alesapin](https://github.com/alesapin)).
* Unfold `{database}`, `{table}` and `{uuid}` macros in `ReplicatedMergeTree` arguments on table creation. [#16160](https://github.com/ClickHouse/ClickHouse/pull/16160) ([tavplubix](https://github.com/tavplubix)).
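
A minimal sketch of the macro unfolding described above (the table name and ZooKeeper path are illustrative, not from the PR):

```sql
-- {database}, {table} and {replica} are expanded when the table is created.
CREATE TABLE db.events (d Date, x UInt32)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{database}/{table}', '{replica}')
PARTITION BY toYYYYMM(d)
ORDER BY x;
```
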
### ClickHouse release v20.9.3.45-stable (2020-10-09)

#### Bug Fix

* Fix error `Cannot find column` which may happen at insertion into `MATERIALIZED VIEW` if the query for the `MV` contains `ARRAY JOIN`. [#15717](https://github.com/ClickHouse/ClickHouse/pull/15717) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix race condition in AMQP-CPP. [#15667](https://github.com/ClickHouse/ClickHouse/pull/15667) ([alesapin](https://github.com/alesapin)).
* Fix the order of destruction for resources in `ReadFromStorage` step of query plan. It might cause crashes in rare cases. Possibly connected with [#15610](https://github.com/ClickHouse/ClickHouse/issues/15610). [#15645](https://github.com/ClickHouse/ClickHouse/pull/15645) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed `Element ... is not a constant expression` error when using `JSON*` function result in `VALUES`, `LIMIT` or right side of `IN` operator. [#15589](https://github.com/ClickHouse/ClickHouse/pull/15589) ([tavplubix](https://github.com/tavplubix)).
* Prevent the possibility of error message `Could not calculate available disk space (statvfs), errno: 4, strerror: Interrupted system call`. This fixes [#15541](https://github.com/ClickHouse/ClickHouse/issues/15541). [#15557](https://github.com/ClickHouse/ClickHouse/pull/15557) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Significantly reduce memory usage in AggregatingInOrderTransform/optimize_aggregation_in_order. [#15543](https://github.com/ClickHouse/ClickHouse/pull/15543) ([Azat Khuzhin](https://github.com/azat)).
* Mutation might hang waiting for some non-existent part after `MOVE` or `REPLACE PARTITION` or, in rare cases, after `DETACH` or `DROP PARTITION`. It's fixed. [#15537](https://github.com/ClickHouse/ClickHouse/pull/15537) ([tavplubix](https://github.com/tavplubix)).
* Fix bug when `ILIKE` operator stops being case insensitive if `LIKE` with the same pattern was executed. [#15536](https://github.com/ClickHouse/ClickHouse/pull/15536) ([alesapin](https://github.com/alesapin)).
* Fix `Missing columns` errors when selecting columns which are absent in data, but depend on other columns which are also absent in data. Fixes [#15530](https://github.com/ClickHouse/ClickHouse/issues/15530). [#15532](https://github.com/ClickHouse/ClickHouse/pull/15532) ([alesapin](https://github.com/alesapin)).
* Fix bug with event subscription in DDLWorker which rarely may lead to query hangs in `ON CLUSTER`. Introduced in [#13450](https://github.com/ClickHouse/ClickHouse/issues/13450). [#15477](https://github.com/ClickHouse/ClickHouse/pull/15477) ([alesapin](https://github.com/alesapin)).
* Report proper error when the second argument of `boundingRatio` aggregate function has a wrong type. [#15407](https://github.com/ClickHouse/ClickHouse/pull/15407) ([detailyang](https://github.com/detailyang)).
* Fix bug where queries like `SELECT toStartOfDay(today())` fail complaining about an empty `time_zone` argument. [#15319](https://github.com/ClickHouse/ClickHouse/pull/15319) ([Bharat Nallan](https://github.com/bharatnc)).
* Fix race condition during MergeTree table rename and background cleanup. [#15304](https://github.com/ClickHouse/ClickHouse/pull/15304) ([alesapin](https://github.com/alesapin)).
* Fix rare race condition on server startup when system.logs are enabled. [#15300](https://github.com/ClickHouse/ClickHouse/pull/15300) ([alesapin](https://github.com/alesapin)).
* Fix MSan report in QueryLog. Uninitialized memory can be used for the field `memory_usage`. [#15258](https://github.com/ClickHouse/ClickHouse/pull/15258) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix instance crash when using joinGet with LowCardinality types. This fixes https://github.com/ClickHouse/ClickHouse/issues/15214. [#15220](https://github.com/ClickHouse/ClickHouse/pull/15220) ([Amos Bird](https://github.com/amosbird)).
* Fix bug in table engine `Buffer` which doesn't allow to insert data of new structure into `Buffer` after `ALTER` query. Fixes [#15117](https://github.com/ClickHouse/ClickHouse/issues/15117). [#15192](https://github.com/ClickHouse/ClickHouse/pull/15192) ([alesapin](https://github.com/alesapin)).
* Adjust decimals field size in mysql column definition packet. [#15152](https://github.com/ClickHouse/ClickHouse/pull/15152) ([maqroll](https://github.com/maqroll)).
* Fixed `Cannot rename ... errno: 22, strerror: Invalid argument` error on DDL query execution in Atomic database when running clickhouse-server in docker on Mac OS. [#15024](https://github.com/ClickHouse/ClickHouse/pull/15024) ([tavplubix](https://github.com/tavplubix)).
* Fix to make predicate push down work when subquery contains finalizeAggregation function. Fixes [#14847](https://github.com/ClickHouse/ClickHouse/issues/14847). [#14937](https://github.com/ClickHouse/ClickHouse/pull/14937) ([filimonov](https://github.com/filimonov)).
* Fix a problem where the server may get stuck on startup while talking to ZooKeeper, if the configuration files have to be fetched from ZK (using the `from_zk` include option). This fixes [#14814](https://github.com/ClickHouse/ClickHouse/issues/14814). [#14843](https://github.com/ClickHouse/ClickHouse/pull/14843) ([Alexander Kuzmenkov](https://github.com/akuzm)).

#### Improvement
* Now it's possible to change the type of version column for `VersionedCollapsingMergeTree` with `ALTER` query. [#15442](https://github.com/ClickHouse/ClickHouse/pull/15442) ([alesapin](https://github.com/alesapin)).
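A minimal sketch of what this enables; the table and column names here are illustrative, not taken from the changelog:

```sql
-- Hypothetical table using VersionedCollapsingMergeTree(sign, version).
CREATE TABLE example_vcmt
(
    key UInt64,
    sign Int8,
    version UInt8
)
ENGINE = VersionedCollapsingMergeTree(sign, version)
ORDER BY key;

-- Widening the type of the version column is now permitted via ALTER.
ALTER TABLE example_vcmt MODIFY COLUMN version UInt32;
```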
### ClickHouse release v20.9.2.20, 2020-09-22

#### New Feature

## ClickHouse release 20.8
### ClickHouse release v20.8.6.6-lts, 2020-11-13

#### Bug Fix

* Fix rare silent crashes when query profiler is on and ClickHouse is installed on OS with glibc version that has (supposedly) broken asynchronous unwind tables for some functions. This fixes [#15301](https://github.com/ClickHouse/ClickHouse/issues/15301). This fixes [#13098](https://github.com/ClickHouse/ClickHouse/issues/13098). [#16846](https://github.com/ClickHouse/ClickHouse/pull/16846) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Now, when parsing AVRO input, `LowCardinality` is removed from the type. Fixes [#16188](https://github.com/ClickHouse/ClickHouse/issues/16188). [#16521](https://github.com/ClickHouse/ClickHouse/pull/16521) ([Mike](https://github.com/myrrc)).
* Fix rapid growth of metadata when using MySQL Master -> MySQL Slave -> ClickHouse MaterializeMySQL Engine, and `slave_parallel_worker` enabled on MySQL Slave, by properly shrinking GTID sets. This fixes [#15951](https://github.com/ClickHouse/ClickHouse/issues/15951). [#16504](https://github.com/ClickHouse/ClickHouse/pull/16504) ([TCeason](https://github.com/TCeason)).
* Fix DROP TABLE for Distributed (racy with INSERT). [#16409](https://github.com/ClickHouse/ClickHouse/pull/16409) ([Azat Khuzhin](https://github.com/azat)).
* Fix processing of very large entries in replication queue. Very large entries may appear in ALTER queries if table structure is extremely large (near 1 MB). This fixes [#16307](https://github.com/ClickHouse/ClickHouse/issues/16307). [#16332](https://github.com/ClickHouse/ClickHouse/pull/16332) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed inconsistent behaviour where part of the returned data could be dropped because the set for its filtering wasn't created. [#16308](https://github.com/ClickHouse/ClickHouse/pull/16308) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix bug with the MySQL database engine. When the MySQL server used as a database engine was down, some queries raised an exception because they tried to fetch tables from the disabled server, even though that was unnecessary. For example, `SELECT ... FROM system.parts` should work only with MergeTree tables and not touch the MySQL database at all. [#16032](https://github.com/ClickHouse/ClickHouse/pull/16032) ([Kruglov Pavel](https://github.com/Avogar)).

### ClickHouse release v20.8.5.45-lts, 2020-10-29

#### Bug Fix

* Fix double free in case of exception in function `dictGet`. It could have happened if dictionary was loaded with error. [#16429](https://github.com/ClickHouse/ClickHouse/pull/16429) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix `GROUP BY` with totals/rollup/cube modifiers and min/max functions over group by keys. Fixes [#16393](https://github.com/ClickHouse/ClickHouse/issues/16393). [#16397](https://github.com/ClickHouse/ClickHouse/pull/16397) ([Anton Popov](https://github.com/CurtizJ)).
* Fix async Distributed INSERT with `prefer_localhost_replica=0` and `internal_replication`. [#16358](https://github.com/ClickHouse/ClickHouse/pull/16358) ([Azat Khuzhin](https://github.com/azat)).
* Fix a possible memory leak during `GROUP BY` with string keys, caused by an error in `TwoLevelStringHashTable` implementation. [#16264](https://github.com/ClickHouse/ClickHouse/pull/16264) ([Amos Bird](https://github.com/amosbird)).
* Fix the case when memory could be overallocated regardless of the limit. This closes [#14560](https://github.com/ClickHouse/ClickHouse/issues/14560). [#16206](https://github.com/ClickHouse/ClickHouse/pull/16206) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix `ALTER MODIFY ... ORDER BY` query hang for `ReplicatedVersionedCollapsingMergeTree`. This fixes [#15980](https://github.com/ClickHouse/ClickHouse/issues/15980). [#16011](https://github.com/ClickHouse/ClickHouse/pull/16011) ([alesapin](https://github.com/alesapin)).
* Fix collate name & charset name parser and support `length = 0` for string type. [#16008](https://github.com/ClickHouse/ClickHouse/pull/16008) ([Winter Zhang](https://github.com/zhang2014)).
* Allow using direct layout for dictionaries with complex keys. [#16007](https://github.com/ClickHouse/ClickHouse/pull/16007) ([Anton Popov](https://github.com/CurtizJ)).
* Prevent replica hang for 5-10 minutes when a replication error happens after a period of inactivity. [#15987](https://github.com/ClickHouse/ClickHouse/pull/15987) ([filimonov](https://github.com/filimonov)).
* Fix rare segfaults when inserting into or selecting from MaterializedView and concurrently dropping target table (for Atomic database engine). [#15984](https://github.com/ClickHouse/ClickHouse/pull/15984) ([tavplubix](https://github.com/tavplubix)).
* Fix ambiguity in parsing of settings profiles: `CREATE USER ... SETTINGS profile readonly` is now considered as using a profile named `readonly`, not a setting named `profile` with the readonly constraint. This fixes [#15628](https://github.com/ClickHouse/ClickHouse/issues/15628). [#15982](https://github.com/ClickHouse/ClickHouse/pull/15982) ([Vitaly Baranov](https://github.com/vitlibar)).
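A hedged sketch of the two now-distinct spellings (user names are illustrative):

```sql
-- Now parsed as "apply the settings profile named readonly":
CREATE USER test_user SETTINGS PROFILE 'readonly';

-- To constrain an individual setting as read-only, spell it out explicitly:
CREATE USER test_user2 SETTINGS max_threads = 4 READONLY;
```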
* Fix a crash when database creation fails. [#15954](https://github.com/ClickHouse/ClickHouse/pull/15954) ([Winter Zhang](https://github.com/zhang2014)).
* Fixed `DROP TABLE IF EXISTS` failure with `Table ... doesn't exist` error when table is concurrently renamed (for Atomic database engine). Fixed rare deadlock when concurrently executing some DDL queries with multiple tables (like `DROP DATABASE` and `RENAME TABLE`). Fixed `DROP/DETACH DATABASE` failure with `Table ... doesn't exist` when concurrently executing `DROP/DETACH TABLE`. [#15934](https://github.com/ClickHouse/ClickHouse/pull/15934) ([tavplubix](https://github.com/tavplubix)).
* Fix incorrect empty result for query from `Distributed` table if query has `WHERE`, `PREWHERE` and `GLOBAL IN`. Fixes [#15792](https://github.com/ClickHouse/ClickHouse/issues/15792). [#15933](https://github.com/ClickHouse/ClickHouse/pull/15933) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix possible deadlocks in RBAC. [#15875](https://github.com/ClickHouse/ClickHouse/pull/15875) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix exception `Block structure mismatch` in `SELECT ... ORDER BY DESC` queries which were executed after `ALTER MODIFY COLUMN` query. Fixes [#15800](https://github.com/ClickHouse/ClickHouse/issues/15800). [#15852](https://github.com/ClickHouse/ClickHouse/pull/15852) ([alesapin](https://github.com/alesapin)).
* Fix some cases of queries in which only virtual columns are selected. Previously a `Not found column _nothing in block` exception could be thrown. Fixes [#12298](https://github.com/ClickHouse/ClickHouse/issues/12298). [#15756](https://github.com/ClickHouse/ClickHouse/pull/15756) ([Anton Popov](https://github.com/CurtizJ)).
* Fix error `Cannot find column` which could happen at insertion into a `MATERIALIZED VIEW` if the query for the `MV` contains `ARRAY JOIN`. [#15717](https://github.com/ClickHouse/ClickHouse/pull/15717) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed too low default value of `max_replicated_logs_to_keep` setting, which might cause replicas to become lost too often. Improve lost replica recovery process by choosing the most up-to-date replica to clone. Also do not remove old parts from lost replica, detach them instead. [#15701](https://github.com/ClickHouse/ClickHouse/pull/15701) ([tavplubix](https://github.com/tavplubix)).
* Fix error `Cannot add simple transform to empty Pipe` which happened while reading from a `Buffer` table with a different structure than the destination table. It was possible if the destination table returned an empty result for the query. Fixes [#15529](https://github.com/ClickHouse/ClickHouse/issues/15529). [#15662](https://github.com/ClickHouse/ClickHouse/pull/15662) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed bug with globs in S3 table function: the region from the URL was not applied to the S3 client configuration. [#15646](https://github.com/ClickHouse/ClickHouse/pull/15646) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Decrement the `ReadonlyReplica` metric when detaching read-only tables. This fixes [#15598](https://github.com/ClickHouse/ClickHouse/issues/15598). [#15592](https://github.com/ClickHouse/ClickHouse/pull/15592) ([sundyli](https://github.com/sundy-li)).
* Throw an error when a single parameter is passed to ReplicatedMergeTree instead of ignoring it. [#15516](https://github.com/ClickHouse/ClickHouse/pull/15516) ([nvartolomei](https://github.com/nvartolomei)).

#### Improvement

* Now it's allowed to execute `ALTER ... ON CLUSTER` queries regardless of the `<internal_replication>` setting in cluster config. [#16075](https://github.com/ClickHouse/ClickHouse/pull/16075) ([alesapin](https://github.com/alesapin)).
* Unfold `{database}`, `{table}` and `{uuid}` macros in `ReplicatedMergeTree` arguments on table creation. [#16159](https://github.com/ClickHouse/ClickHouse/pull/16159) ([tavplubix](https://github.com/tavplubix)).
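A hedged sketch of the substitution; the ZooKeeper path layout is the conventional one, not dictated by the changelog:

```sql
-- {database}, {table} and {uuid} are substituted once, at table creation time,
-- so the stored metadata contains the concrete path. {replica} remains a
-- config-defined macro as before.
CREATE TABLE db.events
(
    d Date,
    x UInt32
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/{database}/{table}', '{replica}')
ORDER BY x;
```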

### ClickHouse release v20.8.4.11-lts, 2020-10-09

#### Bug Fix

* Fix the order of destruction for resources in `ReadFromStorage` step of query plan. It might cause crashes in rare cases. Possibly connected with [#15610](https://github.com/ClickHouse/ClickHouse/issues/15610). [#15645](https://github.com/ClickHouse/ClickHouse/pull/15645) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed `Element ... is not a constant expression` error when using `JSON*` function result in `VALUES`, `LIMIT` or right side of `IN` operator. [#15589](https://github.com/ClickHouse/ClickHouse/pull/15589) ([tavplubix](https://github.com/tavplubix)).
* Prevent the possibility of error message `Could not calculate available disk space (statvfs), errno: 4, strerror: Interrupted system call`. This fixes [#15541](https://github.com/ClickHouse/ClickHouse/issues/15541). [#15557](https://github.com/ClickHouse/ClickHouse/pull/15557) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Significantly reduce memory usage in AggregatingInOrderTransform/optimize_aggregation_in_order. [#15543](https://github.com/ClickHouse/ClickHouse/pull/15543) ([Azat Khuzhin](https://github.com/azat)).
* Mutation might hang waiting for some non-existent part after `MOVE` or `REPLACE PARTITION` or, in rare cases, after `DETACH` or `DROP PARTITION`. It's fixed. [#15537](https://github.com/ClickHouse/ClickHouse/pull/15537) ([tavplubix](https://github.com/tavplubix)).
* Fix bug when `ILIKE` operator stops being case insensitive if `LIKE` with the same pattern was executed. [#15536](https://github.com/ClickHouse/ClickHouse/pull/15536) ([alesapin](https://github.com/alesapin)).
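A minimal sketch of the expected behaviour (the pattern is illustrative):

```sql
-- Both queries should behave the same regardless of execution order;
-- previously a preceding LIKE with the same pattern could make ILIKE
-- behave case-sensitively due to a shared compiled-pattern cache.
SELECT 'Hello' LIKE 'hello%';   -- 0 (case-sensitive)
SELECT 'Hello' ILIKE 'hello%';  -- 1 (case-insensitive)
```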
* Fix `Missing columns` errors when selecting columns which are absent in data but depend on other columns which are also absent in data. Fixes [#15530](https://github.com/ClickHouse/ClickHouse/issues/15530). [#15532](https://github.com/ClickHouse/ClickHouse/pull/15532) ([alesapin](https://github.com/alesapin)).
* Fix bug with event subscription in DDLWorker which rarely may lead to query hangs in `ON CLUSTER`. Introduced in [#13450](https://github.com/ClickHouse/ClickHouse/issues/13450). [#15477](https://github.com/ClickHouse/ClickHouse/pull/15477) ([alesapin](https://github.com/alesapin)).
* Report proper error when the second argument of `boundingRatio` aggregate function has a wrong type. [#15407](https://github.com/ClickHouse/ClickHouse/pull/15407) ([detailyang](https://github.com/detailyang)).
* Fix race condition during MergeTree table rename and background cleanup. [#15304](https://github.com/ClickHouse/ClickHouse/pull/15304) ([alesapin](https://github.com/alesapin)).
* Fix rare race condition on server startup when system.logs are enabled. [#15300](https://github.com/ClickHouse/ClickHouse/pull/15300) ([alesapin](https://github.com/alesapin)).
* Fix MSan report in QueryLog. Uninitialized memory could be used for the field `memory_usage`. [#15258](https://github.com/ClickHouse/ClickHouse/pull/15258) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix instance crash when using `joinGet` with `LowCardinality` types. This fixes [#15214](https://github.com/ClickHouse/ClickHouse/issues/15214). [#15220](https://github.com/ClickHouse/ClickHouse/pull/15220) ([Amos Bird](https://github.com/amosbird)).
* Fix bug in table engine `Buffer` which did not allow inserting data of a new structure into `Buffer` after an `ALTER` query. Fixes [#15117](https://github.com/ClickHouse/ClickHouse/issues/15117). [#15192](https://github.com/ClickHouse/ClickHouse/pull/15192) ([alesapin](https://github.com/alesapin)).
* Adjust the decimals field size in the MySQL column definition packet. [#15152](https://github.com/ClickHouse/ClickHouse/pull/15152) ([maqroll](https://github.com/maqroll)).
* We already use padded comparison between String and FixedString (https://github.com/ClickHouse/ClickHouse/blob/master/src/Functions/FunctionsComparison.h#L333). This PR applies the same logic to field comparison, which corrects the usage of FixedString as primary keys. This fixes [#14908](https://github.com/ClickHouse/ClickHouse/issues/14908). [#15033](https://github.com/ClickHouse/ClickHouse/pull/15033) ([Amos Bird](https://github.com/amosbird)).
* If function `bar` was called with specially crafted arguments, buffer overflow was possible. This closes [#13926](https://github.com/ClickHouse/ClickHouse/issues/13926). [#15028](https://github.com/ClickHouse/ClickHouse/pull/15028) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed `Cannot rename ... errno: 22, strerror: Invalid argument` error on DDL query execution in Atomic database when running clickhouse-server in Docker on macOS. [#15024](https://github.com/ClickHouse/ClickHouse/pull/15024) ([tavplubix](https://github.com/tavplubix)).
* Now settings `number_of_free_entries_in_pool_to_execute_mutation` and `number_of_free_entries_in_pool_to_lower_max_size_of_merge` can be equal to `background_pool_size`. [#14975](https://github.com/ClickHouse/ClickHouse/pull/14975) ([alesapin](https://github.com/alesapin)).
* Make predicate push down work when the subquery contains the `finalizeAggregation` function. Fixes [#14847](https://github.com/ClickHouse/ClickHouse/issues/14847). [#14937](https://github.com/ClickHouse/ClickHouse/pull/14937) ([filimonov](https://github.com/filimonov)).
* Publish CPU frequencies per logical core in `system.asynchronous_metrics`. This fixes [#14923](https://github.com/ClickHouse/ClickHouse/issues/14923). [#14924](https://github.com/ClickHouse/ClickHouse/pull/14924) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Fixed `.metadata.tmp File exists` error when using `MaterializeMySQL` database engine. [#14898](https://github.com/ClickHouse/ClickHouse/pull/14898) ([Winter Zhang](https://github.com/zhang2014)).
* Fix a problem where the server may get stuck on startup while talking to ZooKeeper, if the configuration files have to be fetched from ZK (using the `from_zk` include option). This fixes [#14814](https://github.com/ClickHouse/ClickHouse/issues/14814). [#14843](https://github.com/ClickHouse/ClickHouse/pull/14843) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Fix wrong monotonicity detection for shrunk `Int -> Int` cast of signed types. It might lead to incorrect query result. This bug was revealed in [#14513](https://github.com/ClickHouse/ClickHouse/issues/14513). [#14783](https://github.com/ClickHouse/ClickHouse/pull/14783) ([Amos Bird](https://github.com/amosbird)).
* Fixed the incorrect sorting order of `Nullable` column. This fixes [#14344](https://github.com/ClickHouse/ClickHouse/issues/14344). [#14495](https://github.com/ClickHouse/ClickHouse/pull/14495) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).

#### Improvement

* Now it's possible to change the type of version column for `VersionedCollapsingMergeTree` with `ALTER` query. [#15442](https://github.com/ClickHouse/ClickHouse/pull/15442) ([alesapin](https://github.com/alesapin)).

### ClickHouse release v20.8.3.18-stable, 2020-09-18

#### Bug Fix

* Fix the issue when some invocations of `extractAllGroups` function may trigger "Memory limit exceeded" error. This fixes [#13383](https://github.com/ClickHouse/ClickHouse/issues/13383). [#14889](https://github.com/ClickHouse/ClickHouse/pull/14889) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix SIGSEGV for an attempt to INSERT into StorageFile(fd). [#14887](https://github.com/ClickHouse/ClickHouse/pull/14887) ([Azat Khuzhin](https://github.com/azat)).
* Fix rare error in `SELECT` queries when the queried column has a `DEFAULT` expression which depends on another column which also has `DEFAULT` and is not present in the select query and does not exist on disk. Partially fixes [#14531](https://github.com/ClickHouse/ClickHouse/issues/14531). [#14845](https://github.com/ClickHouse/ClickHouse/pull/14845) ([alesapin](https://github.com/alesapin)).
* Fixed missed default database name in metadata of materialized view when executing `ALTER ... MODIFY QUERY`. [#14664](https://github.com/ClickHouse/ClickHouse/pull/14664) ([tavplubix](https://github.com/tavplubix)).
* Fix bug when `ALTER UPDATE` mutation with Nullable column in assignment expression and constant value (like `UPDATE x = 42`) leads to incorrect value in column or segfault. Fixes [#13634](https://github.com/ClickHouse/ClickHouse/issues/13634), [#14045](https://github.com/ClickHouse/ClickHouse/issues/14045). [#14646](https://github.com/ClickHouse/ClickHouse/pull/14646) ([alesapin](https://github.com/alesapin)).
* Fix wrong Decimal multiplication result that caused a wrong decimal scale of the result column. [#14603](https://github.com/ClickHouse/ClickHouse/pull/14603) ([Artem Zuikov](https://github.com/4ertus2)).
* Added a check, since neither calling `lc->isNullable()` nor calling `ls->getDictionaryPtr()->isNullable()` would return the correct result. [#14591](https://github.com/ClickHouse/ClickHouse/pull/14591) ([myrrc](https://github.com/myrrc)).
* Clean up the data directory after ZooKeeper exceptions during CreateQuery for StorageReplicatedMergeTree Engine. [#14563](https://github.com/ClickHouse/ClickHouse/pull/14563) ([Bharat Nallan](https://github.com/bharatnc)).
* Fix rare segfaults in functions with the combinator `-Resample`, which could appear as a result of overflow with very large parameters. [#14562](https://github.com/ClickHouse/ClickHouse/pull/14562) ([Anton Popov](https://github.com/CurtizJ)).

#### Improvement

* Speed up server shutdown process if there are ongoing S3 requests. [#14858](https://github.com/ClickHouse/ClickHouse/pull/14858) ([Pavel Kovalenko](https://github.com/Jokser)).
* Allow using multi-volume storage configuration in storage Distributed. [#14839](https://github.com/ClickHouse/ClickHouse/pull/14839) ([Pavel Kovalenko](https://github.com/Jokser)).
* Speed up server shutdown process if there are ongoing S3 requests. [#14496](https://github.com/ClickHouse/ClickHouse/pull/14496) ([Pavel Kovalenko](https://github.com/Jokser)).
* Support custom codecs in compact parts. [#12183](https://github.com/ClickHouse/ClickHouse/pull/12183) ([Anton Popov](https://github.com/CurtizJ)).

### ClickHouse release v20.8.2.3-stable, 2020-09-08

#### Backward Incompatible Change

#### New Feature

* Add the ability to specify `Default` compression codec for columns that correspond to settings specified in `config.xml`. Implements: [#9074](https://github.com/ClickHouse/ClickHouse/issues/9074). [#14049](https://github.com/ClickHouse/ClickHouse/pull/14049) ([alesapin](https://github.com/alesapin)).
* Support Kerberos authentication in Kafka, using `krb5` and `cyrus-sasl` libraries. [#12771](https://github.com/ClickHouse/ClickHouse/pull/12771) ([Ilya Golshtein](https://github.com/ilejn)).
* Add function `normalizeQuery` that replaces literals, sequences of literals and complex aliases with placeholders. Add function `normalizedQueryHash` that returns identical 64bit hash values for similar queries. It helps to analyze query log. This closes [#11271](https://github.com/ClickHouse/ClickHouse/issues/11271). [#13816](https://github.com/ClickHouse/ClickHouse/pull/13816) ([alexey-milovidov](https://github.com/alexey-milovidov)).
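A hedged sketch of the two functions; the exact placeholder text produced by normalization may differ slightly:

```sql
-- Literals (and runs of literals) are replaced with placeholders:
SELECT normalizeQuery('SELECT 1, 2, 3, ''abc''');

-- Queries that differ only in literal values normalize to the same text,
-- so their hashes are equal:
SELECT normalizedQueryHash('SELECT 1') = normalizedQueryHash('SELECT 2');
```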

#### Experimental Feature

* ClickHouse can work as MySQL replica - it is implemented by `MaterializeMySQL` database engine. Implements [#4006](https://github.com/ClickHouse/ClickHouse/issues/4006). [#10851](https://github.com/ClickHouse/ClickHouse/pull/10851) ([Winter Zhang](https://github.com/zhang2014)).
* Add types `Int128`, `Int256`, `UInt256` and related functions for them. Extend Decimals with Decimal256 (precision up to 76 digits). New types are under the setting `allow_experimental_bigint_types`. They work extremely slowly and badly, and the implementation is incomplete. Please don't use this feature. [#13097](https://github.com/ClickHouse/ClickHouse/pull/13097) ([Artem Zuikov](https://github.com/4ertus2)).

#### Build/Testing/Packaging Improvement

## ClickHouse release v20.3

### ClickHouse release v20.3.21.2-lts, 2020-11-02

#### Bug Fix

* Fix dictGet in sharding_key (and similar places, i.e. when the function context is stored permanently). [#16205](https://github.com/ClickHouse/ClickHouse/pull/16205) ([Azat Khuzhin](https://github.com/azat)).
* Fix incorrect empty result for query from `Distributed` table if query has `WHERE`, `PREWHERE` and `GLOBAL IN`. Fixes [#15792](https://github.com/ClickHouse/ClickHouse/issues/15792). [#15933](https://github.com/ClickHouse/ClickHouse/pull/15933) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix missing or excessive headers in `TSV/CSVWithNames` formats. This fixes [#12504](https://github.com/ClickHouse/ClickHouse/issues/12504). [#13343](https://github.com/ClickHouse/ClickHouse/pull/13343) ([Azat Khuzhin](https://github.com/azat)).

### ClickHouse release v20.3.20.6-lts, 2020-10-09

#### Bug Fix

* Mutation might hang waiting for some non-existent part after `MOVE` or `REPLACE PARTITION` or, in rare cases, after `DETACH` or `DROP PARTITION`. It's fixed. [#15724](https://github.com/ClickHouse/ClickHouse/pull/15724), [#15537](https://github.com/ClickHouse/ClickHouse/pull/15537) ([tavplubix](https://github.com/tavplubix)).
* Fix hang of queries with a lot of subqueries to the same table of `MySQL` engine. Previously, if there were more than 16 subqueries to the same `MySQL` table in a query, it hung forever. [#15299](https://github.com/ClickHouse/ClickHouse/pull/15299) ([Anton Popov](https://github.com/CurtizJ)).
* Fix 'Unknown identifier' in GROUP BY when query has JOIN over Merge table. [#15242](https://github.com/ClickHouse/ClickHouse/pull/15242) ([Artem Zuikov](https://github.com/4ertus2)).
* Make predicate push down work when the subquery contains the `finalizeAggregation` function. Fixes [#14847](https://github.com/ClickHouse/ClickHouse/issues/14847). [#14937](https://github.com/ClickHouse/ClickHouse/pull/14937) ([filimonov](https://github.com/filimonov)).
* Concurrent `ALTER ... REPLACE/MOVE PARTITION ...` queries might cause deadlock. It's fixed. [#13626](https://github.com/ClickHouse/ClickHouse/pull/13626) ([tavplubix](https://github.com/tavplubix)).

### ClickHouse release v20.3.19.4-lts, 2020-09-18

#### Bug Fix

* Fix rare error in `SELECT` queries when the queried column has `DEFAULT` expression which depends on the other column which also has `DEFAULT` and not present in select query and not exists on disk. Partially fixes [#14531](https://github.com/ClickHouse/ClickHouse/issues/14531). [#14845](https://github.com/ClickHouse/ClickHouse/pull/14845) ([alesapin](https://github.com/alesapin)).
* Fix bug when `ALTER UPDATE` mutation with Nullable column in assignment expression and constant value (like `UPDATE x = 42`) leads to incorrect value in column or segfault. Fixes [#13634](https://github.com/ClickHouse/ClickHouse/issues/13634), [#14045](https://github.com/ClickHouse/ClickHouse/issues/14045). [#14646](https://github.com/ClickHouse/ClickHouse/pull/14646) ([alesapin](https://github.com/alesapin)).
* Fix wrong Decimal multiplication result that caused a wrong decimal scale of the result column. [#14603](https://github.com/ClickHouse/ClickHouse/pull/14603) ([Artem Zuikov](https://github.com/4ertus2)).

#### Improvement

* Support custom codecs in compact parts. [#12183](https://github.com/ClickHouse/ClickHouse/pull/12183) ([Anton Popov](https://github.com/CurtizJ)).

### ClickHouse release v20.3.18.10-lts, 2020-09-08

#### Bug Fix

* Stop query execution if exception happened in `PipelineExecutor` itself. This could prevent rare possible query hung. Continuation of [#14334](https://github.com/ClickHouse/ClickHouse/issues/14334). [#14402](https://github.com/ClickHouse/ClickHouse/pull/14402) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed behaviour where the cache dictionary sometimes returned the default value instead of the value present in the source. [#13624](https://github.com/ClickHouse/ClickHouse/pull/13624) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix parsing row policies from users.xml when names of databases or tables contain dots. This fixes [#5779](https://github.com/ClickHouse/ClickHouse/issues/5779), [#12527](https://github.com/ClickHouse/ClickHouse/issues/12527). [#13199](https://github.com/ClickHouse/ClickHouse/pull/13199) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix CAST(Nullable(String), Enum()). [#12745](https://github.com/ClickHouse/ClickHouse/pull/12745) ([Azat Khuzhin](https://github.com/azat)).
* Fixed data race in `text_log`. It does not correspond to any real bug. [#9726](https://github.com/ClickHouse/ClickHouse/pull/9726) ([alexey-milovidov](https://github.com/alexey-milovidov)).

#### Improvement

* Fix wrong error for long queries. It was possible to get syntax error other than `Max query size exceeded` for correct query. [#13928](https://github.com/ClickHouse/ClickHouse/pull/13928) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Return NULL/zero when value is not parsed completely in parseDateTimeBestEffortOrNull/Zero functions. This fixes [#7876](https://github.com/ClickHouse/ClickHouse/issues/7876). [#11653](https://github.com/ClickHouse/ClickHouse/pull/11653) ([alexey-milovidov](https://github.com/alexey-milovidov)).
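A minimal sketch of the new behaviour (the input strings are illustrative):

```sql
-- A fully parsed value succeeds as before:
SELECT parseDateTimeBestEffortOrNull('2020-09-08 10:00:00');

-- A value with trailing garbage now yields NULL instead of a
-- partially parsed result:
SELECT parseDateTimeBestEffortOrNull('2020-09-08 10:00:00 garbage');
```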

#### Performance Improvement

* Slightly optimize very short queries with LowCardinality. [#14129](https://github.com/ClickHouse/ClickHouse/pull/14129) ([Anton Popov](https://github.com/CurtizJ)).

#### Build/Testing/Packaging Improvement

* Fix UBSan report (adding zero to nullptr) in HashTable that appeared after migration to clang-10. [#10638](https://github.com/ClickHouse/ClickHouse/pull/10638) ([alexey-milovidov](https://github.com/alexey-milovidov)).

### ClickHouse release v20.3.17.173-lts, 2020-08-15

#### Bug Fix

* Fix crash in JOIN with StorageMerge and `set enable_optimize_predicate_expression=1`. [#13679](https://github.com/ClickHouse/ClickHouse/pull/13679) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix invalid return type for comparison of tuples with `NULL` elements. Fixes [#12461](https://github.com/ClickHouse/ClickHouse/issues/12461). [#13420](https://github.com/ClickHouse/ClickHouse/pull/13420) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix queries with constant columns and `ORDER BY` prefix of primary key. [#13396](https://github.com/ClickHouse/ClickHouse/pull/13396) ([Anton Popov](https://github.com/CurtizJ)).
* Return passed number for numbers with MSB set in roundUpToPowerOfTwoOrZero(). [#13234](https://github.com/ClickHouse/ClickHouse/pull/13234) ([Azat Khuzhin](https://github.com/azat)).

### ClickHouse release v20.3.16.165-lts, 2020-08-10

#### Bug Fix

@@ -445,6 +445,7 @@ include (cmake/find/brotli.cmake)
 include (cmake/find/protobuf.cmake)
 include (cmake/find/grpc.cmake)
 include (cmake/find/pdqsort.cmake)
+include (cmake/find/miniselect.cmake)
 include (cmake/find/hdfs3.cmake) # uses protobuf
 include (cmake/find/poco.cmake)
 include (cmake/find/curl.cmake)
@@ -455,6 +456,8 @@ include (cmake/find/simdjson.cmake)
 include (cmake/find/rapidjson.cmake)
 include (cmake/find/fastops.cmake)
 include (cmake/find/odbc.cmake)
+include (cmake/find/rocksdb.cmake)
+
 
 if(NOT USE_INTERNAL_PARQUET_LIBRARY)
     set (ENABLE_ORC OFF CACHE INTERNAL "")
@@ -1,6 +1,7 @@
 #pragma once
 
 #include <cassert>
+#include <stdexcept> // for std::logic_error
 #include <string>
 #include <vector>
 #include <functional>
@@ -3,7 +3,6 @@
 /// Macros for convenient usage of Poco logger.
 
 #include <fmt/format.h>
-#include <fmt/ostream.h>
 #include <Poco/Logger.h>
 #include <Poco/Message.h>
 #include <Common/CurrentThread.h>
base/common/sort.h (new file)
@@ -0,0 +1,37 @@
+#pragma once
+
+#if !defined(ARCADIA_BUILD)
+#    include <miniselect/floyd_rivest_select.h> // Y_IGNORE
+#else
+#    include <algorithm>
+#endif
+
+template <class RandomIt>
+void nth_element(RandomIt first, RandomIt nth, RandomIt last)
+{
+#if !defined(ARCADIA_BUILD)
+    ::miniselect::floyd_rivest_select(first, nth, last);
+#else
+    ::std::nth_element(first, nth, last);
+#endif
+}
+
+template <class RandomIt>
+void partial_sort(RandomIt first, RandomIt middle, RandomIt last)
+{
+#if !defined(ARCADIA_BUILD)
+    ::miniselect::floyd_rivest_partial_sort(first, middle, last);
+#else
+    ::std::partial_sort(first, middle, last);
+#endif
+}
+
+template <class RandomIt, class Compare>
+void partial_sort(RandomIt first, RandomIt middle, RandomIt last, Compare compare)
+{
+#if !defined(ARCADIA_BUILD)
+    ::miniselect::floyd_rivest_partial_sort(first, middle, last, compare);
+#else
+    ::std::partial_sort(first, middle, last, compare);
+#endif
+}
@@ -5,6 +5,9 @@
 /// (See at http://www.boost.org/LICENSE_1_0.txt)
 
 #include "throwError.h"
+#include <cfloat>
+#include <limits>
+#include <cassert>
 
 namespace wide
 {
@@ -192,7 +195,7 @@ struct integer<Bits, Signed>::_impl
     }
 
     template <typename T>
-    constexpr static auto to_Integral(T f) noexcept
+    __attribute__((no_sanitize("undefined"))) constexpr static auto to_Integral(T f) noexcept
     {
         if constexpr (std::is_same_v<T, __int128>)
             return f;
@@ -225,25 +228,54 @@ struct integer<Bits, Signed>::_impl
             self.items[i] = 0;
     }
 
-    constexpr static void wide_integer_from_bultin(integer<Bits, Signed> & self, double rhs) noexcept
-    {
-        if ((rhs > 0 && rhs < std::numeric_limits<uint64_t>::max()) || (rhs < 0 && rhs > std::numeric_limits<int64_t>::min()))
-        {
-            self = to_Integral(rhs);
-            return;
-        }
-
-        long double r = rhs;
-        if (r < 0)
-            r = -r;
-
-        size_t count = r / std::numeric_limits<uint64_t>::max();
-        self = count;
-        self *= std::numeric_limits<uint64_t>::max();
-        long double to_diff = count;
-        to_diff *= std::numeric_limits<uint64_t>::max();
-
-        self += to_Integral(r - to_diff);
-
+    /**
+     * N.B. t is constructed from double, so max(t) = max(double) ~ 2^310
+     * the recursive call happens when t / 2^64 > 2^64, so there won't be more than 5 of them.
+     *
+     * t = a1 * max_int + b1,          a1 > max_int, b1 < max_int
+     * a1 = a2 * max_int + b2,         a2 > max_int, b2 < max_int
+     * a_(n - 1) = a_n * max_int + b2, a_n <= max_int <- base case.
+     */
+    template <class T>
+    constexpr static void set_multiplier(integer<Bits, Signed> & self, T t) noexcept {
+        constexpr uint64_t max_int = std::numeric_limits<uint64_t>::max();
+        const T alpha = t / max_int;
+
+        if (alpha <= max_int)
+            self = static_cast<uint64_t>(alpha);
+        else // max(double) / 2^64 will surely contain less than 52 precision bits, so speed up computations.
+            set_multiplier<double>(self, alpha);
+
+        self *= max_int;
+        self += static_cast<uint64_t>(t - alpha * max_int); // += b_i
+    }
+
+    constexpr static void wide_integer_from_bultin(integer<Bits, Signed>& self, double rhs) noexcept {
+        constexpr int64_t max_int = std::numeric_limits<int64_t>::max();
+        constexpr int64_t min_int = std::numeric_limits<int64_t>::min();
+
+        /// There are values in int64 that have more than 53 significant bits (in terms of double
+        /// representation). Such values, being promoted to double, are rounded up or down. If they are rounded up,
+        /// the result may not fit in 64 bits.
+        /// The example of such a number is 9.22337e+18.
+        /// As to_Integral does a static_cast to int64_t, it may result in UB.
+        /// The necessary check here is that long double has enough significant (mantissa) bits to store the
+        /// int64_t max value precisely.
+        static_assert(LDBL_MANT_DIG >= 64,
+                      "On your system long double has less than 64 precision bits,"
+                      "which may result in UB when initializing double from int64_t");
+
+        if ((rhs > 0 && rhs < max_int) || (rhs < 0 && rhs > min_int))
+        {
+            self = static_cast<int64_t>(rhs);
+            return;
+        }
+
+        const long double rhs_long_double = (static_cast<long double>(rhs) < 0)
+            ? -static_cast<long double>(rhs)
+            : rhs;
+
+        set_multiplier(self, rhs_long_double);
+
         if (rhs < 0)
             self = -self;
@ -1,4 +1,6 @@
|
|||||||
# This file is generated automatically, do not edit. See 'ya.make.in' and use 'utils/generate-ya-make' to regenerate it.
|
# This file is generated automatically, do not edit. See 'ya.make.in' and use 'utils/generate-ya-make' to regenerate it.
|
||||||
|
OWNER(g:clickhouse)
|
||||||
|
|
||||||
LIBRARY()
|
LIBRARY()
|
||||||
|
|
||||||
ADDINCL(
|
ADDINCL(
|
||||||
@ -35,25 +37,25 @@ PEERDIR(
|
|||||||
CFLAGS(-g0)
|
CFLAGS(-g0)
|
||||||
|
|
||||||
SRCS(
|
SRCS(
|
||||||
argsToConfig.cpp
|
|
||||||
coverage.cpp
|
|
||||||
DateLUT.cpp
|
DateLUT.cpp
|
||||||
DateLUTImpl.cpp
|
DateLUTImpl.cpp
|
||||||
|
JSON.cpp
|
||||||
|
LineReader.cpp
|
||||||
|
StringRef.cpp
|
||||||
|
argsToConfig.cpp
|
||||||
|
coverage.cpp
|
||||||
demangle.cpp
|
demangle.cpp
|
||||||
errnoToString.cpp
|
errnoToString.cpp
|
||||||
getFQDNOrHostName.cpp
|
getFQDNOrHostName.cpp
|
||||||
getMemoryAmount.cpp
|
getMemoryAmount.cpp
|
||||||
getResource.cpp
|
getResource.cpp
|
||||||
getThreadId.cpp
|
getThreadId.cpp
|
||||||
JSON.cpp
|
|
||||||
LineReader.cpp
|
|
||||||
mremap.cpp
|
mremap.cpp
|
||||||
phdr_cache.cpp
|
phdr_cache.cpp
|
||||||
preciseExp10.cpp
|
preciseExp10.cpp
|
||||||
setTerminalEcho.cpp
|
setTerminalEcho.cpp
|
||||||
shift10.cpp
|
shift10.cpp
|
||||||
sleep.cpp
|
sleep.cpp
|
||||||
StringRef.cpp
|
|
||||||
terminalColors.cpp
|
terminalColors.cpp
|
||||||
|
|
||||||
)
|
)
|
||||||
|
@@ -1,3 +1,5 @@
+OWNER(g:clickhouse)
+
 LIBRARY()
 
 ADDINCL(
@@ -1,3 +1,5 @@
+OWNER(g:clickhouse)
+
 LIBRARY()
 
 NO_COMPILER_WARNINGS()
base/glibc-compatibility/musl/accept4.c (new file)
@@ -0,0 +1,19 @@
+#define _GNU_SOURCE
+#include <sys/socket.h>
+#include <errno.h>
+#include <fcntl.h>
+#include "syscall.h"
+
+int accept4(int fd, struct sockaddr *restrict addr, socklen_t *restrict len, int flg)
+{
+    if (!flg) return accept(fd, addr, len);
+    int ret = socketcall_cp(accept4, fd, addr, len, flg, 0, 0);
+    if (ret >= 0 || (errno != ENOSYS && errno != EINVAL)) return ret;
+    ret = accept(fd, addr, len);
+    if (ret < 0) return ret;
+    if (flg & SOCK_CLOEXEC)
+        __syscall(SYS_fcntl, ret, F_SETFD, FD_CLOEXEC);
+    if (flg & SOCK_NONBLOCK)
+        __syscall(SYS_fcntl, ret, F_SETFL, O_NONBLOCK);
+    return ret;
+}
base/glibc-compatibility/musl/epoll.c (new file)
@@ -0,0 +1,37 @@
+#include <sys/epoll.h>
+#include <signal.h>
+#include <errno.h>
+#include "syscall.h"
+
+int epoll_create(int size)
+{
+    return epoll_create1(0);
+}
+
+int epoll_create1(int flags)
+{
+    int r = __syscall(SYS_epoll_create1, flags);
+#ifdef SYS_epoll_create
+    if (r == -ENOSYS && !flags) r = __syscall(SYS_epoll_create, 1);
+#endif
+    return __syscall_ret(r);
+}
+
+int epoll_ctl(int fd, int op, int fd2, struct epoll_event *ev)
+{
+    return syscall(SYS_epoll_ctl, fd, op, fd2, ev);
+}
+
+int epoll_pwait(int fd, struct epoll_event *ev, int cnt, int to, const sigset_t *sigs)
+{
+    int r = __syscall(SYS_epoll_pwait, fd, ev, cnt, to, sigs, _NSIG/8);
+#ifdef SYS_epoll_wait
+    if (r == -ENOSYS && !sigs) r = __syscall(SYS_epoll_wait, fd, ev, cnt, to);
+#endif
+    return __syscall_ret(r);
+}
+
+int epoll_wait(int fd, struct epoll_event *ev, int cnt, int to)
+{
+    return epoll_pwait(fd, ev, cnt, to, 0);
+}
base/glibc-compatibility/musl/eventfd.c (new file)
@@ -0,0 +1,23 @@
+#include <sys/eventfd.h>
+#include <unistd.h>
+#include <errno.h>
+#include "syscall.h"
+
+int eventfd(unsigned int count, int flags)
+{
+    int r = __syscall(SYS_eventfd2, count, flags);
+#ifdef SYS_eventfd
+    if (r == -ENOSYS && !flags) r = __syscall(SYS_eventfd, count);
+#endif
+    return __syscall_ret(r);
+}
+
+int eventfd_read(int fd, eventfd_t *value)
+{
+    return (sizeof(*value) == read(fd, value, sizeof(*value))) ? 0 : -1;
+}
+
+int eventfd_write(int fd, eventfd_t value)
+{
+    return (sizeof(value) == write(fd, &value, sizeof(value))) ? 0 : -1;
+}
base/glibc-compatibility/musl/getauxval.c (new file)
@@ -0,0 +1,45 @@
+#include <sys/auxv.h>
+#include <unistd.h> // __environ
+#include <errno.h>
+
+// We don't have libc struct available here. Compute aux vector manually.
+static unsigned long * __auxv = NULL;
+static unsigned long __auxv_secure = 0;
+
+static size_t __find_auxv(unsigned long type)
+{
+    size_t i;
+    for (i = 0; __auxv[i]; i += 2)
+    {
+        if (__auxv[i] == type)
+            return i + 1;
+    }
+    return (size_t) -1;
+}
+
+__attribute__((constructor)) static void __auxv_init()
+{
+    size_t i;
+    for (i = 0; __environ[i]; i++);
+    __auxv = (unsigned long *) (__environ + i + 1);
+
+    size_t secure_idx = __find_auxv(AT_SECURE);
+    if (secure_idx != ((size_t) -1))
+        __auxv_secure = __auxv[secure_idx];
+}
+
+unsigned long getauxval(unsigned long type)
+{
+    if (type == AT_SECURE)
+        return __auxv_secure;
+
+    if (__auxv)
+    {
+        size_t index = __find_auxv(type);
+        if (index != ((size_t) -1))
+            return __auxv[index];
+    }
+
+    errno = ENOENT;
+    return 0;
+}
base/glibc-compatibility/musl/secure_getenv.c (new file)
@@ -0,0 +1,8 @@
+#define _GNU_SOURCE
+#include <stdlib.h>
+#include <sys/auxv.h>
+
+char * secure_getenv(const char * name)
+{
+    return getauxval(AT_SECURE) ? NULL : getenv(name);
+}
base/glibc-compatibility/musl/sync_file_range.c (new file)
@@ -0,0 +1,21 @@
+#define _GNU_SOURCE
+#include <fcntl.h>
+#include <errno.h>
+#include "syscall.h"
+
+// works same in x86_64 && aarch64
+#define __SYSCALL_LL_E(x) (x)
+#define __SYSCALL_LL_O(x) (x)
+
+int sync_file_range(int fd, off_t pos, off_t len, unsigned flags)
+{
+#if defined(SYS_sync_file_range2)
+    return syscall(SYS_sync_file_range2, fd, flags,
+        __SYSCALL_LL_E(pos), __SYSCALL_LL_E(len));
+#elif defined(SYS_sync_file_range)
+    return __syscall(SYS_sync_file_range, fd,
+        __SYSCALL_LL_O(pos), __SYSCALL_LL_E(len), flags);
+#else
+    return __syscall_ret(-ENOSYS);
+#endif
+}
@@ -13,3 +13,11 @@ long __syscall(syscall_arg_t, ...);
 
 __attribute__((visibility("hidden")))
 void *__vdsosym(const char *, const char *);
+
+#define syscall(...) __syscall_ret(__syscall(__VA_ARGS__))
+
+#define socketcall(...) __syscall_ret(__socketcall(__VA_ARGS__))
+
+#define __socketcall(nm,a,b,c,d,e,f) __syscall(SYS_##nm, a, b, c, d, e, f)
+
+#define socketcall_cp socketcall
@@ -40,24 +40,10 @@ static int checkver(Verdef *def, int vsym, const char *vername, char *strings)
 #define OK_TYPES (1<<STT_NOTYPE | 1<<STT_OBJECT | 1<<STT_FUNC | 1<<STT_COMMON)
 #define OK_BINDS (1<<STB_GLOBAL | 1<<STB_WEAK | 1<<STB_GNU_UNIQUE)
 
-extern char** environ;
-static Ehdr *eh = NULL;
-void *__vdsosym(const char *vername, const char *name);
-// We don't have libc struct available here. Compute aux vector manually.
-__attribute__((constructor)) static void auxv_init()
-{
-    size_t i, *auxv;
-    for (i=0; environ[i]; i++);
-    auxv = (void *)(environ+i+1);
-    for (i=0; auxv[i] != AT_SYSINFO_EHDR; i+=2)
-        if (!auxv[i]) return;
-    if (!auxv[i+1]) return;
-    eh = (void *)auxv[i+1];
-}
-
 void *__vdsosym(const char *vername, const char *name)
 {
     size_t i;
+    Ehdr * eh = (void *) getauxval(AT_SYSINFO_EHDR);
     if (!eh) return 0;
     Phdr *ph = (void *)((char *)eh + eh->e_phoff);
     size_t *dynv=0, base=-1;
@@ -1,3 +1,5 @@
+OWNER(g:clickhouse)
+
 LIBRARY()
 
 PEERDIR(
@@ -113,6 +113,12 @@
 
 #include "pcg_extras.hpp"
 
+namespace DB
+{
+struct PcgSerializer;
+struct PcgDeserializer;
+}
+
 namespace pcg_detail {
 
 using namespace pcg_extras;
@@ -557,6 +563,9 @@ public:
                    engine<xtype1, itype1,
                           output_mixin1, output_previous1,
                           stream_mixin1, multiplier_mixin1>& rng);
+
+    friend ::DB::PcgSerializer;
+    friend ::DB::PcgDeserializer;
 };
 
 template <typename CharT, typename Traits,
@@ -1,3 +1,5 @@
+OWNER(g:clickhouse)
+
 LIBRARY()
 
 ADDINCL (GLOBAL clickhouse/base/pcg-random)
@@ -1,3 +1,5 @@
+OWNER(g:clickhouse)
+
 LIBRARY()
 
 CFLAGS(-g0)
@@ -1,3 +1,5 @@
+OWNER(g:clickhouse)
+
 LIBRARY()
 
 ADDINCL(GLOBAL clickhouse/base/widechar_width)
@@ -1,3 +1,5 @@
+OWNER(g:clickhouse)
+
 RECURSE(
     common
     daemon
@@ -6,11 +6,9 @@ Defines the following variables:
   The include directories of the gRPC framework, including the include directories of the C++ wrapper.
 ``gRPC_LIBRARIES``
   The libraries of the gRPC framework.
-``gRPC_UNSECURE_LIBRARIES``
-  The libraries of the gRPC framework without SSL.
-``_gRPC_CPP_PLUGIN``
+``gRPC_CPP_PLUGIN``
   The plugin for generating gRPC client and server C++ stubs from `.proto` files
-``_gRPC_PYTHON_PLUGIN``
+``gRPC_PYTHON_PLUGIN``
   The plugin for generating gRPC client and server Python stubs from `.proto` files
 
 The following :prop_tgt:`IMPORTED` targets are also defined:
@@ -19,6 +17,13 @@ The following :prop_tgt:`IMPORTED` targets are also defined:
 ``grpc_cpp_plugin``
 ``grpc_python_plugin``
 
+Set the following variables to adjust the behaviour of this script:
+``gRPC_USE_UNSECURE_LIBRARIES``
+  if set gRPC_LIBRARIES will be filled with the unsecure version of the libraries (i.e. without SSL)
+  instead of the secure ones.
+``gRPC_DEBUG``
+  if set the debug message will be printed.
+
 Add custom commands to process ``.proto`` files to C++::
 protobuf_generate_grpc_cpp(<SRCS> <HDRS>
     [DESCRIPTORS <DESC>] [EXPORT_MACRO <MACRO>] [<ARGN>...])
@@ -242,6 +247,7 @@ find_library(gRPC_LIBRARY NAMES grpc)
 find_library(gRPC_CPP_LIBRARY NAMES grpc++)
 find_library(gRPC_UNSECURE_LIBRARY NAMES grpc_unsecure)
 find_library(gRPC_CPP_UNSECURE_LIBRARY NAMES grpc++_unsecure)
+find_library(gRPC_CARES_LIBRARY NAMES cares)
 
 set(gRPC_LIBRARIES)
 if(gRPC_USE_UNSECURE_LIBRARIES)
@@ -259,6 +265,7 @@ else()
     set(gRPC_LIBRARIES ${gRPC_LIBRARIES} ${gRPC_CPP_LIBRARY})
   endif()
 endif()
+set(gRPC_LIBRARIES ${gRPC_LIBRARIES} ${gRPC_CARES_LIBRARY})
 
 # Restore the original find library ordering.
 if(gRPC_USE_STATIC_LIBS)
@@ -278,11 +285,11 @@ else()
 endif()
 
 # Get full path to plugin.
-find_program(_gRPC_CPP_PLUGIN
+find_program(gRPC_CPP_PLUGIN
   NAMES grpc_cpp_plugin
   DOC "The plugin for generating gRPC client and server C++ stubs from `.proto` files")
 
-find_program(_gRPC_PYTHON_PLUGIN
+find_program(gRPC_PYTHON_PLUGIN
   NAMES grpc_python_plugin
   DOC "The plugin for generating gRPC client and server Python stubs from `.proto` files")
 
@@ -317,14 +324,14 @@ endif()
 
 #include(FindPackageHandleStandardArgs.cmake)
 FIND_PACKAGE_HANDLE_STANDARD_ARGS(gRPC
-    REQUIRED_VARS gRPC_LIBRARY gRPC_CPP_LIBRARY gRPC_UNSECURE_LIBRARY gRPC_CPP_UNSECURE_LIBRARY
-                  gRPC_INCLUDE_DIR gRPC_CPP_INCLUDE_DIR _gRPC_CPP_PLUGIN _gRPC_PYTHON_PLUGIN)
+    REQUIRED_VARS gRPC_LIBRARY gRPC_CPP_LIBRARY gRPC_UNSECURE_LIBRARY gRPC_CPP_UNSECURE_LIBRARY gRPC_CARES_LIBRARY
+                  gRPC_INCLUDE_DIR gRPC_CPP_INCLUDE_DIR gRPC_CPP_PLUGIN gRPC_PYTHON_PLUGIN)
 
 if(gRPC_FOUND)
   if(gRPC_DEBUG)
     message(STATUS "gRPC: INCLUDE_DIRS=${gRPC_INCLUDE_DIRS}")
     message(STATUS "gRPC: LIBRARIES=${gRPC_LIBRARIES}")
-    message(STATUS "gRPC: CPP_PLUGIN=${_gRPC_CPP_PLUGIN}")
-    message(STATUS "gRPC: PYTHON_PLUGIN=${_gRPC_PYTHON_PLUGIN}")
+    message(STATUS "gRPC: CPP_PLUGIN=${gRPC_CPP_PLUGIN}")
+    message(STATUS "gRPC: PYTHON_PLUGIN=${gRPC_PYTHON_PLUGIN}")
   endif()
 endif()
@@ -1,9 +1,9 @@
 # This strings autochanged from release_lib.sh:
-SET(VERSION_REVISION 54442)
+SET(VERSION_REVISION 54444)
 SET(VERSION_MAJOR 20)
-SET(VERSION_MINOR 11)
+SET(VERSION_MINOR 13)
 SET(VERSION_PATCH 1)
-SET(VERSION_GITHASH 76a04fb4b4f6cd27ad999baf6dc9a25e88851c42)
-SET(VERSION_DESCRIBE v20.11.1.1-prestable)
-SET(VERSION_STRING 20.11.1.1)
+SET(VERSION_GITHASH e581f9ccfc5c64867b0f488cce72412fd2966471)
+SET(VERSION_DESCRIBE v20.13.1.1-prestable)
+SET(VERSION_STRING 20.13.1.1)
 # end of autochange
@@ -37,8 +37,8 @@ if(NOT USE_INTERNAL_GRPC_LIBRARY)
   if(NOT gRPC_INCLUDE_DIRS OR NOT gRPC_LIBRARIES)
     message(${RECONFIGURE_MESSAGE_LEVEL} "Can't find system gRPC library")
     set(EXTERNAL_GRPC_LIBRARY_FOUND 0)
-  elseif(NOT _gRPC_CPP_PLUGIN)
-    message(${RECONFIGURE_MESSAGE_LEVEL} "Can't find system grcp_cpp_plugin")
+  elseif(NOT gRPC_CPP_PLUGIN)
+    message(${RECONFIGURE_MESSAGE_LEVEL} "Can't find system grpc_cpp_plugin")
     set(EXTERNAL_GRPC_LIBRARY_FOUND 0)
   else()
     set(EXTERNAL_GRPC_LIBRARY_FOUND 1)
@@ -53,8 +53,8 @@ if(NOT EXTERNAL_GRPC_LIBRARY_FOUND AND NOT MISSING_INTERNAL_GRPC_LIBRARY)
   else()
     set(gRPC_LIBRARIES grpc grpc++)
   endif()
-  set(_gRPC_CPP_PLUGIN $<TARGET_FILE:grpc_cpp_plugin>)
-  set(_gRPC_PROTOC_EXECUTABLE $<TARGET_FILE:protobuf::protoc>)
+  set(gRPC_CPP_PLUGIN $<TARGET_FILE:grpc_cpp_plugin>)
+  set(gRPC_PYTHON_PLUGIN $<TARGET_FILE:grpc_python_plugin>)
 
   include("${ClickHouse_SOURCE_DIR}/contrib/grpc-cmake/protobuf_generate_grpc.cmake")
 
@@ -62,4 +62,4 @@ if(NOT EXTERNAL_GRPC_LIBRARY_FOUND AND NOT MISSING_INTERNAL_GRPC_LIBRARY)
   set(USE_GRPC 1)
 endif()
 
-message(STATUS "Using gRPC=${USE_GRPC}: ${gRPC_INCLUDE_DIRS} : ${gRPC_LIBRARIES} : ${_gRPC_CPP_PLUGIN}")
+message(STATUS "Using gRPC=${USE_GRPC}: ${gRPC_INCLUDE_DIRS} : ${gRPC_LIBRARIES} : ${gRPC_CPP_PLUGIN}")
cmake/find/miniselect.cmake (new file)
@@ -0,0 +1,2 @@
+set(MINISELECT_INCLUDE_DIR ${ClickHouse_SOURCE_DIR}/contrib/miniselect/include)
+message(STATUS "Using miniselect: ${MINISELECT_INCLUDE_DIR}")
cmake/find/rocksdb.cmake (new file)
@@ -0,0 +1,67 @@
+option(ENABLE_ROCKSDB "Enable ROCKSDB" ${ENABLE_LIBRARIES})
+
+if (NOT ENABLE_ROCKSDB)
+    if (USE_INTERNAL_ROCKSDB_LIBRARY)
+        message (${RECONFIGURE_MESSAGE_LEVEL} "Can't use internal rocksdb library with ENABLE_ROCKSDB=OFF")
+    endif()
+    return()
+endif()
+
+option(USE_INTERNAL_ROCKSDB_LIBRARY "Set to FALSE to use system ROCKSDB library instead of bundled" ${NOT_UNBUNDLED})
+
+if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/rocksdb/CMakeLists.txt")
+    if (USE_INTERNAL_ROCKSDB_LIBRARY)
+        message (WARNING "submodule contrib is missing. to fix try run: \n git submodule update --init --recursive")
+        message(${RECONFIGURE_MESSAGE_LEVEL} "cannot find internal rocksdb")
+    endif()
+    set (MISSING_INTERNAL_ROCKSDB 1)
+endif ()
+
+if (NOT USE_INTERNAL_ROCKSDB_LIBRARY)
+    find_library (ROCKSDB_LIBRARY rocksdb)
+    find_path (ROCKSDB_INCLUDE_DIR NAMES rocksdb/db.h PATHS ${ROCKSDB_INCLUDE_PATHS})
+    if (NOT ROCKSDB_LIBRARY OR NOT ROCKSDB_INCLUDE_DIR)
+        message (${RECONFIGURE_MESSAGE_LEVEL} "Can't find system rocksdb library")
+    endif()
+
+    if (NOT SNAPPY_LIBRARY)
+        include(cmake/find/snappy.cmake)
+    endif()
+    if (NOT ZLIB_LIBRARY)
+        include(cmake/find/zlib.cmake)
+    endif()
+
+    find_package(BZip2)
+    find_library(ZSTD_LIBRARY zstd)
+    find_library(LZ4_LIBRARY lz4)
+    find_library(GFLAGS_LIBRARY gflags)
+
+    if(SNAPPY_LIBRARY AND ZLIB_LIBRARY AND LZ4_LIBRARY AND BZIP2_FOUND AND ZSTD_LIBRARY AND GFLAGS_LIBRARY)
+        list (APPEND ROCKSDB_LIBRARY ${SNAPPY_LIBRARY})
+        list (APPEND ROCKSDB_LIBRARY ${ZLIB_LIBRARY})
+        list (APPEND ROCKSDB_LIBRARY ${LZ4_LIBRARY})
+        list (APPEND ROCKSDB_LIBRARY ${BZIP2_LIBRARY})
+        list (APPEND ROCKSDB_LIBRARY ${ZSTD_LIBRARY})
+        list (APPEND ROCKSDB_LIBRARY ${GFLAGS_LIBRARY})
+    else()
+        message (${RECONFIGURE_MESSAGE_LEVEL}
+            "Can't find system rocksdb: snappy=${SNAPPY_LIBRARY} ;"
+            " zlib=${ZLIB_LIBRARY} ;"
+            " lz4=${LZ4_LIBRARY} ;"
+            " bz2=${BZIP2_LIBRARY} ;"
+            " zstd=${ZSTD_LIBRARY} ;"
+            " gflags=${GFLAGS_LIBRARY} ;")
+    endif()
+endif ()
+
+if(ROCKSDB_LIBRARY AND ROCKSDB_INCLUDE_DIR)
+    set(USE_ROCKSDB 1)
+elseif (NOT MISSING_INTERNAL_ROCKSDB)
+    set (USE_INTERNAL_ROCKSDB_LIBRARY 1)
+
+    set (ROCKSDB_INCLUDE_DIR "${ClickHouse_SOURCE_DIR}/contrib/rocksdb/include")
+    set (ROCKSDB_LIBRARY "rocksdb")
+    set (USE_ROCKSDB 1)
+endif ()
+
+message (STATUS "Using ROCKSDB=${USE_ROCKSDB}: ${ROCKSDB_INCLUDE_DIR} : ${ROCKSDB_LIBRARY}")
contrib/CMakeLists.txt (13 lines changed)

@@ -14,6 +14,11 @@ unset (_current_dir_name)
 set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -w")
 set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -w")
 
+if (SANITIZE STREQUAL "undefined")
+    # 3rd-party libraries usually not intended to work with UBSan.
+    add_compile_options(-fno-sanitize=undefined)
+endif()
+
 set_property(DIRECTORY PROPERTY EXCLUDE_FROM_ALL 1)
 
 add_subdirectory (boost-cmake)
@@ -31,6 +36,7 @@ add_subdirectory (murmurhash)
 add_subdirectory (replxx-cmake)
 add_subdirectory (ryu-cmake)
 add_subdirectory (unixodbc-cmake)
+add_subdirectory (xz)
 
 add_subdirectory (poco-cmake)
 add_subdirectory (croaring-cmake)
@@ -157,9 +163,6 @@ if(USE_INTERNAL_SNAPPY_LIBRARY)
     add_subdirectory(snappy)
 
     set (SNAPPY_INCLUDE_DIR "${ClickHouse_SOURCE_DIR}/contrib/snappy")
-    if(SANITIZE STREQUAL "undefined")
-        target_compile_options(${SNAPPY_LIBRARY} PRIVATE -fno-sanitize=undefined)
-    endif()
 endif()
 
 if (USE_INTERNAL_PARQUET_LIBRARY)
@@ -318,3 +321,7 @@ if (USE_KRB5)
         add_subdirectory (cyrus-sasl-cmake)
     endif()
 endif()
+
+if (USE_INTERNAL_ROCKSDB_LIBRARY)
+    add_subdirectory(rocksdb-cmake)
+endif()
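The hunks above follow the contrib pattern this directory already uses: directory-scoped warning suppression, a UBSan opt-out for third-party code, `EXCLUDE_FROM_ALL` so contrib targets only build when depended upon, and a guarded `add_subdirectory`. A standalone sketch of that pattern, with `mylib` as a placeholder submodule name:

```cmake
# Sketch of the contrib pattern (directory-scoped, so it only affects
# third-party code added below this point).
set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -w")       # silence warnings in vendored code
set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -w")
if (SANITIZE STREQUAL "undefined")
    add_compile_options(-fno-sanitize=undefined)  # third-party code rarely supports UBSan
endif()
set_property(DIRECTORY PROPERTY EXCLUDE_FROM_ALL 1)  # build only what is depended on
if (USE_INTERNAL_MYLIB_LIBRARY)      # placeholder flag, mirrors USE_INTERNAL_ROCKSDB_LIBRARY
    add_subdirectory(mylib-cmake)    # placeholder directory
endif()
```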
contrib/abseil-cpp (new submodule)

@@ -0,0 +1 @@
+Subproject commit 4f3b686f86c3ebaba7e4e926e62a79cb1c659a54
contrib/aws

@@ -1 +1 @@
-Subproject commit 17e10c0fc77f22afe890fa6d1b283760e5edaa56
+Subproject commit a220591e335923ce1c19bbf9eb925787f7ab6c13
contrib/cctz

@@ -1 +1 @@
-Subproject commit 7a2db4ece6e0f1b246173cbdb62711ae258ee841
+Subproject commit 260ba195ef6c489968bae8c88c62a67cdac5ff9d
contrib/grpc

@@ -1 +1 @@
-Subproject commit a6570b863cf76c9699580ba51c7827d5bffaac43
+Subproject commit 7436366ceb341ba5c00ea29f1645e02a2b70bf93
contrib/grpc-cmake/CMakeLists.txt

@@ -1,6 +1,7 @@
 set(_gRPC_SOURCE_DIR "${ClickHouse_SOURCE_DIR}/contrib/grpc")
 set(_gRPC_BINARY_DIR "${ClickHouse_BINARY_DIR}/contrib/grpc")
 
+# Use re2 from ClickHouse contrib, not from gRPC third_party.
 if(NOT RE2_INCLUDE_DIR)
   message(FATAL_ERROR " grpc: The location of the \"re2\" library is unknown")
 endif()
@@ -8,6 +9,7 @@ set(gRPC_RE2_PROVIDER "clickhouse" CACHE STRING "" FORCE)
 set(_gRPC_RE2_INCLUDE_DIR "${RE2_INCLUDE_DIR}")
 set(_gRPC_RE2_LIBRARIES "${RE2_LIBRARY}")
 
+# Use zlib from ClickHouse contrib, not from gRPC third_party.
 if(NOT ZLIB_INCLUDE_DIRS)
   message(FATAL_ERROR " grpc: The location of the \"zlib\" library is unknown")
 endif()
@@ -15,6 +17,7 @@ set(gRPC_ZLIB_PROVIDER "clickhouse" CACHE STRING "" FORCE)
 set(_gRPC_ZLIB_INCLUDE_DIR "${ZLIB_INCLUDE_DIRS}")
 set(_gRPC_ZLIB_LIBRARIES "${ZLIB_LIBRARIES}")
 
+# Use protobuf from ClickHouse contrib, not from gRPC third_party.
 if(NOT Protobuf_INCLUDE_DIR OR NOT Protobuf_LIBRARY)
   message(FATAL_ERROR " grpc: The location of the \"protobuf\" library is unknown")
 elseif (NOT Protobuf_PROTOC_EXECUTABLE)
@@ -29,21 +32,33 @@ set(_gRPC_PROTOBUF_PROTOC "protoc")
 set(_gRPC_PROTOBUF_PROTOC_EXECUTABLE "${Protobuf_PROTOC_EXECUTABLE}")
 set(_gRPC_PROTOBUF_PROTOC_LIBRARIES "${Protobuf_PROTOC_LIBRARY}")
 
+# Use OpenSSL from ClickHouse contrib, not from gRPC third_party.
 set(gRPC_SSL_PROVIDER "clickhouse" CACHE STRING "" FORCE)
 set(_gRPC_SSL_INCLUDE_DIR ${OPENSSL_INCLUDE_DIR})
 set(_gRPC_SSL_LIBRARIES ${OPENSSL_LIBRARIES})
 
+# Use abseil-cpp from ClickHouse contrib, not from gRPC third_party.
+set(gRPC_ABSL_PROVIDER "clickhouse" CACHE STRING "" FORCE)
+set(ABSL_ROOT_DIR "${ClickHouse_SOURCE_DIR}/contrib/abseil-cpp")
+if(NOT EXISTS "${ABSL_ROOT_DIR}/CMakeLists.txt")
+  message(FATAL_ERROR " grpc: submodule third_party/abseil-cpp is missing. To fix try run: \n git submodule update --init --recursive")
+endif()
+add_subdirectory("${ABSL_ROOT_DIR}" "${ClickHouse_BINARY_DIR}/contrib/abseil-cpp")
+
+# Choose to build static or shared library for c-ares.
+if (MAKE_STATIC_LIBRARIES)
+  set(CARES_STATIC ON CACHE BOOL "" FORCE)
+  set(CARES_SHARED OFF CACHE BOOL "" FORCE)
+else ()
+  set(CARES_STATIC OFF CACHE BOOL "" FORCE)
+  set(CARES_SHARED ON CACHE BOOL "" FORCE)
+endif ()
+
 # We don't want to build C# extensions.
 set(gRPC_BUILD_CSHARP_EXT OFF)
 
-# We don't want to build abseil tests, so we temporarily switch BUILD_TESTING off.
-set(_gRPC_ORIG_BUILD_TESTING ${BUILD_TESTING})
-set(BUILD_TESTING OFF)
-
 add_subdirectory("${_gRPC_SOURCE_DIR}" "${_gRPC_BINARY_DIR}")
 
-set(BUILD_TESTING ${_gRPC_ORIG_BUILD_TESTING})
-
 # The contrib/grpc/CMakeLists.txt redefined the PROTOBUF_GENERATE_GRPC_CPP() function for its own purposes,
 # so we need to redefine it back.
 include("${ClickHouse_SOURCE_DIR}/contrib/grpc-cmake/protobuf_generate_grpc.cmake")
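The diff above repeatedly uses the same technique for swapping gRPC's bundled dependencies for ClickHouse's own: pre-seed the subproject's cache variables (with `FORCE`, so they win over the subproject's `option()` defaults) before calling `add_subdirectory`. A generic sketch of that pattern — the paths here are illustrative, not taken from this commit:

```cmake
# Configure a vendored subproject before add_subdirectory(): cache variables
# set with FORCE override the defaults the subproject's own option() calls
# would otherwise establish.
set(CARES_STATIC ON CACHE BOOL "" FORCE)    # pre-seed the subproject's options
set(CARES_SHARED OFF CACHE BOOL "" FORCE)
add_subdirectory("${CMAKE_SOURCE_DIR}/contrib/c-ares"     # illustrative paths
                 "${CMAKE_BINARY_DIR}/contrib/c-ares")
```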
contrib/libunwind

@@ -1 +1 @@
-Subproject commit 27026ef4a9c6c8cc956d1d131c4d794e24096981
+Subproject commit 7d78d3618910752c256b2b58c3895f4efea47fac
contrib/libunwind-cmake/CMakeLists.txt

@@ -22,7 +22,16 @@ set_source_files_properties(${LIBUNWIND_C_SOURCES} PROPERTIES COMPILE_FLAGS "-st
 set(LIBUNWIND_ASM_SOURCES
     ${LIBUNWIND_SOURCE_DIR}/src/UnwindRegistersRestore.S
     ${LIBUNWIND_SOURCE_DIR}/src/UnwindRegistersSave.S)
-set_source_files_properties(${LIBUNWIND_ASM_SOURCES} PROPERTIES LANGUAGE C)
+
+# CMake doesn't pass the correct architecture for Apple prior to CMake 3.19 [1]
+# Workaround these two issues by compiling as C.
+#
+# [1]: https://gitlab.kitware.com/cmake/cmake/-/issues/20771
+if (APPLE AND CMAKE_VERSION VERSION_LESS 3.19)
+    set_source_files_properties(${LIBUNWIND_ASM_SOURCES} PROPERTIES LANGUAGE C)
+else()
+    enable_language(ASM)
+endif()
 
 set(LIBUNWIND_SOURCES
     ${LIBUNWIND_CXX_SOURCES}
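The same workaround generalizes to any target with `.S` assembly sources: on Apple platforms with CMake older than 3.19, route the files through the C compiler driver instead of enabling the ASM language. A minimal standalone sketch (the source paths are placeholders):

```cmake
set(ASM_SOURCES src/foo_asm.S src/bar_asm.S)  # placeholder assembly sources

if (APPLE AND CMAKE_VERSION VERSION_LESS 3.19)
    # Older CMake passes the wrong architecture to the assembler on Apple,
    # so compile the .S files via the C compiler driver instead.
    set_source_files_properties(${ASM_SOURCES} PROPERTIES LANGUAGE C)
else()
    enable_language(ASM)
endif()
```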
contrib/miniselect (new submodule)

@@ -0,0 +1 @@
+Subproject commit be0af6bd0b6eb044d1acc4f754b229972d99903a
contrib/poco

@@ -1 +1 @@
-Subproject commit 757d947235b307675cff964f29b19d388140a9eb
+Subproject commit f49c6ab8d3aa71828bd1b411485c21722e8c9d82
contrib/protobuf

@@ -1 +1 @@
-Subproject commit 445d1ae73a450b1e94622e7040989aa2048402e3
+Subproject commit 73b12814204ad9068ba352914d0dc244648b48ee
contrib/rocksdb (new submodule)

@@ -0,0 +1 @@
+Subproject commit 35d8e36ef1b8e3e0759ca81215f855226a0a54bd
contrib/rocksdb-cmake/CMakeLists.txt (new file, 673 lines)

@@ -0,0 +1,673 @@
## this file is extracted from `contrib/rocksdb/CMakeLists.txt`
set(ROCKSDB_SOURCE_DIR "${ClickHouse_SOURCE_DIR}/contrib/rocksdb")
list(APPEND CMAKE_MODULE_PATH "${ROCKSDB_SOURCE_DIR}/cmake/modules/")

find_program(CCACHE_FOUND ccache)
if(CCACHE_FOUND)
  set_property(GLOBAL PROPERTY RULE_LAUNCH_COMPILE ccache)
  set_property(GLOBAL PROPERTY RULE_LAUNCH_LINK ccache)
endif(CCACHE_FOUND)

if (SANITIZE STREQUAL "undefined")
  set(WITH_UBSAN ON)
elseif (SANITIZE STREQUAL "address")
  set(WITH_ASAN ON)
elseif (SANITIZE STREQUAL "thread")
  set(WITH_TSAN ON)
endif()

set(PORTABLE ON)
## always disable jemalloc for rocksdb by default
## because it introduces non-standard jemalloc APIs
option(WITH_JEMALLOC "build with JeMalloc" OFF)
option(WITH_SNAPPY "build with SNAPPY" ${USE_SNAPPY})
## lz4, zlib, zstd is enabled in ClickHouse by default
option(WITH_LZ4 "build with lz4" ON)
option(WITH_ZLIB "build with zlib" ON)
option(WITH_ZSTD "build with zstd" ON)

# third-party/folly is only validated to work on Linux and Windows for now.
# So only turn it on there by default.
if(CMAKE_SYSTEM_NAME MATCHES "Linux|Windows")
  if(MSVC AND MSVC_VERSION LESS 1910)
    # Folly does not compile with MSVC older than VS2017
    option(WITH_FOLLY_DISTRIBUTED_MUTEX "build with folly::DistributedMutex" OFF)
  else()
    option(WITH_FOLLY_DISTRIBUTED_MUTEX "build with folly::DistributedMutex" ON)
  endif()
else()
  option(WITH_FOLLY_DISTRIBUTED_MUTEX "build with folly::DistributedMutex" OFF)
endif()

if( NOT DEFINED CMAKE_CXX_STANDARD )
  set(CMAKE_CXX_STANDARD 11)
endif()

if(MSVC)
  option(WITH_XPRESS "build with windows built in compression" OFF)
  include(${ROCKSDB_SOURCE_DIR}/thirdparty.inc)
else()
  if(CMAKE_SYSTEM_NAME MATCHES "FreeBSD" AND NOT CMAKE_SYSTEM_NAME MATCHES "kFreeBSD")
    # FreeBSD has jemalloc as default malloc
    # but it does not have all the jemalloc files in include/...
    set(WITH_JEMALLOC ON)
  else()
    if(WITH_JEMALLOC)
      add_definitions(-DROCKSDB_JEMALLOC -DJEMALLOC_NO_DEMANGLE)
      list(APPEND THIRDPARTY_LIBS jemalloc)
    endif()
  endif()

  if(WITH_SNAPPY)
    add_definitions(-DSNAPPY)
    list(APPEND THIRDPARTY_LIBS snappy)
  endif()

  if(WITH_ZLIB)
    add_definitions(-DZLIB)
    list(APPEND THIRDPARTY_LIBS zlib)
  endif()

  if(WITH_LZ4)
    add_definitions(-DLZ4)
    list(APPEND THIRDPARTY_LIBS lz4)
  endif()

  if(WITH_ZSTD)
    add_definitions(-DZSTD)
    include_directories(${ZSTD_INCLUDE_DIR})
    include_directories(${ZSTD_INCLUDE_DIR}/common)
    include_directories(${ZSTD_INCLUDE_DIR}/dictBuilder)
    include_directories(${ZSTD_INCLUDE_DIR}/deprecated)

    list(APPEND THIRDPARTY_LIBS zstd)
  endif()
endif()

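Each `WITH_*` branch above does two things: it defines the feature macro via `add_definitions` and appends the library name to `THIRDPARTY_LIBS`. Presumably the accumulated list is consumed later in the file when the library target is linked; that part lies outside this excerpt. A hedged sketch of that consumption, under the assumption that the target is the `rocksdb` library built from `${SOURCES}`:

```cmake
# Sketch only: how a THIRDPARTY_LIBS list accumulated above would typically
# be consumed once the library target exists (the actual linking line is
# not part of this excerpt).
add_library(rocksdb STATIC ${SOURCES})
target_link_libraries(rocksdb PRIVATE ${THIRDPARTY_LIBS} Threads::Threads)
```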
string(TIMESTAMP TS "%Y/%m/%d %H:%M:%S" UTC)
set(GIT_DATE_TIME "${TS}" CACHE STRING "the time we first built rocksdb")

find_package(Git)

if(GIT_FOUND AND EXISTS "${ROCKSDB_SOURCE_DIR}/.git")
  if(WIN32)
    execute_process(COMMAND $ENV{COMSPEC} /C ${GIT_EXECUTABLE} -C ${ROCKSDB_SOURCE_DIR} rev-parse HEAD OUTPUT_VARIABLE GIT_SHA)
  else()
    execute_process(COMMAND ${GIT_EXECUTABLE} -C ${ROCKSDB_SOURCE_DIR} rev-parse HEAD OUTPUT_VARIABLE GIT_SHA)
  endif()
else()
  set(GIT_SHA 0)
endif()

string(REGEX REPLACE "[^0-9a-f]+" "" GIT_SHA "${GIT_SHA}")

set(BUILD_VERSION_CC ${CMAKE_BINARY_DIR}/rocksdb_build_version.cc)
configure_file(${ROCKSDB_SOURCE_DIR}/util/build_version.cc.in ${BUILD_VERSION_CC} @ONLY)
add_library(rocksdb_build_version OBJECT ${BUILD_VERSION_CC})
target_include_directories(rocksdb_build_version PRIVATE
  ${ROCKSDB_SOURCE_DIR}/util)
if(MSVC)
  set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /Zi /nologo /EHsc /GS /Gd /GR /GF /fp:precise /Zc:wchar_t /Zc:forScope /errorReport:queue")
  set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /FC /d2Zi+ /W4 /wd4127 /wd4800 /wd4996 /wd4351 /wd4100 /wd4204 /wd4324")
else()
  set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -W -Wextra -Wall")
  set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wsign-compare -Wshadow -Wno-unused-parameter -Wno-unused-variable -Woverloaded-virtual -Wnon-virtual-dtor -Wno-missing-field-initializers -Wno-strict-aliasing")
  if(MINGW)
    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-format -fno-asynchronous-unwind-tables")
    add_definitions(-D_POSIX_C_SOURCE=1)
  endif()
  if(NOT CMAKE_BUILD_TYPE STREQUAL "Debug")
    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fno-omit-frame-pointer")
    include(CheckCXXCompilerFlag)
    CHECK_CXX_COMPILER_FLAG("-momit-leaf-frame-pointer" HAVE_OMIT_LEAF_FRAME_POINTER)
    if(HAVE_OMIT_LEAF_FRAME_POINTER)
      set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -momit-leaf-frame-pointer")
    endif()
  endif()
endif()

include(CheckCCompilerFlag)
if(CMAKE_SYSTEM_PROCESSOR MATCHES "^(powerpc|ppc)64")
  CHECK_C_COMPILER_FLAG("-mcpu=power9" HAS_POWER9)
  if(HAS_POWER9)
    set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -mcpu=power9 -mtune=power9")
    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -mcpu=power9 -mtune=power9")
  else()
    CHECK_C_COMPILER_FLAG("-mcpu=power8" HAS_POWER8)
    if(HAS_POWER8)
      set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -mcpu=power8 -mtune=power8")
      set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -mcpu=power8 -mtune=power8")
    endif(HAS_POWER8)
  endif(HAS_POWER9)
  CHECK_C_COMPILER_FLAG("-maltivec" HAS_ALTIVEC)
  if(HAS_ALTIVEC)
    message(STATUS " HAS_ALTIVEC yes")
    set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -maltivec")
    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -maltivec")
  endif(HAS_ALTIVEC)
endif(CMAKE_SYSTEM_PROCESSOR MATCHES "^(powerpc|ppc)64")

if(CMAKE_SYSTEM_PROCESSOR MATCHES "aarch64|AARCH64")
  CHECK_C_COMPILER_FLAG("-march=armv8-a+crc+crypto" HAS_ARMV8_CRC)
  if(HAS_ARMV8_CRC)
    message(STATUS " HAS_ARMV8_CRC yes")
    set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -march=armv8-a+crc+crypto -Wno-unused-function")
    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -march=armv8-a+crc+crypto -Wno-unused-function")
  endif(HAS_ARMV8_CRC)
endif(CMAKE_SYSTEM_PROCESSOR MATCHES "aarch64|AARCH64")

include(CheckCXXSourceCompiles)
if(NOT MSVC)
  set(CMAKE_REQUIRED_FLAGS "-msse4.2 -mpclmul")
endif()

CHECK_CXX_SOURCE_COMPILES("
#include <cstdint>
#include <nmmintrin.h>
#include <wmmintrin.h>
int main() {
  volatile uint32_t x = _mm_crc32_u32(0, 0);
  const auto a = _mm_set_epi64x(0, 0);
  const auto b = _mm_set_epi64x(0, 0);
  const auto c = _mm_clmulepi64_si128(a, b, 0x00);
  auto d = _mm_cvtsi128_si64(c);
}
" HAVE_SSE42)
unset(CMAKE_REQUIRED_FLAGS)
if(HAVE_SSE42)
  add_definitions(-DHAVE_SSE42)
  add_definitions(-DHAVE_PCLMUL)
elseif(FORCE_SSE42)
  message(FATAL_ERROR "FORCE_SSE42=ON but unable to compile with SSE4.2 enabled")
endif()

CHECK_CXX_SOURCE_COMPILES("
#if defined(_MSC_VER) && !defined(__thread)
#define __thread __declspec(thread)
#endif
int main() {
  static __thread int tls;
}
" HAVE_THREAD_LOCAL)
if(HAVE_THREAD_LOCAL)
  add_definitions(-DROCKSDB_SUPPORT_THREAD_LOCAL)
endif()

option(FAIL_ON_WARNINGS "Treat compile warnings as errors" ON)
if(FAIL_ON_WARNINGS)
  if(MSVC)
    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /WX")
  else() # assume GCC
    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Werror")
  endif()
endif()

option(WITH_ASAN "build with ASAN" OFF)
if(WITH_ASAN)
  set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -fsanitize=address")
  set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fsanitize=address")
  set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fsanitize=address")
  if(WITH_JEMALLOC)
    message(FATAL "ASAN does not work well with JeMalloc")
  endif()
endif()

option(WITH_TSAN "build with TSAN" OFF)
if(WITH_TSAN)
  set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -fsanitize=thread -pie")
  set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fsanitize=thread -fPIC")
  set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fsanitize=thread -fPIC")
  if(WITH_JEMALLOC)
    message(FATAL "TSAN does not work well with JeMalloc")
  endif()
endif()

option(WITH_UBSAN "build with UBSAN" OFF)
if(WITH_UBSAN)
  add_definitions(-DROCKSDB_UBSAN_RUN)
  set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -fsanitize=undefined")
  set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fsanitize=undefined")
  set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fsanitize=undefined")
  if(WITH_JEMALLOC)
    message(FATAL "UBSAN does not work well with JeMalloc")
  endif()
endif()

if(CMAKE_SYSTEM_NAME MATCHES "Cygwin")
  add_definitions(-fno-builtin-memcmp -DCYGWIN)
elseif(CMAKE_SYSTEM_NAME MATCHES "Darwin")
  add_definitions(-DOS_MACOSX)
  if(CMAKE_SYSTEM_PROCESSOR MATCHES arm)
    add_definitions(-DIOS_CROSS_COMPILE -DROCKSDB_LITE)
    # no debug info for IOS, that will make our library big
    add_definitions(-DNDEBUG)
  endif()
elseif(CMAKE_SYSTEM_NAME MATCHES "Linux")
  add_definitions(-DOS_LINUX)
elseif(CMAKE_SYSTEM_NAME MATCHES "SunOS")
  add_definitions(-DOS_SOLARIS)
elseif(CMAKE_SYSTEM_NAME MATCHES "kFreeBSD")
  add_definitions(-DOS_GNU_KFREEBSD)
elseif(CMAKE_SYSTEM_NAME MATCHES "FreeBSD")
  add_definitions(-DOS_FREEBSD)
elseif(CMAKE_SYSTEM_NAME MATCHES "NetBSD")
  add_definitions(-DOS_NETBSD)
elseif(CMAKE_SYSTEM_NAME MATCHES "OpenBSD")
  add_definitions(-DOS_OPENBSD)
elseif(CMAKE_SYSTEM_NAME MATCHES "DragonFly")
  add_definitions(-DOS_DRAGONFLYBSD)
elseif(CMAKE_SYSTEM_NAME MATCHES "Android")
  add_definitions(-DOS_ANDROID)
elseif(CMAKE_SYSTEM_NAME MATCHES "Windows")
  add_definitions(-DWIN32 -DOS_WIN -D_MBCS -DWIN64 -DNOMINMAX)
  if(MINGW)
    add_definitions(-D_WIN32_WINNT=_WIN32_WINNT_VISTA)
  endif()
endif()

if(NOT WIN32)
  add_definitions(-DROCKSDB_PLATFORM_POSIX -DROCKSDB_LIB_IO_POSIX)
endif()

option(WITH_FALLOCATE "build with fallocate" ON)
if(WITH_FALLOCATE)
  CHECK_CXX_SOURCE_COMPILES("
#include <fcntl.h>
#include <linux/falloc.h>
int main() {
  int fd = open(\"/dev/null\", 0);
  fallocate(fd, FALLOC_FL_KEEP_SIZE, 0, 1024);
}
" HAVE_FALLOCATE)
  if(HAVE_FALLOCATE)
    add_definitions(-DROCKSDB_FALLOCATE_PRESENT)
  endif()
endif()

CHECK_CXX_SOURCE_COMPILES("
#include <fcntl.h>
int main() {
  int fd = open(\"/dev/null\", 0);
  sync_file_range(fd, 0, 1024, SYNC_FILE_RANGE_WRITE);
}
" HAVE_SYNC_FILE_RANGE_WRITE)
if(HAVE_SYNC_FILE_RANGE_WRITE)
  add_definitions(-DROCKSDB_RANGESYNC_PRESENT)
endif()

CHECK_CXX_SOURCE_COMPILES("
#include <pthread.h>
int main() {
  (void) PTHREAD_MUTEX_ADAPTIVE_NP;
}
" HAVE_PTHREAD_MUTEX_ADAPTIVE_NP)
if(HAVE_PTHREAD_MUTEX_ADAPTIVE_NP)
  add_definitions(-DROCKSDB_PTHREAD_ADAPTIVE_MUTEX)
endif()

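The probes above all repeat the same shape: try to compile a small snippet, and on success define a `ROCKSDB_*` macro. As an illustration only (no such helper exists in this file), the pattern could be factored into a macro:

```cmake
include(CheckCXXSourceCompiles)

# Hypothetical helper: compile a snippet and add -D<definition> on success.
macro(probe_and_define snippet var definition)
    check_cxx_source_compiles("${snippet}" ${var})
    if(${var})
        add_definitions(-D${definition})
    endif()
endmacro()

# Equivalent to the PTHREAD_MUTEX_ADAPTIVE_NP probe above.
probe_and_define("#include <pthread.h>
int main() { (void) PTHREAD_MUTEX_ADAPTIVE_NP; }"
    HAVE_PTHREAD_MUTEX_ADAPTIVE_NP ROCKSDB_PTHREAD_ADAPTIVE_MUTEX)
```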
include(CheckCXXSymbolExists)
if(CMAKE_SYSTEM_NAME MATCHES "^FreeBSD")
  check_cxx_symbol_exists(malloc_usable_size ${ROCKSDB_SOURCE_DIR}/malloc_np.h HAVE_MALLOC_USABLE_SIZE)
else()
  check_cxx_symbol_exists(malloc_usable_size ${ROCKSDB_SOURCE_DIR}/malloc.h HAVE_MALLOC_USABLE_SIZE)
endif()
if(HAVE_MALLOC_USABLE_SIZE)
  add_definitions(-DROCKSDB_MALLOC_USABLE_SIZE)
endif()

check_cxx_symbol_exists(sched_getcpu sched.h HAVE_SCHED_GETCPU)
if(HAVE_SCHED_GETCPU)
  add_definitions(-DROCKSDB_SCHED_GETCPU_PRESENT)
endif()

check_cxx_symbol_exists(getauxval auvx.h HAVE_AUXV_GETAUXVAL)
if(HAVE_AUXV_GETAUXVAL)
  add_definitions(-DROCKSDB_AUXV_GETAUXVAL_PRESENT)
endif()

include_directories(${ROCKSDB_SOURCE_DIR})
include_directories(${ROCKSDB_SOURCE_DIR}/include)
if(WITH_FOLLY_DISTRIBUTED_MUTEX)
  include_directories(${ROCKSDB_SOURCE_DIR}/third-party/folly)
endif()
find_package(Threads REQUIRED)

# Main library source code

set(SOURCES
  ${ROCKSDB_SOURCE_DIR}/cache/cache.cc
  ${ROCKSDB_SOURCE_DIR}/cache/clock_cache.cc
  ${ROCKSDB_SOURCE_DIR}/cache/lru_cache.cc
  ${ROCKSDB_SOURCE_DIR}/cache/sharded_cache.cc
  ${ROCKSDB_SOURCE_DIR}/db/arena_wrapped_db_iter.cc
  ${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_addition.cc
  ${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_builder.cc
  ${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_garbage.cc
  ${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_meta.cc
  ${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_reader.cc
  ${ROCKSDB_SOURCE_DIR}/db/blob/blob_log_format.cc
  ${ROCKSDB_SOURCE_DIR}/db/blob/blob_log_sequential_reader.cc
  ${ROCKSDB_SOURCE_DIR}/db/blob/blob_log_writer.cc
  ${ROCKSDB_SOURCE_DIR}/db/builder.cc
  ${ROCKSDB_SOURCE_DIR}/db/c.cc
  ${ROCKSDB_SOURCE_DIR}/db/column_family.cc
  ${ROCKSDB_SOURCE_DIR}/db/compacted_db_impl.cc
  ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction.cc
  ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_iterator.cc
  ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_picker.cc
  ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_job.cc
  ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_picker_fifo.cc
  ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_picker_level.cc
  ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_picker_universal.cc
  ${ROCKSDB_SOURCE_DIR}/db/compaction/sst_partitioner.cc
  ${ROCKSDB_SOURCE_DIR}/db/convenience.cc
  ${ROCKSDB_SOURCE_DIR}/db/db_filesnapshot.cc
  ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl.cc
  ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_write.cc
  ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_compaction_flush.cc
  ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_files.cc
  ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_open.cc
  ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_debug.cc
  ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_experimental.cc
  ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_readonly.cc
  ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_secondary.cc
  ${ROCKSDB_SOURCE_DIR}/db/db_info_dumper.cc
  ${ROCKSDB_SOURCE_DIR}/db/db_iter.cc
  ${ROCKSDB_SOURCE_DIR}/db/dbformat.cc
  ${ROCKSDB_SOURCE_DIR}/db/error_handler.cc
  ${ROCKSDB_SOURCE_DIR}/db/event_helpers.cc
  ${ROCKSDB_SOURCE_DIR}/db/experimental.cc
  ${ROCKSDB_SOURCE_DIR}/db/external_sst_file_ingestion_job.cc
  ${ROCKSDB_SOURCE_DIR}/db/file_indexer.cc
  ${ROCKSDB_SOURCE_DIR}/db/flush_job.cc
  ${ROCKSDB_SOURCE_DIR}/db/flush_scheduler.cc
  ${ROCKSDB_SOURCE_DIR}/db/forward_iterator.cc
  ${ROCKSDB_SOURCE_DIR}/db/import_column_family_job.cc
  ${ROCKSDB_SOURCE_DIR}/db/internal_stats.cc
  ${ROCKSDB_SOURCE_DIR}/db/logs_with_prep_tracker.cc
  ${ROCKSDB_SOURCE_DIR}/db/log_reader.cc
  ${ROCKSDB_SOURCE_DIR}/db/log_writer.cc
  ${ROCKSDB_SOURCE_DIR}/db/malloc_stats.cc
  ${ROCKSDB_SOURCE_DIR}/db/memtable.cc
  ${ROCKSDB_SOURCE_DIR}/db/memtable_list.cc
  ${ROCKSDB_SOURCE_DIR}/db/merge_helper.cc
  ${ROCKSDB_SOURCE_DIR}/db/merge_operator.cc
  ${ROCKSDB_SOURCE_DIR}/db/output_validator.cc
  ${ROCKSDB_SOURCE_DIR}/db/periodic_work_scheduler.cc
  ${ROCKSDB_SOURCE_DIR}/db/range_del_aggregator.cc
  ${ROCKSDB_SOURCE_DIR}/db/range_tombstone_fragmenter.cc
  ${ROCKSDB_SOURCE_DIR}/db/repair.cc
  ${ROCKSDB_SOURCE_DIR}/db/snapshot_impl.cc
  ${ROCKSDB_SOURCE_DIR}/db/table_cache.cc
  ${ROCKSDB_SOURCE_DIR}/db/table_properties_collector.cc
  ${ROCKSDB_SOURCE_DIR}/db/transaction_log_impl.cc
  ${ROCKSDB_SOURCE_DIR}/db/trim_history_scheduler.cc
  ${ROCKSDB_SOURCE_DIR}/db/version_builder.cc
  ${ROCKSDB_SOURCE_DIR}/db/version_edit.cc
  ${ROCKSDB_SOURCE_DIR}/db/version_edit_handler.cc
  ${ROCKSDB_SOURCE_DIR}/db/version_set.cc
  ${ROCKSDB_SOURCE_DIR}/db/wal_edit.cc
  ${ROCKSDB_SOURCE_DIR}/db/wal_manager.cc
  ${ROCKSDB_SOURCE_DIR}/db/write_batch.cc
  ${ROCKSDB_SOURCE_DIR}/db/write_batch_base.cc
  ${ROCKSDB_SOURCE_DIR}/db/write_controller.cc
  ${ROCKSDB_SOURCE_DIR}/db/write_thread.cc
  ${ROCKSDB_SOURCE_DIR}/env/env.cc
  ${ROCKSDB_SOURCE_DIR}/env/env_chroot.cc
  ${ROCKSDB_SOURCE_DIR}/env/env_encryption.cc
  ${ROCKSDB_SOURCE_DIR}/env/env_hdfs.cc
  ${ROCKSDB_SOURCE_DIR}/env/file_system.cc
  ${ROCKSDB_SOURCE_DIR}/env/file_system_tracer.cc
  ${ROCKSDB_SOURCE_DIR}/env/mock_env.cc
  ${ROCKSDB_SOURCE_DIR}/file/delete_scheduler.cc
  ${ROCKSDB_SOURCE_DIR}/file/file_prefetch_buffer.cc
  ${ROCKSDB_SOURCE_DIR}/file/file_util.cc
  ${ROCKSDB_SOURCE_DIR}/file/filename.cc
  ${ROCKSDB_SOURCE_DIR}/file/random_access_file_reader.cc
  ${ROCKSDB_SOURCE_DIR}/file/read_write_util.cc
  ${ROCKSDB_SOURCE_DIR}/file/readahead_raf.cc
  ${ROCKSDB_SOURCE_DIR}/file/sequence_file_reader.cc
  ${ROCKSDB_SOURCE_DIR}/file/sst_file_manager_impl.cc
  ${ROCKSDB_SOURCE_DIR}/file/writable_file_writer.cc
  ${ROCKSDB_SOURCE_DIR}/logging/auto_roll_logger.cc
  ${ROCKSDB_SOURCE_DIR}/logging/event_logger.cc
  ${ROCKSDB_SOURCE_DIR}/logging/log_buffer.cc
  ${ROCKSDB_SOURCE_DIR}/memory/arena.cc
  ${ROCKSDB_SOURCE_DIR}/memory/concurrent_arena.cc
  ${ROCKSDB_SOURCE_DIR}/memory/jemalloc_nodump_allocator.cc
  ${ROCKSDB_SOURCE_DIR}/memory/memkind_kmem_allocator.cc
  ${ROCKSDB_SOURCE_DIR}/memtable/alloc_tracker.cc
  ${ROCKSDB_SOURCE_DIR}/memtable/hash_linklist_rep.cc
  ${ROCKSDB_SOURCE_DIR}/memtable/hash_skiplist_rep.cc
  ${ROCKSDB_SOURCE_DIR}/memtable/skiplistrep.cc
  ${ROCKSDB_SOURCE_DIR}/memtable/vectorrep.cc
  ${ROCKSDB_SOURCE_DIR}/memtable/write_buffer_manager.cc
  ${ROCKSDB_SOURCE_DIR}/monitoring/histogram.cc
  ${ROCKSDB_SOURCE_DIR}/monitoring/histogram_windowing.cc
  ${ROCKSDB_SOURCE_DIR}/monitoring/in_memory_stats_history.cc
  ${ROCKSDB_SOURCE_DIR}/monitoring/instrumented_mutex.cc
  ${ROCKSDB_SOURCE_DIR}/monitoring/iostats_context.cc
  ${ROCKSDB_SOURCE_DIR}/monitoring/perf_context.cc
  ${ROCKSDB_SOURCE_DIR}/monitoring/perf_level.cc
  ${ROCKSDB_SOURCE_DIR}/monitoring/persistent_stats_history.cc
  ${ROCKSDB_SOURCE_DIR}/monitoring/statistics.cc
  ${ROCKSDB_SOURCE_DIR}/monitoring/thread_status_impl.cc
  ${ROCKSDB_SOURCE_DIR}/monitoring/thread_status_updater.cc
  ${ROCKSDB_SOURCE_DIR}/monitoring/thread_status_util.cc
  ${ROCKSDB_SOURCE_DIR}/monitoring/thread_status_util_debug.cc
  ${ROCKSDB_SOURCE_DIR}/options/cf_options.cc
  ${ROCKSDB_SOURCE_DIR}/options/configurable.cc
  ${ROCKSDB_SOURCE_DIR}/options/db_options.cc
  ${ROCKSDB_SOURCE_DIR}/options/options.cc
  ${ROCKSDB_SOURCE_DIR}/options/options_helper.cc
  ${ROCKSDB_SOURCE_DIR}/options/options_parser.cc
  ${ROCKSDB_SOURCE_DIR}/port/stack_trace.cc
  ${ROCKSDB_SOURCE_DIR}/table/adaptive/adaptive_table_factory.cc
  ${ROCKSDB_SOURCE_DIR}/table/block_based/binary_search_index_reader.cc
  ${ROCKSDB_SOURCE_DIR}/table/block_based/block.cc
  ${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_filter_block.cc
  ${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_table_builder.cc
  ${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_table_factory.cc
  ${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_table_iterator.cc
  ${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_table_reader.cc
  ${ROCKSDB_SOURCE_DIR}/table/block_based/block_builder.cc
  ${ROCKSDB_SOURCE_DIR}/table/block_based/block_prefetcher.cc
  ${ROCKSDB_SOURCE_DIR}/table/block_based/block_prefix_index.cc
  ${ROCKSDB_SOURCE_DIR}/table/block_based/data_block_hash_index.cc
  ${ROCKSDB_SOURCE_DIR}/table/block_based/data_block_footer.cc
  ${ROCKSDB_SOURCE_DIR}/table/block_based/filter_block_reader_common.cc
  ${ROCKSDB_SOURCE_DIR}/table/block_based/filter_policy.cc
  ${ROCKSDB_SOURCE_DIR}/table/block_based/flush_block_policy.cc
  ${ROCKSDB_SOURCE_DIR}/table/block_based/full_filter_block.cc
  ${ROCKSDB_SOURCE_DIR}/table/block_based/hash_index_reader.cc
  ${ROCKSDB_SOURCE_DIR}/table/block_based/index_builder.cc
  ${ROCKSDB_SOURCE_DIR}/table/block_based/index_reader_common.cc
  ${ROCKSDB_SOURCE_DIR}/table/block_based/parsed_full_filter_block.cc
  ${ROCKSDB_SOURCE_DIR}/table/block_based/partitioned_filter_block.cc
  ${ROCKSDB_SOURCE_DIR}/table/block_based/partitioned_index_iterator.cc
  ${ROCKSDB_SOURCE_DIR}/table/block_based/partitioned_index_reader.cc
  ${ROCKSDB_SOURCE_DIR}/table/block_based/reader_common.cc
  ${ROCKSDB_SOURCE_DIR}/table/block_based/uncompression_dict_reader.cc
  ${ROCKSDB_SOURCE_DIR}/table/block_fetcher.cc
  ${ROCKSDB_SOURCE_DIR}/table/cuckoo/cuckoo_table_builder.cc
  ${ROCKSDB_SOURCE_DIR}/table/cuckoo/cuckoo_table_factory.cc
  ${ROCKSDB_SOURCE_DIR}/table/cuckoo/cuckoo_table_reader.cc
  ${ROCKSDB_SOURCE_DIR}/table/format.cc
  ${ROCKSDB_SOURCE_DIR}/table/get_context.cc
  ${ROCKSDB_SOURCE_DIR}/table/iterator.cc
  ${ROCKSDB_SOURCE_DIR}/table/merging_iterator.cc
  ${ROCKSDB_SOURCE_DIR}/table/meta_blocks.cc
  ${ROCKSDB_SOURCE_DIR}/table/persistent_cache_helper.cc
  ${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_bloom.cc
  ${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_builder.cc
  ${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_factory.cc
  ${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_index.cc
  ${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_key_coding.cc
  ${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_reader.cc
  ${ROCKSDB_SOURCE_DIR}/table/sst_file_dumper.cc
  ${ROCKSDB_SOURCE_DIR}/table/sst_file_reader.cc
  ${ROCKSDB_SOURCE_DIR}/table/sst_file_writer.cc
  ${ROCKSDB_SOURCE_DIR}/table/table_factory.cc
  ${ROCKSDB_SOURCE_DIR}/table/table_properties.cc
  ${ROCKSDB_SOURCE_DIR}/table/two_level_iterator.cc
  ${ROCKSDB_SOURCE_DIR}/test_util/sync_point.cc
  ${ROCKSDB_SOURCE_DIR}/test_util/sync_point_impl.cc
  ${ROCKSDB_SOURCE_DIR}/test_util/testutil.cc
  ${ROCKSDB_SOURCE_DIR}/test_util/transaction_test_util.cc
  ${ROCKSDB_SOURCE_DIR}/tools/block_cache_analyzer/block_cache_trace_analyzer.cc
  ${ROCKSDB_SOURCE_DIR}/tools/dump/db_dump_tool.cc
  ${ROCKSDB_SOURCE_DIR}/tools/io_tracer_parser_tool.cc
  ${ROCKSDB_SOURCE_DIR}/tools/ldb_cmd.cc
  ${ROCKSDB_SOURCE_DIR}/tools/ldb_tool.cc
  ${ROCKSDB_SOURCE_DIR}/tools/sst_dump_tool.cc
  ${ROCKSDB_SOURCE_DIR}/tools/trace_analyzer_tool.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/trace_replay/trace_replay.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/trace_replay/block_cache_tracer.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/trace_replay/io_tracer.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/util/coding.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/util/compaction_job_stats_impl.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/util/comparator.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/util/compression_context_cache.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/util/concurrent_task_limiter_impl.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/util/crc32c.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/util/dynamic_bloom.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/util/hash.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/util/murmurhash.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/util/random.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/util/rate_limiter.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/util/slice.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/util/file_checksum_helper.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/util/status.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/util/string_util.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/util/thread_local.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/util/threadpool_imp.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/util/xxhash.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/backupable/backupable_db.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_compaction_filter.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_db.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_db_impl.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_db_impl_filesnapshot.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_dump_tool.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_file.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/cassandra/cassandra_compaction_filter.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/cassandra/format.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/cassandra/merge_operator.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/checkpoint/checkpoint_impl.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/compaction_filters/remove_emptyvalue_compactionfilter.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/debug.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/env_mirror.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/env_timed.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/fault_injection_env.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/fault_injection_fs.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/leveldb_options/leveldb_options.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/memory/memory_util.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/bytesxor.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/max.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/put.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/sortlist.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/string_append/stringappend.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/string_append/stringappend2.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/uint64add.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/object_registry.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/option_change_migration/option_change_migration.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/options/options_util.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/block_cache_tier.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/block_cache_tier_file.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/block_cache_tier_metadata.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/persistent_cache_tier.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/volatile_tier_impl.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/simulator_cache/cache_simulator.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/simulator_cache/sim_cache.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/table_properties_collectors/compact_on_deletion_collector.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/trace/file_trace_reader_writer.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/lock_tracker.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/point_lock_tracker.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/transactions/optimistic_transaction_db_impl.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/transactions/optimistic_transaction.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/transactions/pessimistic_transaction.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/transactions/pessimistic_transaction_db.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/transactions/snapshot_checker.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/transactions/transaction_base.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/transactions/transaction_db_mutex_impl.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/transactions/transaction_lock_mgr.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/transactions/transaction_util.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/transactions/write_prepared_txn.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/transactions/write_prepared_txn_db.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/transactions/write_unprepared_txn.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/transactions/write_unprepared_txn_db.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/ttl/db_ttl_impl.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/write_batch_with_index/write_batch_with_index.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/write_batch_with_index/write_batch_with_index_internal.cc
|
||||||
|
$<TARGET_OBJECTS:rocksdb_build_version>)
|
||||||
|
|
if(HAVE_SSE42 AND NOT MSVC)
  set_source_files_properties(
    ${ROCKSDB_SOURCE_DIR}/util/crc32c.cc
    PROPERTIES COMPILE_FLAGS "-msse4.2 -mpclmul")
endif()

if(CMAKE_SYSTEM_PROCESSOR MATCHES "^(powerpc|ppc)64")
  list(APPEND SOURCES
    ${ROCKSDB_SOURCE_DIR}/util/crc32c_ppc.c
    ${ROCKSDB_SOURCE_DIR}/util/crc32c_ppc_asm.S)
endif(CMAKE_SYSTEM_PROCESSOR MATCHES "^(powerpc|ppc)64")

if(HAS_ARMV8_CRC)
  list(APPEND SOURCES
    ${ROCKSDB_SOURCE_DIR}/util/crc32c_arm64.cc)
endif(HAS_ARMV8_CRC)

if(WIN32)
  list(APPEND SOURCES
    ${ROCKSDB_SOURCE_DIR}/port/win/io_win.cc
    ${ROCKSDB_SOURCE_DIR}/port/win/env_win.cc
    ${ROCKSDB_SOURCE_DIR}/port/win/env_default.cc
    ${ROCKSDB_SOURCE_DIR}/port/win/port_win.cc
    ${ROCKSDB_SOURCE_DIR}/port/win/win_logger.cc)
  if(NOT MINGW)
    # Mingw only supports std::thread when using
    # posix threads.
    list(APPEND SOURCES
      ${ROCKSDB_SOURCE_DIR}/port/win/win_thread.cc)
  endif()
  if(WITH_XPRESS)
    list(APPEND SOURCES
      ${ROCKSDB_SOURCE_DIR}/port/win/xpress_win.cc)
  endif()

  if(WITH_JEMALLOC)
    list(APPEND SOURCES
      ${ROCKSDB_SOURCE_DIR}/port/win/win_jemalloc.cc)
  endif()

else()
  list(APPEND SOURCES
    ${ROCKSDB_SOURCE_DIR}/port/port_posix.cc
    ${ROCKSDB_SOURCE_DIR}/env/env_posix.cc
    ${ROCKSDB_SOURCE_DIR}/env/fs_posix.cc
    ${ROCKSDB_SOURCE_DIR}/env/io_posix.cc)
endif()

if(WITH_FOLLY_DISTRIBUTED_MUTEX)
  list(APPEND SOURCES
    ${ROCKSDB_SOURCE_DIR}/third-party/folly/folly/detail/Futex.cpp
    ${ROCKSDB_SOURCE_DIR}/third-party/folly/folly/synchronization/AtomicNotification.cpp
    ${ROCKSDB_SOURCE_DIR}/third-party/folly/folly/synchronization/DistributedMutex.cpp
    ${ROCKSDB_SOURCE_DIR}/third-party/folly/folly/synchronization/ParkingLot.cpp
    ${ROCKSDB_SOURCE_DIR}/third-party/folly/folly/synchronization/WaitOptions.cpp)
endif()

set(ROCKSDB_STATIC_LIB rocksdb)

if(WIN32)
  set(SYSTEM_LIBS ${SYSTEM_LIBS} shlwapi.lib rpcrt4.lib)
else()
  set(SYSTEM_LIBS ${CMAKE_THREAD_LIBS_INIT})
endif()

add_library(${ROCKSDB_STATIC_LIB} STATIC ${SOURCES})
target_link_libraries(${ROCKSDB_STATIC_LIB} PRIVATE
  ${THIRDPARTY_LIBS} ${SYSTEM_LIBS})
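The `-msse4.2 -mpclmul` flags above exist because `util/crc32c.cc` contains a hardware-accelerated CRC-32C path built on the SSE4.2 `crc32` instruction. For reference, here is a bit-by-bit software sketch of the same checksum (this is not the RocksDB implementation, which uses table-driven and SIMD variants):

```python
def crc32c(data: bytes) -> int:
    """Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78.

    This is the checksum the SSE4.2 `crc32` instruction computes in
    hardware; the pure-software loop below is only for illustration.
    """
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Shift right; XOR in the reflected polynomial if the low bit was set.
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard check value for the "123456789" test vector.
assert crc32c(b"123456789") == 0xE3069283
```

The check value `0xE3069283` is the well-known CRC-32C result for the ASCII string `123456789`, which makes it easy to verify any alternative implementation.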
contrib/xz (vendored submodule, 1 line)
@ -0,0 +1 @@
+Subproject commit 869b9d1b4edd6df07f819d360d306251f8147353
debian/changelog (vendored, 4 changed lines)
@ -1,5 +1,5 @@
-clickhouse (20.11.1.1) unstable; urgency=low
+clickhouse (20.13.1.1) unstable; urgency=low
 
   * Modified source code
 
- -- clickhouse-release <clickhouse-release@yandex-team.ru>  Sat, 10 Oct 2020 18:39:55 +0300
+ -- clickhouse-release <clickhouse-release@yandex-team.ru>  Mon, 23 Nov 2020 10:29:24 +0300
@ -1,7 +1,7 @@
 FROM ubuntu:18.04
 
 ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
-ARG version=20.11.1.*
+ARG version=20.13.1.*
 
 RUN apt-get update \
     && apt-get install --yes --no-install-recommends \
@ -56,6 +56,7 @@ RUN apt-get update \
     libprotoc-dev \
     libgrpc++-dev \
     protobuf-compiler-grpc \
+    libc-ares-dev \
     rapidjson-dev \
     libsnappy-dev \
     libparquet-dev \
@ -64,6 +65,8 @@ RUN apt-get update \
     libbz2-dev \
     libavro-dev \
     libfarmhash-dev \
+    librocksdb-dev \
+    libgflags-dev \
     libmysqlclient-dev \
     --yes --no-install-recommends
 
@ -1,7 +1,7 @@
 FROM ubuntu:20.04
 
 ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
-ARG version=20.11.1.*
+ARG version=20.13.1.*
 ARG gosu_ver=1.10
 
 RUN apt-get update \
@ -1,7 +1,7 @@
 FROM ubuntu:18.04
 
 ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
-ARG version=20.11.1.*
+ARG version=20.13.1.*
 
 RUN apt-get update && \
     apt-get install -y apt-transport-https dirmngr && \
@ -7,8 +7,10 @@ ENV SOURCE_DIR=/build
 ENV OUTPUT_DIR=/output
 ENV IGNORE='.*contrib.*'
 
-CMD mkdir -p /build/obj-x86_64-linux-gnu && cd /build/obj-x86_64-linux-gnu && CC=clang-10 CXX=clang++-10 cmake .. && cd /; \
+RUN apt-get update && apt-get install cmake --yes --no-install-recommends
+
+CMD mkdir -p /build/obj-x86_64-linux-gnu && cd /build/obj-x86_64-linux-gnu && CC=clang-11 CXX=clang++-11 cmake .. && cd /; \
     dpkg -i /package_folder/clickhouse-common-static_*.deb; \
-    llvm-profdata-10 merge -sparse ${COVERAGE_DIR}/* -o clickhouse.profdata && \
+    llvm-profdata-11 merge -sparse ${COVERAGE_DIR}/* -o clickhouse.profdata && \
-    llvm-cov-10 export /usr/bin/clickhouse -instr-profile=clickhouse.profdata -j=16 -format=lcov -skip-functions -ignore-filename-regex $IGNORE > output.lcov && \
+    llvm-cov-11 export /usr/bin/clickhouse -instr-profile=clickhouse.profdata -j=16 -format=lcov -skip-functions -ignore-filename-regex $IGNORE > output.lcov && \
     genhtml output.lcov --ignore-errors source --output-directory ${OUTPUT_DIR}
@ -1,5 +1,5 @@
 # docker build -t yandex/clickhouse-fasttest .
-FROM ubuntu:19.10
+FROM ubuntu:20.04
 
 ENV DEBIAN_FRONTEND=noninteractive LLVM_VERSION=10
@ -15,6 +15,9 @@ stage=${stage:-}
 # empty parameter.
 read -ra FASTTEST_CMAKE_FLAGS <<< "${FASTTEST_CMAKE_FLAGS:-}"
 
+# Run only matching tests.
+FASTTEST_FOCUS=${FASTTEST_FOCUS:-""}
+
 FASTTEST_WORKSPACE=$(readlink -f "${FASTTEST_WORKSPACE:-.}")
 FASTTEST_SOURCE=$(readlink -f "${FASTTEST_SOURCE:-$FASTTEST_WORKSPACE/ch}")
 FASTTEST_BUILD=$(readlink -f "${FASTTEST_BUILD:-${BUILD:-$FASTTEST_WORKSPACE/build}}")
@ -127,7 +130,7 @@ function clone_submodules
 (
 cd "$FASTTEST_SOURCE"
 
-SUBMODULES_TO_UPDATE=(contrib/boost contrib/zlib-ng contrib/libxml2 contrib/poco contrib/libunwind contrib/ryu contrib/fmtlib contrib/base64 contrib/cctz contrib/libcpuid contrib/double-conversion contrib/libcxx contrib/libcxxabi contrib/libc-headers contrib/lz4 contrib/zstd contrib/fastops contrib/rapidjson contrib/re2 contrib/sparsehash-c11 contrib/croaring)
+SUBMODULES_TO_UPDATE=(contrib/boost contrib/zlib-ng contrib/libxml2 contrib/poco contrib/libunwind contrib/ryu contrib/fmtlib contrib/base64 contrib/cctz contrib/libcpuid contrib/double-conversion contrib/libcxx contrib/libcxxabi contrib/libc-headers contrib/lz4 contrib/zstd contrib/fastops contrib/rapidjson contrib/re2 contrib/sparsehash-c11 contrib/croaring contrib/miniselect contrib/xz)
 
 git submodule sync
 git submodule update --init --recursive "${SUBMODULES_TO_UPDATE[@]}"
@ -240,6 +243,10 @@ TESTS_TO_SKIP=(
     01354_order_by_tuple_collate_const
     01355_ilike
     01411_bayesian_ab_testing
+    01532_collate_in_low_cardinality
+    01533_collate_in_nullable
+    01542_collate_in_array
+    01543_collate_in_tuple
     _orc_
     arrow
     avro
@ -264,12 +271,16 @@ TESTS_TO_SKIP=(
     protobuf
     secure
     sha256
+    xz
 
     # Not sure why these two fail even in sequential mode. Disabled for now
     # to make some progress.
     00646_url_engine
     00974_query_profiler
 
+    # In fasttest, ENABLE_LIBRARIES=0, so rocksdb engine is not enabled by default
+    01504_rocksdb
+
     # Look at DistributedFilesToInsert, so cannot run in parallel.
     01460_DistributedFilesToInsert
 
@ -279,9 +290,11 @@ TESTS_TO_SKIP=(
     01322_ttest_scipy
 
     01545_system_errors
+    # Checks system.errors
+    01563_distributed_query_finish
 )
 
-time clickhouse-test -j 8 --order=random --no-long --testname --shard --zookeeper --skip "${TESTS_TO_SKIP[@]}" 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/test_log.txt"
+time clickhouse-test -j 8 --order=random --no-long --testname --shard --zookeeper --skip "${TESTS_TO_SKIP[@]}" -- "$FASTTEST_FOCUS" 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/test_log.txt"
 
 # substr is to remove semicolon after test name
 readarray -t FAILED_TESTS < <(awk '/FAIL|TIMEOUT|ERROR/ { print substr($3, 1, length($3)-1) }' "$FASTTEST_OUTPUT/test_log.txt" | tee "$FASTTEST_OUTPUT/failed-parallel-tests.txt")
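The `awk` one-liner in the fasttest script extracts failed test names: for every log line mentioning `FAIL`, `TIMEOUT`, or `ERROR`, it takes the third whitespace-separated field (the test name after the `ts` timestamp) and strips the trailing character. A hypothetical Python equivalent, to make the parsing explicit (the log line format here is an assumed example, not a verbatim log):

```python
import re

def failed_tests(log_text: str) -> list:
    """Mirror of the awk filter: for lines containing FAIL, TIMEOUT or
    ERROR, take field $3 and drop its last character (the trailing
    colon after the test name). A length guard is added for short lines."""
    names = []
    for line in log_text.splitlines():
        if re.search(r'FAIL|TIMEOUT|ERROR', line):
            fields = line.split()
            if len(fields) >= 3:
                names.append(fields[2][:-1])
    return names

# Hypothetical log lines: ts-style timestamp, then "testname: [ STATUS ]".
log = ("2020-11-23 10:00:01 00646_url_engine: [ FAIL ] - result differs\n"
       "2020-11-23 10:00:02 00974_query_profiler: [ OK ]\n")
assert failed_tests(log) == ["00646_url_engine"]
```

This mirrors why the script comments "substr is to remove semicolon after test name": the status line glues a punctuation character onto the name.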
@ -30,7 +30,7 @@ RUN apt-get update \
     tzdata \
     vim \
     wget \
-    && pip3 --no-cache-dir install clickhouse_driver scipy \
+    && pip3 --no-cache-dir install 'clickhouse-driver>=0.1.5' scipy \
     && apt-get purge --yes python3-dev g++ \
     && apt-get autoremove --yes \
     && apt-get clean \
@ -1074,6 +1074,53 @@ wait
 unset IFS
 }
 
+function upload_results
+{
+    if ! [ -v CHPC_DATABASE_URL ]
+    then
+        echo Database for test results is not specified, will not upload them.
+        return 0
+    fi
+
+    # Surprisingly, clickhouse-client doesn't understand --host 127.0.0.1:9000
+    # so I have to extract host and port with clickhouse-local. I tried to use
+    # Poco URI parser to support this in the client, but it's broken and can't
+    # parse host:port.
+    set +x # Don't show password in the log
+    clickhouse-client \
+        $(clickhouse-local --query "with '${CHPC_DATABASE_URL}' as url select '--host ' || domain(url) || ' --port ' || toString(port(url)) format TSV") \
+        --secure \
+        --user "${CHPC_DATABASE_USER}" \
+        --password "${CHPC_DATABASE_PASSWORD}" \
+        --config "right/config/client_config.xml" \
+        --database perftest \
+        --date_time_input_format=best_effort \
+        --query "
+            insert into query_metrics_v2
+            select
+                toDate(event_time) event_date,
+                toDateTime('$(cd right/ch && git show -s --format=%ci "$SHA_TO_TEST" | cut -d' ' -f-2)') event_time,
+                $PR_TO_TEST pr_number,
+                '$REF_SHA' old_sha,
+                '$SHA_TO_TEST' new_sha,
+                test,
+                query_index,
+                query_display_name,
+                metric_name,
+                old_value,
+                new_value,
+                diff,
+                stat_threshold
+            from input('metric_name text, old_value float, new_value float, diff float,
+                    ratio_display_text text, stat_threshold float,
+                    test text, query_index int, query_display_name text')
+                settings date_time_input_format='best_effort'
+                format TSV
+                settings date_time_input_format='best_effort'
+    " < report/all-query-metrics.tsv # Don't leave whitespace after INSERT: https://github.com/ClickHouse/ClickHouse/issues/16652
+    set -x
+}
+
 # Check that local and client are in PATH
 clickhouse-local --version > /dev/null
 clickhouse-client --version > /dev/null
@ -1145,6 +1192,9 @@ case "$stage" in
     time "$script_dir/report.py" --report=all-queries > all-queries.html 2> >(tee -a report/errors.log 1>&2) ||:
     time "$script_dir/report.py" > report.html
     ;&
+"upload_results")
+    time upload_results ||:
+    ;&
 esac
 
 # Print some final debug info to help debug Weirdness, of which there is plenty.
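The comment in `upload_results` explains why the script shells out to `clickhouse-local` just to split `CHPC_DATABASE_URL` into a host and a port: `clickhouse-client` does not accept a combined `--host host:port`. As a point of comparison, the same split is straightforward in plain Python (a sketch; the CI script deliberately stays in shell, and the URL below is a hypothetical placeholder):

```python
from urllib.parse import urlsplit

def host_port(url: str) -> tuple:
    """Split a URL such as 'clickhouse://host.example.com:9440' into the
    (host, port) pair that clickhouse-client wants as separate flags."""
    parts = urlsplit(url)
    if parts.hostname is None or parts.port is None:
        raise ValueError(f"URL must contain host:port: {url!r}")
    return parts.hostname, parts.port

# Hypothetical value; the real one comes from the CHPC_DATABASE_URL secret.
assert host_port("clickhouse://play.example.com:9440") == ("play.example.com", 9440)
```

`urlsplit` handles any scheme as long as the authority is introduced by `//`, which is exactly the `domain(url)` / `port(url)` split the shell version performs with ClickHouse SQL functions.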
docker/test/performance-comparison/config/client_config.xml (new file, 17 lines)
@ -0,0 +1,17 @@
+<!--
+    This config is used to upload test results to a public ClickHouse instance.
+    It has bad certificates so we ignore them.
+-->
+<config>
+    <openSSL>
+        <client>
+            <loadDefaultCAFile>true</loadDefaultCAFile>
+            <cacheSessions>true</cacheSessions>
+            <disableProtocols>sslv2,sslv3</disableProtocols>
+            <preferServerCiphers>true</preferServerCiphers>
+            <invalidCertificateHandler>
+                <name>AcceptCertificateHandler</name> <!-- For tests only-->
+            </invalidCertificateHandler>
+        </client>
+    </openSSL>
+</config>
@ -16,7 +16,7 @@
         <max_execution_time>300</max_execution_time>
 
         <!-- One NUMA node w/o hyperthreading -->
-        <max_threads>20</max_threads>
+        <max_threads>12</max_threads>
     </default>
 </profiles>
 </yandex>
@ -121,6 +121,9 @@ set +e
 PATH="$(readlink -f right/)":"$PATH"
 export PATH
 
+export REF_PR
+export REF_SHA
+
 # Start the main comparison script.
 { \
     time ../download.sh "$REF_PR" "$REF_SHA" "$PR_TO_TEST" "$SHA_TO_TEST" && \
@ -14,10 +14,12 @@ import string
 import sys
 import time
 import traceback
+import logging
 import xml.etree.ElementTree as et
 from threading import Thread
 from scipy import stats
 
+logging.basicConfig(format='%(asctime)s: %(levelname)s: %(module)s: %(message)s', level='WARNING')
+
 total_start_seconds = time.perf_counter()
 stage_start_seconds = total_start_seconds
@ -46,6 +48,8 @@ parser.add_argument('--profile-seconds', type=int, default=0, help='For how many
 parser.add_argument('--long', action='store_true', help='Do not skip the tests tagged as long.')
 parser.add_argument('--print-queries', action='store_true', help='Print test queries and exit.')
 parser.add_argument('--print-settings', action='store_true', help='Print test settings and exit.')
+parser.add_argument('--keep-created-tables', action='store_true', help="Don't drop the created tables after the test.")
+parser.add_argument('--use-existing-tables', action='store_true', help="Don't create or drop the tables, use the existing ones instead.")
 args = parser.parse_args()
 
 reportStageEnd('start')
@ -146,20 +150,21 @@ for i, s in enumerate(servers):
 
 reportStageEnd('connect')
 
-# Run drop queries, ignoring errors. Do this before all other activity, because
-# clickhouse_driver disconnects on error (this is not configurable), and the new
-# connection loses the changes in settings.
-drop_query_templates = [q.text for q in root.findall('drop_query')]
-drop_queries = substitute_parameters(drop_query_templates)
-for conn_index, c in enumerate(all_connections):
-    for q in drop_queries:
-        try:
-            c.execute(q)
-            print(f'drop\t{conn_index}\t{c.last_query.elapsed}\t{tsv_escape(q)}')
-        except:
-            pass
+if not args.use_existing_tables:
+    # Run drop queries, ignoring errors. Do this before all other activity,
+    # because clickhouse_driver disconnects on error (this is not configurable),
+    # and the new connection loses the changes in settings.
+    drop_query_templates = [q.text for q in root.findall('drop_query')]
+    drop_queries = substitute_parameters(drop_query_templates)
+    for conn_index, c in enumerate(all_connections):
+        for q in drop_queries:
+            try:
+                c.execute(q)
+                print(f'drop\t{conn_index}\t{c.last_query.elapsed}\t{tsv_escape(q)}')
+            except:
+                pass
 
 reportStageEnd('drop-1')
 
 # Apply settings.
 # If there are errors, report them and continue -- maybe a new test uses a setting
@ -171,12 +176,9 @@ reportStageEnd('drop-1')
 settings = root.findall('settings/*')
 for conn_index, c in enumerate(all_connections):
     for s in settings:
-        try:
-            q = f"set {s.tag} = '{s.text}'"
-            c.execute(q)
-            print(f'set\t{conn_index}\t{c.last_query.elapsed}\t{tsv_escape(q)}')
-        except:
-            print(traceback.format_exc(), file=sys.stderr)
+        # requires clickhouse-driver >= 1.1.5 to accept arbitrary new settings
+        # (https://github.com/mymarilyn/clickhouse-driver/pull/142)
+        c.settings[s.tag] = s.text
 
 reportStageEnd('settings')
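The switch from running `SET` queries to assigning `c.settings[s.tag] = s.text` matters because, as the drop-query comment notes, `clickhouse_driver` reconnects on errors, and a fresh connection loses everything applied with server-side `SET`, while client-side session settings are resent with every query. A minimal sketch of that difference, using a stand-in client class (only the `settings` dict mirrors the real driver; everything else is hypothetical):

```python
class FakeClient:
    """Stand-in for a clickhouse_driver-style client: entries in `settings`
    survive reconnects because the client resends them with each query,
    whereas a server-side SET lives only as long as the connection."""
    def __init__(self):
        self.settings = {}       # client-side, resent on every query
        self.session_sets = {}   # server-side SET, wiped on reconnect

    def execute_set(self, key, value):
        self.session_sets[key] = value

    def reconnect(self):
        self.session_sets.clear()  # the server forgets the session

    def effective(self, key):
        return self.settings.get(key, self.session_sets.get(key))

c = FakeClient()
c.execute_set('max_threads', '12')               # the old approach
c.settings['max_memory_usage'] = '10000000000'   # the new approach
c.reconnect()
assert c.effective('max_threads') is None                 # lost with the connection
assert c.effective('max_memory_usage') == '10000000000'   # still applied
```

This is why the comment pins a minimum `clickhouse-driver` version: older releases rejected setting names they did not know about, so arbitrary per-test settings could not be passed client-side.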
@ -194,37 +196,40 @@ for t in tables:
 
 reportStageEnd('preconditions')
 
-# Run create and fill queries. We will run them simultaneously for both servers,
-# to save time.
-# The weird search is to keep the relative order of elements, which matters, and
-# etree doesn't support the appropriate xpath query.
-create_query_templates = [q.text for q in root.findall('./*') if q.tag in ('create_query', 'fill_query')]
-create_queries = substitute_parameters(create_query_templates)
+if not args.use_existing_tables:
+    # Run create and fill queries. We will run them simultaneously for both
+    # servers, to save time. The weird XML search + filter is because we want to
+    # keep the relative order of elements, and etree doesn't support the
+    # appropriate xpath query.
+    create_query_templates = [q.text for q in root.findall('./*')
+        if q.tag in ('create_query', 'fill_query')]
+    create_queries = substitute_parameters(create_query_templates)
 
-# Disallow temporary tables, because the clickhouse_driver reconnects on errors,
-# and temporary tables are destroyed. We want to be able to continue after some
-# errors.
-for q in create_queries:
-    if re.search('create temporary table', q, flags=re.IGNORECASE):
-        print(f"Temporary tables are not allowed in performance tests: '{q}'",
-            file = sys.stderr)
-        sys.exit(1)
+    # Disallow temporary tables, because the clickhouse_driver reconnects on
+    # errors, and temporary tables are destroyed. We want to be able to continue
+    # after some errors.
+    for q in create_queries:
+        if re.search('create temporary table', q, flags=re.IGNORECASE):
+            print(f"Temporary tables are not allowed in performance tests: '{q}'",
+                file = sys.stderr)
+            sys.exit(1)
 
-def do_create(connection, index, queries):
-    for q in queries:
-        connection.execute(q)
-        print(f'create\t{index}\t{connection.last_query.elapsed}\t{tsv_escape(q)}')
+    def do_create(connection, index, queries):
+        for q in queries:
+            connection.execute(q)
+            print(f'create\t{index}\t{connection.last_query.elapsed}\t{tsv_escape(q)}')
 
-threads = [Thread(target = do_create, args = (connection, index, create_queries))
-    for index, connection in enumerate(all_connections)]
+    threads = [
+        Thread(target = do_create, args = (connection, index, create_queries))
+        for index, connection in enumerate(all_connections)]
 
-for t in threads:
-    t.start()
+    for t in threads:
+        t.start()
 
-for t in threads:
-    t.join()
+    for t in threads:
+        t.join()
 
 reportStageEnd('create')
 
 # By default, test all queries.
 queries_to_run = range(0, len(test_queries))
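The `do_create` threads above run the create and fill queries against both compared servers concurrently to halve setup time; each thread owns exactly one connection, so no driver state is shared. A reduced sketch of that fan-out pattern, with dummy connections standing in for the real per-server clients:

```python
from threading import Thread

class DummyConnection:
    """Hypothetical stand-in for one server's database connection."""
    def __init__(self):
        self.executed = []

    def execute(self, query):
        self.executed.append(query)

def do_create(connection, queries):
    # Each thread drives exactly one connection, mirroring perf.py.
    for q in queries:
        connection.execute(q)

create_queries = ["CREATE TABLE t (x UInt64) ENGINE = Memory",
                  "INSERT INTO t SELECT number FROM numbers(10)"]
connections = [DummyConnection(), DummyConnection()]  # "left" and "right" server

threads = [Thread(target=do_create, args=(c, create_queries)) for c in connections]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Both servers ran the full query list, in order, independently.
assert all(c.executed == create_queries for c in connections)
```

Because each connection is confined to a single thread, the queries stay strictly ordered per server while the two servers proceed in parallel.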
@ -403,10 +408,11 @@ print(f'profile-total\t{profile_total_seconds}')
 reportStageEnd('run')
 
 # Run drop queries
-drop_queries = substitute_parameters(drop_query_templates)
-for conn_index, c in enumerate(all_connections):
-    for q in drop_queries:
-        c.execute(q)
-        print(f'drop\t{conn_index}\t{c.last_query.elapsed}\t{tsv_escape(q)}')
+if not args.keep_created_tables and not args.use_existing_tables:
+    drop_queries = substitute_parameters(drop_query_templates)
+    for conn_index, c in enumerate(all_connections):
+        for q in drop_queries:
+            c.execute(q)
+            print(f'drop\t{conn_index}\t{c.last_query.elapsed}\t{tsv_escape(q)}')
 
 reportStageEnd('drop-2')
@@ -10,6 +10,11 @@ RUN apt-get update --yes \
     gpg-agent \
     debsig-verify \
     strace \
+    protobuf-compiler \
+    protobuf-compiler-grpc \
+    libprotoc-dev \
+    libgrpc++-dev \
+    libc-ares-dev \
     --yes --no-install-recommends

 #RUN wget -nv -O - http://files.viva64.com/etc/pubkey.txt | sudo apt-key add -
@@ -33,7 +38,8 @@ RUN set -x \
     && dpkg -i "${PKG_VERSION}.deb"

 CMD echo "Running PVS version $PKG_VERSION" && cd /repo_folder && pvs-studio-analyzer credentials $LICENCE_NAME $LICENCE_KEY -o ./licence.lic \
-    && cmake . -D"ENABLE_EMBEDDED_COMPILER"=OFF && ninja re2_st \
+    && cmake . -D"ENABLE_EMBEDDED_COMPILER"=OFF -D"USE_INTERNAL_PROTOBUF_LIBRARY"=OFF -D"USE_INTERNAL_GRPC_LIBRARY"=OFF \
+    && ninja re2_st clickhouse_grpc_protos \
     && pvs-studio-analyzer analyze -o pvs-studio.log -e contrib -j 4 -l ./licence.lic; \
     plog-converter -a GA:1,2 -t fullhtml -o /test_output/pvs-studio-html-report pvs-studio.log; \
     plog-converter -a GA:1,2 -t tasklist -o /test_output/pvs-studio-task-report.txt pvs-studio.log
@@ -1,12 +1,12 @@
 # docker build -t yandex/clickhouse-stateful-test-with-coverage .
-FROM yandex/clickhouse-stateless-test
+FROM yandex/clickhouse-stateless-test-with-coverage

 RUN echo "deb [trusted=yes] http://apt.llvm.org/bionic/ llvm-toolchain-bionic-9 main" >> /etc/apt/sources.list

 RUN apt-get update -y \
     && env DEBIAN_FRONTEND=noninteractive \
         apt-get install --yes --no-install-recommends \
-            python3-requests
+            python3-requests procps psmisc

 COPY s3downloader /s3downloader
 COPY run.sh /run.sh
@@ -1,40 +1,44 @@
 #!/bin/bash

 kill_clickhouse () {
-    kill "$(pgrep -u clickhouse)" 2>/dev/null
+    echo "clickhouse pids $(pgrep -u clickhouse)" | ts '%Y-%m-%d %H:%M:%S'
+    pkill -f "clickhouse-server" 2>/dev/null

-    for _ in {1..10}
+    for _ in {1..120}
     do
-        if ! kill -0 "$(pgrep -u clickhouse)"; then
-            echo "No clickhouse process"
-            break
-        else
-            echo "Process $(pgrep -u clickhouse) still alive"
-            sleep 10
-        fi
+        if ! pkill -0 -f "clickhouse-server" ; then break ; fi
+        echo "ClickHouse still alive" | ts '%Y-%m-%d %H:%M:%S'
+        sleep 1
     done

+    if pkill -0 -f "clickhouse-server"
+    then
+        pstree -apgT
+        jobs
+        echo "Failed to kill the ClickHouse server" | ts '%Y-%m-%d %H:%M:%S'
+        return 1
+    fi
 }

 start_clickhouse () {
     LLVM_PROFILE_FILE='server_%h_%p_%m.profraw' sudo -Eu clickhouse /usr/bin/clickhouse-server --config /etc/clickhouse-server/config.xml &
-}
-
-wait_llvm_profdata () {
-    while kill -0 "$(pgrep llvm-profdata-10)"
+    counter=0
+    until clickhouse-client --query "SELECT 1"
     do
-        echo "Waiting for profdata $(pgrep llvm-profdata-10) still alive"
-        sleep 3
+        if [ "$counter" -gt 120 ]
+        then
+            echo "Cannot start clickhouse-server"
+            cat /var/log/clickhouse-server/stdout.log
+            tail -n1000 /var/log/clickhouse-server/stderr.log
+            tail -n1000 /var/log/clickhouse-server/clickhouse-server.log
+            break
+        fi
+        sleep 0.5
+        counter=$((counter + 1))
     done
 }

-merge_client_files_in_background () {
-    client_files=$(ls /client_*profraw 2>/dev/null)
-    if [ -n "$client_files" ]
-    then
-        llvm-profdata-10 merge -sparse "$client_files" -o "merged_client_$(date +%s).profraw"
-        rm "$client_files"
-    fi
-}
-
 chmod 777 /
@@ -51,26 +55,7 @@ chmod 777 -R /var/log/clickhouse-server/
 # install test configs
 /usr/share/clickhouse-test/config/install.sh

-function start()
-{
-    counter=0
-    until clickhouse-client --query "SELECT 1"
-    do
-        if [ "$counter" -gt 120 ]
-        then
-            echo "Cannot start clickhouse-server"
-            cat /var/log/clickhouse-server/stdout.log
-            tail -n1000 /var/log/clickhouse-server/stderr.log
-            tail -n1000 /var/log/clickhouse-server/clickhouse-server.log
-            break
-        fi
-        timeout 120 service clickhouse-server start
-        sleep 0.5
-        counter=$((counter + 1))
-    done
-}
-
-start
+start_clickhouse

 # shellcheck disable=SC2086 # No quotes because I want to split it into words.
 if ! /s3downloader --dataset-names $DATASETS; then
@@ -81,25 +66,20 @@ fi

 chmod 777 -R /var/lib/clickhouse

-while /bin/true; do
-    merge_client_files_in_background
-    sleep 2
-done &
-
-LLVM_PROFILE_FILE='client_%h_%p_%m.profraw' clickhouse-client --query "SHOW DATABASES"
-LLVM_PROFILE_FILE='client_%h_%p_%m.profraw' clickhouse-client --query "ATTACH DATABASE datasets ENGINE = Ordinary"
-LLVM_PROFILE_FILE='client_%h_%p_%m.profraw' clickhouse-client --query "CREATE DATABASE test"
+LLVM_PROFILE_FILE='client_coverage.profraw' clickhouse-client --query "SHOW DATABASES"
+LLVM_PROFILE_FILE='client_coverage.profraw' clickhouse-client --query "ATTACH DATABASE datasets ENGINE = Ordinary"
+LLVM_PROFILE_FILE='client_coverage.profraw' clickhouse-client --query "CREATE DATABASE test"

 kill_clickhouse
 start_clickhouse

-sleep 10
-
-LLVM_PROFILE_FILE='client_%h_%p_%m.profraw' clickhouse-client --query "SHOW TABLES FROM datasets"
-LLVM_PROFILE_FILE='client_%h_%p_%m.profraw' clickhouse-client --query "SHOW TABLES FROM test"
-LLVM_PROFILE_FILE='client_%h_%p_%m.profraw' clickhouse-client --query "RENAME TABLE datasets.hits_v1 TO test.hits"
-LLVM_PROFILE_FILE='client_%h_%p_%m.profraw' clickhouse-client --query "RENAME TABLE datasets.visits_v1 TO test.visits"
-LLVM_PROFILE_FILE='client_%h_%p_%m.profraw' clickhouse-client --query "SHOW TABLES FROM test"
+LLVM_PROFILE_FILE='client_coverage.profraw' clickhouse-client --query "SHOW TABLES FROM datasets"
+LLVM_PROFILE_FILE='client_coverage.profraw' clickhouse-client --query "SHOW TABLES FROM test"
+LLVM_PROFILE_FILE='client_coverage.profraw' clickhouse-client --query "RENAME TABLE datasets.hits_v1 TO test.hits"
+LLVM_PROFILE_FILE='client_coverage.profraw' clickhouse-client --query "RENAME TABLE datasets.visits_v1 TO test.visits"
+LLVM_PROFILE_FILE='client_coverage.profraw' clickhouse-client --query "SHOW TABLES FROM test"

 if grep -q -- "--use-skip-list" /usr/bin/clickhouse-test; then
     SKIP_LIST_OPT="--use-skip-list"
@@ -109,15 +89,10 @@ fi
 # more idiologically correct.
 read -ra ADDITIONAL_OPTIONS <<< "${ADDITIONAL_OPTIONS:-}"

-LLVM_PROFILE_FILE='client_%h_%p_%m.profraw' clickhouse-test --testname --shard --zookeeper --no-stateless --hung-check --print-time "$SKIP_LIST_OPT" "${ADDITIONAL_OPTIONS[@]}" "$SKIP_TESTS_OPTION" 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee test_output/test_result.txt
+LLVM_PROFILE_FILE='client_coverage.profraw' clickhouse-test --testname --shard --zookeeper --no-stateless --hung-check --print-time "$SKIP_LIST_OPT" "${ADDITIONAL_OPTIONS[@]}" "$SKIP_TESTS_OPTION" 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee test_output/test_result.txt

 kill_clickhouse

-wait_llvm_profdata
-
 sleep 3

-wait_llvm_profdata # 100% merged all parts
-
 cp /*.profraw /profraw ||:
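The kill and start loops in the scripts above share one pattern: poll a condition up to a fixed number of attempts with a short sleep, then report failure if it never held. A small Python model of that pattern (names and the simulated server are illustrative, not part of the repo):

```python
import time

def poll_until(condition, attempts=120, interval=0.0):
    # Like `for _ in {1..120}; do ...; sleep 1; done` in the shell scripts:
    # return True as soon as the condition holds, False after all attempts.
    for _ in range(attempts):
        if condition():
            return True
        time.sleep(interval)
    return False

# Simulate a server process that is finally gone after 5 polls.
state = {"polls": 0}

def server_stopped():
    state["polls"] += 1
    return state["polls"] >= 5

stopped = poll_until(server_stopped)
```

The bounded loop with a final explicit check is what lets the real script emit diagnostics (`pstree`, `jobs`) and return non-zero instead of hanging forever.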
@@ -29,7 +29,7 @@ def dowload_with_progress(url, path):
     logging.info("Downloading from %s to temp path %s", url, path)
     for i in range(RETRIES_COUNT):
         try:
-            with open(path, 'w') as f:
+            with open(path, 'wb') as f:
                 response = requests.get(url, stream=True)
                 response.raise_for_status()
                 total_length = response.headers.get('content-length')
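The `'w'` to `'wb'` fix above matters because the downloader streams raw bytes, and writing bytes to a text-mode file raises `TypeError`. A self-contained demonstration with a plain byte payload (no network involved):

```python
import os
import tempfile

chunk = b"\x00\x01\xffbinary payload"
fd, path = tempfile.mkstemp()
os.close(fd)

# Text mode ('w') rejects bytes: this is the bug the diff fixes.
try:
    with open(path, "w") as f:
        f.write(chunk)
    text_mode_accepted = True
except TypeError:
    text_mode_accepted = False

# Binary mode ('wb') round-trips the payload intact.
with open(path, "wb") as f:
    f.write(chunk)
with open(path, "rb") as f:
    data = f.read()
os.remove(path)
```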
@@ -43,6 +43,8 @@ RUN apt-get --allow-unauthenticated update -y \
     libreadline-dev \
     libsasl2-dev \
     libzstd-dev \
+    librocksdb-dev \
+    libgflags-dev \
     lsof \
     moreutils \
     ncdu \
@@ -1,4 +1,4 @@
-# docker build -t yandex/clickhouse-stateless-with-coverage-test .
+# docker build -t yandex/clickhouse-stateless-test-with-coverage .
 # TODO: that can be based on yandex/clickhouse-stateless-test (llvm version and CMD differs)
 FROM yandex/clickhouse-test-base

@@ -28,7 +28,9 @@ RUN apt-get update -y \
     lsof \
     unixodbc \
     wget \
-    qemu-user-static
+    qemu-user-static \
+    procps \
+    psmisc

 RUN mkdir -p /tmp/clickhouse-odbc-tmp \
     && wget -nv -O - ${odbc_driver_url} | tar --strip-components=1 -xz -C /tmp/clickhouse-odbc-tmp \
@@ -2,27 +2,41 @@

 kill_clickhouse () {
     echo "clickhouse pids $(pgrep -u clickhouse)" | ts '%Y-%m-%d %H:%M:%S'
-    kill "$(pgrep -u clickhouse)" 2>/dev/null
+    pkill -f "clickhouse-server" 2>/dev/null

-    for _ in {1..10}
+    for _ in {1..120}
     do
-        if ! kill -0 "$(pgrep -u clickhouse)"; then
-            echo "No clickhouse process" | ts '%Y-%m-%d %H:%M:%S'
-            break
-        else
-            echo "Process $(pgrep -u clickhouse) still alive" | ts '%Y-%m-%d %H:%M:%S'
-            sleep 10
-        fi
+        if ! pkill -0 -f "clickhouse-server" ; then break ; fi
+        echo "ClickHouse still alive" | ts '%Y-%m-%d %H:%M:%S'
+        sleep 1
     done

-    echo "Will try to send second kill signal for sure"
-    kill "$(pgrep -u clickhouse)" 2>/dev/null
-    sleep 5
-    echo "clickhouse pids $(pgrep -u clickhouse)" | ts '%Y-%m-%d %H:%M:%S'
+    if pkill -0 -f "clickhouse-server"
+    then
+        pstree -apgT
+        jobs
+        echo "Failed to kill the ClickHouse server" | ts '%Y-%m-%d %H:%M:%S'
+        return 1
+    fi
 }

 start_clickhouse () {
     LLVM_PROFILE_FILE='server_%h_%p_%m.profraw' sudo -Eu clickhouse /usr/bin/clickhouse-server --config /etc/clickhouse-server/config.xml &
+    counter=0
+    until clickhouse-client --query "SELECT 1"
+    do
+        if [ "$counter" -gt 120 ]
+        then
+            echo "Cannot start clickhouse-server"
+            cat /var/log/clickhouse-server/stdout.log
+            tail -n1000 /var/log/clickhouse-server/stderr.log
+            tail -n1000 /var/log/clickhouse-server/clickhouse-server.log
+            break
+        fi
+        sleep 0.5
+        counter=$((counter + 1))
+    done
 }

 chmod 777 /
@@ -44,9 +58,6 @@ chmod 777 -R /var/log/clickhouse-server/

 start_clickhouse

-sleep 10
-
-
 if grep -q -- "--use-skip-list" /usr/bin/clickhouse-test; then
     SKIP_LIST_OPT="--use-skip-list"
 fi
@@ -17,13 +17,6 @@ def get_skip_list_cmd(path):
     return ''


-def run_perf_test(cmd, xmls_path, output_folder):
-    output_path = os.path.join(output_folder, "perf_stress_run.txt")
-    f = open(output_path, 'w')
-    p = Popen("{} --skip-tags=long --recursive --input-files {}".format(cmd, xmls_path), shell=True, stdout=f, stderr=f)
-    return p
-
-
 def get_options(i):
     options = ""
     if 0 < i:
@@ -75,8 +68,6 @@ if __name__ == "__main__":

     args = parser.parse_args()
     func_pipes = []
-    perf_process = None
-    perf_process = run_perf_test(args.perf_test_cmd, args.perf_test_xml_path, args.output_folder)
     func_pipes = run_func_test(args.test_cmd, args.output_folder, args.num_parallel, args.skip_func_tests, args.global_time_limit)

     logging.info("Will wait functests to finish")
@@ -35,7 +35,7 @@ RUN apt-get update \
 ENV TZ=Europe/Moscow
 RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

-RUN pip3 install urllib3 testflows==1.6.59 docker-compose docker dicttoxml kazoo tzlocal
+RUN pip3 install urllib3 testflows==1.6.65 docker-compose docker dicttoxml kazoo tzlocal

 ENV DOCKER_CHANNEL stable
 ENV DOCKER_VERSION 17.09.1-ce
|
141
docs/en/development/adding_test_queries.md
Normal file
141
docs/en/development/adding_test_queries.md
Normal file
@ -0,0 +1,141 @@
|
|||||||
|
# How to add test queries to ClickHouse CI
|
||||||
|
|
||||||
|
ClickHouse has hundreds (or even thousands) of features. Every commit get checked by a complex set of tests containing many thousands of test cases.
|
||||||
|
|
||||||
|
The core functionality is very well tested, but some corner-cases and different combinations of features can be uncovered with ClickHouse CI.
|
||||||
|
|
||||||
|
Most of the bugs/regressions we see happen in that 'grey area' where test coverage is poor.
|
||||||
|
|
||||||
|
And we are very interested in covering most of the possible scenarios and feature combinations used in real life by tests.
|
||||||
|
|
||||||
|
## Why adding tests
|
||||||
|
|
||||||
|
Why/when you should add a test case into ClickHouse code:
|
||||||
|
1) you use some complicated scenarios / feature combinations / you have some corner case which is probably not widely used
|
||||||
|
2) you see that certain behavior gets changed between version w/o notifications in the changelog
|
||||||
|
3) you just want to help to improve ClickHouse quality and ensure the features you use will not be broken in the future releases
|
||||||
|
4) once the test is added/accepted, you can be sure the corner case you check will never be accidentally broken.
|
||||||
|
5) you will be a part of great open-source community
|
||||||
|
6) your name will be visible in the `system.contributors` table!
|
||||||
|
7) you will make a world bit better :)
|
||||||
|
|
||||||
|
### Steps to do
|
||||||
|
|
||||||
|
#### Prerequisite
|
||||||
|
|
||||||
|
I assume you run some Linux machine (you can use docker / virtual machines on other OS) and any modern browser / internet connection, and you have some basic Linux & SQL skills.
|
||||||
|
|
||||||
|
Any highly specialized knowledge is not needed (so you don't need to know C++ or know something about how ClickHouse CI works).
|
||||||
|
|
||||||
|
|
||||||
|
#### Preparation
|
||||||
|
|
||||||
|
1) [create GitHub account](https://github.com/join) (if you haven't one yet)
|
||||||
|
2) [setup git](https://docs.github.com/en/free-pro-team@latest/github/getting-started-with-github/set-up-git)
|
||||||
|
```bash
|
||||||
|
# for Ubuntu
|
||||||
|
sudo apt-get update
|
||||||
|
sudo apt-get install git
|
||||||
|
|
||||||
|
git config --global user.name "John Doe" # fill with your name
|
||||||
|
git config --global user.email "email@example.com" # fill with your email
|
||||||
|
|
||||||
|
```
|
||||||
|
3) [fork ClickHouse project](https://docs.github.com/en/free-pro-team@latest/github/getting-started-with-github/fork-a-repo) - just open [https://github.com/ClickHouse/ClickHouse](https://github.com/ClickHouse/ClickHouse) and press fork button in the top right corner:
|
||||||
|
![fork repo](https://github-images.s3.amazonaws.com/help/bootcamp/Bootcamp-Fork.png)
|
||||||
|
|
||||||
|
4) clone your fork to some folder on your PC, for example, `~/workspace/ClickHouse`
|
||||||
|
```
|
||||||
|
mkdir ~/workspace && cd ~/workspace
|
||||||
|
git clone https://github.com/< your GitHub username>/ClickHouse
|
||||||
|
cd ClickHouse
|
||||||
|
git remote add upstream https://github.com/ClickHouse/ClickHouse
|
||||||
|
```
|
||||||
|
|
||||||
|
#### New branch for the test
|
||||||
|
|
||||||
|
1) create a new branch from the latest clickhouse master
|
||||||
|
```
|
||||||
|
cd ~/workspace/ClickHouse
|
||||||
|
git fetch upstream
|
||||||
|
git checkout -b name_for_a_branch_with_my_test upstream/master
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Install & run clickhouse
|
||||||
|
|
||||||
|
1) install `clickhouse-server` (follow [official docs](https://clickhouse.tech/docs/en/getting-started/install/))
|
||||||
|
2) install test configurations (it will use Zookeeper mock implementation and adjust some settings)
|
||||||
|
```
|
||||||
|
cd ~/workspace/ClickHouse/tests/config
|
||||||
|
sudo ./install.sh
|
||||||
|
```
|
||||||
|
3) run clickhouse-server
|
||||||
|
```
|
||||||
|
sudo systemctl restart clickhouse-server
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Creating the test file
|
||||||
|
|
||||||
|
|
||||||
|
1) find the number for your test - find the file with the biggest number in `tests/queries/0_stateless/`
|
||||||
|
|
||||||
|
```sh
|
||||||
|
$ cd ~/workspace/ClickHouse
|
||||||
|
$ ls tests/queries/0_stateless/[0-9]*.reference | tail -n 1
|
||||||
|
tests/queries/0_stateless/01520_client_print_query_id.reference
|
||||||
|
```
|
||||||
|
Currently, the last number for the test is `01520`, so my test will have the number `01521`
|
||||||
|
|
||||||
|
2) create an SQL file with the next number and name of the feature you test
|
||||||
|
|
||||||
|
```sh
|
||||||
|
touch tests/queries/0_stateless/01521_dummy_test.sql
|
||||||
|
```
|
||||||
|
|
||||||
|
3) edit SQL file with your favorite editor (see hint of creating tests below)
|
||||||
|
```sh
|
||||||
|
vim tests/queries/0_stateless/01521_dummy_test.sql
|
||||||
|
```
|
||||||
|
|
||||||
|
|
||||||
|
4) run the test, and put the result of that into the reference file:
|
||||||
|
```
|
||||||
|
clickhouse-client -nmT < tests/queries/0_stateless/01521_dummy_test.sql | tee tests/queries/0_stateless/01521_dummy_test.reference
|
||||||
|
```
|
||||||
|
|
||||||
|
5) ensure everything is correct, if the test output is incorrect (due to some bug for example), adjust the reference file using text editor.
|
||||||
|
|
||||||
|
#### How create good test
|
||||||
|
|
||||||
|
- test should be
|
||||||
|
- minimal - create only tables related to tested functionality, remove unrelated columns and parts of query
|
||||||
|
- fast - should not take longer than few seconds (better subseconds)
|
||||||
|
- correct - fails then feature is not working
|
||||||
|
- deteministic
|
||||||
|
- isolated / stateless
|
||||||
|
- don't rely on some environment things
|
||||||
|
- don't rely on timing when possible
|
||||||
|
- try to cover corner cases (zeros / Nulls / empty sets / throwing exceptions)
|
||||||
|
- to test that query return errors, you can put special comment after the query: `-- { serverError 60 }` or `-- { clientError 20 }`
|
||||||
|
- don't switch databases (unless necessary)
|
||||||
|
- you can create several table replicas on the same node if needed
|
||||||
|
- you can use one of the test cluster definitions when needed (see system.clusters)
|
||||||
|
- use `number` / `numbers_mt` / `zeros` / `zeros_mt` and similar for queries / to initialize data when appliable
|
||||||
|
- clean up the created objects after test and before the test (DROP IF EXISTS) - in case of some dirty state
|
||||||
|
- prefer sync mode of operations (mutations, merges, etc.)
|
||||||
|
- use other SQL files in the `0_stateless` folder as an example
|
||||||
|
- ensure the feature / feature combination you want to tests is not covered yet with existsing tests
|
||||||
|
|
||||||
|
#### Commit / push / create PR.
|
||||||
|
|
||||||
|
1) commit & push your changes
|
||||||
|
```sh
|
||||||
|
cd ~/workspace/ClickHouse
|
||||||
|
git add tests/queries/0_stateless/01521_dummy_test.sql
|
||||||
|
git add tests/queries/0_stateless/01521_dummy_test.reference
|
||||||
|
git commit # use some nice commit message when possible
|
||||||
|
git push origin HEAD
|
||||||
|
```
|
||||||
|
2) use a link which was shown during the push, to create a PR into the main repo
|
||||||
|
3) adjust the PR title and contents, in `Changelog category (leave one)` keep
|
||||||
|
`Build/Testing/Packaging Improvement`, fill the rest of the fields if you want.
|
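The numbering step in the new document (`ls ... | tail -n 1`, then take the next number) can also be done programmatically. A sketch using a hypothetical file listing standing in for the contents of `tests/queries/0_stateless/`:

```python
import re

# Hypothetical directory listing; in a real checkout this would come from
# os.listdir("tests/queries/0_stateless").
reference_files = [
    "00001_select_1.reference",
    "00941_system_columns.reference",
    "01520_client_print_query_id.reference",
]

# Each test file starts with a zero-padded number followed by an underscore.
numbers = [int(re.match(r"(\d+)_", name).group(1)) for name in reference_files]
next_test_name = f"{max(numbers) + 1:05d}_dummy_test.sql"
```

With `01520` as the current maximum, this yields the `01521_dummy_test.sql` name used in the document's example.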
@@ -23,7 +23,7 @@ $ sudo apt-get install git cmake python ninja-build

 Or cmake3 instead of cmake on older systems.

-### Install GCC 9 {#install-gcc-9}
+### Install GCC 10 {#install-gcc-10}

 There are several ways to do this.

@@ -32,7 +32,7 @@ There are several ways to do this.
 On Ubuntu 19.10 or newer:

     $ sudo apt-get update
-    $ sudo apt-get install gcc-9 g++-9
+    $ sudo apt-get install gcc-10 g++-10

 #### Install from a PPA Package {#install-from-a-ppa-package}

@@ -42,18 +42,18 @@ On older Ubuntu:
 $ sudo apt-get install software-properties-common
 $ sudo apt-add-repository ppa:ubuntu-toolchain-r/test
 $ sudo apt-get update
-$ sudo apt-get install gcc-9 g++-9
+$ sudo apt-get install gcc-10 g++-10
 ```

 #### Install from Sources {#install-from-sources}

 See [utils/ci/build-gcc-from-sources.sh](https://github.com/ClickHouse/ClickHouse/blob/master/utils/ci/build-gcc-from-sources.sh)

-### Use GCC 9 for Builds {#use-gcc-9-for-builds}
+### Use GCC 10 for Builds {#use-gcc-10-for-builds}

 ``` bash
-$ export CC=gcc-9
-$ export CXX=g++-9
+$ export CC=gcc-10
+$ export CXX=g++-10
 ```

 ### Checkout ClickHouse Sources {#checkout-clickhouse-sources}
@@ -88,7 +88,7 @@ The build requires the following components:
 - Git (is used only to checkout the sources, it’s not needed for the build)
 - CMake 3.10 or newer
 - Ninja (recommended) or Make
-- C++ compiler: gcc 9 or clang 8 or newer
+- C++ compiler: gcc 10 or clang 8 or newer
 - Linker: lld or gold (the classic GNU ld won’t work)
 - Python (is only used inside LLVM build and it is optional)

@@ -131,13 +131,13 @@ ClickHouse uses several external libraries for building. All of them do not need

 ## C++ Compiler {#c-compiler}

-Compilers GCC starting from version 9 and Clang version 8 or above are supported for building ClickHouse.
+Compilers GCC starting from version 10 and Clang version 8 or above are supported for building ClickHouse.

 Official Yandex builds currently use GCC because it generates machine code of slightly better performance (yielding a difference of up to several percent according to our benchmarks). And Clang is more convenient for development usually. Though, our continuous integration (CI) platform runs checks for about a dozen of build combinations.

 To install GCC on Ubuntu run: `sudo apt install gcc g++`

-Check the version of gcc: `gcc --version`. If it is below 9, then follow the instruction here: https://clickhouse.tech/docs/en/development/build/#install-gcc-9.
+Check the version of gcc: `gcc --version`. If it is below 10, then follow the instruction here: https://clickhouse.tech/docs/en/development/build/#install-gcc-10.

 Mac OS X build is supported only for Clang. Just run `brew install llvm`

@@ -152,11 +152,11 @@ Now that you are ready to build ClickHouse we recommend you to create a separate

 You can have several different directories (build_release, build_debug, etc.) for different types of build.

-While inside the `build` directory, configure your build by running CMake. Before the first run, you need to define environment variables that specify compiler (version 9 gcc compiler in this example).
+While inside the `build` directory, configure your build by running CMake. Before the first run, you need to define environment variables that specify compiler (version 10 gcc compiler in this example).

 Linux:

-    export CC=gcc-9 CXX=g++-9
+    export CC=gcc-10 CXX=g++-10
     cmake ..

 Mac OS X:
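The build docs above say to check `gcc --version` and require major version 10 or newer. Automating that check can be sketched as follows; the sample banner strings are illustrative, not captured from a real machine:

```python
import re

def gcc_major_version(version_output):
    # Take the first dotted X.Y.Z version number in the `gcc --version` banner.
    match = re.search(r"(\d+)\.(\d+)\.(\d+)", version_output)
    return int(match.group(1)) if match else None

# Illustrative sample outputs of `gcc --version` (first line only).
new_sample = "gcc (Ubuntu 10.2.0-5ubuntu1~20.04) 10.2.0"
old_sample = "gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0"

meets_requirement = gcc_major_version(new_sample) >= 10
```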
docs/en/engines/table-engines/integrations/embedded-rocksdb.md (new file, 45 lines)
@@ -0,0 +1,45 @@
+---
+toc_priority: 6
+toc_title: EmbeddedRocksDB
+---
+
+# EmbeddedRocksDB Engine {#EmbeddedRocksDB-engine}
+
+This engine allows integrating ClickHouse with [rocksdb](http://rocksdb.org/).
+
+`EmbeddedRocksDB` lets you:
+
+## Creating a Table {#table_engine-EmbeddedRocksDB-creating-a-table}
+
+``` sql
+CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
+(
+    name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
+    name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
+    ...
+) ENGINE = EmbeddedRocksDB PRIMARY KEY(primary_key_name)
+```
+
+Required parameters:
+
+- `primary_key_name` – any column name in the column list.
+
+Example:
+
+``` sql
+CREATE TABLE test
+(
+    `key` String,
+    `v1` UInt32,
+    `v2` String,
+    `v3` Float32,
+)
+ENGINE = EmbeddedRocksDB
+PRIMARY KEY key
+```
+
+## Description {#description}
+
+- a `primary key` must be specified, and it supports only one column. The primary key is serialized in binary as the rocksdb key.
+- columns other than the primary key are serialized in binary as the rocksdb value, in the corresponding order.
+- queries with `equals` or `in` filtering on the key are optimized into a multi-key lookup from rocksdb.
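The last point, equality and `IN` filters on the primary key becoming direct multi-key gets rather than a full scan, can be modeled with a toy key-value store. This is an illustration of the lookup idea only, not ClickHouse code:

```python
# Toy key-value "table": primary key -> tuple of the remaining columns,
# standing in for the rocksdb key -> serialized-value mapping.
store = {
    "a": (1, "x", 0.5),
    "b": (2, "y", 1.5),
    "c": (3, "z", 2.5),
}

def lookup(keys):
    # Direct gets for the requested keys only; nothing else in `store`
    # is visited, unlike a filter applied during a full scan.
    return {k: store[k] for k in keys if k in store}

result = lookup(["a", "c", "missing"])
```

Keys absent from the store are simply skipped, which matches how an `IN` list with unknown keys returns only the rows that exist.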
@ -343,8 +343,8 @@ The `set` index can be used with all functions. Function subsets for other index
|
|||||||
|------------------------------------------------------------------------------------------------------------|-------------|--------|-------------|-------------|---------------|
|
|------------------------------------------------------------------------------------------------------------|-------------|--------|-------------|-------------|---------------|
|
||||||
| [equals (=, ==)](../../../sql-reference/functions/comparison-functions.md#function-equals) | ✔ | ✔ | ✔ | ✔ | ✔ |
|
| [equals (=, ==)](../../../sql-reference/functions/comparison-functions.md#function-equals) | ✔ | ✔ | ✔ | ✔ | ✔ |
|
||||||
| [notEquals(!=, \<\>)](../../../sql-reference/functions/comparison-functions.md#function-notequals) | ✔ | ✔ | ✔ | ✔ | ✔ |
|
| [notEquals(!=, \<\>)](../../../sql-reference/functions/comparison-functions.md#function-notequals) | ✔ | ✔ | ✔ | ✔ | ✔ |
|
||||||
| [like](../../../sql-reference/functions/string-search-functions.md#function-like) | ✔ | ✔ | ✔ | ✔ | ✔ |
|
| [like](../../../sql-reference/functions/string-search-functions.md#function-like) | ✔ | ✔ | ✔ | ✔ | ✗ |
|
||||||
| [notLike](../../../sql-reference/functions/string-search-functions.md#function-notlike) | ✔ | ✔ | ✗ | ✗ | ✗ |
|
| [notLike](../../../sql-reference/functions/string-search-functions.md#function-notlike) | ✔ | ✔ | ✔ | ✔ | ✗ |
|
||||||
| [startsWith](../../../sql-reference/functions/string-functions.md#startswith) | ✔ | ✔ | ✔ | ✔ | ✗ |
|
| [startsWith](../../../sql-reference/functions/string-functions.md#startswith) | ✔ | ✔ | ✔ | ✔ | ✗ |
|
||||||
| [endsWith](../../../sql-reference/functions/string-functions.md#endswith) | ✗ | ✗ | ✔ | ✔ | ✗ |
|
| [endsWith](../../../sql-reference/functions/string-functions.md#endswith) | ✗ | ✗ | ✔ | ✔ | ✗ |
|
||||||
| [multiSearchAny](../../../sql-reference/functions/string-search-functions.md#function-multisearchany) | ✗ | ✗ | ✔ | ✗ | ✗ |
|
| [multiSearchAny](../../../sql-reference/functions/string-search-functions.md#function-multisearchany) | ✗ | ✗ | ✔ | ✗ | ✗ |
|
||||||
|
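As a rough illustration of how these subsets matter — assuming a hypothetical `logs` table with a String column `message`, and illustrative `tokenbf_v1` parameters — an index that supports `like` per the table above could be added like this:

``` sql
-- Hypothetical example: tokenbf_v1(size_of_bloom_filter_in_bytes, number_of_hash_functions, seed)
ALTER TABLE logs ADD INDEX message_tokens message TYPE tokenbf_v1(10240, 3, 0) GRANULARITY 4;

-- Per the table above, tokenbf_v1 supports like/notLike but not multiSearchAny
SELECT count() FROM logs WHERE message LIKE '%timeout%';
```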
@ -152,7 +152,7 @@ You can specify default arguments for `Replicated` table engine in the server co

```xml
<default_replica_path>/clickhouse/tables/{shard}/{database}/{table}</default_replica_path>
<default_replica_name>{replica}</default_replica_name>
```

In this case, you can omit arguments when creating tables:
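With those defaults configured, a minimal sketch (table and column names are illustrative) of a table definition that leaves the engine arguments out entirely:

``` sql
-- Uses default_replica_path and default_replica_name from the server config
CREATE TABLE table_name (x UInt32) ENGINE = ReplicatedMergeTree ORDER BY x;
```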
@ -98,6 +98,7 @@ When creating a table, the following settings are applied:

- [max_bytes_in_join](../../../operations/settings/query-complexity.md#settings-max_bytes_in_join)
- [join_overflow_mode](../../../operations/settings/query-complexity.md#settings-join_overflow_mode)
- [join_any_take_last_row](../../../operations/settings/settings.md#settings-join_any_take_last_row)
- [persistent](../../../operations/settings/settings.md#persistent)

The `Join`-engine tables can’t be used in `GLOBAL JOIN` operations.
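The settings above can be supplied per table in the `SETTINGS` clause; a minimal sketch with an illustrative Join table:

``` sql
-- join_any_take_last_row = 1: a duplicate key overwrites the stored value;
-- persistent = 0: keep the table only in memory
CREATE TABLE id_val_join (id UInt32, val UInt8)
ENGINE = Join(ANY, LEFT, id)
SETTINGS join_any_take_last_row = 1, persistent = 0;
```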
@ -14,4 +14,10 @@ Data is always located in RAM. For `INSERT`, the blocks of inserted data are als

If the server restarts abruptly, the block of data on the disk might be lost or damaged. In the latter case, you may need to manually delete the file with damaged data.

### Limitations and Settings {#join-limitations-and-settings}

When creating a table, the following settings are applied:

- [persistent](../../../operations/settings/settings.md#persistent)
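A minimal sketch of applying `persistent` to a Set table (table and column names are illustrative):

``` sql
-- persistent = 0: the set is kept only in RAM and is lost on restart
CREATE TABLE userid_set (userid UInt64)
ENGINE = Set
SETTINGS persistent = 0;
```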
[Original article](https://clickhouse.tech/docs/en/operations/table_engines/set/) <!--hide-->
@ -30,4 +30,4 @@ Instead of inserting data manually, you might consider to use one of [client lib

- `input_format_import_nested_json` allows inserting nested JSON objects into columns of [Nested](../../sql-reference/data-types/nested-data-structures/nested.md) type.

!!! note "Note"
    Settings are specified as `GET` parameters for the HTTP interface or as additional command-line arguments prefixed with `--` for the `CLI` interface.
@ -11,7 +11,7 @@ By going through this tutorial, you’ll learn how to set up a simple ClickHouse

## Single Node Setup {#single-node-setup}

To postpone the complexities of a distributed environment, we’ll start with deploying ClickHouse on a single server or virtual machine. ClickHouse is usually installed from [deb](../getting-started/install.md#install-from-deb-packages) or [rpm](../getting-started/install.md#from-rpm-packages) packages, but there are [alternatives](../getting-started/install.md#from-docker-image) for the operating systems that do not support them.

For example, you have chosen `deb` packages and executed:
@ -123,6 +123,7 @@ You can pass parameters to `clickhouse-client` (all parameters have a default va

- `--stacktrace` – If specified, also print the stack trace if an exception occurs.
- `--config-file` – The name of the configuration file.
- `--secure` – If specified, will connect to server over secure connection.
- `--history_file` — Path to a file containing command history.
- `--param_<name>` — Value for a [query with parameters](#cli-queries-with-parameters).

### Configuration Files {#configuration_files}
@ -26,6 +26,9 @@ toc_title: Client Libraries

- [go-clickhouse](https://github.com/roistat/go-clickhouse)
- [mailrugo-clickhouse](https://github.com/mailru/go-clickhouse)
- [golang-clickhouse](https://github.com/leprosus/golang-clickhouse)
- Swift
    - [ClickHouseNIO](https://github.com/patrick-zippenfenig/ClickHouseNIO)
    - [ClickHouseVapor ORM](https://github.com/patrick-zippenfenig/ClickHouseVapor)
- NodeJs
    - [clickhouse (NodeJs)](https://github.com/TimonKK/clickhouse)
    - [node-clickhouse](https://github.com/apla/node-clickhouse)
@ -11,6 +11,7 @@ toc_title: Adopters

| Company | Industry | Usecase | Cluster Size | (Un)Compressed Data Size<abbr title="of single replica"><sup>\*</sup></abbr> | Reference |
|------------------------------------------------------------------------------------------------|---------------------------------|-----------------------|------------------------------------------------------------|------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <a href="https://2gis.ru" class="favicon">2gis</a> | Maps | Monitoring | — | — | [Talk in Russian, July 2019](https://youtu.be/58sPkXfq6nw) |
| <a href="https://getadmiral.com/" class="favicon">Admiral</a> | Martech | Engagement Management | — | — | [Webinar Slides, June 2020](https://altinity.com/presentations/2020/06/16/big-data-in-real-time-how-clickhouse-powers-admirals-visitor-relationships-for-publishers) |
| <a href="https://cn.aliyun.com/" class="favicon">Alibaba Cloud</a> | Cloud | Managed Service | — | — | [Official Website](https://help.aliyun.com/product/144466.html) |
| <a href="https://alohabrowser.com/" class="favicon">Aloha Browser</a> | Mobile App | Browser backend | — | — | [Slides in Russian, May 2019](https://presentations.clickhouse.tech/meetup22/aloha.pdf) |
| <a href="https://amadeus.com/" class="favicon">Amadeus</a> | Travel | Analytics | — | — | [Press Release, April 2018](https://www.altinity.com/blog/2018/4/5/amadeus-technologies-launches-investment-and-insights-tool-based-on-machine-learning-and-strategy-algorithms) |
@ -29,6 +30,7 @@ toc_title: Adopters
| <a href="https://www.citadelsecurities.com/" class="favicon">Citadel Securities</a> | Finance | — | — | — | [Contribution, March 2019](https://github.com/ClickHouse/ClickHouse/pull/4774) |
| <a href="https://city-mobil.ru" class="favicon">Citymobil</a> | Taxi | Analytics | — | — | [Blog Post in Russian, March 2020](https://habr.com/en/company/citymobil/blog/490660/) |
| <a href="https://cloudflare.com" class="favicon">Cloudflare</a> | CDN | Traffic analysis | 36 servers | — | [Blog post, May 2017](https://blog.cloudflare.com/how-cloudflare-analyzes-1m-dns-queries-per-second/), [Blog post, March 2018](https://blog.cloudflare.com/http-analytics-for-6m-requests-per-second-using-clickhouse/) |
| <a href="https://corporate.comcast.com/" class="favicon">Comcast</a> | Media | CDN Traffic Analysis | — | — | [ApacheCon 2019 Talk](https://www.youtube.com/watch?v=e9TZ6gFDjNg) |
| <a href="https://contentsquare.com" class="favicon">ContentSquare</a> | Web analytics | Main product | — | — | [Blog post in French, November 2018](http://souslecapot.net/2018/11/21/patrick-chatain-vp-engineering-chez-contentsquare-penser-davantage-amelioration-continue-que-revolution-constante/) |
| <a href="https://coru.net/" class="favicon">Corunet</a> | Analytics | Main product | — | — | [Slides in English, April 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup21/predictive_models.pdf) |
| <a href="https://www.creditx.com" class="favicon">CraiditX 氪信</a> | Finance AI | Analysis | — | — | [Slides in English, November 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup33/udf.pptx) |
@ -36,6 +38,7 @@ toc_title: Adopters
| <a href="https://www.criteo.com/" class="favicon">Criteo</a> | Retail | Main product | — | — | [Slides in English, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup18/3_storetail.pptx) |
| <a href="https://www.chinatelecomglobal.com/" class="favicon">Dataliance for China Telecom</a> | Telecom | Analytics | — | — | [Slides in Chinese, January 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup12/telecom.pdf) |
| <a href="https://db.com" class="favicon">Deutsche Bank</a> | Finance | BI Analytics | — | — | [Slides in English, October 2019](https://bigdatadays.ru/wp-content/uploads/2019/10/D2-H3-3_Yakunin-Goihburg.pdf) |
| <a href="https://deeplay.io/eng/" class="favicon">Deeplay</a> | Gaming Analytics | — | — | — | [Job advertisement, 2020](https://career.habr.com/vacancies/1000062568) |
| <a href="https://www.diva-e.com" class="favicon">Diva-e</a> | Digital consulting | Main Product | — | — | [Slides in English, September 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup29/ClickHouse-MeetUp-Unusual-Applications-sd-2019-09-17.pdf) |
| <a href="https://www.ecwid.com/" class="favicon">Ecwid</a> | E-commerce SaaS | Metrics, Logging | — | — | [Slides in Russian, April 2019](https://nastachku.ru/var/files/1/presentation/backend/2_Backend_6.pdf) |
| <a href="https://www.ebay.com/" class="favicon">eBay</a> | E-commerce | Logs, Metrics and Events | — | — | [Official website, Sep 2020](https://tech.ebayinc.com/engineering/ou-online-analytical-processing/) |
@ -45,6 +48,7 @@ toc_title: Adopters
| <a href="https://fun.co/rp" class="favicon">FunCorp</a> | Games | | — | — | [Article](https://www.altinity.com/blog/migrating-from-redshift-to-clickhouse) |
| <a href="https://geniee.co.jp" class="favicon">Geniee</a> | Ad network | Main product | — | — | [Blog post in Japanese, July 2017](https://tech.geniee.co.jp/entry/2017/07/20/160100) |
| <a href="https://www.huya.com/" class="favicon">HUYA</a> | Video Streaming | Analytics | — | — | [Slides in Chinese, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup19/7.%20ClickHouse万亿数据分析实践%20李本旺(sundy-li)%20虎牙.pdf) |
| <a href="https://www.the-ica.com/" class="favicon">ICA</a> | FinTech | Risk Management | — | — | [Blog Post in English, Sep 2020](https://altinity.com/blog/clickhouse-vs-redshift-performance-for-fintech-risk-management?utm_campaign=ClickHouse%20vs%20RedShift&utm_content=143520807&utm_medium=social&utm_source=twitter&hss_channel=tw-3894792263) |
| <a href="https://www.idealista.com" class="favicon">Idealista</a> | Real Estate | Analytics | — | — | [Blog Post in English, April 2019](https://clickhouse.tech/blog/en/clickhouse-meetup-in-madrid-on-april-2-2019) |
| <a href="https://www.infovista.com/" class="favicon">Infovista</a> | Networks | Analytics | — | — | [Slides in English, October 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup30/infovista.pdf) |
| <a href="https://www.innogames.com" class="favicon">InnoGames</a> | Games | Metrics, Logging | — | — | [Slides in Russian, September 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup28/graphite_and_clickHouse.pdf) |
@ -62,12 +66,14 @@ toc_title: Adopters
| <a href="https://tech.mymarilyn.ru" class="favicon">Marilyn</a> | Advertising | Statistics | — | — | [Talk in Russian, June 2017](https://www.youtube.com/watch?v=iXlIgx2khwc) |
| <a href="https://mellodesign.ru/" class="favicon">Mello</a> | Marketing | Analytics | 1 server | — | [Article, Oct 2020](https://vc.ru/marketing/166180-razrabotka-tipovogo-otcheta-skvoznoy-analitiki) |
| <a href="https://www.messagebird.com" class="favicon">MessageBird</a> | Telecommunications | Statistics | — | — | [Slides in English, November 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup20/messagebird.pdf) |
| <a href="https://www.mindsdb.com/" class="favicon">MindsDB</a> | Machine Learning | Main Product | — | — | [Official Website](https://www.mindsdb.com/blog/machine-learning-models-as-tables-in-ch) |
| <a href="https://mux.com/" class="favicon">MUX</a> | Online Video | Video Analytics | — | — | [Talk in English, August 2019](https://altinity.com/presentations/2019/8/13/how-clickhouse-became-the-default-analytics-database-for-mux/) |
| <a href="https://www.mgid.com/" class="favicon">MGID</a> | Ad network | Web-analytics | — | — | [Blog post in Russian, April 2020](http://gs-studio.com/news-about-it/32777----clickhouse---c) |
| <a href="https://getnoc.com/" class="favicon">NOC Project</a> | Network Monitoring | Analytics | Main Product | — | [Official Website](https://getnoc.com/features/big-data/) |
| <a href="https://www.nuna.com/" class="favicon">Nuna Inc.</a> | Health Data Analytics | — | — | — | [Talk in English, July 2020](https://youtu.be/GMiXCMFDMow?t=170) |
| <a href="https://www.oneapm.com/" class="favicon">OneAPM</a> | Monitorings and Data Analysis | Main product | — | — | [Slides in Chinese, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup19/8.%20clickhouse在OneAPM的应用%20杜龙.pdf) |
| <a href="https://www.percent.cn/" class="favicon">Percent 百分点</a> | Analytics | Main Product | — | — | [Slides in Chinese, June 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup24/4.%20ClickHouse万亿数据双中心的设计与实践%20.pdf) |
| <a href="https://www.percona.com/" class="favicon">Percona</a> | Performance analysis | Percona Monitoring and Management | — | — | [Official website, Mar 2020](https://www.percona.com/blog/2020/03/30/advanced-query-analysis-in-percona-monitoring-and-management-with-direct-clickhouse-access/) |
| <a href="https://plausible.io/" class="favicon">Plausible</a> | Analytics | Main Product | — | — | [Blog post, June 2020](https://twitter.com/PlausibleHQ/status/1273889629087969280) |
| <a href="https://posthog.com/" class="favicon">PostHog</a> | Product Analytics | Main Product | — | — | [Release Notes, Oct 2020](https://posthog.com/blog/the-posthog-array-1-15-0) |
| <a href="https://postmates.com/" class="favicon">Postmates</a> | Delivery | — | — | — | [Talk in English, July 2020](https://youtu.be/GMiXCMFDMow?t=188) |
@ -44,11 +44,10 @@ stages, such as query planning or distributed queries.

To be useful, the tracing information has to be exported to a monitoring system
that supports OpenTelemetry, such as Jaeger or Prometheus. ClickHouse avoids
a dependency on a particular monitoring system, instead only providing the
tracing data through a system table. OpenTelemetry trace span information
[required by the standard](https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/overview.md#span)
is stored in the `system.opentelemetry_span_log` table.

The table must be enabled in the server configuration, see the `opentelemetry_span_log`
element in the default config file `config.xml`. It is enabled by default.
@ -67,3 +66,31 @@ The table has the following columns:

The tags or attributes are saved as two parallel arrays, containing the keys
and values. Use `ARRAY JOIN` to work with them.
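For example, a sketch of flattening the `attribute.names`/`attribute.values` arrays into one row per tag:

```sql
-- Each span row fans out into one row per (name, value) attribute pair
SELECT
    trace_id,
    operation_name,
    name AS attribute_name,
    value AS attribute_value
FROM system.opentelemetry_span_log
ARRAY JOIN
    attribute.names AS name,
    attribute.values AS value
```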
## Integration with monitoring systems

At the moment, there is no ready tool that can export the tracing data from
ClickHouse to a monitoring system.

For testing, it is possible to set up the export using a materialized view with the URL engine over the `system.opentelemetry_span_log` table, which would push the arriving log data to an HTTP endpoint of a trace collector. For example, to push the minimal span data to a Zipkin instance running at `http://localhost:9411`, in Zipkin v2 JSON format:

```sql
CREATE MATERIALIZED VIEW default.zipkin_spans
ENGINE = URL('http://127.0.0.1:9411/api/v2/spans', 'JSONEachRow')
SETTINGS output_format_json_named_tuples_as_objects = 1,
    output_format_json_array_of_rows = 1 AS
SELECT
    lower(hex(reinterpretAsFixedString(trace_id))) AS traceId,
    lower(hex(parent_span_id)) AS parentId,
    lower(hex(span_id)) AS id,
    operation_name AS name,
    start_time_us AS timestamp,
    finish_time_us - start_time_us AS duration,
    cast(tuple('clickhouse'), 'Tuple(serviceName text)') AS localEndpoint,
    cast(tuple(
        attribute.values[indexOf(attribute.names, 'db.statement')]),
        'Tuple("db.statement" text)') AS tags
FROM system.opentelemetry_span_log
```

In case of any errors, the part of the log data for which the error has occurred will be silently lost. Check the server log for error messages if the data does not arrive.
@ -571,7 +571,7 @@ For more information, see the MergeTreeSettings.h header file.
|
|||||||
|
|
||||||
Fine tuning for tables in the [ReplicatedMergeTree](../../engines/table-engines/mergetree-family/mergetree.md).
|
Fine tuning for tables in the [ReplicatedMergeTree](../../engines/table-engines/mergetree-family/mergetree.md).
|
||||||
|
|
||||||
This setting has higher priority.
|
This setting has a higher priority.
|
||||||
|
|
||||||
For more information, see the MergeTreeSettings.h header file.
|
For more information, see the MergeTreeSettings.h header file.
|
||||||
|
|
||||||
@ -1081,4 +1081,45 @@ Default value: `/var/lib/clickhouse/access/`.

- [Access Control and Account Management](../../operations/access-rights.md#access-control)

## user_directories {#user_directories}

Section of the configuration file that contains settings:

- Path to the configuration file with predefined users.
- Path to the folder where users created by SQL commands are stored.

If this section is specified, the paths from [users_config](../../operations/server-configuration-parameters/settings.md#users-config) and [access_control_path](../../operations/server-configuration-parameters/settings.md#access_control_path) won't be used.

The `user_directories` section can contain any number of items; the order of the items means their precedence (the higher the item, the higher the precedence).

**Example**

``` xml
<user_directories>
    <users_xml>
        <path>/etc/clickhouse-server/users.xml</path>
    </users_xml>
    <local_directory>
        <path>/var/lib/clickhouse/access/</path>
    </local_directory>
</user_directories>
```

You can also specify the settings `memory` — means storing information only in memory, without writing to disk, and `ldap` — means storing information on an LDAP server.

To add an LDAP server as a remote user directory for users that are not defined locally, define a single `ldap` section with the following parameters:

- `server` — One of the LDAP server names defined in the `ldap_servers` config section. This parameter is mandatory and cannot be empty.
- `roles` — Section with a list of locally defined roles that will be assigned to each user retrieved from the LDAP server. If no roles are specified, the user will not be able to perform any actions after authentication. If any of the listed roles is not defined locally at the time of authentication, the authentication attempt will fail as if the provided password was incorrect.

**Example**

``` xml
<ldap>
    <server>my_ldap_server</server>
    <roles>
        <my_local_role1 />
        <my_local_role2 />
    </roles>
</ldap>
```
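For the `memory` type mentioned above, a minimal configuration sketch (the element name and its lack of nested parameters are assumptions; check your server version for the exact options):

``` xml
<user_directories>
    <users_xml>
        <path>/etc/clickhouse-server/users.xml</path>
    </users_xml>
    <!-- users created by SQL commands are kept only in memory and lost on restart -->
    <memory />
</user_directories>
```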

[Original article](https://clickhouse.tech/docs/en/operations/server_configuration_parameters/settings/) <!--hide-->

@ -307,7 +307,51 @@ Disabled by default.

## input_format_tsv_enum_as_number {#settings-input_format_tsv_enum_as_number}

Enables or disables parsing enum values as enum IDs for the TSV input format.

Possible values:

- 0 — Enum values are parsed as values.
- 1 — Enum values are parsed as enum IDs.

Default value: 0.

**Example**

Consider the table:

```sql
CREATE TABLE table_with_enum_column_for_tsv_insert (Id Int32, Value Enum('first' = 1, 'second' = 2)) ENGINE=Memory();
```

When the `input_format_tsv_enum_as_number` setting is enabled:

```sql
SET input_format_tsv_enum_as_number = 1;
INSERT INTO table_with_enum_column_for_tsv_insert FORMAT TSV 102	2;
INSERT INTO table_with_enum_column_for_tsv_insert FORMAT TSV 103	1;
SELECT * FROM table_with_enum_column_for_tsv_insert;
```

Result:

```text
┌──Id─┬─Value──┐
│ 102 │ second │
└─────┴────────┘
┌──Id─┬─Value──┐
│ 103 │ first  │
└─────┴────────┘
```

When the `input_format_tsv_enum_as_number` setting is disabled, the `INSERT` query:

```sql
SET input_format_tsv_enum_as_number = 0;
INSERT INTO table_with_enum_column_for_tsv_insert FORMAT TSV 102	2;
```

throws an exception.

## input_format_null_as_default {#settings-input-format-null-as-default}

@ -384,7 +428,7 @@ Possible values:

- `'basic'` — Use basic parser.

    ClickHouse can parse only the basic `YYYY-MM-DD HH:MM:SS` or `YYYY-MM-DD` format. For example, `'2019-08-20 10:18:56'` or `'2019-08-20'`.

Default value: `'basic'`.

@ -1182,7 +1226,47 @@ For CSV input format enables or disables parsing of unquoted `NULL` as literal (

## input_format_csv_enum_as_number {#settings-input_format_csv_enum_as_number}

Enables or disables parsing enum values as enum IDs for the CSV input format.

Possible values:

- 0 — Enum values are parsed as values.
- 1 — Enum values are parsed as enum IDs.

Default value: 0.

**Examples**

Consider the table:

```sql
CREATE TABLE table_with_enum_column_for_csv_insert (Id Int32, Value Enum('first' = 1, 'second' = 2)) ENGINE=Memory();
```

When the `input_format_csv_enum_as_number` setting is enabled:

```sql
SET input_format_csv_enum_as_number = 1;
INSERT INTO table_with_enum_column_for_csv_insert FORMAT CSV 102,2;
SELECT * FROM table_with_enum_column_for_csv_insert;
```

Result:

```text
┌──Id─┬─Value──┐
│ 102 │ second │
└─────┴────────┘
```

When the `input_format_csv_enum_as_number` setting is disabled, the `INSERT` query:

```sql
SET input_format_csv_enum_as_number = 0;
INSERT INTO table_with_enum_column_for_csv_insert FORMAT CSV 102,2;
```

throws an exception.

## output_format_csv_crlf_end_of_line {#settings-output-format-csv-crlf-end-of-line}

@ -1765,6 +1849,23 @@ Default value: `0`.

- [Distributed Table Engine](../../engines/table-engines/special/distributed.md#distributed)
- [Managing Distributed Tables](../../sql-reference/statements/system.md#query-language-system-distributed)

## use_compact_format_in_distributed_parts_names {#use_compact_format_in_distributed_parts_names}

Uses compact format for storing blocks for asynchronous (`insert_distributed_sync`) INSERTs into tables with the `Distributed` engine.

Possible values:

- 0 — Uses the `user[:password]@host:port#default_database` directory format.
- 1 — Uses the `[shard{shard_index}[_replica{replica_index}]]` directory format.

Default value: `1`.

!!! note "Note"
    - With `use_compact_format_in_distributed_parts_names=0`, changes from the cluster definition will not be applied for asynchronous INSERTs.
    - With `use_compact_format_in_distributed_parts_names=1`, changing the order of the nodes in the cluster definition will change the `shard_index`/`replica_index`, so be aware.
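To illustrate the two naming schemes, the per-shard subdirectories under a `Distributed` table's data path might look as follows (a hypothetical two-shard cluster; the exact names depend on your cluster definition):

``` text
# use_compact_format_in_distributed_parts_names = 0
default@127.0.0.1:9000#default
default@127.0.0.2:9000#default

# use_compact_format_in_distributed_parts_names = 1
shard1_replica1
shard2_replica1
```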
## background_buffer_flush_schedule_pool_size {#background_buffer_flush_schedule_pool_size}

Sets the number of threads performing background flush in [Buffer](../../engines/table-engines/special/buffer.md)-engine tables. This setting is applied at the ClickHouse server start and can’t be changed in a user session.

@ -2203,4 +2304,23 @@ Possible values:

Default value: `0`.

## persistent {#persistent}

Disables persistency for the [Set](../../engines/table-engines/special/set.md#set) and [Join](../../engines/table-engines/special/join.md#join) table engines.

Reduces the I/O overhead. Suitable for scenarios that pursue performance and do not require persistence.

Possible values:

- 1 — Enabled.
- 0 — Disabled.

Default value: `1`.

## output_format_tsv_null_representation {#output_format_tsv_null_representation}

Allows configurable `NULL` representation for the [TSV](../../interfaces/formats.md#tabseparated) output format. The setting only controls the output format, and `\N` is the only supported `NULL` representation for the TSV input format.

Default value: `\N`.
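A usage sketch for `output_format_tsv_null_representation` (the chosen replacement string `'NULL'` is an arbitrary example):

```sql
SET output_format_tsv_null_representation = 'NULL';
SELECT NULL AS x, 1 AS y FORMAT TSV;
-- the NULL value should now be rendered as the word NULL instead of the default \N
```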
[Original article](https://clickhouse.tech/docs/en/operations/settings/settings/) <!-- hide -->
|
[Original article](https://clickhouse.tech/docs/en/operations/settings/settings/) <!-- hide -->
|
||||||
|
70
docs/en/operations/system-tables/replicated_fetches.md
Normal file
@ -0,0 +1,70 @@

# system.replicated_fetches {#system_tables-replicated_fetches}

Contains information about currently running background fetches.

Columns:

- `database` ([String](../../sql-reference/data-types/string.md)) — Name of the database.

- `table` ([String](../../sql-reference/data-types/string.md)) — Name of the table.

- `elapsed` ([Float64](../../sql-reference/data-types/float.md)) — The time elapsed (in seconds) since the currently running background fetch started.

- `progress` ([Float64](../../sql-reference/data-types/float.md)) — The percentage of completed work from 0 to 1.

- `result_part_name` ([String](../../sql-reference/data-types/string.md)) — The name of the part that will be formed as the result of the currently running background fetch.

- `result_part_path` ([String](../../sql-reference/data-types/string.md)) — Absolute path to the part that will be formed as the result of the currently running background fetch.

- `partition_id` ([String](../../sql-reference/data-types/string.md)) — ID of the partition.

- `total_size_bytes_compressed` ([UInt64](../../sql-reference/data-types/int-uint.md)) — The total size (in bytes) of the compressed data in the result part.

- `bytes_read_compressed` ([UInt64](../../sql-reference/data-types/int-uint.md)) — The number of compressed bytes read from the result part.

- `source_replica_path` ([String](../../sql-reference/data-types/string.md)) — Absolute path to the source replica.

- `source_replica_hostname` ([String](../../sql-reference/data-types/string.md)) — Hostname of the source replica.

- `source_replica_port` ([UInt16](../../sql-reference/data-types/int-uint.md)) — Port number of the source replica.

- `interserver_scheme` ([String](../../sql-reference/data-types/string.md)) — Name of the interserver scheme.

- `URI` ([String](../../sql-reference/data-types/string.md)) — Uniform resource identifier.

- `to_detached` ([UInt8](../../sql-reference/data-types/int-uint.md)) — The flag indicates whether the currently running background fetch is being performed using the `TO DETACHED` expression.

- `thread_id` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Thread identifier.

**Example**

``` sql
SELECT * FROM system.replicated_fetches LIMIT 1 FORMAT Vertical;
```

``` text
Row 1:
──────
database:                    default
table:                       t
elapsed:                     7.243039876
progress:                    0.41832135995612835
result_part_name:            all_0_0_0
result_part_path:            /var/lib/clickhouse/store/700/70080a04-b2de-4adf-9fa5-9ea210e81766/all_0_0_0/
partition_id:                all
total_size_bytes_compressed: 1052783726
bytes_read_compressed:       440401920
source_replica_path:         /clickhouse/test/t/replicas/1
source_replica_hostname:     node1
source_replica_port:         9009
interserver_scheme:          http
URI:                         http://node1:9009/?endpoint=DataPartsExchange%3A%2Fclickhouse%2Ftest%2Ft%2Freplicas%2F1&part=all_0_0_0&client_protocol_version=4&compress=false
to_detached:                 0
thread_id:                   54
```
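As a monitoring sketch, the `elapsed` and `progress` columns can be combined to estimate the time remaining for each fetch (assuming `progress` is nonzero and roughly linear):

``` sql
SELECT
    database,
    table,
    result_part_name,
    round(progress * 100, 1) AS done_percent,
    round(elapsed / progress - elapsed, 1) AS estimated_seconds_left
FROM system.replicated_fetches;
```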

**See Also**

- [Managing ReplicatedMergeTree Tables](../../sql-reference/statements/system/#query-language-system-replicated)

[Original article](https://clickhouse.tech/docs/en/operations/system_tables/replicated_fetches) <!--hide-->

@ -1,42 +1,42 @@

# ClickHouse obfuscator

A simple tool for table data obfuscation.

It reads an input table and produces an output table that retains some properties of the input but contains different data.
It allows publishing almost real production data for usage in benchmarks.

It is designed to retain the following properties of data:
- cardinalities of values (number of distinct values) for every column and every tuple of columns;
- conditional cardinalities: number of distinct values of one column under the condition on the value of another column;
- probability distributions of the absolute value of integers; the sign of signed integers; exponent and sign for floats;
- probability distributions of the length of strings;
- probability of zero values of numbers; empty strings and arrays, `NULL`s;
- data compression ratio when compressed with LZ77 and entropy family of codecs;
- continuity (magnitude of difference) of time values across the table; continuity of floating-point values;
- date component of `DateTime` values;
- UTF-8 validity of string values;
- string values continue to look somewhat natural.

Most of the properties above are viable for performance testing: reading data, filtering, aggregation, and sorting will work at almost the same speed as on original data due to saved cardinalities, magnitudes, compression ratios, etc.

It works in a deterministic fashion: you define a seed value and the transformation is determined by the input data and by the seed.
Some transformations are one to one and could be reversed, so you need to have a large enough seed and keep it in secret.

It uses some cryptographic primitives to transform data, but from the cryptographic point of view, it doesn't do it properly; that is why you should not consider the result as secure unless you have another reason. The result may retain some data you don't want to publish.

It always leaves the numbers 0, 1, and -1, as well as dates, lengths of arrays, and null flags, exactly as in the source data.
For example, if you have a column `IsMobile` in your table with values 0 and 1, it will have the same values in the transformed data.

So, the user will be able to count the exact ratio of mobile traffic.

Let's give another example. Suppose you have some private data in your table, like user email, and you don't want to publish any single email address.
If your table is large enough and contains multiple different emails and no email has a much higher frequency than all others, it will anonymize all data. But if you have a small number of different values in a column, it can reproduce some of them.
You should look at the working algorithm of this tool and fine-tune its command line parameters.

This tool works fine only with a reasonable amount of data (at least thousands of rows).

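A hypothetical invocation sketch (the flag names `--seed`, `--input-format`, `--output-format`, and `--structure` follow the tool's conventional options, and the column structure is an example; check `clickhouse-obfuscator --help` for your build):

``` bash
clickhouse-obfuscator \
    --seed "$(head -c16 /dev/urandom | base64)" \
    --input-format TSV --output-format TSV \
    --structure 'CounterID UInt32, URLDomain String, StartDate Date' \
    < source.tsv > obfuscated.tsv
```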
@ -44,8 +44,6 @@ SELECT sum(y) FROM t_null_big

└────────┘
```

Now you can use the `groupArray` function to create an array from the `y` column:

``` sql
@ -50,8 +50,6 @@ ClickHouse-specific aggregate functions:

- [skewPop](../../../sql-reference/aggregate-functions/reference/skewpop.md)
- [kurtSamp](../../../sql-reference/aggregate-functions/reference/kurtsamp.md)
- [kurtPop](../../../sql-reference/aggregate-functions/reference/kurtpop.md)
- [uniq](../../../sql-reference/aggregate-functions/reference/uniq.md)
- [uniqExact](../../../sql-reference/aggregate-functions/reference/uniqexact.md)
- [uniqCombined](../../../sql-reference/aggregate-functions/reference/uniqcombined.md)

@ -0,0 +1,37 @@

---
toc_priority: 150
---

## initializeAggregation {#initializeaggregation}

Initializes aggregation for your input rows. It is intended for functions with the suffix `State`.
Use it for tests or to process columns of type `AggregateFunction`, e.g. in `AggregatingMergeTree` tables.

**Syntax**

``` sql
initializeAggregation (aggregate_function, column_1, column_2);
```

**Parameters**

- `aggregate_function` — Name of the aggregation function whose state should be created. [String](../../../sql-reference/data-types/string.md#string).
- `column_n` — The column to pass to the function as its argument. [String](../../../sql-reference/data-types/string.md#string).

**Returned value(s)**

Returns the result of the aggregation for your input rows. The return type will be the same as the return type of the function that `initializeAggregation` takes as its first argument.
For example, for functions with the suffix `State` the return type will be `AggregateFunction`.

**Example**

Query:

```sql
SELECT uniqMerge(state) FROM (SELECT initializeAggregation('uniqState', number % 3) AS state FROM system.numbers LIMIT 10000);
```

Result:

```text
┌─uniqMerge(state)─┐
│                3 │
└──────────────────┘
```
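The same pattern works with other `-State` functions; for instance, a sketch with `sumState` (per-row states that are then merged):

```sql
SELECT sumMerge(state)
FROM (SELECT initializeAggregation('sumState', number) AS state FROM numbers(5));
-- merges the per-row sum states of 0 + 1 + 2 + 3 + 4, i.e. 10
```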
@ -4,6 +4,6 @@ toc_priority: 140

# sumWithOverflow {#sumwithoverflowx}

Computes the sum of the numbers, using the same data type for the result as for the input parameters. If the sum exceeds the maximum value for this data type, it is calculated with overflow.

Only works for numbers.

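A usage sketch contrasting `sum` (which widens the result type) with `sumWithOverflow` (which keeps the input type and wraps around); the values are hypothetical:

```sql
SELECT
    sum(x) AS widened,            -- promoted to a wider type, so the result is 300
    sumWithOverflow(x) AS wrapped -- stays UInt8, so 300 is expected to wrap (300 mod 256 = 44)
FROM (SELECT toUInt8(200) AS x UNION ALL SELECT toUInt8(100));
```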
@ -1,16 +0,0 @@

---
toc_priority: 171
---

# timeSeriesGroupRateSum {#agg-function-timeseriesgroupratesum}

Syntax: `timeSeriesGroupRateSum(uid, ts, val)`

Similarly to [timeSeriesGroupSum](../../../sql-reference/aggregate-functions/reference/timeseriesgroupsum.md), `timeSeriesGroupRateSum` calculates the rate of time-series and then sums the rates together.
Also, the timestamps should be in ascending order before using this function.

Applying this function to the data from the `timeSeriesGroupSum` example, you get the following result:

``` text
[(2,0),(3,0.1),(7,0.3),(8,0.3),(12,0.3),(17,0.3),(18,0.3),(24,0.3),(25,0.1)]
```

@ -1,57 +0,0 @@

---
toc_priority: 170
---

# timeSeriesGroupSum {#agg-function-timeseriesgroupsum}

Syntax: `timeSeriesGroupSum(uid, timestamp, value)`

`timeSeriesGroupSum` can aggregate different time series whose sample timestamps are not aligned.
It will use linear interpolation between two sample timestamps and then sum the time-series together.

- `uid` is the time series unique id, `UInt64`.
- `timestamp` is of Int64 type in order to support millisecond or microsecond precision.
- `value` is the metric.

The function returns an array of tuples with `(timestamp, aggregated_value)` pairs.

Before using this function make sure `timestamp` is in ascending order.

Example:

``` text
┌─uid─┬─timestamp─┬─value─┐
│   1 │         2 │   0.2 │
│   1 │         7 │   0.7 │
│   1 │        12 │   1.2 │
│   1 │        17 │   1.7 │
│   1 │        25 │   2.5 │
│   2 │         3 │   0.6 │
│   2 │         8 │   1.6 │
│   2 │        12 │   2.4 │
│   2 │        18 │   3.6 │
│   2 │        24 │   4.8 │
└─────┴───────────┴───────┘
```

``` sql
CREATE TABLE time_series(
    uid       UInt64,
    timestamp Int64,
    value     Float64
) ENGINE = Memory;
INSERT INTO time_series VALUES
    (1,2,0.2),(1,7,0.7),(1,12,1.2),(1,17,1.7),(1,25,2.5),
    (2,3,0.6),(2,8,1.6),(2,12,2.4),(2,18,3.6),(2,24,4.8);

SELECT timeSeriesGroupSum(uid, timestamp, value)
FROM (
    SELECT * FROM time_series ORDER BY timestamp ASC
);
```

And the result will be:

``` text
[(2,0.2),(3,0.9),(7,2.1),(8,2.4),(12,3.6),(17,5.1),(18,5.4),(24,7.2),(25,2.5)]
```

@ -3,10 +3,45 @@ toc_priority: 47
toc_title: Date
---

# Date {#data_type-date}

A date. Stored in two bytes as the number of days since 1970-01-01 (unsigned). Allows storing values from just after the beginning of the Unix Epoch to the upper threshold defined by a constant at the compilation stage (currently, this is until the year 2106, but the final fully-supported year is 2105).

The date value is stored without the time zone.

## Examples {#examples}

**1.** Creating a table with a `Date`-type column and inserting data into it:

``` sql
CREATE TABLE dt
(
    `timestamp` Date,
    `event_id` UInt8
)
ENGINE = TinyLog;
```

``` sql
INSERT INTO dt Values (1546300800, 1), ('2019-01-01', 2);
```

``` sql
SELECT * FROM dt;
```

``` text
┌──timestamp─┬─event_id─┐
│ 2019-01-01 │        1 │
│ 2019-01-01 │        2 │
└────────────┴──────────┘
```

## See Also {#see-also}

- [Functions for working with dates and times](../../sql-reference/functions/date-time-functions.md)
- [Operators for working with dates and times](../../sql-reference/operators/index.md#operators-datetime)
- [`DateTime` data type](../../sql-reference/data-types/datetime.md)

[Original article](https://clickhouse.tech/docs/en/data_types/date/) <!--hide-->

@ -0,0 +1,91 @@
|
|||||||
|
---
|
||||||
|
toc_priority: 46
|
||||||
|
toc_title: Polygon Dictionaries With Grids
|
||||||
|
---
|
||||||
|
|
||||||
|
|
||||||
|
# Polygon dictionaries {#polygon-dictionaries}
|
||||||
|
|
||||||
|
Polygon dictionaries allow you to efficiently search for the polygon containing specified points.
|
||||||
|
For example: defining a city area by geographical coordinates.
|
||||||
|
|
||||||
|
Example configuration:
|
||||||
|
|
||||||
|
``` xml
|
||||||
|
<dictionary>
|
||||||
|
<structure>
|
||||||
|
<key>
|
||||||
|
<name>key</name>
|
||||||
|
<type>Array(Array(Array(Array(Float64))))</type>
|
||||||
|
</key>
|
||||||
|
|
||||||
|
<attribute>
|
||||||
|
<name>name</name>
|
||||||
|
<type>String</type>
|
||||||
|
<null_value></null_value>
|
||||||
|
</attribute>
|
||||||
|
|
||||||
|
<attribute>
|
||||||
|
<name>value</name>
|
||||||
|
<type>UInt64</type>
|
||||||
|
<null_value>0</null_value>
|
||||||
|
</attribute>
|
||||||
|
|
||||||
|
</structure>
|
||||||
|
|
||||||
|
<layout>
|
||||||
|
<polygon />
|
||||||
|
</layout>
|
||||||
|
|
||||||
|
</dictionary>
|
||||||
|
```
|
||||||
|
|
||||||
|
Tne corresponding [DDL-query](../../../sql-reference/statements/create/dictionary.md#create-dictionary-query):
|
||||||
|
``` sql
|
||||||
|
CREATE DICTIONARY polygon_dict_name (
|
||||||
|
key Array(Array(Array(Array(Float64)))),
|
||||||
|
name String,
|
||||||
|
value UInt64
|
||||||
|
)
|
||||||
|
PRIMARY KEY key
|
||||||
|
LAYOUT(POLYGON())
|
||||||
|
...
|
||||||
|
```
|
||||||
|
|
||||||
|
When configuring the polygon dictionary, the key must have one of two types:
|
||||||
|
- A simple polygon. It is an array of points.
|
||||||
|
- MultiPolygon. It is an array of polygons. Each polygon is a two-dimensional array of points. The first element of this array is the outer boundary of the polygon, and subsequent elements specify areas to be excluded from it.
|
||||||
|
|
||||||
|
Points can be specified as an array or a tuple of their coordinates. In the current implementation, only two-dimensional points are supported.
|
||||||
|
|
||||||
|
The user can [upload their own data](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md) in all formats supported by ClickHouse.

There are 3 types of [in-memory storage](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-layout.md) available:

- POLYGON_SIMPLE. This is a naive implementation, where a linear pass through all polygons is made for each query, and membership is checked for each one without using additional indexes.

- POLYGON_INDEX_EACH. A separate index is built for each polygon, which allows you to quickly check whether a point belongs to it in most cases (optimized for geographical regions).
  Also, a grid is superimposed on the area under consideration, which significantly narrows the number of polygons under consideration.
  The grid is created by recursively dividing the cell into 16 equal parts and is configured with two parameters.
  The division stops when the recursion depth reaches MAX_DEPTH or when the cell intersects no more than MIN_INTERSECTIONS polygons.
  To answer a query, the corresponding cell is located, and the indexes of the polygons stored in it are accessed in turn.

- POLYGON_INDEX_CELL. This placement also creates the grid described above. The same options are available. For each sheet cell, an index is built on all pieces of polygons that fall into it, which allows a request to be answered quickly.

- POLYGON. Synonym to POLYGON_INDEX_CELL.

Dictionary queries are carried out using standard [functions](../../../sql-reference/functions/ext-dict-functions.md) for working with external dictionaries.
An important difference is that here the keys will be the points for which you want to find the polygon containing them.

Example of working with the dictionary defined above:

``` sql
CREATE TABLE points (
    x Float64,
    y Float64
)
...
SELECT tuple(x, y) AS key, dictGet(dict_name, 'name', key), dictGet(dict_name, 'value', key) FROM points ORDER BY x, y;
```

As a result of executing the last command for each point in the 'points' table, a minimum area polygon containing this point will be found, and the requested attributes will be output.
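A single point can also be looked up directly with a tuple literal. This is a usage sketch: `dict_name` and the `'name'` attribute come from the dictionary defined above, and the coordinates are illustrative.

``` sql
-- Find the polygon containing the point (30.5, 50.4) and return its 'name' attribute.
SELECT dictGet(dict_name, 'name', tuple(30.5, 50.4));
```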
@ -89,7 +89,7 @@ If the index falls outside of the bounds of an array, it returns some default va

## has(arr, elem) {#hasarr-elem}

Checks whether the ‘arr’ array has the ‘elem’ element.
Returns 0 if the element is not in the array, or 1 if it is.

`NULL` is processed as a value.
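The `NULL` handling can be sketched with a short query (run against a ClickHouse server; the literal values are illustrative):

``` sql
SELECT has([1, 2, NULL], NULL) AS null_found,  -- NULL is matched as a regular value
       has([1, 2, NULL], 3)    AS not_found;   -- 3 is absent from the array
-- null_found = 1, not_found = 0
```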
@ -337,26 +337,124 @@ SELECT toDate('2016-12-27') AS date, toYearWeek(date) AS yearWeek0, toYearWeek(d

└────────────┴───────────┴───────────┴───────────┘
```

## date_trunc {#date_trunc}

Truncates date and time data to the specified part of date.

**Syntax**

``` sql
date_trunc(unit, value[, timezone])
```

Alias: `dateTrunc`.

**Parameters**

- `unit` — Part of date. [String](../syntax.md#syntax-string-literal).
    Possible values:

    - `second`
    - `minute`
    - `hour`
    - `day`
    - `week`
    - `month`
    - `quarter`
    - `year`

- `value` — Date and time. [DateTime](../../sql-reference/data-types/datetime.md) or [DateTime64](../../sql-reference/data-types/datetime64.md).
- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) for the returned value (optional). If not specified, the function uses the timezone of the `value` parameter. [String](../../sql-reference/data-types/string.md).

**Returned value**

- Value, truncated to the specified part of date.

Type: [Datetime](../../sql-reference/data-types/datetime.md).

**Example**

Query without timezone:

``` sql
SELECT now(), date_trunc('hour', now());
```

Result:

``` text
┌───────────────now()─┬─date_trunc('hour', now())─┐
│ 2020-09-28 10:40:45 │ 2020-09-28 10:00:00 │
└─────────────────────┴───────────────────────────┘
```

Query with the specified timezone:

```sql
SELECT now(), date_trunc('hour', now(), 'Europe/Moscow');
```

Result:

```text
┌───────────────now()─┬─date_trunc('hour', now(), 'Europe/Moscow')─┐
│ 2020-09-28 10:46:26 │ 2020-09-28 13:00:00 │
└─────────────────────┴────────────────────────────────────────────┘
```

**See also**

- [toStartOfInterval](#tostartofintervaltime-or-data-interval-x-unit-time-zone)
## now {#now}

Returns the current date and time.

**Syntax**

``` sql
now([timezone])
```

**Parameters**

- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) for the returned value (optional). [String](../../sql-reference/data-types/string.md).

**Returned value**

- Current date and time.

Type: [Datetime](../../sql-reference/data-types/datetime.md).

**Example**

Query without timezone:

``` sql
SELECT now();
```

Result:

``` text
┌───────────────now()─┐
│ 2020-10-17 07:42:09 │
└─────────────────────┘
```

Query with the specified timezone:

``` sql
SELECT now('Europe/Moscow');
```

Result:

``` text
┌─now('Europe/Moscow')─┐
│ 2020-10-17 10:42:23 │
└──────────────────────┘
```
## today {#today}
@ -437,18 +535,7 @@ dateDiff('unit', startdate, enddate, [timezone])

- `unit` — Time unit, in which the returned value is expressed. [String](../../sql-reference/syntax.md#syntax-string-literal).

    Supported values: second, minute, hour, day, week, month, quarter, year.

- `startdate` — The first time value to compare. [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md).
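The unit parameter above can be exercised with a short query; the DateTime literals are illustrative:

``` sql
SELECT dateDiff('hour',
                toDateTime('2018-01-01 22:00:00'),
                toDateTime('2018-01-02 23:00:00')) AS h;
-- h = 25 (24 hours to the same time next day, plus one more hour)
```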
381 docs/en/sql-reference/functions/encryption-functions.md Normal file
@ -0,0 +1,381 @@
---
toc_priority: 67
toc_title: Encryption
---

# Encryption functions {#encryption-functions}

These functions implement encryption and decryption of data with the AES (Advanced Encryption Standard) algorithm.

Key length depends on the encryption mode. It is 16, 24, and 32 bytes long for `-128-`, `-192-`, and `-256-` modes respectively.

Initialization vector length is always 16 bytes (bytes in excess of 16 are ignored).

Note that these functions work slowly.
## encrypt {#encrypt}

This function encrypts data using these modes:

- aes-128-ecb, aes-192-ecb, aes-256-ecb
- aes-128-cbc, aes-192-cbc, aes-256-cbc
- aes-128-cfb1, aes-192-cfb1, aes-256-cfb1
- aes-128-cfb8, aes-192-cfb8, aes-256-cfb8
- aes-128-cfb128, aes-192-cfb128, aes-256-cfb128
- aes-128-ofb, aes-192-ofb, aes-256-ofb
- aes-128-gcm, aes-192-gcm, aes-256-gcm

**Syntax**

``` sql
encrypt('mode', 'plaintext', 'key' [, iv, aad])
```

**Parameters**

- `mode` — Encryption mode. [String](../../sql-reference/data-types/string.md#string).
- `plaintext` — Text that needs to be encrypted. [String](../../sql-reference/data-types/string.md#string).
- `key` — Encryption key. [String](../../sql-reference/data-types/string.md#string).
- `iv` — Initialization vector. Required for `-gcm` modes, optional for the others. [String](../../sql-reference/data-types/string.md#string).
- `aad` — Additional authenticated data. It isn't encrypted, but it affects decryption. Works only in `-gcm` modes; for the others an exception is thrown. [String](../../sql-reference/data-types/string.md#string).

**Returned value**

- Ciphered String. [String](../../sql-reference/data-types/string.md#string).
**Examples**

Create this table:

Query:

``` sql
CREATE TABLE encryption_test
(
    input String,
    key String DEFAULT unhex('fb9958e2e897ef3fdb49067b51a24af645b3626eed2f9ea1dc7fd4dd71b7e38f9a68db2a3184f952382c783785f9d77bf923577108a88adaacae5c141b1576b0'),
    iv String DEFAULT unhex('8CA3554377DFF8A369BC50A89780DD85'),
    key32 String DEFAULT substring(key, 1, 32),
    key24 String DEFAULT substring(key, 1, 24),
    key16 String DEFAULT substring(key, 1, 16)
) Engine = Memory;
```

Insert this data:

Query:

``` sql
INSERT INTO encryption_test (input) VALUES (''), ('text'), ('What Is ClickHouse?');
```

Example without `iv`:

Query:

``` sql
SELECT 'aes-128-ecb' AS mode, hex(encrypt(mode, input, key16)) FROM encryption_test;
```

Result:

``` text
┌─mode────────┬─hex(encrypt('aes-128-ecb', input, key16))────────────────────────┐
│ aes-128-ecb │ 4603E6862B0D94BBEC68E0B0DF51D60F │
│ aes-128-ecb │ 3004851B86D3F3950672DE7085D27C03 │
│ aes-128-ecb │ E807F8C8D40A11F65076361AFC7D8B68D8658C5FAA6457985CAA380F16B3F7E4 │
└─────────────┴──────────────────────────────────────────────────────────────────┘
```

Example with `iv`:

Query:

``` sql
SELECT 'aes-256-ctr' AS mode, hex(encrypt(mode, input, key32, iv)) FROM encryption_test;
```

Result:

``` text
┌─mode────────┬─hex(encrypt('aes-256-ctr', input, key32, iv))─┐
│ aes-256-ctr │ │
│ aes-256-ctr │ 7FB039F7 │
│ aes-256-ctr │ 5CBD20F7ABD3AC41FCAA1A5C0E119E2B325949 │
└─────────────┴───────────────────────────────────────────────┘
```

Example with `-gcm`:

Query:

``` sql
SELECT 'aes-256-gcm' AS mode, hex(encrypt(mode, input, key32, iv)) FROM encryption_test;
```

Result:

``` text
┌─mode────────┬─hex(encrypt('aes-256-gcm', input, key32, iv))──────────────────────────┐
│ aes-256-gcm │ E99DBEBC01F021758352D7FBD9039EFA │
│ aes-256-gcm │ 8742CE3A7B0595B281C712600D274CA881F47414 │
│ aes-256-gcm │ A44FD73ACEB1A64BDE2D03808A2576EDBB60764CC6982DB9AF2C33C893D91B00C60DC5 │
└─────────────┴────────────────────────────────────────────────────────────────────────┘
```

Example with `-gcm` mode and with `aad`:

Query:

``` sql
SELECT 'aes-192-gcm' AS mode, hex(encrypt(mode, input, key24, iv, 'AAD')) FROM encryption_test;
```

Result:

``` text
┌─mode────────┬─hex(encrypt('aes-192-gcm', input, key24, iv, 'AAD'))───────────────────┐
│ aes-192-gcm │ 04C13E4B1D62481ED22B3644595CB5DB │
│ aes-192-gcm │ 9A6CF0FD2B329B04EAD18301818F016DF8F77447 │
│ aes-192-gcm │ B961E9FD9B940EBAD7ADDA75C9F198A40797A5EA1722D542890CC976E21113BBB8A7AA │
└─────────────┴────────────────────────────────────────────────────────────────────────┘
```
## aes_encrypt_mysql {#aes_encrypt_mysql}

Compatible with MySQL encryption; the result can be decrypted with the [AES_DECRYPT](https://dev.mysql.com/doc/refman/8.0/en/encryption-functions.html#function_aes-decrypt) function.

Supported encryption modes:

- aes-128-ecb, aes-192-ecb, aes-256-ecb
- aes-128-cbc, aes-192-cbc, aes-256-cbc
- aes-128-cfb1, aes-192-cfb1, aes-256-cfb1
- aes-128-cfb8, aes-192-cfb8, aes-256-cfb8
- aes-128-cfb128, aes-192-cfb128, aes-256-cfb128
- aes-128-ofb, aes-192-ofb, aes-256-ofb

**Syntax**

```sql
aes_encrypt_mysql('mode', 'plaintext', 'key' [, iv])
```

**Parameters**

- `mode` — Encryption mode. [String](../../sql-reference/data-types/string.md#string).
- `plaintext` — Text that needs to be encrypted. [String](../../sql-reference/data-types/string.md#string).
- `key` — Encryption key. [String](../../sql-reference/data-types/string.md#string).
- `iv` — Initialization vector. Optional. [String](../../sql-reference/data-types/string.md#string).

**Returned value**

- Ciphered String. [String](../../sql-reference/data-types/string.md#string).
**Examples**

Create this table:

Query:

``` sql
CREATE TABLE encryption_test
(
    input String,
    key String DEFAULT unhex('fb9958e2e897ef3fdb49067b51a24af645b3626eed2f9ea1dc7fd4dd71b7e38f9a68db2a3184f952382c783785f9d77bf923577108a88adaacae5c141b1576b0'),
    iv String DEFAULT unhex('8CA3554377DFF8A369BC50A89780DD85'),
    key32 String DEFAULT substring(key, 1, 32),
    key24 String DEFAULT substring(key, 1, 24),
    key16 String DEFAULT substring(key, 1, 16)
) Engine = Memory;
```

Insert this data:

Query:

``` sql
INSERT INTO encryption_test (input) VALUES (''), ('text'), ('What Is ClickHouse?');
```

Example without `iv`:

Query:

``` sql
SELECT 'aes-128-cbc' AS mode, hex(aes_encrypt_mysql(mode, input, key32)) FROM encryption_test;
```

Result:

``` text
┌─mode────────┬─hex(aes_encrypt_mysql('aes-128-cbc', input, key32))──────────────┐
│ aes-128-cbc │ FEA8CFDE6EE2C6E7A2CC6ADDC9F62C83 │
│ aes-128-cbc │ 78B16CD4BE107660156124C5FEE6454A │
│ aes-128-cbc │ 67C0B119D96F18E2823968D42871B3D179221B1E7EE642D628341C2B29BA2E18 │
└─────────────┴──────────────────────────────────────────────────────────────────┘
```

Example with `iv`:

Query:

``` sql
SELECT 'aes-256-cfb128' AS mode, hex(aes_encrypt_mysql(mode, input, key32, iv)) FROM encryption_test;
```

Result:

``` text
┌─mode───────────┬─hex(aes_encrypt_mysql('aes-256-cfb128', input, key32, iv))─┐
│ aes-256-cfb128 │ │
│ aes-256-cfb128 │ 7FB039F7 │
│ aes-256-cfb128 │ 5CBD20F7ABD3AC41FCAA1A5C0E119E2BB5174F │
└────────────────┴────────────────────────────────────────────────────────────┘
```
## decrypt {#decrypt}

This function decrypts data using these modes:

- aes-128-ecb, aes-192-ecb, aes-256-ecb
- aes-128-cbc, aes-192-cbc, aes-256-cbc
- aes-128-cfb1, aes-192-cfb1, aes-256-cfb1
- aes-128-cfb8, aes-192-cfb8, aes-256-cfb8
- aes-128-cfb128, aes-192-cfb128, aes-256-cfb128
- aes-128-ofb, aes-192-ofb, aes-256-ofb
- aes-128-gcm, aes-192-gcm, aes-256-gcm

**Syntax**

```sql
decrypt('mode', 'ciphertext', 'key' [, iv, aad])
```

**Parameters**

- `mode` — Decryption mode. [String](../../sql-reference/data-types/string.md#string).
- `ciphertext` — Encrypted text that needs to be decrypted. [String](../../sql-reference/data-types/string.md#string).
- `key` — Decryption key. [String](../../sql-reference/data-types/string.md#string).
- `iv` — Initialization vector. Required for `-gcm` modes, optional for the others. [String](../../sql-reference/data-types/string.md#string).
- `aad` — Additional authenticated data. Decryption fails if this value is incorrect. Works only in `-gcm` modes; for the others an exception is thrown. [String](../../sql-reference/data-types/string.md#string).

**Returned value**

- Decrypted String. [String](../../sql-reference/data-types/string.md#string).

**Examples**

Create this table:

Query:

``` sql
CREATE TABLE encryption_test
(
    input String,
    key String DEFAULT unhex('fb9958e2e897ef3fdb49067b51a24af645b3626eed2f9ea1dc7fd4dd71b7e38f9a68db2a3184f952382c783785f9d77bf923577108a88adaacae5c141b1576b0'),
    iv String DEFAULT unhex('8CA3554377DFF8A369BC50A89780DD85'),
    key32 String DEFAULT substring(key, 1, 32),
    key24 String DEFAULT substring(key, 1, 24),
    key16 String DEFAULT substring(key, 1, 16)
) Engine = Memory;
```

Insert this data:

Query:

``` sql
INSERT INTO encryption_test (input) VALUES (''), ('text'), ('What Is ClickHouse?');
```

Query:

``` sql
SELECT 'aes-128-ecb' AS mode, decrypt(mode, encrypt(mode, input, key16), key16) FROM encryption_test;
```

Result:

```text
┌─mode────────┬─decrypt('aes-128-ecb', encrypt('aes-128-ecb', input, key16), key16)─┐
│ aes-128-ecb │ │
│ aes-128-ecb │ text │
│ aes-128-ecb │ What Is ClickHouse? │
└─────────────┴─────────────────────────────────────────────────────────────────────┘
```
## aes_decrypt_mysql {#aes_decrypt_mysql}

Compatible with MySQL encryption; decrypts data encrypted with the [AES_ENCRYPT](https://dev.mysql.com/doc/refman/8.0/en/encryption-functions.html#function_aes-encrypt) function.

Supported decryption modes:

- aes-128-ecb, aes-192-ecb, aes-256-ecb
- aes-128-cbc, aes-192-cbc, aes-256-cbc
- aes-128-cfb1, aes-192-cfb1, aes-256-cfb1
- aes-128-cfb8, aes-192-cfb8, aes-256-cfb8
- aes-128-cfb128, aes-192-cfb128, aes-256-cfb128
- aes-128-ofb, aes-192-ofb, aes-256-ofb

**Syntax**

```sql
aes_decrypt_mysql('mode', 'ciphertext', 'key' [, iv])
```

**Parameters**

- `mode` — Decryption mode. [String](../../sql-reference/data-types/string.md#string).
- `ciphertext` — Encrypted text that needs to be decrypted. [String](../../sql-reference/data-types/string.md#string).
- `key` — Decryption key. [String](../../sql-reference/data-types/string.md#string).
- `iv` — Initialization vector. Optional. [String](../../sql-reference/data-types/string.md#string).

**Returned value**

- Decrypted String. [String](../../sql-reference/data-types/string.md#string).

**Examples**

Create this table:

Query:

``` sql
CREATE TABLE encryption_test
(
    input String,
    key String DEFAULT unhex('fb9958e2e897ef3fdb49067b51a24af645b3626eed2f9ea1dc7fd4dd71b7e38f9a68db2a3184f952382c783785f9d77bf923577108a88adaacae5c141b1576b0'),
    iv String DEFAULT unhex('8CA3554377DFF8A369BC50A89780DD85'),
    key32 String DEFAULT substring(key, 1, 32),
    key24 String DEFAULT substring(key, 1, 24),
    key16 String DEFAULT substring(key, 1, 16)
) Engine = Memory;
```

Insert this data:

Query:

``` sql
INSERT INTO encryption_test (input) VALUES (''), ('text'), ('What Is ClickHouse?');
```

Query:

``` sql
SELECT 'aes-128-cbc' AS mode, aes_decrypt_mysql(mode, aes_encrypt_mysql(mode, input, key), key) FROM encryption_test;
```

Result:

``` text
┌─mode────────┬─aes_decrypt_mysql('aes-128-cbc', aes_encrypt_mysql('aes-128-cbc', input, key), key)─┐
│ aes-128-cbc │ │
│ aes-128-cbc │ text │
│ aes-128-cbc │ What Is ClickHouse? │
└─────────────┴─────────────────────────────────────────────────────────────────────────────────────┘
```

[Original article](https://clickhouse.tech/docs/en/sql-reference/functions/encryption_functions/) <!--hide-->
@ -153,15 +153,18 @@ A fast, decent-quality non-cryptographic hash function for a string obtained fro

`URLHash(s, N)` – Calculates a hash from a string up to the N level in the URL hierarchy, without one of the trailing symbols `/`,`?` or `#` at the end, if present.
Levels are the same as in URLHierarchy. This function is specific to Yandex.Metrica.

## farmFingerprint64 {#farmfingerprint64}

## farmHash64 {#farmhash64}

Produces a 64-bit [FarmHash](https://github.com/google/farmhash) or Fingerprint value. Prefer `farmFingerprint64` for a stable and portable value.

``` sql
farmFingerprint64(par1, ...)
farmHash64(par1, ...)
```

These functions use the `Fingerprint64` and `Hash64` methods respectively from all [available methods](https://github.com/google/farmhash/blob/master/src/farmhash.h).

**Parameters**
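As a usage sketch, both variants can be called on the same argument; the concrete 64-bit values are not shown here because `farmHash64` output may differ between builds, which is why `farmFingerprint64` is the portable choice:

``` sql
-- Same input, two hash flavours; only farmFingerprint64 is stable across platforms.
SELECT farmHash64('ClickHouse') AS h, farmFingerprint64('ClickHouse') AS f;
```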
@ -306,3 +306,67 @@ execute_native_thread_routine

start_thread
clone
```

## tid {#tid}

Returns the id of the thread in which the current [Block](https://clickhouse.tech/docs/en/development/architecture/#block) is processed.

**Syntax**

``` sql
tid()
```

**Returned value**

- Current thread id. [UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges).

**Example**

Query:

``` sql
SELECT tid();
```

Result:

``` text
┌─tid()─┐
│ 3878 │
└───────┘
```

## logTrace {#logtrace}

Emits a trace log message to the server log for each [Block](https://clickhouse.tech/docs/en/development/architecture/#block).

**Syntax**

``` sql
logTrace('message')
```

**Parameters**

- `message` — Message that is emitted to the server log. [String](../../sql-reference/data-types/string.md#string).

**Returned value**

- Always returns 0.

**Example**

Query:

``` sql
SELECT logTrace('logTrace message');
```

Result:

``` text
┌─logTrace('logTrace message')─┐
│ 0 │
└──────────────────────────────┘
```

[Original article](https://clickhouse.tech/docs/en/query_language/functions/introspection/) <!--hide-->
@ -325,7 +325,59 @@ This function accepts a number or date or date with time, and returns a FixedStr
|
|||||||
|
|
||||||
## reinterpretAsUUID {#reinterpretasuuid}
|
## reinterpretAsUUID {#reinterpretasuuid}
|
||||||
|
|
||||||
This function accepts FixedString, and returns UUID. Takes 16 bytes string. If the string isn't long enough, the functions work as if the string is padded with the necessary number of null bytes to the end. If the string longer than 16 bytes, the extra bytes at the end are ignored.
|
This function accepts 16 bytes string, and returns UUID containing bytes representing the corresponding value in network byte order (big-endian). If the string isn't long enough, the functions work as if the string is padded with the necessary number of null bytes to the end. If the string longer than 16 bytes, the extra bytes at the end are ignored.
|
||||||
|
|
||||||
|
**Syntax**
|
||||||
|
|
||||||
|
``` sql
|
||||||
|
reinterpretAsUUID(fixed_string)
|
||||||
|
```
|
||||||
|
|
||||||
|
**Parameters**
|
||||||
|
|
||||||
|
- `fixed_string` — Big-endian byte string. [FixedString](../../sql-reference/data-types/fixedstring.md#fixedstring).
|
||||||
|
|
||||||
|
**Returned value**
|
||||||
|
|
||||||
|
- The UUID type value. [UUID](../../sql-reference/data-types/uuid.md#uuid-data-type).
|
||||||
|
|
||||||
|
**Examples**
|
||||||
|
|
||||||
|
String to UUID.
|
||||||
|
|
||||||
|
Query:
|
||||||
|
|
||||||
|
``` sql
|
||||||
|
SELECT reinterpretAsUUID(reverse(unhex('000102030405060708090a0b0c0d0e0f')))
|
||||||
|
```
|
||||||
|
|
||||||
|
Result:
|
||||||
|
|
||||||
|
``` text
|
||||||
|
┌─reinterpretAsUUID(reverse(unhex('000102030405060708090a0b0c0d0e0f')))─┐
|
||||||
|
│ 08090a0b-0c0d-0e0f-0001-020304050607 │
|
||||||
|
└───────────────────────────────────────────────────────────────────────┘
|
||||||
|
```
|
||||||
|
|
||||||
|
Going back and forth from String to UUID.
|
||||||
|
|
||||||
|
Query:
|
||||||
|
|
||||||
|
``` sql
|
||||||
|
WITH
|
||||||
|
generateUUIDv4() AS uuid,
|
||||||
|
identity(lower(hex(reverse(reinterpretAsString(uuid))))) AS str,
|
||||||
|
reinterpretAsUUID(reverse(unhex(str))) AS uuid2
|
||||||
|
SELECT uuid = uuid2;
|
||||||
|
```
|
||||||
|
|
||||||
|
Result:
|
||||||
|
|
||||||
|
``` text
|
||||||
|
┌─equals(uuid, uuid2)─┐
|
||||||
|
│ 1 │
|
||||||
|
└─────────────────────┘
|
||||||
|
```
|
||||||
|
|
||||||
## CAST(x, T) {#type_conversion_function-cast}
|
## CAST(x, T) {#type_conversion_function-cast}
|
||||||
|
|
||||||
|
Some files were not shown because too many files have changed in this diff.