Mirror of https://github.com/ClickHouse/ClickHouse.git, synced 2024-11-26 17:41:59 +00:00

Commit 81b9d6a948: Merge branch 'master' into hrissan/prevent_tdigest_uncontrolled_growth

.gitmodules (vendored): 10 changed lines
@@ -190,3 +190,13 @@
 	path = contrib/croaring
 	url = https://github.com/RoaringBitmap/CRoaring
 	branch = v0.2.66
+[submodule "contrib/miniselect"]
+	path = contrib/miniselect
+	url = https://github.com/danlark1/miniselect
+[submodule "contrib/rocksdb"]
+	path = contrib/rocksdb
+	url = https://github.com/facebook/rocksdb
+	branch = v6.11.4
+[submodule "contrib/xz"]
+	path = contrib/xz
+	url = https://github.com/xz-mirror/xz
CHANGELOG.md: 612 changed lines

@@ -1,6 +1,442 @@
## ClickHouse release 20.11
### ClickHouse release v20.11.3.3-stable, 2020-11-13
#### Bug Fix
* Fix rare silent crashes when query profiler is on and ClickHouse is installed on OS with glibc version that has (supposedly) broken asynchronous unwind tables for some functions. This fixes [#15301](https://github.com/ClickHouse/ClickHouse/issues/15301). This fixes [#13098](https://github.com/ClickHouse/ClickHouse/issues/13098). [#16846](https://github.com/ClickHouse/ClickHouse/pull/16846) ([alexey-milovidov](https://github.com/alexey-milovidov)).
### ClickHouse release v20.11.2.1, 2020-11-11
#### Backward Incompatible Change
* If some `profile` was specified in `distributed_ddl` config section, then this profile could overwrite settings of `default` profile on server startup. It's fixed, now settings of distributed DDL queries should not affect global server settings. [#16635](https://github.com/ClickHouse/ClickHouse/pull/16635) ([tavplubix](https://github.com/tavplubix)).
* Restrict the use of non-comparable data types (like `AggregateFunction`) in keys (sorting key, primary key, partition key, and so on). [#16601](https://github.com/ClickHouse/ClickHouse/pull/16601) ([alesapin](https://github.com/alesapin)).
* Remove `ANALYZE` and `AST` queries, and make the setting `enable_debug_queries` obsolete since now it is the part of full featured `EXPLAIN` query. [#16536](https://github.com/ClickHouse/ClickHouse/pull/16536) ([Ivan](https://github.com/abyss7)).
* Aggregate functions `boundingRatio`, `rankCorr`, `retention`, `timeSeriesGroupSum`, `timeSeriesGroupRateSum`, `windowFunnel` were erroneously made case-insensitive. Now their names are made case sensitive as designed. Only functions that are specified in SQL standard or made for compatibility with other DBMS or functions similar to those should be case-insensitive. [#16407](https://github.com/ClickHouse/ClickHouse/pull/16407) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Make `rankCorr` function return nan on insufficient data https://github.com/ClickHouse/ClickHouse/issues/16124. [#16135](https://github.com/ClickHouse/ClickHouse/pull/16135) ([hexiaoting](https://github.com/hexiaoting)).
#### New Feature
* Added support of LDAP as a user directory for locally non-existent users. [#12736](https://github.com/ClickHouse/ClickHouse/pull/12736) ([Denis Glazachev](https://github.com/traceon)).
* Add `system.replicated_fetches` table which shows currently running background fetches. [#16428](https://github.com/ClickHouse/ClickHouse/pull/16428) ([alesapin](https://github.com/alesapin)).
* Added setting `date_time_output_format`. [#15845](https://github.com/ClickHouse/ClickHouse/pull/15845) ([Maksim Kita](https://github.com/kitaisreal)).
* Added minimal web UI to ClickHouse. [#16158](https://github.com/ClickHouse/ClickHouse/pull/16158) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Allow reading/writing a single protobuf message at once (without length delimiters). [#15199](https://github.com/ClickHouse/ClickHouse/pull/15199) ([filimonov](https://github.com/filimonov)).
* Added initial OpenTelemetry support. ClickHouse now accepts OpenTelemetry traceparent headers over Native and HTTP protocols, and passes them downstream in some cases. The trace spans for executed queries are saved into the `system.opentelemetry_span_log` table. [#14195](https://github.com/ClickHouse/ClickHouse/pull/14195) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Allow specifying a primary key in the column list of a `CREATE TABLE` query. This is needed for compatibility with other SQL dialects. [#15823](https://github.com/ClickHouse/ClickHouse/pull/15823) ([Maksim Kita](https://github.com/kitaisreal)).
* Implement `OFFSET offset_row_count {ROW | ROWS} FETCH {FIRST | NEXT} fetch_row_count {ROW | ROWS} {ONLY | WITH TIES}` in SELECT query with ORDER BY. This is the SQL-standard way to specify `LIMIT`. [#15855](https://github.com/ClickHouse/ClickHouse/pull/15855) ([hexiaoting](https://github.com/hexiaoting)).
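  As a sketch (reading from the built-in `numbers` table function; this form is equivalent to `LIMIT 5 OFFSET 10`):

  ```sql
  SELECT number
  FROM numbers(100)
  ORDER BY number
  OFFSET 10 ROWS FETCH FIRST 5 ROWS ONLY;
  ```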
* `errorCodeToName` function: returns the variable name of the error (useful for analyzing query_log and similar). `system.errors` table: shows how many times errors have occurred (respects `system_events_show_zero_values`). [#16438](https://github.com/ClickHouse/ClickHouse/pull/16438) ([Azat Khuzhin](https://github.com/azat)).
* Added function `untuple` which is a special function which can introduce new columns to the SELECT list by expanding a named tuple. [#16242](https://github.com/ClickHouse/ClickHouse/pull/16242) ([Nikolai Kochetov](https://github.com/KochetovNicolai), [Amos Bird](https://github.com/amosbird)).
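  A minimal sketch (the tuple element names `x` and `s` are illustrative); `untuple` expands the named tuple into separate columns in the SELECT list:

  ```sql
  SELECT untuple(t)
  FROM (SELECT CAST((1, 'a'), 'Tuple(x UInt8, s String)') AS t);
  ```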
* Now we can provide identifiers via query parameters. And these parameters can be used as table objects or columns. [#16594](https://github.com/ClickHouse/ClickHouse/pull/16594) ([Amos Bird](https://github.com/amosbird)).
* Added big integers (UInt256, Int128, Int256) and UUID data types support for the MergeTree BloomFilter index. Big integers are an experimental feature. [#16642](https://github.com/ClickHouse/ClickHouse/pull/16642) ([Maksim Kita](https://github.com/kitaisreal)).
* Add `farmFingerprint64` function (non-cryptographic string hashing). [#16570](https://github.com/ClickHouse/ClickHouse/pull/16570) ([Jacob Hayes](https://github.com/JacobHayes)).
* Add `log_queries_min_query_duration_ms`: only queries slower than the value of this setting will go to `query_log`/`query_thread_log` (i.e. something like `slow_query_log` in MySQL). [#16529](https://github.com/ClickHouse/ClickHouse/pull/16529) ([Azat Khuzhin](https://github.com/azat)).
* Ability to create a docker image on the top of `Alpine`. Uses precompiled binary and glibc components from ubuntu 20.04. [#16479](https://github.com/ClickHouse/ClickHouse/pull/16479) ([filimonov](https://github.com/filimonov)).
* Added `toUUIDOrNull`, `toUUIDOrZero` cast functions. [#16337](https://github.com/ClickHouse/ClickHouse/pull/16337) ([Maksim Kita](https://github.com/kitaisreal)).
* Add `max_concurrent_queries_for_all_users` setting, see [#6636](https://github.com/ClickHouse/ClickHouse/issues/6636) for use cases. [#16154](https://github.com/ClickHouse/ClickHouse/pull/16154) ([nvartolomei](https://github.com/nvartolomei)).
* Add a new option `print_query_id` to clickhouse-client. It helps generate arbitrary strings with the current query id generated by the client. Also print query id in clickhouse-client by default. [#15809](https://github.com/ClickHouse/ClickHouse/pull/15809) ([Amos Bird](https://github.com/amosbird)).
* Add `tid` and `logTrace` functions. This closes [#9434](https://github.com/ClickHouse/ClickHouse/issues/9434). [#15803](https://github.com/ClickHouse/ClickHouse/pull/15803) ([flynn](https://github.com/ucasFL)).
* Add function `formatReadableTimeDelta` that formats a time delta as a human-readable string ... [#15497](https://github.com/ClickHouse/ClickHouse/pull/15497) ([Filipe Caixeta](https://github.com/filipecaixeta)).
* Added `disable_merges` option for volumes in multi-disk configuration. [#13956](https://github.com/ClickHouse/ClickHouse/pull/13956) ([Vladimir Chebotarev](https://github.com/excitoon)).
#### Experimental Feature
* New functions `encrypt`, `aes_encrypt_mysql`, `decrypt`, `aes_decrypt_mysql`. These functions are working slowly, so we consider it as an experimental feature. [#11844](https://github.com/ClickHouse/ClickHouse/pull/11844) ([Vasily Nemkov](https://github.com/Enmk)).
#### Bug Fix
* Mask password in data_path in the `system.distribution_queue`. [#16727](https://github.com/ClickHouse/ClickHouse/pull/16727) ([Azat Khuzhin](https://github.com/azat)).
* Fix `IN` operator over several columns and tuples with enabled `transform_null_in` setting. Fixes [#15310](https://github.com/ClickHouse/ClickHouse/issues/15310). [#16722](https://github.com/ClickHouse/ClickHouse/pull/16722) ([Anton Popov](https://github.com/CurtizJ)).
* The setting `max_parallel_replicas` worked incorrectly if the queried table has no sampling. This fixes [#5733](https://github.com/ClickHouse/ClickHouse/issues/5733). [#16675](https://github.com/ClickHouse/ClickHouse/pull/16675) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix optimize_read_in_order/optimize_aggregation_in_order with max_threads > 0 and expression in ORDER BY. [#16637](https://github.com/ClickHouse/ClickHouse/pull/16637) ([Azat Khuzhin](https://github.com/azat)).
* Calculation of `DEFAULT` expressions was involving possible name collisions (that was very unlikely to encounter). This fixes [#9359](https://github.com/ClickHouse/ClickHouse/issues/9359). [#16612](https://github.com/ClickHouse/ClickHouse/pull/16612) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix `query_thread_log.query_duration_ms` unit. [#16563](https://github.com/ClickHouse/ClickHouse/pull/16563) ([Azat Khuzhin](https://github.com/azat)).
* Fix a bug when using MySQL Master -> MySQL Slave -> ClickHouse MaterializeMySQL Engine. `MaterializeMySQL` is an experimental feature. [#16504](https://github.com/ClickHouse/ClickHouse/pull/16504) ([TCeason](https://github.com/TCeason)).
* Specifically crafted argument of `round` function with `Decimal` was leading to integer division by zero. This fixes [#13338](https://github.com/ClickHouse/ClickHouse/issues/13338). [#16451](https://github.com/ClickHouse/ClickHouse/pull/16451) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix DROP TABLE for Distributed (racy with INSERT). [#16409](https://github.com/ClickHouse/ClickHouse/pull/16409) ([Azat Khuzhin](https://github.com/azat)).
* Fix processing of very large entries in replication queue. Very large entries may appear in ALTER queries if table structure is extremely large (near 1 MB). This fixes [#16307](https://github.com/ClickHouse/ClickHouse/issues/16307). [#16332](https://github.com/ClickHouse/ClickHouse/pull/16332) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed the inconsistent behaviour when a part of return data could be dropped because the set for its filtration wasn't created. [#16308](https://github.com/ClickHouse/ClickHouse/pull/16308) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix dictGet in sharding_key (and similar places, i.e. when the function context is stored permanently). [#16205](https://github.com/ClickHouse/ClickHouse/pull/16205) ([Azat Khuzhin](https://github.com/azat)).
* Fix the exception thrown in `clickhouse-local` when trying to execute `OPTIMIZE` command. Fixes [#16076](https://github.com/ClickHouse/ClickHouse/issues/16076). [#16192](https://github.com/ClickHouse/ClickHouse/pull/16192) ([filimonov](https://github.com/filimonov)).
* Fixes [#15780](https://github.com/ClickHouse/ClickHouse/issues/15780) regression, e.g. `indexOf([1, 2, 3], toLowCardinality(1))` now is prohibited but it should not be. [#16038](https://github.com/ClickHouse/ClickHouse/pull/16038) ([Mike](https://github.com/myrrc)).
* Fix bug with MySQL database. When MySQL server used as database engine is down some queries raise Exception, because they try to get tables from disabled server, while it's unnecessary. For example, query `SELECT ... FROM system.parts` should work only with MergeTree tables and don't touch MySQL database at all. [#16032](https://github.com/ClickHouse/ClickHouse/pull/16032) ([Kruglov Pavel](https://github.com/Avogar)).
* Now exception will be thrown when `ALTER MODIFY COLUMN ... DEFAULT ...` has incompatible default with column type. Fixes [#15854](https://github.com/ClickHouse/ClickHouse/issues/15854). [#15858](https://github.com/ClickHouse/ClickHouse/pull/15858) ([alesapin](https://github.com/alesapin)).
* Fixed IPv4CIDRToRange/IPv6CIDRToRange functions to accept const IP-column values. [#15856](https://github.com/ClickHouse/ClickHouse/pull/15856) ([vladimir-golovchenko](https://github.com/vladimir-golovchenko)).
#### Improvement
* Treat `INTERVAL '1 hour'` as equivalent to `INTERVAL 1 HOUR`, to be compatible with Postgres and similar. This fixes [#15637](https://github.com/ClickHouse/ClickHouse/issues/15637). [#15978](https://github.com/ClickHouse/ClickHouse/pull/15978) ([flynn](https://github.com/ucasFL)).
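  For illustration, both spellings should now produce the same result:

  ```sql
  SELECT
      toDateTime('2020-11-11 00:00:00') + INTERVAL '1 hour' AS a,
      toDateTime('2020-11-11 00:00:00') + INTERVAL 1 HOUR   AS b;
  ```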
* Enable parsing enum values by their numeric ids for CSV, TSV and JSON input formats. [#15685](https://github.com/ClickHouse/ClickHouse/pull/15685) ([vivarum](https://github.com/vivarum)).
* Better read task scheduling for JBOD architecture and `MergeTree` storage. New setting `read_backoff_min_concurrency` which serves as the lower limit to the number of reading threads. [#16423](https://github.com/ClickHouse/ClickHouse/pull/16423) ([Amos Bird](https://github.com/amosbird)).
* Add missing support for `LowCardinality` in `Avro` format. [#16521](https://github.com/ClickHouse/ClickHouse/pull/16521) ([Mike](https://github.com/myrrc)).
* Workaround for using `S3` with an nginx server as proxy. Nginx currently does not accept URLs with an empty path like `http://domain.com?delete`, but vanilla aws-sdk-cpp produces this kind of URL. This commit uses a patched aws-sdk-cpp version, which makes URLs with "/" as the path in such cases, like `http://domain.com/?delete`. [#16814](https://github.com/ClickHouse/ClickHouse/pull/16814) ([ianton-ru](https://github.com/ianton-ru)).
* Better diagnostics on parse errors in input data. Provide row number on `Cannot read all data` errors. [#16644](https://github.com/ClickHouse/ClickHouse/pull/16644) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Make the behaviour of `minMap` and `maxMap` more desirable. They will not skip zero values in the result. Fixes [#16087](https://github.com/ClickHouse/ClickHouse/issues/16087). [#16631](https://github.com/ClickHouse/ClickHouse/pull/16631) ([Ildus Kurbangaliev](https://github.com/ildus)).
* Better update of ZooKeeper configuration in runtime. [#16630](https://github.com/ClickHouse/ClickHouse/pull/16630) ([sundyli](https://github.com/sundy-li)).
* Apply SETTINGS clause as early as possible. It allows to modify more settings in the query. This closes [#3178](https://github.com/ClickHouse/ClickHouse/issues/3178). [#16619](https://github.com/ClickHouse/ClickHouse/pull/16619) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* The `event_time_microseconds` field is now stored as Decimal64, not UInt64. [#16617](https://github.com/ClickHouse/ClickHouse/pull/16617) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Now parameterized functions can be used in the `APPLY` column transformer. [#16589](https://github.com/ClickHouse/ClickHouse/pull/16589) ([Amos Bird](https://github.com/amosbird)).
* Improve scheduling of background task which removes data of dropped tables in `Atomic` databases. `Atomic` databases do not create broken symlink to table data directory if table actually has no data directory. [#16584](https://github.com/ClickHouse/ClickHouse/pull/16584) ([tavplubix](https://github.com/tavplubix)).
* Subqueries in `WITH` section (CTE) can reference previous subqueries in `WITH` section by their name. [#16575](https://github.com/ClickHouse/ClickHouse/pull/16575) ([Amos Bird](https://github.com/amosbird)).
* Add current_database into `system.query_thread_log`. [#16558](https://github.com/ClickHouse/ClickHouse/pull/16558) ([Azat Khuzhin](https://github.com/azat)).
* Allow to fetch parts that are already committed or outdated in the current instance into the detached directory. It's useful when migrating tables from another cluster and having N to 1 shards mapping. It's also consistent with the current fetchPartition implementation. [#16538](https://github.com/ClickHouse/ClickHouse/pull/16538) ([Amos Bird](https://github.com/amosbird)).
* Multiple improvements for `RabbitMQ`: Fixed bug for [#16263](https://github.com/ClickHouse/ClickHouse/issues/16263). Also minimized event loop lifetime. Added more efficient queues setup. [#16426](https://github.com/ClickHouse/ClickHouse/pull/16426) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix debug assertion in `quantileDeterministic` function. In previous version it may also transfer up to two times more data over the network. Although no bug existed. This fixes [#15683](https://github.com/ClickHouse/ClickHouse/issues/15683). [#16410](https://github.com/ClickHouse/ClickHouse/pull/16410) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add `TablesToDropQueueSize` metric. It's equal to number of dropped tables, that are waiting for background data removal. [#16364](https://github.com/ClickHouse/ClickHouse/pull/16364) ([tavplubix](https://github.com/tavplubix)).
* Better diagnostics when client has dropped connection. In previous versions, `Attempt to read after EOF` and `Broken pipe` exceptions were logged in server. In new version, it's information message `Client has dropped the connection, cancel the query.`. [#16329](https://github.com/ClickHouse/ClickHouse/pull/16329) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add total_rows/total_bytes (from system.tables) support for Set/Join table engines. [#16306](https://github.com/ClickHouse/ClickHouse/pull/16306) ([Azat Khuzhin](https://github.com/azat)).
* Now it's possible to specify `PRIMARY KEY` without `ORDER BY` for MergeTree table engines family. Closes [#15591](https://github.com/ClickHouse/ClickHouse/issues/15591). [#16284](https://github.com/ClickHouse/ClickHouse/pull/16284) ([alesapin](https://github.com/alesapin)).
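  A minimal sketch (table name is illustrative); when `ORDER BY` is omitted, the sorting key is presumably derived from the primary key:

  ```sql
  CREATE TABLE t_pk_only (key UInt64, value String)
  ENGINE = MergeTree
  PRIMARY KEY key;
  ```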
* If there is no tmp folder in the system (chroot, misconfiguration, etc.), `clickhouse-local` will create a temporary subfolder in the current directory. [#16280](https://github.com/ClickHouse/ClickHouse/pull/16280) ([filimonov](https://github.com/filimonov)).
* Add support for nested data types (like named tuple) as sub-types. Fixes [#15587](https://github.com/ClickHouse/ClickHouse/issues/15587). [#16262](https://github.com/ClickHouse/ClickHouse/pull/16262) ([Ivan](https://github.com/abyss7)).
* Support for `database_atomic_wait_for_drop_and_detach_synchronously`/`NO DELAY`/`SYNC` for `DROP DATABASE`. [#16127](https://github.com/ClickHouse/ClickHouse/pull/16127) ([Azat Khuzhin](https://github.com/azat)).
* Add `allow_nondeterministic_optimize_skip_unused_shards` (to allow non-deterministic functions like `rand()` or `dictGet()` in the sharding key). [#16105](https://github.com/ClickHouse/ClickHouse/pull/16105) ([Azat Khuzhin](https://github.com/azat)).
* Fix `memory_profiler_step`/`max_untracked_memory` for queries via HTTP (test included). Fix the issue that adjusting this value globally in xml config does not help either, since those settings are not applied anyway, only default (4MB) value is [used](https://github.com/ClickHouse/ClickHouse/blob/17731245336d8c84f75e4c0894c5797ed7732190/src/Common/ThreadStatus.h#L104). Fix `query_id` for the most root ThreadStatus of the http query (by initializing QueryScope after reading query_id). [#16101](https://github.com/ClickHouse/ClickHouse/pull/16101) ([Azat Khuzhin](https://github.com/azat)).
* Now it's allowed to execute `ALTER ... ON CLUSTER` queries regardless of the `<internal_replication>` setting in cluster config. [#16075](https://github.com/ClickHouse/ClickHouse/pull/16075) ([alesapin](https://github.com/alesapin)).
* Fix rare issue when `clickhouse-client` may abort on exit due to loading of suggestions. This fixes [#16035](https://github.com/ClickHouse/ClickHouse/issues/16035). [#16047](https://github.com/ClickHouse/ClickHouse/pull/16047) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add support of `cache` layout for `Redis` dictionaries with complex key. [#15985](https://github.com/ClickHouse/ClickHouse/pull/15985) ([Anton Popov](https://github.com/CurtizJ)).
* Fix query hang (endless loop) in case of misconfiguration (`connections_with_failover_max_tries` set to 0). [#15876](https://github.com/ClickHouse/ClickHouse/pull/15876) ([Azat Khuzhin](https://github.com/azat)).
* Change level of some log messages from information to debug, so information messages will not appear for every query. This closes [#5293](https://github.com/ClickHouse/ClickHouse/issues/5293). [#15816](https://github.com/ClickHouse/ClickHouse/pull/15816) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Remove `MemoryTrackingInBackground*` metrics to avoid potentially misleading results. This fixes [#15684](https://github.com/ClickHouse/ClickHouse/issues/15684). [#15813](https://github.com/ClickHouse/ClickHouse/pull/15813) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add reconnects to `zookeeper-dump-tree` tool. [#15711](https://github.com/ClickHouse/ClickHouse/pull/15711) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Allow explicitly specifying the column list in `CREATE TABLE table AS table_function(...)` queries. Fixes [#9249](https://github.com/ClickHouse/ClickHouse/issues/9249). Fixes [#14214](https://github.com/ClickHouse/ClickHouse/issues/14214). [#14295](https://github.com/ClickHouse/ClickHouse/pull/14295) ([tavplubix](https://github.com/tavplubix)).
#### Performance Improvement
* Do not merge parts across partitions in SELECT FINAL. [#15938](https://github.com/ClickHouse/ClickHouse/pull/15938) ([Kruglov Pavel](https://github.com/Avogar)).
* Improve performance of `-OrNull` and `-OrDefault` aggregate functions. [#16661](https://github.com/ClickHouse/ClickHouse/pull/16661) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Improve performance of `quantileMerge`. In previous versions it was obnoxiously slow. This closes [#1463](https://github.com/ClickHouse/ClickHouse/issues/1463). [#16643](https://github.com/ClickHouse/ClickHouse/pull/16643) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Improve performance of logical functions a little. [#16347](https://github.com/ClickHouse/ClickHouse/pull/16347) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Improved performance of merges assignment in MergeTree table engines. Shouldn't be visible for the user. [#16191](https://github.com/ClickHouse/ClickHouse/pull/16191) ([alesapin](https://github.com/alesapin)).
* Speedup hashed/sparse_hashed dictionary loading by preallocating the hash table. [#15454](https://github.com/ClickHouse/ClickHouse/pull/15454) ([Azat Khuzhin](https://github.com/azat)).
* Now trivial count optimization becomes slightly non-trivial. Predicates that contain exact partition expr can be optimized too. This also fixes [#11092](https://github.com/ClickHouse/ClickHouse/issues/11092) which returns wrong count when `max_parallel_replicas > 1`. [#15074](https://github.com/ClickHouse/ClickHouse/pull/15074) ([Amos Bird](https://github.com/amosbird)).
#### Build/Testing/Packaging Improvement
* Add flaky check for stateless tests. It will detect potentially flaky functional tests in advance, before they are merged. [#16238](https://github.com/ClickHouse/ClickHouse/pull/16238) ([alesapin](https://github.com/alesapin)).
* Use proper version for `croaring` instead of amalgamation. [#16285](https://github.com/ClickHouse/ClickHouse/pull/16285) ([sundyli](https://github.com/sundy-li)).
* Improve generation of build files for `ya.make` build system (Arcadia). [#16700](https://github.com/ClickHouse/ClickHouse/pull/16700) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add MySQL BinLog file check tool for `MaterializeMySQL` database engine. `MaterializeMySQL` is an experimental feature. [#16223](https://github.com/ClickHouse/ClickHouse/pull/16223) ([Winter Zhang](https://github.com/zhang2014)).
* Check for the executable bit on non-executable files. People often accidentally commit executable files from Windows. [#15843](https://github.com/ClickHouse/ClickHouse/pull/15843) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Check for `#pragma once` in headers. [#15818](https://github.com/ClickHouse/ClickHouse/pull/15818) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix illegal code style `&vector[idx]` in libhdfs3. This fixes libcxx debug build. See also https://github.com/ClickHouse-Extras/libhdfs3/pull/8 . [#15815](https://github.com/ClickHouse/ClickHouse/pull/15815) ([Amos Bird](https://github.com/amosbird)).
* Fix build of one miscellaneous example tool on Mac OS. Note that we don't build examples on Mac OS in our CI (we build only ClickHouse binary), so there is zero chance it will not break again. This fixes [#15804](https://github.com/ClickHouse/ClickHouse/issues/15804). [#15808](https://github.com/ClickHouse/ClickHouse/pull/15808) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Simplify Sys/V init script. [#14135](https://github.com/ClickHouse/ClickHouse/pull/14135) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Added `boost::program_options` to `db_generator` in order to increase its usability. This closes [#15940](https://github.com/ClickHouse/ClickHouse/issues/15940). [#15973](https://github.com/ClickHouse/ClickHouse/pull/15973) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
## ClickHouse release 20.10
### ClickHouse release v20.10.4.1-stable, 2020-11-13
#### Bug Fix
* Fix rare silent crashes when query profiler is on and ClickHouse is installed on OS with glibc version that has (supposedly) broken asynchronous unwind tables for some functions. This fixes [#15301](https://github.com/ClickHouse/ClickHouse/issues/15301). This fixes [#13098](https://github.com/ClickHouse/ClickHouse/issues/13098). [#16846](https://github.com/ClickHouse/ClickHouse/pull/16846) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix `IN` operator over several columns and tuples with enabled `transform_null_in` setting. Fixes [#15310](https://github.com/ClickHouse/ClickHouse/issues/15310). [#16722](https://github.com/ClickHouse/ClickHouse/pull/16722) ([Anton Popov](https://github.com/CurtizJ)).
* Fix `optimize_read_in_order`/`optimize_aggregation_in_order` with `max_threads > 0` and an expression in ORDER BY. [#16637](https://github.com/ClickHouse/ClickHouse/pull/16637) ([Azat Khuzhin](https://github.com/azat)).
* Now when parsing AVRO from input the LowCardinality is removed from type. Fixes [#16188](https://github.com/ClickHouse/ClickHouse/issues/16188). [#16521](https://github.com/ClickHouse/ClickHouse/pull/16521) ([Mike](https://github.com/myrrc)).
* Fix rapid growth of metadata when using MySQL Master -> MySQL Slave -> ClickHouse MaterializeMySQL Engine, and `slave_parallel_worker` enabled on MySQL Slave, by properly shrinking GTID sets. This fixes [#15951](https://github.com/ClickHouse/ClickHouse/issues/15951). [#16504](https://github.com/ClickHouse/ClickHouse/pull/16504) ([TCeason](https://github.com/TCeason)).
* Fix DROP TABLE for Distributed (racy with INSERT). [#16409](https://github.com/ClickHouse/ClickHouse/pull/16409) ([Azat Khuzhin](https://github.com/azat)).
* Fix processing of very large entries in replication queue. Very large entries may appear in ALTER queries if table structure is extremely large (near 1 MB). This fixes [#16307](https://github.com/ClickHouse/ClickHouse/issues/16307). [#16332](https://github.com/ClickHouse/ClickHouse/pull/16332) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix bug with MySQL database. When MySQL server used as database engine is down some queries raise Exception, because they try to get tables from disabled server, while it's unnecessary. For example, query `SELECT ... FROM system.parts` should work only with MergeTree tables and don't touch MySQL database at all. [#16032](https://github.com/ClickHouse/ClickHouse/pull/16032) ([Kruglov Pavel](https://github.com/Avogar)).
#### Improvement
* Workaround for using S3 with an nginx server as proxy. Nginx currently does not accept URLs with an empty path like http://domain.com?delete, but vanilla aws-sdk-cpp produces this kind of URL. This commit uses a patched aws-sdk-cpp version, which makes URLs with "/" as the path in such cases, like http://domain.com/?delete. [#16813](https://github.com/ClickHouse/ClickHouse/pull/16813) ([ianton-ru](https://github.com/ianton-ru)).
### ClickHouse release v20.10.3.30, 2020-10-28
#### Backward Incompatible Change
* Make `multiple_joins_rewriter_version` obsolete. Remove first version of joins rewriter. [#15472](https://github.com/ClickHouse/ClickHouse/pull/15472) ([Artem Zuikov](https://github.com/4ertus2)).
* Change default value of `format_regexp_escaping_rule` setting (it's related to `Regexp` format) to `Raw` (it means - read whole subpattern as a value) to make the behaviour more like to what users expect. [#15426](https://github.com/ClickHouse/ClickHouse/pull/15426) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add support for nested multiline comments `/* comment /* comment */ */` in SQL. This conforms to the SQL standard. [#14655](https://github.com/ClickHouse/ClickHouse/pull/14655) ([alexey-milovidov](https://github.com/alexey-milovidov)).
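  For example, the following should now parse as a single nested comment:

  ```sql
  SELECT 1 /* outer /* nested */ still a comment */;
  ```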
* Added MergeTree settings (`max_replicated_merges_with_ttl_in_queue` and `max_number_of_merges_with_ttl_in_pool`) to control the number of merges with TTL in the background pool and replicated queue. This change breaks compatibility with older versions only if you use delete TTL. Otherwise, replication will stay compatible. You can avoid incompatibility issues if you update all shard replicas at once or execute `SYSTEM STOP TTL MERGES` until you finish the update of all replicas. If you'll get an incompatible entry in the replication queue, first of all, execute `SYSTEM STOP TTL MERGES` and after `ALTER TABLE ... DETACH PARTITION ...` the partition where incompatible TTL merge was assigned. Attach it back on a single replica. [#14490](https://github.com/ClickHouse/ClickHouse/pull/14490) ([alesapin](https://github.com/alesapin)).


#### New Feature

* Background data recompression. Add the ability to specify `TTL ... RECOMPRESS codec_name` for MergeTree table engines family. [#14494](https://github.com/ClickHouse/ClickHouse/pull/14494) ([alesapin](https://github.com/alesapin)).
* Add parallel quorum inserts. This closes [#15601](https://github.com/ClickHouse/ClickHouse/issues/15601). [#15601](https://github.com/ClickHouse/ClickHouse/pull/15601) ([Latysheva Alexandra](https://github.com/alexelex)).
* Settings for additional enforcement of data durability. Useful for non-replicated setups. [#11948](https://github.com/ClickHouse/ClickHouse/pull/11948) ([Anton Popov](https://github.com/CurtizJ)).
* When a duplicate block is written to a replica where it does not exist locally (i.e. has not been fetched from other replicas), don't ignore it; write it locally to achieve the same effect as if it had been successfully replicated. [#11684](https://github.com/ClickHouse/ClickHouse/pull/11684) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Now we support `WITH <identifier> AS (subquery) ... ` to introduce named subqueries in the query context. This closes [#2416](https://github.com/ClickHouse/ClickHouse/issues/2416). This closes [#4967](https://github.com/ClickHouse/ClickHouse/issues/4967). [#14771](https://github.com/ClickHouse/ClickHouse/pull/14771) ([Amos Bird](https://github.com/amosbird)).
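For illustration, a named subquery of this shape (the table and column names here are hypothetical):

```sql
WITH frequent_ids AS (SELECT id FROM events GROUP BY id HAVING count() > 100)
SELECT count() FROM events WHERE id IN (SELECT id FROM frequent_ids);
```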
* Introduce `enable_global_with_statement` setting which propagates the first select's `WITH` statements to other select queries at the same level, and makes aliases in `WITH` statements visible to subqueries. [#15451](https://github.com/ClickHouse/ClickHouse/pull/15451) ([Amos Bird](https://github.com/amosbird)).
* Secure inter-cluster query execution (with initial_user as current query user). [#13156](https://github.com/ClickHouse/ClickHouse/pull/13156) ([Azat Khuzhin](https://github.com/azat)). [#15551](https://github.com/ClickHouse/ClickHouse/pull/15551) ([Azat Khuzhin](https://github.com/azat)).
* Add the ability to remove column properties and table TTLs. Introduced queries `ALTER TABLE MODIFY COLUMN col_name REMOVE what_to_remove` and `ALTER TABLE REMOVE TTL`. Both operations are lightweight and executed at the metadata level. [#14742](https://github.com/ClickHouse/ClickHouse/pull/14742) ([alesapin](https://github.com/alesapin)).
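For illustration, both forms of the new queries (the table and column names are hypothetical; `DEFAULT` is one of the removable column properties):

```sql
ALTER TABLE t MODIFY COLUMN c REMOVE DEFAULT; -- drop a column property
ALTER TABLE t REMOVE TTL;                     -- drop the table TTL
```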
* Added format `RawBLOB`. It is intended for inputting or outputting a single value without any escaping or delimiters. This closes [#15349](https://github.com/ClickHouse/ClickHouse/issues/15349). [#15364](https://github.com/ClickHouse/ClickHouse/pull/15364) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add the `reinterpretAsUUID` function that allows to convert a big-endian byte string to UUID. [#15480](https://github.com/ClickHouse/ClickHouse/pull/15480) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Implement `force_data_skipping_indices` setting. [#15642](https://github.com/ClickHouse/ClickHouse/pull/15642) ([Azat Khuzhin](https://github.com/azat)).
* Add a setting `output_format_pretty_row_numbers` to number the rows of the result in Pretty formats. This closes [#15350](https://github.com/ClickHouse/ClickHouse/issues/15350). [#15443](https://github.com/ClickHouse/ClickHouse/pull/15443) ([flynn](https://github.com/ucasFL)).
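For illustration, a minimal sketch of using the setting:

```sql
SET output_format_pretty_row_numbers = 1;
SELECT number FROM system.numbers LIMIT 3 FORMAT PrettyCompact;
-- each row of the Pretty output is now prefixed with its row number
```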
* Added query obfuscation tool. It allows to share more queries for better testing. This closes [#15268](https://github.com/ClickHouse/ClickHouse/issues/15268). [#15321](https://github.com/ClickHouse/ClickHouse/pull/15321) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add table function `null('structure')`. [#14797](https://github.com/ClickHouse/ClickHouse/pull/14797) ([vxider](https://github.com/Vxider)).
* Added `formatReadableQuantity` function. It makes big numbers easier for humans to read. [#14725](https://github.com/ClickHouse/ClickHouse/pull/14725) ([Artem Hnilov](https://github.com/BooBSD)).
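For illustration (the exact rendering may differ between versions):

```sql
SELECT formatReadableQuantity(1234567); -- e.g. 1.23 million
```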
* Add format `LineAsString` that accepts a sequence of lines separated by newlines, every line is parsed as a whole as a single String field. [#14703](https://github.com/ClickHouse/ClickHouse/pull/14703) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)), [#13846](https://github.com/ClickHouse/ClickHouse/pull/13846) ([hexiaoting](https://github.com/hexiaoting)).
* Add `JSONStrings` format which outputs data in arrays of strings. [#14333](https://github.com/ClickHouse/ClickHouse/pull/14333) ([hcz](https://github.com/hczhcz)).
* Add support for "Raw" column format for `Regexp` format. It allows to simply extract subpatterns as a whole without any escaping rules. [#15363](https://github.com/ClickHouse/ClickHouse/pull/15363) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Allow configurable `NULL` representation for `TSV` output format. It is controlled by the setting `output_format_tsv_null_representation` which is `\N` by default. This closes [#9375](https://github.com/ClickHouse/ClickHouse/issues/9375). Note that the setting only controls output format and `\N` is the only supported `NULL` representation for `TSV` input format. [#14586](https://github.com/ClickHouse/ClickHouse/pull/14586) ([Kruglov Pavel](https://github.com/Avogar)).
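For illustration, a hedged sketch of changing the representation:

```sql
SET output_format_tsv_null_representation = 'NULL';
SELECT NULL FORMAT TSV; -- NULL values are rendered as the string NULL instead of \N
```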
* Support Decimal data type for `MaterializedMySQL`. `MaterializedMySQL` is an experimental feature. [#14535](https://github.com/ClickHouse/ClickHouse/pull/14535) ([Winter Zhang](https://github.com/zhang2014)).
* Add new feature: `SHOW DATABASES LIKE 'xxx'`. [#14521](https://github.com/ClickHouse/ClickHouse/pull/14521) ([hexiaoting](https://github.com/hexiaoting)).
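For illustration:

```sql
SHOW DATABASES LIKE '%test%'; -- list databases whose names match the pattern
```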
* Added a script to import (arbitrary) git repository to ClickHouse as a sample dataset. [#14471](https://github.com/ClickHouse/ClickHouse/pull/14471) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Now insert statements can have asterisk (or variants) with column transformers in the column list. [#14453](https://github.com/ClickHouse/ClickHouse/pull/14453) ([Amos Bird](https://github.com/amosbird)).
* New query complexity limit settings `max_rows_to_read_leaf`, `max_bytes_to_read_leaf` for distributed queries to limit max rows/bytes read on the leaf nodes. Limit is applied for local reads only, *excluding* the final merge stage on the root node. [#14221](https://github.com/ClickHouse/ClickHouse/pull/14221) ([Roman Khavronenko](https://github.com/hagen1778)).
* Allow user to specify settings for `ReplicatedMergeTree*` storage in `<replicated_merge_tree>` section of config file. It works similarly to `<merge_tree>` section. For `ReplicatedMergeTree*` storages, settings from `<merge_tree>` and `<replicated_merge_tree>` are applied together, but settings from `<replicated_merge_tree>` have higher priority. Added `system.replicated_merge_tree_settings` table. [#13573](https://github.com/ClickHouse/ClickHouse/pull/13573) ([Amos Bird](https://github.com/amosbird)).
* Add `mapPopulateSeries` function. [#13166](https://github.com/ClickHouse/ClickHouse/pull/13166) ([Ildus Kurbangaliev](https://github.com/ildus)).
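For illustration, a sketch based on the function's semantics of filling missing keys with default values (verify the exact signature against the documentation):

```sql
SELECT mapPopulateSeries([1, 2, 4], [11, 22, 44], 5);
-- expected shape: ([1,2,3,4,5], [11,22,0,44,0])
```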
* Supporting MySQL types: `decimal` (as ClickHouse `Decimal`) and `datetime` with sub-second precision (as `DateTime64`). [#11512](https://github.com/ClickHouse/ClickHouse/pull/11512) ([Vasily Nemkov](https://github.com/Enmk)).
* Introduce `event_time_microseconds` field to `system.text_log`, `system.trace_log`, `system.query_log` and `system.query_thread_log` tables. [#14760](https://github.com/ClickHouse/ClickHouse/pull/14760) ([Bharat Nallan](https://github.com/bharatnc)).
* Add `event_time_microseconds` to `system.asynchronous_metric_log` & `system.metric_log` tables. [#14514](https://github.com/ClickHouse/ClickHouse/pull/14514) ([Bharat Nallan](https://github.com/bharatnc)).
* Add `query_start_time_microseconds` field to `system.query_log` & `system.query_thread_log` tables. [#14252](https://github.com/ClickHouse/ClickHouse/pull/14252) ([Bharat Nallan](https://github.com/bharatnc)).


#### Bug Fix

* Fix the case when memory can be overallocated regardless of the limit. This closes [#14560](https://github.com/ClickHouse/ClickHouse/issues/14560). [#16206](https://github.com/ClickHouse/ClickHouse/pull/16206) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix `executable` dictionary source hang. In previous versions, when using some formats (e.g. `JSONEachRow`) data was not fed to the child process until it had output at least something. This closes [#1697](https://github.com/ClickHouse/ClickHouse/issues/1697). This closes [#2455](https://github.com/ClickHouse/ClickHouse/issues/2455). [#14525](https://github.com/ClickHouse/ClickHouse/pull/14525) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix double free in case of exception in function `dictGet`. It could have happened if dictionary was loaded with error. [#16429](https://github.com/ClickHouse/ClickHouse/pull/16429) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix group by with totals/rollup/cube modifiers and min/max functions over group by keys. Fixes [#16393](https://github.com/ClickHouse/ClickHouse/issues/16393). [#16397](https://github.com/ClickHouse/ClickHouse/pull/16397) ([Anton Popov](https://github.com/CurtizJ)).
* Fix async Distributed INSERT with prefer_localhost_replica=0 and internal_replication. [#16358](https://github.com/ClickHouse/ClickHouse/pull/16358) ([Azat Khuzhin](https://github.com/azat)).
* Fix incorrect code in the `TwoLevelStringHashTable` implementation, which might lead to a memory leak. [#16264](https://github.com/ClickHouse/ClickHouse/pull/16264) ([Amos Bird](https://github.com/amosbird)).
* Fix segfault in some cases of wrong aggregation in lambdas. [#16082](https://github.com/ClickHouse/ClickHouse/pull/16082) ([Anton Popov](https://github.com/CurtizJ)).
* Fix `ALTER MODIFY ... ORDER BY` query hang for `ReplicatedVersionedCollapsingMergeTree`. This fixes [#15980](https://github.com/ClickHouse/ClickHouse/issues/15980). [#16011](https://github.com/ClickHouse/ClickHouse/pull/16011) ([alesapin](https://github.com/alesapin)).
* `MaterializedMySQL` (experimental feature): Fix collate name & charset name parser and support `length = 0` for string type. [#16008](https://github.com/ClickHouse/ClickHouse/pull/16008) ([Winter Zhang](https://github.com/zhang2014)).
* Allow to use `direct` layout for dictionaries with complex keys. [#16007](https://github.com/ClickHouse/ClickHouse/pull/16007) ([Anton Popov](https://github.com/CurtizJ)).
* Prevent replica hang for 5-10 mins when replication error happens after a period of inactivity. [#15987](https://github.com/ClickHouse/ClickHouse/pull/15987) ([filimonov](https://github.com/filimonov)).
* Fix rare segfaults when inserting into or selecting from MaterializedView and concurrently dropping target table (for Atomic database engine). [#15984](https://github.com/ClickHouse/ClickHouse/pull/15984) ([tavplubix](https://github.com/tavplubix)).
* Fix ambiguity in parsing of settings profiles: `CREATE USER ... SETTINGS profile readonly` is now considered as using a profile named `readonly`, not a setting named `profile` with the readonly constraint. This fixes [#15628](https://github.com/ClickHouse/ClickHouse/issues/15628). [#15982](https://github.com/ClickHouse/ClickHouse/pull/15982) ([Vitaly Baranov](https://github.com/vitlibar)).
* `MaterializedMySQL` (experimental feature): Fix crash on create database failure. [#15954](https://github.com/ClickHouse/ClickHouse/pull/15954) ([Winter Zhang](https://github.com/zhang2014)).
* Fixed `DROP TABLE IF EXISTS` failure with `Table ... doesn't exist` error when table is concurrently renamed (for Atomic database engine). Fixed rare deadlock when concurrently executing some DDL queries with multiple tables (like `DROP DATABASE` and `RENAME TABLE`). Fixed `DROP/DETACH DATABASE` failure with `Table ... doesn't exist` when concurrently executing `DROP/DETACH TABLE`. [#15934](https://github.com/ClickHouse/ClickHouse/pull/15934) ([tavplubix](https://github.com/tavplubix)).
* Fix incorrect empty result for query from `Distributed` table if query has `WHERE`, `PREWHERE` and `GLOBAL IN`. Fixes [#15792](https://github.com/ClickHouse/ClickHouse/issues/15792). [#15933](https://github.com/ClickHouse/ClickHouse/pull/15933) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixes [#12513](https://github.com/ClickHouse/ClickHouse/issues/12513): different expressions with the same alias when the query is reanalyzed. [#15886](https://github.com/ClickHouse/ClickHouse/pull/15886) ([Winter Zhang](https://github.com/zhang2014)).
* Fix possible very rare deadlocks in RBAC implementation. [#15875](https://github.com/ClickHouse/ClickHouse/pull/15875) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix exception `Block structure mismatch` in `SELECT ... ORDER BY DESC` queries which were executed after `ALTER MODIFY COLUMN` query. Fixes [#15800](https://github.com/ClickHouse/ClickHouse/issues/15800). [#15852](https://github.com/ClickHouse/ClickHouse/pull/15852) ([alesapin](https://github.com/alesapin)).
* `MaterializedMySQL` (experimental feature): Fix `select count()` inaccuracy. [#15767](https://github.com/ClickHouse/ClickHouse/pull/15767) ([tavplubix](https://github.com/tavplubix)).
* Fix some cases of queries, in which only virtual columns are selected. Previously `Not found column _nothing in block` exception may be thrown. Fixes [#12298](https://github.com/ClickHouse/ClickHouse/issues/12298). [#15756](https://github.com/ClickHouse/ClickHouse/pull/15756) ([Anton Popov](https://github.com/CurtizJ)).
* Fix drop of materialized view with inner table in Atomic database (hangs all subsequent DROP TABLE due to hang of the worker thread, due to recursive DROP TABLE for inner table of MV). [#15743](https://github.com/ClickHouse/ClickHouse/pull/15743) ([Azat Khuzhin](https://github.com/azat)).
* Allow moving a part to another disk/volume if the first attempt failed. [#15723](https://github.com/ClickHouse/ClickHouse/pull/15723) ([Pavel Kovalenko](https://github.com/Jokser)).
* Fix error `Cannot find column` which may happen at insertion into `MATERIALIZED VIEW` in case the query for `MV` contains `ARRAY JOIN`. [#15717](https://github.com/ClickHouse/ClickHouse/pull/15717) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed too low default value of `max_replicated_logs_to_keep` setting, which might cause replicas to become lost too often. Improve lost replica recovery process by choosing the most up-to-date replica to clone. Also do not remove old parts from lost replica, detach them instead. [#15701](https://github.com/ClickHouse/ClickHouse/pull/15701) ([tavplubix](https://github.com/tavplubix)).
* Fix rare race condition in dictionaries and tables from MySQL. [#15686](https://github.com/ClickHouse/ClickHouse/pull/15686) ([alesapin](https://github.com/alesapin)).
* Fix (benign) race condition in AMQP-CPP. [#15667](https://github.com/ClickHouse/ClickHouse/pull/15667) ([alesapin](https://github.com/alesapin)).
* Fix error `Cannot add simple transform to empty Pipe` which happened while reading from `Buffer` table which has different structure than destination table. It was possible if destination table returned empty result for query. Fixes [#15529](https://github.com/ClickHouse/ClickHouse/issues/15529). [#15662](https://github.com/ClickHouse/ClickHouse/pull/15662) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Proper error handling during insert into MergeTree with S3. MergeTree over S3 is an experimental feature. [#15657](https://github.com/ClickHouse/ClickHouse/pull/15657) ([Pavel Kovalenko](https://github.com/Jokser)).
* Fixed bug with S3 table function: region from URL was not applied to S3 client configuration. [#15646](https://github.com/ClickHouse/ClickHouse/pull/15646) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Fix the order of destruction for resources in `ReadFromStorage` step of query plan. It might cause crashes in rare cases. Possibly connected with [#15610](https://github.com/ClickHouse/ClickHouse/issues/15610). [#15645](https://github.com/ClickHouse/ClickHouse/pull/15645) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Subtract `ReadonlyReplica` metric when detaching readonly tables. [#15592](https://github.com/ClickHouse/ClickHouse/pull/15592) ([sundyli](https://github.com/sundy-li)).
* Fixed `Element ... is not a constant expression` error when using `JSON*` function result in `VALUES`, `LIMIT` or right side of `IN` operator. [#15589](https://github.com/ClickHouse/ClickHouse/pull/15589) ([tavplubix](https://github.com/tavplubix)).
* Query will finish faster in case of exception. Cancel execution on remote replicas if exception happens. [#15578](https://github.com/ClickHouse/ClickHouse/pull/15578) ([Azat Khuzhin](https://github.com/azat)).
* Prevent the possibility of error message `Could not calculate available disk space (statvfs), errno: 4, strerror: Interrupted system call`. This fixes [#15541](https://github.com/ClickHouse/ClickHouse/issues/15541). [#15557](https://github.com/ClickHouse/ClickHouse/pull/15557) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix `Database <db> doesn't exist.` in queries with IN and Distributed table when there's no database on initiator. [#15538](https://github.com/ClickHouse/ClickHouse/pull/15538) ([Artem Zuikov](https://github.com/4ertus2)).
* Mutation might hang waiting for some non-existent part after `MOVE` or `REPLACE PARTITION` or, in rare cases, after `DETACH` or `DROP PARTITION`. It's fixed. [#15537](https://github.com/ClickHouse/ClickHouse/pull/15537) ([tavplubix](https://github.com/tavplubix)).
* Fix bug when `ILIKE` operator stops being case insensitive if `LIKE` with the same pattern was executed. [#15536](https://github.com/ClickHouse/ClickHouse/pull/15536) ([alesapin](https://github.com/alesapin)).
* Fix `Missing columns` errors when selecting columns which are absent in data, but depend on other columns which are also absent in data. Fixes [#15530](https://github.com/ClickHouse/ClickHouse/issues/15530). [#15532](https://github.com/ClickHouse/ClickHouse/pull/15532) ([alesapin](https://github.com/alesapin)).
* Throw an error when a single parameter is passed to ReplicatedMergeTree instead of ignoring it. [#15516](https://github.com/ClickHouse/ClickHouse/pull/15516) ([nvartolomei](https://github.com/nvartolomei)).
* Fix bug with event subscription in DDLWorker which rarely may lead to query hangs in `ON CLUSTER`. Introduced in [#13450](https://github.com/ClickHouse/ClickHouse/issues/13450). [#15477](https://github.com/ClickHouse/ClickHouse/pull/15477) ([alesapin](https://github.com/alesapin)).
* Report proper error when the second argument of `boundingRatio` aggregate function has a wrong type. [#15407](https://github.com/ClickHouse/ClickHouse/pull/15407) ([detailyang](https://github.com/detailyang)).
* Fixes [#15365](https://github.com/ClickHouse/ClickHouse/issues/15365): attach a database with MySQL engine throws exception (no query context). [#15384](https://github.com/ClickHouse/ClickHouse/pull/15384) ([Winter Zhang](https://github.com/zhang2014)).
* Fix the case of multiple occurrences of column transformers in a select query. [#15378](https://github.com/ClickHouse/ClickHouse/pull/15378) ([Amos Bird](https://github.com/amosbird)).
* Fixed compression in `S3` storage. [#15376](https://github.com/ClickHouse/ClickHouse/pull/15376) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Fix bug where queries like `SELECT toStartOfDay(today())` fail complaining about empty time_zone argument. [#15319](https://github.com/ClickHouse/ClickHouse/pull/15319) ([Bharat Nallan](https://github.com/bharatnc)).
* Fix race condition during MergeTree table rename and background cleanup. [#15304](https://github.com/ClickHouse/ClickHouse/pull/15304) ([alesapin](https://github.com/alesapin)).
* Fix rare race condition on server startup when system logs are enabled. [#15300](https://github.com/ClickHouse/ClickHouse/pull/15300) ([alesapin](https://github.com/alesapin)).
* Fix hang of queries with a lot of subqueries to the same table of `MySQL` engine. Previously, if there were more than 16 subqueries to the same `MySQL` table in a query, it hung forever. [#15299](https://github.com/ClickHouse/ClickHouse/pull/15299) ([Anton Popov](https://github.com/CurtizJ)).
* Fix MSan report in QueryLog. Uninitialized memory can be used for the field `memory_usage`. [#15258](https://github.com/ClickHouse/ClickHouse/pull/15258) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix 'Unknown identifier' in GROUP BY when query has JOIN over Merge table. [#15242](https://github.com/ClickHouse/ClickHouse/pull/15242) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix instance crash when using `joinGet` with `LowCardinality` types. This fixes https://github.com/ClickHouse/ClickHouse/issues/15214. [#15220](https://github.com/ClickHouse/ClickHouse/pull/15220) ([Amos Bird](https://github.com/amosbird)).
* Fix bug in table engine `Buffer` which doesn't allow to insert data of new structure into `Buffer` after `ALTER` query. Fixes [#15117](https://github.com/ClickHouse/ClickHouse/issues/15117). [#15192](https://github.com/ClickHouse/ClickHouse/pull/15192) ([alesapin](https://github.com/alesapin)).
* Adjust Decimal field size in MySQL column definition packet. [#15152](https://github.com/ClickHouse/ClickHouse/pull/15152) ([maqroll](https://github.com/maqroll)).
* Fixes `Data compressed with different methods` in `join_algorithm='auto'`. Keep LowCardinality as type for left table join key in `join_algorithm='partial_merge'`. [#15088](https://github.com/ClickHouse/ClickHouse/pull/15088) ([Artem Zuikov](https://github.com/4ertus2)).
* Update `jemalloc` to fix `percpu_arena` with affinity mask. [#15035](https://github.com/ClickHouse/ClickHouse/pull/15035) ([Azat Khuzhin](https://github.com/azat)). [#14957](https://github.com/ClickHouse/ClickHouse/pull/14957) ([Azat Khuzhin](https://github.com/azat)).
* We already use padded comparison between String and FixedString (https://github.com/ClickHouse/ClickHouse/blob/master/src/Functions/FunctionsComparison.h#L333). This PR applies the same logic to field comparison which corrects the usage of FixedString as primary keys. This fixes https://github.com/ClickHouse/ClickHouse/issues/14908. [#15033](https://github.com/ClickHouse/ClickHouse/pull/15033) ([Amos Bird](https://github.com/amosbird)).
* If function `bar` was called with specifically crafted arguments, buffer overflow was possible. This closes [#13926](https://github.com/ClickHouse/ClickHouse/issues/13926). [#15028](https://github.com/ClickHouse/ClickHouse/pull/15028) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed `Cannot rename ... errno: 22, strerror: Invalid argument` error on DDL query execution in Atomic database when running clickhouse-server in Docker on Mac OS. [#15024](https://github.com/ClickHouse/ClickHouse/pull/15024) ([tavplubix](https://github.com/tavplubix)).
* Fix crash in RIGHT or FULL JOIN with `join_algorithm='auto'` when the memory limit is exceeded and we should switch from HashJoin to MergeJoin. [#15002](https://github.com/ClickHouse/ClickHouse/pull/15002) ([Artem Zuikov](https://github.com/4ertus2)).
* Now settings `number_of_free_entries_in_pool_to_execute_mutation` and `number_of_free_entries_in_pool_to_lower_max_size_of_merge` can be equal to `background_pool_size`. [#14975](https://github.com/ClickHouse/ClickHouse/pull/14975) ([alesapin](https://github.com/alesapin)).
* Fix to make predicate push down work when subquery contains `finalizeAggregation` function. Fixes [#14847](https://github.com/ClickHouse/ClickHouse/issues/14847). [#14937](https://github.com/ClickHouse/ClickHouse/pull/14937) ([filimonov](https://github.com/filimonov)).
* Publish CPU frequencies per logical core in `system.asynchronous_metrics`. This fixes https://github.com/ClickHouse/ClickHouse/issues/14923. [#14924](https://github.com/ClickHouse/ClickHouse/pull/14924) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* `MaterializedMySQL` (experimental feature): Fixed `.metadata.tmp File exists` error. [#14898](https://github.com/ClickHouse/ClickHouse/pull/14898) ([Winter Zhang](https://github.com/zhang2014)).
* Fix the issue when some invocations of `extractAllGroups` function may trigger "Memory limit exceeded" error. This fixes [#13383](https://github.com/ClickHouse/ClickHouse/issues/13383). [#14889](https://github.com/ClickHouse/ClickHouse/pull/14889) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix SIGSEGV for an attempt to INSERT into StorageFile with file descriptor. [#14887](https://github.com/ClickHouse/ClickHouse/pull/14887) ([Azat Khuzhin](https://github.com/azat)).
* Fixed segfault in `cache` dictionary [#14837](https://github.com/ClickHouse/ClickHouse/issues/14837). [#14879](https://github.com/ClickHouse/ClickHouse/pull/14879) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* `MaterializedMySQL` (experimental feature): Fixed bug in parsing MySQL binlog events, which causes `Attempt to read after eof` and `Packet payload is not fully read` in `MaterializeMySQL` database engine. [#14852](https://github.com/ClickHouse/ClickHouse/pull/14852) ([Winter Zhang](https://github.com/zhang2014)).
* Fix rare error in `SELECT` queries when the queried column has a `DEFAULT` expression which depends on another column which also has a `DEFAULT` and is not present in the select query and does not exist on disk. Partially fixes [#14531](https://github.com/ClickHouse/ClickHouse/issues/14531). [#14845](https://github.com/ClickHouse/ClickHouse/pull/14845) ([alesapin](https://github.com/alesapin)).
* Fix a problem where the server may get stuck on startup while talking to ZooKeeper, if the configuration files have to be fetched from ZK (using the `from_zk` include option). This fixes [#14814](https://github.com/ClickHouse/ClickHouse/issues/14814). [#14843](https://github.com/ClickHouse/ClickHouse/pull/14843) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Fix wrong monotonicity detection for shrunk `Int -> Int` cast of signed types. It might lead to incorrect query result. This bug is unveiled in [#14513](https://github.com/ClickHouse/ClickHouse/issues/14513). [#14783](https://github.com/ClickHouse/ClickHouse/pull/14783) ([Amos Bird](https://github.com/amosbird)).
* `Replace` column transformer should replace identifiers with cloned ASTs. This fixes https://github.com/ClickHouse/ClickHouse/issues/14695 . [#14734](https://github.com/ClickHouse/ClickHouse/pull/14734) ([Amos Bird](https://github.com/amosbird)).
* Fixed missed default database name in metadata of materialized view when executing `ALTER ... MODIFY QUERY`. [#14664](https://github.com/ClickHouse/ClickHouse/pull/14664) ([tavplubix](https://github.com/tavplubix)).
* Fix bug when `ALTER UPDATE` mutation with `Nullable` column in assignment expression and constant value (like `UPDATE x = 42`) leads to incorrect value in column or segfault. Fixes [#13634](https://github.com/ClickHouse/ClickHouse/issues/13634), [#14045](https://github.com/ClickHouse/ClickHouse/issues/14045). [#14646](https://github.com/ClickHouse/ClickHouse/pull/14646) ([alesapin](https://github.com/alesapin)).
* Fix wrong Decimal multiplication result that caused a wrong decimal scale of the result column. [#14603](https://github.com/ClickHouse/ClickHouse/pull/14603) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix function `has` with `LowCardinality` of `Nullable`. [#14591](https://github.com/ClickHouse/ClickHouse/pull/14591) ([Mike](https://github.com/myrrc)).
* Cleanup data directory after Zookeeper exceptions during CreateQuery for StorageReplicatedMergeTree Engine. [#14563](https://github.com/ClickHouse/ClickHouse/pull/14563) ([Bharat Nallan](https://github.com/bharatnc)).
* Fix rare segfaults in functions with combinator `-Resample`, which could appear in result of overflow with very large parameters. [#14562](https://github.com/ClickHouse/ClickHouse/pull/14562) ([Anton Popov](https://github.com/CurtizJ)).
* Fix a bug when converting `Nullable(String)` to Enum. Introduced by https://github.com/ClickHouse/ClickHouse/pull/12745. This fixes https://github.com/ClickHouse/ClickHouse/issues/14435. [#14530](https://github.com/ClickHouse/ClickHouse/pull/14530) ([Amos Bird](https://github.com/amosbird)).
* Fixed the incorrect sorting order of `Nullable` column. This fixes [#14344](https://github.com/ClickHouse/ClickHouse/issues/14344). [#14495](https://github.com/ClickHouse/ClickHouse/pull/14495) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix `currentDatabase()` function cannot be used in `ON CLUSTER` ddl query. [#14211](https://github.com/ClickHouse/ClickHouse/pull/14211) ([Winter Zhang](https://github.com/zhang2014)).
* `MaterializedMySQL` (experimental feature): Fixed `Packet payload is not fully read` error in `MaterializeMySQL` database engine. [#14696](https://github.com/ClickHouse/ClickHouse/pull/14696) ([BohuTANG](https://github.com/BohuTANG)).


#### Improvement

* Enable `Atomic` database engine by default for newly created databases. [#15003](https://github.com/ClickHouse/ClickHouse/pull/15003) ([tavplubix](https://github.com/tavplubix)).
* Add the ability to specify specialized codecs like `Delta`, `T64`, etc. for columns with subtypes. Implements [#12551](https://github.com/ClickHouse/ClickHouse/issues/12551), fixes [#11397](https://github.com/ClickHouse/ClickHouse/issues/11397), fixes [#4609](https://github.com/ClickHouse/ClickHouse/issues/4609). [#15089](https://github.com/ClickHouse/ClickHouse/pull/15089) ([alesapin](https://github.com/alesapin)).
* Dynamic reload of zookeeper config. [#14678](https://github.com/ClickHouse/ClickHouse/pull/14678) ([sundyli](https://github.com/sundy-li)).
* Now it's allowed to execute `ALTER ... ON CLUSTER` queries regardless of the `<internal_replication>` setting in cluster config. [#16075](https://github.com/ClickHouse/ClickHouse/pull/16075) ([alesapin](https://github.com/alesapin)).
* Now `joinGet` supports multi-key lookup. Continuation of [#12418](https://github.com/ClickHouse/ClickHouse/issues/12418). [#13015](https://github.com/ClickHouse/ClickHouse/pull/13015) ([Amos Bird](https://github.com/amosbird)).
* Wait for `DROP/DETACH TABLE` to actually finish if `NO DELAY` or `SYNC` is specified for `Atomic` database. [#15448](https://github.com/ClickHouse/ClickHouse/pull/15448) ([tavplubix](https://github.com/tavplubix)).
* Now it's possible to change the type of version column for `VersionedCollapsingMergeTree` with `ALTER` query. [#15442](https://github.com/ClickHouse/ClickHouse/pull/15442) ([alesapin](https://github.com/alesapin)).
* Unfold `{database}`, `{table}` and `{uuid}` macros in `zookeeper_path` on replicated table creation. Do not allow `RENAME TABLE` if it may break `zookeeper_path` after server restart. Fixes [#6917](https://github.com/ClickHouse/ClickHouse/issues/6917). [#15348](https://github.com/ClickHouse/ClickHouse/pull/15348) ([tavplubix](https://github.com/tavplubix)).
|
||||||
|
* The function `now` allows an argument with timezone. This closes [#15264](https://github.com/ClickHouse/ClickHouse/issues/15264). [#15285](https://github.com/ClickHouse/ClickHouse/pull/15285) ([flynn](https://github.com/ucasFL)).
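For example:

```sql
-- The returned DateTime carries the requested time zone.
SELECT now('UTC'), now('Europe/Moscow');
```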
* Do not allow connections to ClickHouse server until all scripts in `/docker-entrypoint-initdb.d/` are executed. [#15244](https://github.com/ClickHouse/ClickHouse/pull/15244) ([Aleksei Kozharin](https://github.com/alekseik1)).
* Added `optimize` setting to `EXPLAIN PLAN` query. If enabled, query plan level optimisations are applied. Enabled by default. [#15201](https://github.com/ClickHouse/ClickHouse/pull/15201) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
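A sketch of how the setting can be used (table and column names are hypothetical):

```sql
-- Show the query plan without plan-level optimisations applied.
EXPLAIN PLAN optimize = 0
SELECT sum(x) FROM t GROUP BY y;
```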
* Proper exception message for wrong number of arguments of CAST. This closes [#13992](https://github.com/ClickHouse/ClickHouse/issues/13992). [#15029](https://github.com/ClickHouse/ClickHouse/pull/15029) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add option to disable TTL move on data part insert. [#15000](https://github.com/ClickHouse/ClickHouse/pull/15000) ([Pavel Kovalenko](https://github.com/Jokser)).
* Ignore key constraints when doing mutations. Without this pull request, it's not possible to do mutations when `force_index_by_date = 1` or `force_primary_key = 1`. [#14973](https://github.com/ClickHouse/ClickHouse/pull/14973) ([Amos Bird](https://github.com/amosbird)).
* Allow to drop Replicated table if previous drop attempt was failed due to ZooKeeper session expiration. This fixes [#11891](https://github.com/ClickHouse/ClickHouse/issues/11891). [#14926](https://github.com/ClickHouse/ClickHouse/pull/14926) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed excessive settings constraint violation when running SELECT with SETTINGS from a distributed table. [#14876](https://github.com/ClickHouse/ClickHouse/pull/14876) ([Amos Bird](https://github.com/amosbird)).
* Provide a `load_balancing_first_offset` query setting to explicitly state which replica is the first one. It's used together with the `FIRST_OR_RANDOM` load balancing strategy, which allows controlling replica workload. [#14867](https://github.com/ClickHouse/ClickHouse/pull/14867) ([Amos Bird](https://github.com/amosbird)).
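A hedged sketch of the intended usage, assuming the settings are applied per session:

```sql
SET load_balancing = 'first_or_random';
SET load_balancing_first_offset = 1;  -- treat the replica at offset 1 as the "first" one
```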
* Show subqueries for `SET` and `JOIN` in `EXPLAIN` result. [#14856](https://github.com/ClickHouse/ClickHouse/pull/14856) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Allow using multi-volume storage configuration in storage `Distributed`. [#14839](https://github.com/ClickHouse/ClickHouse/pull/14839) ([Pavel Kovalenko](https://github.com/Jokser)).
* Construct `query_start_time` and `query_start_time_microseconds` from the same timespec. [#14831](https://github.com/ClickHouse/ClickHouse/pull/14831) ([Bharat Nallan](https://github.com/bharatnc)).
* Support for disabling persistency for `StorageJoin` and `StorageSet`, controlled by the setting `disable_set_and_join_persistency`. This solves issue [#6318](https://github.com/ClickHouse/ClickHouse/issues/6318). [#14776](https://github.com/ClickHouse/ClickHouse/pull/14776) ([vxider](https://github.com/Vxider)).
* Now `COLUMNS` can be used to wrap over a list of columns and apply column transformers afterwards. [#14775](https://github.com/ClickHouse/ClickHouse/pull/14775) ([Amos Bird](https://github.com/amosbird)).
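A minimal sketch (table and column names are hypothetical):

```sql
-- Wrap an explicit column list and apply a transformer to every column in it.
SELECT COLUMNS(a, b) APPLY(sum) FROM t;
```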
* Add `merge_algorithm` to `system.merges` table to improve merging inspections. [#14705](https://github.com/ClickHouse/ClickHouse/pull/14705) ([Amos Bird](https://github.com/amosbird)).
* Fix potential memory leak caused by zookeeper exists watch. [#14693](https://github.com/ClickHouse/ClickHouse/pull/14693) ([hustnn](https://github.com/hustnn)).
* Allow parallel execution of distributed DDL. [#14684](https://github.com/ClickHouse/ClickHouse/pull/14684) ([Azat Khuzhin](https://github.com/azat)).
* Add `QueryMemoryLimitExceeded` event counter. This closes [#14589](https://github.com/ClickHouse/ClickHouse/issues/14589). [#14647](https://github.com/ClickHouse/ClickHouse/pull/14647) ([fastio](https://github.com/fastio)).
* Fix some trailing whitespaces in query formatting. [#14595](https://github.com/ClickHouse/ClickHouse/pull/14595) ([Azat Khuzhin](https://github.com/azat)).
* ClickHouse treats partition expr and key expr differently. Partition expr is used to construct a minmax index containing related columns, while primary key expr is stored as an expr. Sometimes users might partition a table at coarser levels, such as `partition by i / 1000`. However, binary operators are not monotonic and this PR tries to fix that. It might also benefit other use cases. [#14513](https://github.com/ClickHouse/ClickHouse/pull/14513) ([Amos Bird](https://github.com/amosbird)).
* Add an option to skip access checks for `DiskS3`. `s3` disk is an experimental feature. [#14497](https://github.com/ClickHouse/ClickHouse/pull/14497) ([Pavel Kovalenko](https://github.com/Jokser)).
* Speed up server shutdown process if there are ongoing S3 requests. [#14496](https://github.com/ClickHouse/ClickHouse/pull/14496) ([Pavel Kovalenko](https://github.com/Jokser)).
* `SYSTEM RELOAD CONFIG` now throws an exception if it failed to reload and continues using the previous users.xml. The background periodic reloading also continues using the previous users.xml if it failed to reload. [#14492](https://github.com/ClickHouse/ClickHouse/pull/14492) ([Vitaly Baranov](https://github.com/vitlibar)).
* For INSERTs with inline data in VALUES format in the script mode of `clickhouse-client`, support semicolon as the data terminator, in addition to the new line. Closes [#12288](https://github.com/ClickHouse/ClickHouse/issues/12288). [#13192](https://github.com/ClickHouse/ClickHouse/pull/13192) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Support custom codecs in compact parts. [#12183](https://github.com/ClickHouse/ClickHouse/pull/12183) ([Anton Popov](https://github.com/CurtizJ)).

#### Performance Improvement

* Enable compact parts by default for small parts. This allows processing frequent inserts slightly more efficiently (4..100 times). [#11913](https://github.com/ClickHouse/ClickHouse/pull/11913) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Improve `quantileTDigest` performance. This fixes [#2668](https://github.com/ClickHouse/ClickHouse/issues/2668). [#15542](https://github.com/ClickHouse/ClickHouse/pull/15542) ([Kruglov Pavel](https://github.com/Avogar)).
* Significantly reduce memory usage in AggregatingInOrderTransform/optimize_aggregation_in_order. [#15543](https://github.com/ClickHouse/ClickHouse/pull/15543) ([Azat Khuzhin](https://github.com/azat)).
* Faster 256-bit multiplication. [#15418](https://github.com/ClickHouse/ClickHouse/pull/15418) ([Artem Zuikov](https://github.com/4ertus2)).
* Improve performance of 256-bit types using (u)int64_t as base type for wide integers. Original wide integers use 8-bit types as base. [#14859](https://github.com/ClickHouse/ClickHouse/pull/14859) ([Artem Zuikov](https://github.com/4ertus2)).
* Explicitly use a temporary disk to store vertical merge temporary data. [#15639](https://github.com/ClickHouse/ClickHouse/pull/15639) ([Grigory Pervakov](https://github.com/GrigoryPervakov)).
* Use one S3 DeleteObjects request instead of multiple DeleteObject requests in a loop. No functional changes, so it's covered by existing tests such as integration/test_log_family_s3. [#15238](https://github.com/ClickHouse/ClickHouse/pull/15238) ([ianton-ru](https://github.com/ianton-ru)).
* Fix `DateTime <op> DateTime` mistakenly choosing the slow generic implementation. This fixes [#15153](https://github.com/ClickHouse/ClickHouse/issues/15153). [#15178](https://github.com/ClickHouse/ClickHouse/pull/15178) ([Amos Bird](https://github.com/amosbird)).
* Improve performance of GROUP BY key of type `FixedString`. [#15034](https://github.com/ClickHouse/ClickHouse/pull/15034) ([Amos Bird](https://github.com/amosbird)).
* Only `mlock` code segment when starting clickhouse-server. In previous versions, all mapped regions were locked in memory, including debug info. Debug info is usually split into a separate file, but if it isn't, it led to +2..3 GiB memory usage. [#14929](https://github.com/ClickHouse/ClickHouse/pull/14929) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* ClickHouse binary became smaller due to link time optimization.

#### Build/Testing/Packaging Improvement

* Now we use clang-11 for production ClickHouse build. [#15239](https://github.com/ClickHouse/ClickHouse/pull/15239) ([alesapin](https://github.com/alesapin)).
* Now we use clang-11 to build ClickHouse in CI. [#14846](https://github.com/ClickHouse/ClickHouse/pull/14846) ([alesapin](https://github.com/alesapin)).
* Switch binary builds (Linux, Darwin, AArch64, FreeBSD) to clang-11. [#15622](https://github.com/ClickHouse/ClickHouse/pull/15622) ([Ilya Yatsishin](https://github.com/qoega)).
* Now all test images use `llvm-symbolizer-11`. [#15069](https://github.com/ClickHouse/ClickHouse/pull/15069) ([alesapin](https://github.com/alesapin)).
* Allow to build with llvm-11. [#15366](https://github.com/ClickHouse/ClickHouse/pull/15366) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Switch from `clang-tidy-10` to `clang-tidy-11`. [#14922](https://github.com/ClickHouse/ClickHouse/pull/14922) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Use LLVM's experimental pass manager by default. [#15608](https://github.com/ClickHouse/ClickHouse/pull/15608) ([Danila Kutenin](https://github.com/danlark1)).
* Don't allow any C++ translation unit to build for more than 10 minutes or to use more than 10 GB of memory. This fixes [#14925](https://github.com/ClickHouse/ClickHouse/issues/14925). [#15060](https://github.com/ClickHouse/ClickHouse/pull/15060) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Make performance test more stable and representative by splitting test runs and profile runs. [#15027](https://github.com/ClickHouse/ClickHouse/pull/15027) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Attempt to make performance test more reliable. It is done by remapping the executable memory of the process on the fly with `madvise` to use transparent huge pages - it can lower the number of iTLB misses which is the main source of instabilities in performance tests. [#14685](https://github.com/ClickHouse/ClickHouse/pull/14685) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Convert to python3. This closes [#14886](https://github.com/ClickHouse/ClickHouse/issues/14886). [#15007](https://github.com/ClickHouse/ClickHouse/pull/15007) ([Azat Khuzhin](https://github.com/azat)).
* Fail early in functional tests if server failed to respond. This closes [#15262](https://github.com/ClickHouse/ClickHouse/issues/15262). [#15267](https://github.com/ClickHouse/ClickHouse/pull/15267) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Allow to run AArch64 version of clickhouse-server without configs. This facilitates [#15174](https://github.com/ClickHouse/ClickHouse/issues/15174). [#15266](https://github.com/ClickHouse/ClickHouse/pull/15266) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Improvements in CI docker images: get rid of ZooKeeper and use a single script for test configs installation. [#15215](https://github.com/ClickHouse/ClickHouse/pull/15215) ([alesapin](https://github.com/alesapin)).
* Fix CMake options forwarding in fast test script. Fixes error in [#14711](https://github.com/ClickHouse/ClickHouse/issues/14711). [#15155](https://github.com/ClickHouse/ClickHouse/pull/15155) ([alesapin](https://github.com/alesapin)).
* Added a script to perform hardware benchmark in a single command. [#15115](https://github.com/ClickHouse/ClickHouse/pull/15115) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Split huge test `test_dictionaries_all_layouts_and_sources` into smaller ones. [#15110](https://github.com/ClickHouse/ClickHouse/pull/15110) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Maybe fix MSan report in base64 (on servers with AVX-512). This fixes [#14006](https://github.com/ClickHouse/ClickHouse/issues/14006). [#15030](https://github.com/ClickHouse/ClickHouse/pull/15030) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Reformat and cleanup code in all integration test *.py files. [#14864](https://github.com/ClickHouse/ClickHouse/pull/14864) ([Bharat Nallan](https://github.com/bharatnc)).
* Fix MaterializeMySQL empty transaction unstable test case found in CI. [#14854](https://github.com/ClickHouse/ClickHouse/pull/14854) ([Winter Zhang](https://github.com/zhang2014)).
* Attempt to speed up build a little. [#14808](https://github.com/ClickHouse/ClickHouse/pull/14808) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Speed up build a little by removing unused headers. [#14714](https://github.com/ClickHouse/ClickHouse/pull/14714) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix build failure in OSX. [#14761](https://github.com/ClickHouse/ClickHouse/pull/14761) ([Winter Zhang](https://github.com/zhang2014)).
* Enable ccache by default in cmake if it's found in OS. [#14575](https://github.com/ClickHouse/ClickHouse/pull/14575) ([alesapin](https://github.com/alesapin)).
* Control CI builds configuration from the ClickHouse repository. [#14547](https://github.com/ClickHouse/ClickHouse/pull/14547) ([alesapin](https://github.com/alesapin)).
* In CMake files: moved some parts of options' descriptions to comments above them; replaced 0 -> `OFF`, 1 -> `ON` in `option` default values; added some descriptions and links to the docs; removed the `FUZZER` option (the `ENABLE_FUZZING` option enables the same functionality); removed the `ENABLE_GTEST_LIBRARY` option as there is `ENABLE_TESTS`. See the full description in PR: [#14711](https://github.com/ClickHouse/ClickHouse/pull/14711) ([Mike](https://github.com/myrrc)).
* Make binary a bit smaller (~50 Mb for debug version). [#14555](https://github.com/ClickHouse/ClickHouse/pull/14555) ([Artem Zuikov](https://github.com/4ertus2)).
* Use std::filesystem::path in ConfigProcessor for concatenating file paths. [#14558](https://github.com/ClickHouse/ClickHouse/pull/14558) ([Bharat Nallan](https://github.com/bharatnc)).
* Fix debug assertion in `bitShiftLeft()` when called with negative big integer. [#14697](https://github.com/ClickHouse/ClickHouse/pull/14697) ([Artem Zuikov](https://github.com/4ertus2)).

## ClickHouse release 20.9

### ClickHouse release v20.9.5.5-stable, 2020-11-13

#### Bug Fix

* Fix rare silent crashes when query profiler is on and ClickHouse is installed on OS with glibc version that has (supposedly) broken asynchronous unwind tables for some functions. This fixes [#15301](https://github.com/ClickHouse/ClickHouse/issues/15301). This fixes [#13098](https://github.com/ClickHouse/ClickHouse/issues/13098). [#16846](https://github.com/ClickHouse/ClickHouse/pull/16846) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Now when parsing AVRO from input the LowCardinality is removed from type. Fixes [#16188](https://github.com/ClickHouse/ClickHouse/issues/16188). [#16521](https://github.com/ClickHouse/ClickHouse/pull/16521) ([Mike](https://github.com/myrrc)).
* Fix rapid growth of metadata when using MySQL Master -> MySQL Slave -> ClickHouse MaterializeMySQL Engine, and `slave_parallel_worker` enabled on MySQL Slave, by properly shrinking GTID sets. This fixes [#15951](https://github.com/ClickHouse/ClickHouse/issues/15951). [#16504](https://github.com/ClickHouse/ClickHouse/pull/16504) ([TCeason](https://github.com/TCeason)).
* Fix DROP TABLE for Distributed (racy with INSERT). [#16409](https://github.com/ClickHouse/ClickHouse/pull/16409) ([Azat Khuzhin](https://github.com/azat)).
* Fix processing of very large entries in replication queue. Very large entries may appear in ALTER queries if table structure is extremely large (near 1 MB). This fixes [#16307](https://github.com/ClickHouse/ClickHouse/issues/16307). [#16332](https://github.com/ClickHouse/ClickHouse/pull/16332) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed the inconsistent behaviour when a part of return data could be dropped because the set for its filtration wasn't created. [#16308](https://github.com/ClickHouse/ClickHouse/pull/16308) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix bug with MySQL database. When a MySQL server used as a database engine is down, some queries raised an exception because they tried to get tables from the disabled server, although it's unnecessary. For example, query `SELECT ... FROM system.parts` should work only with MergeTree tables and not touch the MySQL database at all. [#16032](https://github.com/ClickHouse/ClickHouse/pull/16032) ([Kruglov Pavel](https://github.com/Avogar)).

### ClickHouse release v20.9.4.76-stable (2020-10-29)

#### Bug Fix

* Fix double free in case of exception in function `dictGet`. It could have happened if dictionary was loaded with error. [#16429](https://github.com/ClickHouse/ClickHouse/pull/16429) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix group by with totals/rollup/cube modifiers and min/max functions over group by keys. Fixes [#16393](https://github.com/ClickHouse/ClickHouse/issues/16393). [#16397](https://github.com/ClickHouse/ClickHouse/pull/16397) ([Anton Popov](https://github.com/CurtizJ)).
* Fix async Distributed INSERT w/ prefer_localhost_replica=0 and internal_replication. [#16358](https://github.com/ClickHouse/ClickHouse/pull/16358) ([Azat Khuzhin](https://github.com/azat)).
* Fix a very wrong code in TwoLevelStringHashTable implementation, which might lead to memory leak. I'm surprised how this bug could lurk for so long. [#16264](https://github.com/ClickHouse/ClickHouse/pull/16264) ([Amos Bird](https://github.com/amosbird)).
* Fix the case when memory can be overallocated regardless to the limit. This closes [#14560](https://github.com/ClickHouse/ClickHouse/issues/14560). [#16206](https://github.com/ClickHouse/ClickHouse/pull/16206) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix `ALTER MODIFY ... ORDER BY` query hang for `ReplicatedVersionedCollapsingMergeTree`. This fixes [#15980](https://github.com/ClickHouse/ClickHouse/issues/15980). [#16011](https://github.com/ClickHouse/ClickHouse/pull/16011) ([alesapin](https://github.com/alesapin)).
* Fix collate name & charset name parser and support `length = 0` for string type. [#16008](https://github.com/ClickHouse/ClickHouse/pull/16008) ([Winter Zhang](https://github.com/zhang2014)).
* Allow to use direct layout for dictionaries with complex keys. [#16007](https://github.com/ClickHouse/ClickHouse/pull/16007) ([Anton Popov](https://github.com/CurtizJ)).
* Prevent replica hang for 5-10 mins when replication error happens after a period of inactivity. [#15987](https://github.com/ClickHouse/ClickHouse/pull/15987) ([filimonov](https://github.com/filimonov)).
* Fix rare segfaults when inserting into or selecting from MaterializedView and concurrently dropping target table (for Atomic database engine). [#15984](https://github.com/ClickHouse/ClickHouse/pull/15984) ([tavplubix](https://github.com/tavplubix)).
* Fix ambiguity in parsing of settings profiles: `CREATE USER ... SETTINGS profile readonly` is now considered as using a profile named `readonly`, not a setting named `profile` with the readonly constraint. This fixes [#15628](https://github.com/ClickHouse/ClickHouse/issues/15628). [#15982](https://github.com/ClickHouse/ClickHouse/pull/15982) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix a crash when database creation fails. [#15954](https://github.com/ClickHouse/ClickHouse/pull/15954) ([Winter Zhang](https://github.com/zhang2014)).
* Fixed `DROP TABLE IF EXISTS` failure with `Table ... doesn't exist` error when table is concurrently renamed (for Atomic database engine). Fixed rare deadlock when concurrently executing some DDL queries with multiple tables (like `DROP DATABASE` and `RENAME TABLE`). Fixed `DROP/DETACH DATABASE` failure with `Table ... doesn't exist` when concurrently executing `DROP/DETACH TABLE`. [#15934](https://github.com/ClickHouse/ClickHouse/pull/15934) ([tavplubix](https://github.com/tavplubix)).
* Fix incorrect empty result for query from `Distributed` table if query has `WHERE`, `PREWHERE` and `GLOBAL IN`. Fixes [#15792](https://github.com/ClickHouse/ClickHouse/issues/15792). [#15933](https://github.com/ClickHouse/ClickHouse/pull/15933) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix possible deadlocks in RBAC. [#15875](https://github.com/ClickHouse/ClickHouse/pull/15875) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix exception `Block structure mismatch` in `SELECT ... ORDER BY DESC` queries which were executed after `ALTER MODIFY COLUMN` query. Fixes [#15800](https://github.com/ClickHouse/ClickHouse/issues/15800). [#15852](https://github.com/ClickHouse/ClickHouse/pull/15852) ([alesapin](https://github.com/alesapin)).
* Fix `select count()` inaccuracy for MaterializeMySQL. [#15767](https://github.com/ClickHouse/ClickHouse/pull/15767) ([tavplubix](https://github.com/tavplubix)).
* Fix some cases of queries, in which only virtual columns are selected. Previously `Not found column _nothing in block` exception might be thrown. Fixes [#12298](https://github.com/ClickHouse/ClickHouse/issues/12298). [#15756](https://github.com/ClickHouse/ClickHouse/pull/15756) ([Anton Popov](https://github.com/CurtizJ)).
* Fixed too low default value of `max_replicated_logs_to_keep` setting, which might cause replicas to become lost too often. Improve lost replica recovery process by choosing the most up-to-date replica to clone. Also do not remove old parts from lost replica, detach them instead. [#15701](https://github.com/ClickHouse/ClickHouse/pull/15701) ([tavplubix](https://github.com/tavplubix)).
* Fix error `Cannot add simple transform to empty Pipe` which happened while reading from `Buffer` table which has different structure than destination table. It was possible if destination table returned empty result for query. Fixes [#15529](https://github.com/ClickHouse/ClickHouse/issues/15529). [#15662](https://github.com/ClickHouse/ClickHouse/pull/15662) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed bug with globs in S3 table function, region from URL was not applied to S3 client configuration. [#15646](https://github.com/ClickHouse/ClickHouse/pull/15646) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Decrement the `ReadonlyReplica` metric when detaching read-only tables. This fixes [#15598](https://github.com/ClickHouse/ClickHouse/issues/15598). [#15592](https://github.com/ClickHouse/ClickHouse/pull/15592) ([sundyli](https://github.com/sundy-li)).
* Throw an error when a single parameter is passed to ReplicatedMergeTree instead of ignoring it. [#15516](https://github.com/ClickHouse/ClickHouse/pull/15516) ([nvartolomei](https://github.com/nvartolomei)).

#### Improvement

* Now it's allowed to execute `ALTER ... ON CLUSTER` queries regardless of the `<internal_replication>` setting in cluster config. [#16075](https://github.com/ClickHouse/ClickHouse/pull/16075) ([alesapin](https://github.com/alesapin)).
* Unfold `{database}`, `{table}` and `{uuid}` macros in `ReplicatedMergeTree` arguments on table creation. [#16160](https://github.com/ClickHouse/ClickHouse/pull/16160) ([tavplubix](https://github.com/tavplubix)).
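A hedged sketch of what the unfolding means (table and database names are hypothetical):

```sql
CREATE TABLE db.events (`x` UInt64)
ENGINE = ReplicatedMergeTree('/clickhouse/{database}/{table}', '{replica}')
ORDER BY x;
-- {database} and {table} are substituted once, at creation time, so the
-- ZooKeeper path stays stable even if the table is later renamed.
```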

### ClickHouse release v20.9.3.45-stable (2020-10-09)

#### Bug Fix

* Fix error `Cannot find column` which may happen at insertion into `MATERIALIZED VIEW` in case if query for `MV` contains `ARRAY JOIN`. [#15717](https://github.com/ClickHouse/ClickHouse/pull/15717) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix race condition in AMQP-CPP. [#15667](https://github.com/ClickHouse/ClickHouse/pull/15667) ([alesapin](https://github.com/alesapin)).
* Fix the order of destruction for resources in `ReadFromStorage` step of query plan. It might cause crashes in rare cases. Possibly connected with [#15610](https://github.com/ClickHouse/ClickHouse/issues/15610). [#15645](https://github.com/ClickHouse/ClickHouse/pull/15645) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed `Element ... is not a constant expression` error when using `JSON*` function result in `VALUES`, `LIMIT` or right side of `IN` operator. [#15589](https://github.com/ClickHouse/ClickHouse/pull/15589) ([tavplubix](https://github.com/tavplubix)).
* Prevent the possibility of error message `Could not calculate available disk space (statvfs), errno: 4, strerror: Interrupted system call`. This fixes [#15541](https://github.com/ClickHouse/ClickHouse/issues/15541). [#15557](https://github.com/ClickHouse/ClickHouse/pull/15557) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Significantly reduce memory usage in AggregatingInOrderTransform/optimize_aggregation_in_order. [#15543](https://github.com/ClickHouse/ClickHouse/pull/15543) ([Azat Khuzhin](https://github.com/azat)).
* Mutation might hang waiting for some non-existent part after `MOVE` or `REPLACE PARTITION` or, in rare cases, after `DETACH` or `DROP PARTITION`. It's fixed. [#15537](https://github.com/ClickHouse/ClickHouse/pull/15537) ([tavplubix](https://github.com/tavplubix)).
* Fix bug when `ILIKE` operator stops being case insensitive if `LIKE` with the same pattern was executed. [#15536](https://github.com/ClickHouse/ClickHouse/pull/15536) ([alesapin](https://github.com/alesapin)).
* Fix `Missing columns` errors when selecting columns which absent in data, but depend on other columns which also absent in data. Fixes [#15530](https://github.com/ClickHouse/ClickHouse/issues/15530). [#15532](https://github.com/ClickHouse/ClickHouse/pull/15532) ([alesapin](https://github.com/alesapin)).
* Fix bug with event subscription in DDLWorker which rarely may lead to query hangs in `ON CLUSTER`. Introduced in [#13450](https://github.com/ClickHouse/ClickHouse/issues/13450). [#15477](https://github.com/ClickHouse/ClickHouse/pull/15477) ([alesapin](https://github.com/alesapin)).
* Report proper error when the second argument of `boundingRatio` aggregate function has a wrong type. [#15407](https://github.com/ClickHouse/ClickHouse/pull/15407) ([detailyang](https://github.com/detailyang)).
* Fix bug where queries like `SELECT toStartOfDay(today())` failed complaining about empty time_zone argument. [#15319](https://github.com/ClickHouse/ClickHouse/pull/15319) ([Bharat Nallan](https://github.com/bharatnc)).
* Fix race condition during MergeTree table rename and background cleanup. [#15304](https://github.com/ClickHouse/ClickHouse/pull/15304) ([alesapin](https://github.com/alesapin)).
* Fix rare race condition on server startup when system.logs are enabled. [#15300](https://github.com/ClickHouse/ClickHouse/pull/15300) ([alesapin](https://github.com/alesapin)).
* Fix MSan report in QueryLog. Uninitialized memory can be used for the field `memory_usage`. [#15258](https://github.com/ClickHouse/ClickHouse/pull/15258) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix instance crash when using joinGet with LowCardinality types. This fixes [#15214](https://github.com/ClickHouse/ClickHouse/issues/15214). [#15220](https://github.com/ClickHouse/ClickHouse/pull/15220) ([Amos Bird](https://github.com/amosbird)).
* Fix bug in table engine `Buffer` which doesn't allow to insert data of new structure into `Buffer` after `ALTER` query. Fixes [#15117](https://github.com/ClickHouse/ClickHouse/issues/15117). [#15192](https://github.com/ClickHouse/ClickHouse/pull/15192) ([alesapin](https://github.com/alesapin)).
* Adjust decimals field size in mysql column definition packet. [#15152](https://github.com/ClickHouse/ClickHouse/pull/15152) ([maqroll](https://github.com/maqroll)).
* Fixed `Cannot rename ... errno: 22, strerror: Invalid argument` error on DDL query execution in Atomic database when running clickhouse-server in docker on Mac OS. [#15024](https://github.com/ClickHouse/ClickHouse/pull/15024) ([tavplubix](https://github.com/tavplubix)).
* Fix to make predicate push down work when subquery contains finalizeAggregation function. Fixes [#14847](https://github.com/ClickHouse/ClickHouse/issues/14847). [#14937](https://github.com/ClickHouse/ClickHouse/pull/14937) ([filimonov](https://github.com/filimonov)).
* Fix a problem where the server may get stuck on startup while talking to ZooKeeper, if the configuration files have to be fetched from ZK (using the `from_zk` include option). This fixes [#14814](https://github.com/ClickHouse/ClickHouse/issues/14814). [#14843](https://github.com/ClickHouse/ClickHouse/pull/14843) ([Alexander Kuzmenkov](https://github.com/akuzm)).

#### Improvement

* Now it's possible to change the type of version column for `VersionedCollapsingMergeTree` with `ALTER` query. [#15442](https://github.com/ClickHouse/ClickHouse/pull/15442) ([alesapin](https://github.com/alesapin)).

### ClickHouse release v20.9.2.20, 2020-09-22

#### New Feature

## ClickHouse release 20.8

### ClickHouse release v20.8.6.6-lts, 2020-11-13

#### Bug Fix

* Fix rare silent crashes when query profiler is on and ClickHouse is installed on OS with glibc version that has (supposedly) broken asynchronous unwind tables for some functions. This fixes [#15301](https://github.com/ClickHouse/ClickHouse/issues/15301). This fixes [#13098](https://github.com/ClickHouse/ClickHouse/issues/13098). [#16846](https://github.com/ClickHouse/ClickHouse/pull/16846) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Now when parsing AVRO from input the LowCardinality is removed from type. Fixes [#16188](https://github.com/ClickHouse/ClickHouse/issues/16188). [#16521](https://github.com/ClickHouse/ClickHouse/pull/16521) ([Mike](https://github.com/myrrc)).
* Fix rapid growth of metadata when using MySQL Master -> MySQL Slave -> ClickHouse MaterializeMySQL Engine, and `slave_parallel_worker` enabled on MySQL Slave, by properly shrinking GTID sets. This fixes [#15951](https://github.com/ClickHouse/ClickHouse/issues/15951). [#16504](https://github.com/ClickHouse/ClickHouse/pull/16504) ([TCeason](https://github.com/TCeason)).
* Fix DROP TABLE for Distributed (racy with INSERT). [#16409](https://github.com/ClickHouse/ClickHouse/pull/16409) ([Azat Khuzhin](https://github.com/azat)).
* Fix processing of very large entries in replication queue. Very large entries may appear in ALTER queries if table structure is extremely large (near 1 MB). This fixes [#16307](https://github.com/ClickHouse/ClickHouse/issues/16307). [#16332](https://github.com/ClickHouse/ClickHouse/pull/16332) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed the inconsistent behaviour when a part of return data could be dropped because the set for its filtration wasn't created. [#16308](https://github.com/ClickHouse/ClickHouse/pull/16308) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix bug with MySQL database engine: when the MySQL server used as a database engine is down, some queries raised an exception because they unnecessarily tried to get tables from the unavailable server. For example, query `SELECT ... FROM system.parts` should work only with MergeTree tables and not touch the MySQL database at all. [#16032](https://github.com/ClickHouse/ClickHouse/pull/16032) ([Kruglov Pavel](https://github.com/Avogar)).

### ClickHouse release v20.8.5.45-lts, 2020-10-29

#### Bug Fix

* Fix double free in case of exception in function `dictGet`. It could have happened if dictionary was loaded with error. [#16429](https://github.com/ClickHouse/ClickHouse/pull/16429) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix group by with totals/rollup/cube modifiers and min/max functions over group by keys. Fixes [#16393](https://github.com/ClickHouse/ClickHouse/issues/16393). [#16397](https://github.com/ClickHouse/ClickHouse/pull/16397) ([Anton Popov](https://github.com/CurtizJ)).
* Fix async Distributed INSERT w/ prefer_localhost_replica=0 and internal_replication. [#16358](https://github.com/ClickHouse/ClickHouse/pull/16358) ([Azat Khuzhin](https://github.com/azat)).
* Fix a possible memory leak during `GROUP BY` with string keys, caused by an error in `TwoLevelStringHashTable` implementation. [#16264](https://github.com/ClickHouse/ClickHouse/pull/16264) ([Amos Bird](https://github.com/amosbird)).
* Fix the case when memory can be overallocated regardless of the limit. This closes [#14560](https://github.com/ClickHouse/ClickHouse/issues/14560). [#16206](https://github.com/ClickHouse/ClickHouse/pull/16206) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix `ALTER MODIFY ... ORDER BY` query hang for `ReplicatedVersionedCollapsingMergeTree`. This fixes [#15980](https://github.com/ClickHouse/ClickHouse/issues/15980). [#16011](https://github.com/ClickHouse/ClickHouse/pull/16011) ([alesapin](https://github.com/alesapin)).
* Fix collate name & charset name parser and support `length = 0` for string type. [#16008](https://github.com/ClickHouse/ClickHouse/pull/16008) ([Winter Zhang](https://github.com/zhang2014)).
* Allow to use direct layout for dictionaries with complex keys. [#16007](https://github.com/ClickHouse/ClickHouse/pull/16007) ([Anton Popov](https://github.com/CurtizJ)).
* Prevent replica hang for 5-10 mins when replication error happens after a period of inactivity. [#15987](https://github.com/ClickHouse/ClickHouse/pull/15987) ([filimonov](https://github.com/filimonov)).
* Fix rare segfaults when inserting into or selecting from MaterializedView and concurrently dropping target table (for Atomic database engine). [#15984](https://github.com/ClickHouse/ClickHouse/pull/15984) ([tavplubix](https://github.com/tavplubix)).
* Fix ambiguity in parsing of settings profiles: `CREATE USER ... SETTINGS profile readonly` is now considered as using a profile named `readonly`, not a setting named `profile` with the readonly constraint. This fixes [#15628](https://github.com/ClickHouse/ClickHouse/issues/15628). [#15982](https://github.com/ClickHouse/ClickHouse/pull/15982) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix a crash when database creation fails. [#15954](https://github.com/ClickHouse/ClickHouse/pull/15954) ([Winter Zhang](https://github.com/zhang2014)).
* Fixed `DROP TABLE IF EXISTS` failure with `Table ... doesn't exist` error when table is concurrently renamed (for Atomic database engine). Fixed rare deadlock when concurrently executing some DDL queries with multiple tables (like `DROP DATABASE` and `RENAME TABLE`). Fixed `DROP/DETACH DATABASE` failure with `Table ... doesn't exist` when concurrently executing `DROP/DETACH TABLE`. [#15934](https://github.com/ClickHouse/ClickHouse/pull/15934) ([tavplubix](https://github.com/tavplubix)).
* Fix incorrect empty result for query from `Distributed` table if query has `WHERE`, `PREWHERE` and `GLOBAL IN`. Fixes [#15792](https://github.com/ClickHouse/ClickHouse/issues/15792). [#15933](https://github.com/ClickHouse/ClickHouse/pull/15933) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix possible deadlocks in RBAC. [#15875](https://github.com/ClickHouse/ClickHouse/pull/15875) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix exception `Block structure mismatch` in `SELECT ... ORDER BY DESC` queries which were executed after `ALTER MODIFY COLUMN` query. Fixes [#15800](https://github.com/ClickHouse/ClickHouse/issues/15800). [#15852](https://github.com/ClickHouse/ClickHouse/pull/15852) ([alesapin](https://github.com/alesapin)).
* Fix some cases of queries, in which only virtual columns are selected. Previously `Not found column _nothing in block` exception may be thrown. Fixes [#12298](https://github.com/ClickHouse/ClickHouse/issues/12298). [#15756](https://github.com/ClickHouse/ClickHouse/pull/15756) ([Anton Popov](https://github.com/CurtizJ)).
* Fix error `Cannot find column` which may happen at insertion into `MATERIALIZED VIEW` in case if query for `MV` contains `ARRAY JOIN`. [#15717](https://github.com/ClickHouse/ClickHouse/pull/15717) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed too low default value of `max_replicated_logs_to_keep` setting, which might cause replicas to become lost too often. Improve lost replica recovery process by choosing the most up-to-date replica to clone. Also do not remove old parts from lost replica, detach them instead. [#15701](https://github.com/ClickHouse/ClickHouse/pull/15701) ([tavplubix](https://github.com/tavplubix)).
* Fix error `Cannot add simple transform to empty Pipe` which happened while reading from `Buffer` table which has different structure than destination table. It was possible if destination table returned empty result for query. Fixes [#15529](https://github.com/ClickHouse/ClickHouse/issues/15529). [#15662](https://github.com/ClickHouse/ClickHouse/pull/15662) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed bug with globs in S3 table function, region from URL was not applied to S3 client configuration. [#15646](https://github.com/ClickHouse/ClickHouse/pull/15646) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Decrement the `ReadonlyReplica` metric when detaching read-only tables. This fixes [#15598](https://github.com/ClickHouse/ClickHouse/issues/15598). [#15592](https://github.com/ClickHouse/ClickHouse/pull/15592) ([sundyli](https://github.com/sundy-li)).
* Throw an error when a single parameter is passed to ReplicatedMergeTree instead of ignoring it. [#15516](https://github.com/ClickHouse/ClickHouse/pull/15516) ([nvartolomei](https://github.com/nvartolomei)).

#### Improvement

* Now it's allowed to execute `ALTER ... ON CLUSTER` queries regardless of the `<internal_replication>` setting in cluster config. [#16075](https://github.com/ClickHouse/ClickHouse/pull/16075) ([alesapin](https://github.com/alesapin)).
* Unfold `{database}`, `{table}` and `{uuid}` macros in `ReplicatedMergeTree` arguments on table creation. [#16159](https://github.com/ClickHouse/ClickHouse/pull/16159) ([tavplubix](https://github.com/tavplubix)).

### ClickHouse release v20.8.4.11-lts, 2020-10-09

#### Bug Fix

* Fix the order of destruction for resources in `ReadFromStorage` step of query plan. It might cause crashes in rare cases. Possibly connected with [#15610](https://github.com/ClickHouse/ClickHouse/issues/15610). [#15645](https://github.com/ClickHouse/ClickHouse/pull/15645) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed `Element ... is not a constant expression` error when using `JSON*` function result in `VALUES`, `LIMIT` or right side of `IN` operator. [#15589](https://github.com/ClickHouse/ClickHouse/pull/15589) ([tavplubix](https://github.com/tavplubix)).
* Prevent the possibility of error message `Could not calculate available disk space (statvfs), errno: 4, strerror: Interrupted system call`. This fixes [#15541](https://github.com/ClickHouse/ClickHouse/issues/15541). [#15557](https://github.com/ClickHouse/ClickHouse/pull/15557) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Significantly reduce memory usage in AggregatingInOrderTransform/optimize_aggregation_in_order. [#15543](https://github.com/ClickHouse/ClickHouse/pull/15543) ([Azat Khuzhin](https://github.com/azat)).
* Mutation might hang waiting for some non-existent part after `MOVE` or `REPLACE PARTITION` or, in rare cases, after `DETACH` or `DROP PARTITION`. It's fixed. [#15537](https://github.com/ClickHouse/ClickHouse/pull/15537) ([tavplubix](https://github.com/tavplubix)).
* Fix bug when `ILIKE` operator stops being case insensitive if `LIKE` with the same pattern was executed. [#15536](https://github.com/ClickHouse/ClickHouse/pull/15536) ([alesapin](https://github.com/alesapin)).
* Fix `Missing columns` errors when selecting columns which are absent in data but depend on other columns which are also absent in data. Fixes [#15530](https://github.com/ClickHouse/ClickHouse/issues/15530). [#15532](https://github.com/ClickHouse/ClickHouse/pull/15532) ([alesapin](https://github.com/alesapin)).
* Fix bug with event subscription in DDLWorker which rarely may lead to query hangs in `ON CLUSTER`. Introduced in [#13450](https://github.com/ClickHouse/ClickHouse/issues/13450). [#15477](https://github.com/ClickHouse/ClickHouse/pull/15477) ([alesapin](https://github.com/alesapin)).
* Report proper error when the second argument of `boundingRatio` aggregate function has a wrong type. [#15407](https://github.com/ClickHouse/ClickHouse/pull/15407) ([detailyang](https://github.com/detailyang)).
* Fix race condition during MergeTree table rename and background cleanup. [#15304](https://github.com/ClickHouse/ClickHouse/pull/15304) ([alesapin](https://github.com/alesapin)).
* Fix rare race condition on server startup when system.logs are enabled. [#15300](https://github.com/ClickHouse/ClickHouse/pull/15300) ([alesapin](https://github.com/alesapin)).
* Fix MSan report in QueryLog. Uninitialized memory can be used for the field `memory_usage`. [#15258](https://github.com/ClickHouse/ClickHouse/pull/15258) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix instance crash when using joinGet with LowCardinality types. This fixes https://github.com/ClickHouse/ClickHouse/issues/15214. [#15220](https://github.com/ClickHouse/ClickHouse/pull/15220) ([Amos Bird](https://github.com/amosbird)).
* Fix bug in table engine `Buffer` which doesn't allow inserting data of new structure into `Buffer` after `ALTER` query. Fixes [#15117](https://github.com/ClickHouse/ClickHouse/issues/15117). [#15192](https://github.com/ClickHouse/ClickHouse/pull/15192) ([alesapin](https://github.com/alesapin)).
* Adjust decimals field size in mysql column definition packet. [#15152](https://github.com/ClickHouse/ClickHouse/pull/15152) ([maqroll](https://github.com/maqroll)).
* We already use padded comparison between String and FixedString (https://github.com/ClickHouse/ClickHouse/blob/master/src/Functions/FunctionsComparison.h#L333). This PR applies the same logic to field comparison which corrects the usage of FixedString as primary keys. This fixes https://github.com/ClickHouse/ClickHouse/issues/14908. [#15033](https://github.com/ClickHouse/ClickHouse/pull/15033) ([Amos Bird](https://github.com/amosbird)).
* If function `bar` was called with specifically crafted arguments, buffer overflow was possible. This closes [#13926](https://github.com/ClickHouse/ClickHouse/issues/13926). [#15028](https://github.com/ClickHouse/ClickHouse/pull/15028) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed `Cannot rename ... errno: 22, strerror: Invalid argument` error on DDL query execution in Atomic database when running clickhouse-server in docker on Mac OS. [#15024](https://github.com/ClickHouse/ClickHouse/pull/15024) ([tavplubix](https://github.com/tavplubix)).
* Now settings `number_of_free_entries_in_pool_to_execute_mutation` and `number_of_free_entries_in_pool_to_lower_max_size_of_merge` can be equal to `background_pool_size`. [#14975](https://github.com/ClickHouse/ClickHouse/pull/14975) ([alesapin](https://github.com/alesapin)).
* Fix to make predicate push down work when subquery contains finalizeAggregation function. Fixes [#14847](https://github.com/ClickHouse/ClickHouse/issues/14847). [#14937](https://github.com/ClickHouse/ClickHouse/pull/14937) ([filimonov](https://github.com/filimonov)).
* Publish CPU frequencies per logical core in `system.asynchronous_metrics`. This fixes https://github.com/ClickHouse/ClickHouse/issues/14923. [#14924](https://github.com/ClickHouse/ClickHouse/pull/14924) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Fixed `.metadata.tmp File exists` error when using `MaterializeMySQL` database engine. [#14898](https://github.com/ClickHouse/ClickHouse/pull/14898) ([Winter Zhang](https://github.com/zhang2014)).
* Fix a problem where the server may get stuck on startup while talking to ZooKeeper, if the configuration files have to be fetched from ZK (using the `from_zk` include option). This fixes [#14814](https://github.com/ClickHouse/ClickHouse/issues/14814). [#14843](https://github.com/ClickHouse/ClickHouse/pull/14843) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Fix wrong monotonicity detection for shrunk `Int -> Int` cast of signed types. It might lead to incorrect query result. This bug is unveiled in [#14513](https://github.com/ClickHouse/ClickHouse/issues/14513). [#14783](https://github.com/ClickHouse/ClickHouse/pull/14783) ([Amos Bird](https://github.com/amosbird)).
* Fixed the incorrect sorting order of `Nullable` column. This fixes [#14344](https://github.com/ClickHouse/ClickHouse/issues/14344). [#14495](https://github.com/ClickHouse/ClickHouse/pull/14495) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).

#### Improvement

* Now it's possible to change the type of version column for `VersionedCollapsingMergeTree` with `ALTER` query. [#15442](https://github.com/ClickHouse/ClickHouse/pull/15442) ([alesapin](https://github.com/alesapin)).

### ClickHouse release v20.8.3.18-stable, 2020-09-18

#### Bug Fix

* Fix the issue when some invocations of `extractAllGroups` function may trigger "Memory limit exceeded" error. This fixes [#13383](https://github.com/ClickHouse/ClickHouse/issues/13383). [#14889](https://github.com/ClickHouse/ClickHouse/pull/14889) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix SIGSEGV for an attempt to INSERT into StorageFile(fd). [#14887](https://github.com/ClickHouse/ClickHouse/pull/14887) ([Azat Khuzhin](https://github.com/azat)).
* Fix rare error in `SELECT` queries when the queried column has `DEFAULT` expression which depends on the other column which also has `DEFAULT` and is not present in the select query and does not exist on disk. Partially fixes [#14531](https://github.com/ClickHouse/ClickHouse/issues/14531). [#14845](https://github.com/ClickHouse/ClickHouse/pull/14845) ([alesapin](https://github.com/alesapin)).
* Fixed missed default database name in metadata of materialized view when executing `ALTER ... MODIFY QUERY`. [#14664](https://github.com/ClickHouse/ClickHouse/pull/14664) ([tavplubix](https://github.com/tavplubix)).
* Fix bug when `ALTER UPDATE` mutation with Nullable column in assignment expression and constant value (like `UPDATE x = 42`) leads to incorrect value in column or segfault. Fixes [#13634](https://github.com/ClickHouse/ClickHouse/issues/13634), [#14045](https://github.com/ClickHouse/ClickHouse/issues/14045). [#14646](https://github.com/ClickHouse/ClickHouse/pull/14646) ([alesapin](https://github.com/alesapin)).
* Fix wrong Decimal multiplication result that caused a wrong decimal scale of the result column. [#14603](https://github.com/ClickHouse/ClickHouse/pull/14603) ([Artem Zuikov](https://github.com/4ertus2)).
* Added a checker, as neither calling `lc->isNullable()` nor calling `ls->getDictionaryPtr()->isNullable()` would return the correct result. [#14591](https://github.com/ClickHouse/ClickHouse/pull/14591) ([myrrc](https://github.com/myrrc)).
* Cleanup data directory after Zookeeper exceptions during CreateQuery for StorageReplicatedMergeTree Engine. [#14563](https://github.com/ClickHouse/ClickHouse/pull/14563) ([Bharat Nallan](https://github.com/bharatnc)).
* Fix rare segfaults in functions with combinator -Resample, which could appear in result of overflow with very large parameters. [#14562](https://github.com/ClickHouse/ClickHouse/pull/14562) ([Anton Popov](https://github.com/CurtizJ)).

#### Improvement

* Speed up server shutdown process if there are ongoing S3 requests. [#14858](https://github.com/ClickHouse/ClickHouse/pull/14858) ([Pavel Kovalenko](https://github.com/Jokser)).
* Allow using multi-volume storage configuration in storage Distributed. [#14839](https://github.com/ClickHouse/ClickHouse/pull/14839) ([Pavel Kovalenko](https://github.com/Jokser)).
* Speed up server shutdown process if there are ongoing S3 requests. [#14496](https://github.com/ClickHouse/ClickHouse/pull/14496) ([Pavel Kovalenko](https://github.com/Jokser)).
* Support custom codecs in compact parts. [#12183](https://github.com/ClickHouse/ClickHouse/pull/12183) ([Anton Popov](https://github.com/CurtizJ)).

### ClickHouse release v20.8.2.3-stable, 2020-09-08

#### Backward Incompatible Change

@ -84,7 +624,6 @@

#### New Feature

* Add the ability to specify `Default` compression codec for columns that correspond to settings specified in `config.xml`. Implements: [#9074](https://github.com/ClickHouse/ClickHouse/issues/9074). [#14049](https://github.com/ClickHouse/ClickHouse/pull/14049) ([alesapin](https://github.com/alesapin)).
* Support Kerberos authentication in Kafka, using `krb5` and `cyrus-sasl` libraries. [#12771](https://github.com/ClickHouse/ClickHouse/pull/12771) ([Ilya Golshtein](https://github.com/ilejn)).
* Add function `normalizeQuery` that replaces literals, sequences of literals and complex aliases with placeholders. Add function `normalizedQueryHash` that returns identical 64bit hash values for similar queries. It helps to analyze query log. This closes [#11271](https://github.com/ClickHouse/ClickHouse/issues/11271). [#13816](https://github.com/ClickHouse/ClickHouse/pull/13816) ([alexey-milovidov](https://github.com/alexey-milovidov)).

@ -184,6 +723,7 @@

#### Experimental Feature

* ClickHouse can work as MySQL replica - it is implemented by `MaterializeMySQL` database engine. Implements [#4006](https://github.com/ClickHouse/ClickHouse/issues/4006). [#10851](https://github.com/ClickHouse/ClickHouse/pull/10851) ([Winter Zhang](https://github.com/zhang2014)).
* Add types `Int128`, `Int256`, `UInt256` and related functions for them. Extend Decimals with Decimal256 (precision up to 76 digits). New types are under the setting `allow_experimental_bigint_types`. It is working extremely slowly and badly. The implementation is incomplete. Please don't use this feature. [#13097](https://github.com/ClickHouse/ClickHouse/pull/13097) ([Artem Zuikov](https://github.com/4ertus2)).

#### Build/Testing/Packaging Improvement

@ -1424,6 +1964,74 @@ No changes compared to v20.4.3.16-stable.

## ClickHouse release v20.3

### ClickHouse release v20.3.21.2-lts, 2020-11-02

#### Bug Fix

* Fix dictGet in sharding_key (and similar places, i.e. when the function context is stored permanently). [#16205](https://github.com/ClickHouse/ClickHouse/pull/16205) ([Azat Khuzhin](https://github.com/azat)).
* Fix incorrect empty result for query from `Distributed` table if query has `WHERE`, `PREWHERE` and `GLOBAL IN`. Fixes [#15792](https://github.com/ClickHouse/ClickHouse/issues/15792). [#15933](https://github.com/ClickHouse/ClickHouse/pull/15933) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix missing or excessive headers in `TSV/CSVWithNames` formats. This fixes [#12504](https://github.com/ClickHouse/ClickHouse/issues/12504). [#13343](https://github.com/ClickHouse/ClickHouse/pull/13343) ([Azat Khuzhin](https://github.com/azat)).

### ClickHouse release v20.3.20.6-lts, 2020-10-09

#### Bug Fix

* Mutation might hang waiting for some non-existent part after `MOVE` or `REPLACE PARTITION` or, in rare cases, after `DETACH` or `DROP PARTITION`. It's fixed. [#15724](https://github.com/ClickHouse/ClickHouse/pull/15724), [#15537](https://github.com/ClickHouse/ClickHouse/pull/15537) ([tavplubix](https://github.com/tavplubix)).
* Fix hang of queries with a lot of subqueries to same table of `MySQL` engine. Previously, if there were more than 16 subqueries to same `MySQL` table in query, it hung forever. [#15299](https://github.com/ClickHouse/ClickHouse/pull/15299) ([Anton Popov](https://github.com/CurtizJ)).
* Fix 'Unknown identifier' in GROUP BY when query has JOIN over Merge table. [#15242](https://github.com/ClickHouse/ClickHouse/pull/15242) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix to make predicate push down work when subquery contains finalizeAggregation function. Fixes [#14847](https://github.com/ClickHouse/ClickHouse/issues/14847). [#14937](https://github.com/ClickHouse/ClickHouse/pull/14937) ([filimonov](https://github.com/filimonov)).
* Concurrent `ALTER ... REPLACE/MOVE PARTITION ...` queries might cause deadlock. It's fixed. [#13626](https://github.com/ClickHouse/ClickHouse/pull/13626) ([tavplubix](https://github.com/tavplubix)).

### ClickHouse release v20.3.19.4-lts, 2020-09-18

#### Bug Fix

* Fix rare error in `SELECT` queries when the queried column has `DEFAULT` expression which depends on the other column which also has `DEFAULT` and is not present in the select query and does not exist on disk. Partially fixes [#14531](https://github.com/ClickHouse/ClickHouse/issues/14531). [#14845](https://github.com/ClickHouse/ClickHouse/pull/14845) ([alesapin](https://github.com/alesapin)).
* Fix bug when `ALTER UPDATE` mutation with Nullable column in assignment expression and constant value (like `UPDATE x = 42`) leads to incorrect value in column or segfault. Fixes [#13634](https://github.com/ClickHouse/ClickHouse/issues/13634), [#14045](https://github.com/ClickHouse/ClickHouse/issues/14045). [#14646](https://github.com/ClickHouse/ClickHouse/pull/14646) ([alesapin](https://github.com/alesapin)).
* Fix wrong Decimal multiplication result that caused a wrong decimal scale of the result column. [#14603](https://github.com/ClickHouse/ClickHouse/pull/14603) ([Artem Zuikov](https://github.com/4ertus2)).

#### Improvement

* Support custom codecs in compact parts. [#12183](https://github.com/ClickHouse/ClickHouse/pull/12183) ([Anton Popov](https://github.com/CurtizJ)).

### ClickHouse release v20.3.18.10-lts, 2020-09-08

#### Bug Fix

* Stop query execution if exception happened in `PipelineExecutor` itself. This could prevent rare possible query hang. Continuation of [#14334](https://github.com/ClickHouse/ClickHouse/issues/14334). [#14402](https://github.com/ClickHouse/ClickHouse/pull/14402) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed the behaviour when sometimes cache-dictionary returned default value instead of present value from source. [#13624](https://github.com/ClickHouse/ClickHouse/pull/13624) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix parsing row policies from users.xml when names of databases or tables contain dots. This fixes [#5779](https://github.com/ClickHouse/ClickHouse/issues/5779), [#12527](https://github.com/ClickHouse/ClickHouse/issues/12527). [#13199](https://github.com/ClickHouse/ClickHouse/pull/13199) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix CAST(Nullable(String), Enum()). [#12745](https://github.com/ClickHouse/ClickHouse/pull/12745) ([Azat Khuzhin](https://github.com/azat)).
* Fixed data race in `text_log`. It does not correspond to any real bug. [#9726](https://github.com/ClickHouse/ClickHouse/pull/9726) ([alexey-milovidov](https://github.com/alexey-milovidov)).

#### Improvement

* Fix wrong error for long queries. It was possible to get a syntax error other than `Max query size exceeded` for a correct query. [#13928](https://github.com/ClickHouse/ClickHouse/pull/13928) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Return NULL/zero when value is not parsed completely in parseDateTimeBestEffortOrNull/Zero functions. This fixes [#7876](https://github.com/ClickHouse/ClickHouse/issues/7876). [#11653](https://github.com/ClickHouse/ClickHouse/pull/11653) ([alexey-milovidov](https://github.com/alexey-milovidov)).

#### Performance Improvement

* Slightly optimize very short queries with LowCardinality. [#14129](https://github.com/ClickHouse/ClickHouse/pull/14129) ([Anton Popov](https://github.com/CurtizJ)).

#### Build/Testing/Packaging Improvement

* Fix UBSan report (adding zero to nullptr) in HashTable that appeared after migration to clang-10. [#10638](https://github.com/ClickHouse/ClickHouse/pull/10638) ([alexey-milovidov](https://github.com/alexey-milovidov)).

### ClickHouse release v20.3.17.173-lts, 2020-08-15

#### Bug Fix

* Fix crash in JOIN with StorageMerge and `set enable_optimize_predicate_expression=1`. [#13679](https://github.com/ClickHouse/ClickHouse/pull/13679) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix invalid return type for comparison of tuples with `NULL` elements. Fixes [#12461](https://github.com/ClickHouse/ClickHouse/issues/12461). [#13420](https://github.com/ClickHouse/ClickHouse/pull/13420) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix queries with constant columns and `ORDER BY` prefix of primary key. [#13396](https://github.com/ClickHouse/ClickHouse/pull/13396) ([Anton Popov](https://github.com/CurtizJ)).
* Return passed number for numbers with MSB set in roundUpToPowerOfTwoOrZero(). [#13234](https://github.com/ClickHouse/ClickHouse/pull/13234) ([Azat Khuzhin](https://github.com/azat)).

### ClickHouse release v20.3.16.165-lts 2020-08-10

#### Bug Fix

@ -445,6 +445,7 @@ include (cmake/find/brotli.cmake)
include (cmake/find/protobuf.cmake)
include (cmake/find/grpc.cmake)
include (cmake/find/pdqsort.cmake)
include (cmake/find/miniselect.cmake)
include (cmake/find/hdfs3.cmake) # uses protobuf
include (cmake/find/poco.cmake)
include (cmake/find/curl.cmake)
@ -455,6 +456,8 @@ include (cmake/find/simdjson.cmake)
include (cmake/find/rapidjson.cmake)
include (cmake/find/fastops.cmake)
include (cmake/find/odbc.cmake)
include (cmake/find/rocksdb.cmake)

if(NOT USE_INTERNAL_PARQUET_LIBRARY)
set (ENABLE_ORC OFF CACHE INTERNAL "")
base/common/sort.h (new file, 37 lines)
@@ -0,0 +1,37 @@
+#pragma once
+
+#if !defined(ARCADIA_BUILD)
+#    include <miniselect/floyd_rivest_select.h> // Y_IGNORE
+#else
+#    include <algorithm>
+#endif
+
+template <class RandomIt>
+void nth_element(RandomIt first, RandomIt nth, RandomIt last)
+{
+#if !defined(ARCADIA_BUILD)
+    ::miniselect::floyd_rivest_select(first, nth, last);
+#else
+    ::std::nth_element(first, nth, last);
+#endif
+}
+
+template <class RandomIt>
+void partial_sort(RandomIt first, RandomIt middle, RandomIt last)
+{
+#if !defined(ARCADIA_BUILD)
+    ::miniselect::floyd_rivest_partial_sort(first, middle, last);
+#else
+    ::std::partial_sort(first, middle, last);
+#endif
+}
+
+template <class RandomIt, class Compare>
+void partial_sort(RandomIt first, RandomIt middle, RandomIt last, Compare compare)
+{
+#if !defined(ARCADIA_BUILD)
+    ::miniselect::floyd_rivest_partial_sort(first, middle, last, compare);
+#else
+    ::std::partial_sort(first, middle, last, compare);
+#endif
+}
@@ -5,6 +5,9 @@
 /// (See at http://www.boost.org/LICENSE_1_0.txt)
 
 #include "throwError.h"
+#include <cfloat>
+#include <limits>
+#include <cassert>
 
 namespace wide
 {
@@ -192,7 +195,7 @@ struct integer<Bits, Signed>::_impl
     }
 
     template <typename T>
-    constexpr static auto to_Integral(T f) noexcept
+    __attribute__((no_sanitize("undefined"))) constexpr static auto to_Integral(T f) noexcept
     {
         if constexpr (std::is_same_v<T, __int128>)
            return f;
@@ -225,25 +228,54 @@ struct integer<Bits, Signed>::_impl
             self.items[i] = 0;
     }
 
-    constexpr static void wide_integer_from_bultin(integer<Bits, Signed> & self, double rhs) noexcept
-    {
-        if ((rhs > 0 && rhs < std::numeric_limits<uint64_t>::max()) || (rhs < 0 && rhs > std::numeric_limits<int64_t>::min()))
+    /**
+     * N.B. t is constructed from double, so max(t) = max(double) ~ 2^310
+     * the recursive call happens when t / 2^64 > 2^64, so there won't be more than 5 of them.
+     *
+     * t = a1 * max_int + b1, a1 > max_int, b1 < max_int
+     * a1 = a2 * max_int + b2, a2 > max_int, b2 < max_int
+     * a_(n - 1) = a_n * max_int + b2, a_n <= max_int <- base case.
+     */
+    template <class T>
+    constexpr static void set_multiplier(integer<Bits, Signed> & self, T t) noexcept {
+        constexpr uint64_t max_int = std::numeric_limits<uint64_t>::max();
+        const T alpha = t / max_int;
+
+        if (alpha <= max_int)
+            self = static_cast<uint64_t>(alpha);
+        else // max(double) / 2^64 will surely contain less than 52 precision bits, so speed up computations.
+            set_multiplier<double>(self, alpha);
+
+        self *= max_int;
+        self += static_cast<uint64_t>(t - alpha * max_int); // += b_i
+    }
+
+    constexpr static void wide_integer_from_bultin(integer<Bits, Signed>& self, double rhs) noexcept {
+        constexpr int64_t max_int = std::numeric_limits<int64_t>::max();
+        constexpr int64_t min_int = std::numeric_limits<int64_t>::min();
+
+        /// There are values in int64 that have more than 53 significant bits (in terms of double
+        /// representation). Such values, being promoted to double, are rounded up or down. If they are rounded up,
+        /// the result may not fit in 64 bits.
+        /// The example of such a number is 9.22337e+18.
+        /// As to_Integral does a static_cast to int64_t, it may result in UB.
+        /// The necessary check here is that long double has enough significant (mantissa) bits to store the
+        /// int64_t max value precisely.
+        static_assert(LDBL_MANT_DIG >= 64,
+                      "On your system long double has less than 64 precision bits,"
+                      "which may result in UB when initializing double from int64_t");
+
+        if ((rhs > 0 && rhs < max_int) || (rhs < 0 && rhs > min_int))
         {
-            self = to_Integral(rhs);
+            self = static_cast<int64_t>(rhs);
             return;
         }
 
-        long double r = rhs;
-        if (r < 0)
-            r = -r;
+        const long double rhs_long_double = (static_cast<long double>(rhs) < 0)
+            ? -static_cast<long double>(rhs)
+            : rhs;
 
-        size_t count = r / std::numeric_limits<uint64_t>::max();
-        self = count;
-        self *= std::numeric_limits<uint64_t>::max();
-        long double to_diff = count;
-        to_diff *= std::numeric_limits<uint64_t>::max();
-
-        self += to_Integral(r - to_diff);
+        set_multiplier(self, rhs_long_double);
 
         if (rhs < 0)
             self = -self;
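The base-2^64 decomposition described in the `set_multiplier` comment can be illustrated in Python (a simplified sketch using the names from the diff; it ignores the small-value fast path, sign handling, and the extra `long double` precision the C++ code relies on):

```python
MAX_INT = 2**64 - 1  # plays the role of std::numeric_limits<uint64_t>::max()

def set_multiplier(t: float) -> int:
    # Decompose t = alpha * MAX_INT + b, recursing while alpha itself
    # still exceeds MAX_INT (at most ~5 levels for any finite double).
    alpha = t / MAX_INT
    acc = int(alpha) if alpha <= MAX_INT else set_multiplier(alpha)
    acc *= MAX_INT
    acc += int(t - alpha * MAX_INT)  # += b_i
    return acc

# Reconstruction is exact up to double rounding error (~2^-52 relative):
x = 2.0 ** 200
assert abs(set_multiplier(x) - 2**200) / 2**200 < 1e-12
```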
@@ -1,4 +1,6 @@
 # This file is generated automatically, do not edit. See 'ya.make.in' and use 'utils/generate-ya-make' to regenerate it.
+OWNER(g:clickhouse)
+
 LIBRARY()
 
 ADDINCL(
@@ -35,25 +37,25 @@ PEERDIR(
 CFLAGS(-g0)
 
 SRCS(
-    argsToConfig.cpp
-    coverage.cpp
     DateLUT.cpp
     DateLUTImpl.cpp
+    JSON.cpp
+    LineReader.cpp
+    StringRef.cpp
+    argsToConfig.cpp
+    coverage.cpp
     demangle.cpp
     errnoToString.cpp
     getFQDNOrHostName.cpp
     getMemoryAmount.cpp
     getResource.cpp
     getThreadId.cpp
-    JSON.cpp
-    LineReader.cpp
     mremap.cpp
     phdr_cache.cpp
     preciseExp10.cpp
     setTerminalEcho.cpp
     shift10.cpp
     sleep.cpp
-    StringRef.cpp
     terminalColors.cpp
 
 )
@@ -1,3 +1,5 @@
+OWNER(g:clickhouse)
+
 LIBRARY()
 
 ADDINCL(
@@ -1,3 +1,5 @@
+OWNER(g:clickhouse)
+
 LIBRARY()
 
 NO_COMPILER_WARNINGS()
base/glibc-compatibility/musl/sync_file_range.c (new file, 21 lines)
@@ -0,0 +1,21 @@
+#define _GNU_SOURCE
+#include <fcntl.h>
+#include <errno.h>
+#include "syscall.h"
+
+// works same in x86_64 && aarch64
+#define __SYSCALL_LL_E(x) (x)
+#define __SYSCALL_LL_O(x) (x)
+
+int sync_file_range(int fd, off_t pos, off_t len, unsigned flags)
+{
+#if defined(SYS_sync_file_range2)
+    return syscall(SYS_sync_file_range2, fd, flags,
+        __SYSCALL_LL_E(pos), __SYSCALL_LL_E(len));
+#elif defined(SYS_sync_file_range)
+    return __syscall(SYS_sync_file_range, fd,
+        __SYSCALL_LL_O(pos), __SYSCALL_LL_E(len), flags);
+#else
+    return __syscall_ret(-ENOSYS);
+#endif
+}
@@ -1,3 +1,5 @@
+OWNER(g:clickhouse)
+
 LIBRARY()
 
 PEERDIR(
@@ -113,6 +113,12 @@
 
 #include "pcg_extras.hpp"
 
+namespace DB
+{
+struct PcgSerializer;
+struct PcgDeserializer;
+}
+
 namespace pcg_detail {
 
 using namespace pcg_extras;
@@ -557,6 +563,9 @@ public:
                             engine<xtype1, itype1,
                                    output_mixin1, output_previous1,
                                    stream_mixin1, multiplier_mixin1>& rng);
+
+    friend ::DB::PcgSerializer;
+    friend ::DB::PcgDeserializer;
 };
 
 template <typename CharT, typename Traits,
@@ -1,3 +1,5 @@
+OWNER(g:clickhouse)
+
 LIBRARY()
 
 ADDINCL (GLOBAL clickhouse/base/pcg-random)
@@ -1,3 +1,5 @@
+OWNER(g:clickhouse)
+
 LIBRARY()
 
 CFLAGS(-g0)
@@ -1,3 +1,5 @@
+OWNER(g:clickhouse)
+
 LIBRARY()
 
 ADDINCL(GLOBAL clickhouse/base/widechar_width)
@@ -1,3 +1,5 @@
+OWNER(g:clickhouse)
+
 RECURSE(
     common
     daemon
@@ -1,9 +1,9 @@
 # This strings autochanged from release_lib.sh:
-SET(VERSION_REVISION 54442)
+SET(VERSION_REVISION 54443)
 SET(VERSION_MAJOR 20)
-SET(VERSION_MINOR 11)
+SET(VERSION_MINOR 12)
 SET(VERSION_PATCH 1)
-SET(VERSION_GITHASH 76a04fb4b4f6cd27ad999baf6dc9a25e88851c42)
+SET(VERSION_GITHASH c53725fb1f846fda074347607ab582fbb9c6f7a1)
-SET(VERSION_DESCRIBE v20.11.1.1-prestable)
+SET(VERSION_DESCRIBE v20.12.1.1-prestable)
-SET(VERSION_STRING 20.11.1.1)
+SET(VERSION_STRING 20.12.1.1)
 # end of autochange
cmake/find/miniselect.cmake (new file, 2 lines)
@@ -0,0 +1,2 @@
+set(MINISELECT_INCLUDE_DIR ${ClickHouse_SOURCE_DIR}/contrib/miniselect/include)
+message(STATUS "Using miniselect: ${MINISELECT_INCLUDE_DIR}")
cmake/find/rocksdb.cmake (new file, 67 lines)
@@ -0,0 +1,67 @@
+option(ENABLE_ROCKSDB "Enable ROCKSDB" ${ENABLE_LIBRARIES})
+
+if (NOT ENABLE_ROCKSDB)
+    if (USE_INTERNAL_ROCKSDB_LIBRARY)
+        message (${RECONFIGURE_MESSAGE_LEVEL} "Can't use internal rocksdb library with ENABLE_ROCKSDB=OFF")
+    endif()
+    return()
+endif()
+
+option(USE_INTERNAL_ROCKSDB_LIBRARY "Set to FALSE to use system ROCKSDB library instead of bundled" ${NOT_UNBUNDLED})
+
+if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/rocksdb/CMakeLists.txt")
+    if (USE_INTERNAL_ROCKSDB_LIBRARY)
+        message (WARNING "submodule contrib is missing. to fix try run: \n git submodule update --init --recursive")
+        message(${RECONFIGURE_MESSAGE_LEVEL} "cannot find internal rocksdb")
+    endif()
+    set (MISSING_INTERNAL_ROCKSDB 1)
+endif ()
+
+if (NOT USE_INTERNAL_ROCKSDB_LIBRARY)
+    find_library (ROCKSDB_LIBRARY rocksdb)
+    find_path (ROCKSDB_INCLUDE_DIR NAMES rocksdb/db.h PATHS ${ROCKSDB_INCLUDE_PATHS})
+    if (NOT ROCKSDB_LIBRARY OR NOT ROCKSDB_INCLUDE_DIR)
+        message (${RECONFIGURE_MESSAGE_LEVEL} "Can't find system rocksdb library")
+    endif()
+
+    if (NOT SNAPPY_LIBRARY)
+        include(cmake/find/snappy.cmake)
+    endif()
+    if (NOT ZLIB_LIBRARY)
+        include(cmake/find/zlib.cmake)
+    endif()
+
+    find_package(BZip2)
+    find_library(ZSTD_LIBRARY zstd)
+    find_library(LZ4_LIBRARY lz4)
+    find_library(GFLAGS_LIBRARY gflags)
+
+    if(SNAPPY_LIBRARY AND ZLIB_LIBRARY AND LZ4_LIBRARY AND BZIP2_FOUND AND ZSTD_LIBRARY AND GFLAGS_LIBRARY)
+        list (APPEND ROCKSDB_LIBRARY ${SNAPPY_LIBRARY})
+        list (APPEND ROCKSDB_LIBRARY ${ZLIB_LIBRARY})
+        list (APPEND ROCKSDB_LIBRARY ${LZ4_LIBRARY})
+        list (APPEND ROCKSDB_LIBRARY ${BZIP2_LIBRARY})
+        list (APPEND ROCKSDB_LIBRARY ${ZSTD_LIBRARY})
+        list (APPEND ROCKSDB_LIBRARY ${GFLAGS_LIBRARY})
+    else()
+        message (${RECONFIGURE_MESSAGE_LEVEL}
+                 "Can't find system rocksdb: snappy=${SNAPPY_LIBRARY} ;"
+                 " zlib=${ZLIB_LIBRARY} ;"
+                 " lz4=${LZ4_LIBRARY} ;"
+                 " bz2=${BZIP2_LIBRARY} ;"
+                 " zstd=${ZSTD_LIBRARY} ;"
+                 " gflags=${GFLAGS_LIBRARY} ;")
+    endif()
+endif ()
+
+if(ROCKSDB_LIBRARY AND ROCKSDB_INCLUDE_DIR)
+    set(USE_ROCKSDB 1)
+elseif (NOT MISSING_INTERNAL_ROCKSDB)
+    set (USE_INTERNAL_ROCKSDB_LIBRARY 1)
+
+    set (ROCKSDB_INCLUDE_DIR "${ClickHouse_SOURCE_DIR}/contrib/rocksdb/include")
+    set (ROCKSDB_LIBRARY "rocksdb")
+    set (USE_ROCKSDB 1)
+endif ()
+
+message (STATUS "Using ROCKSDB=${USE_ROCKSDB}: ${ROCKSDB_INCLUDE_DIR} : ${ROCKSDB_LIBRARY}")
contrib/CMakeLists.txt (vendored, 13 lines changed)
@@ -14,6 +14,11 @@ unset (_current_dir_name)
 set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -w")
 set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -w")
 
+if (SANITIZE STREQUAL "undefined")
+    # 3rd-party libraries usually not intended to work with UBSan.
+    add_compile_options(-fno-sanitize=undefined)
+endif()
+
 set_property(DIRECTORY PROPERTY EXCLUDE_FROM_ALL 1)
 
 add_subdirectory (boost-cmake)
@@ -31,6 +36,7 @@ add_subdirectory (murmurhash)
 add_subdirectory (replxx-cmake)
 add_subdirectory (ryu-cmake)
 add_subdirectory (unixodbc-cmake)
+add_subdirectory (xz)
 
 add_subdirectory (poco-cmake)
 add_subdirectory (croaring-cmake)
@@ -157,9 +163,6 @@ if(USE_INTERNAL_SNAPPY_LIBRARY)
     add_subdirectory(snappy)
 
     set (SNAPPY_INCLUDE_DIR "${ClickHouse_SOURCE_DIR}/contrib/snappy")
-    if(SANITIZE STREQUAL "undefined")
-        target_compile_options(${SNAPPY_LIBRARY} PRIVATE -fno-sanitize=undefined)
-    endif()
 endif()
 
 if (USE_INTERNAL_PARQUET_LIBRARY)
@@ -318,3 +321,7 @@ if (USE_KRB5)
     add_subdirectory (cyrus-sasl-cmake)
 endif()
 endif()
+
+if (USE_INTERNAL_ROCKSDB_LIBRARY)
+    add_subdirectory(rocksdb-cmake)
+endif()
contrib/aws (vendored submodule)
@@ -1 +1 @@
-Subproject commit 17e10c0fc77f22afe890fa6d1b283760e5edaa56
+Subproject commit a220591e335923ce1c19bbf9eb925787f7ab6c13
contrib/libunwind (vendored submodule)
@@ -1 +1 @@
-Subproject commit 27026ef4a9c6c8cc956d1d131c4d794e24096981
+Subproject commit 7d78d3618910752c256b2b58c3895f4efea47fac
contrib/miniselect (new vendored submodule)
@@ -0,0 +1 @@
+Subproject commit be0af6bd0b6eb044d1acc4f754b229972d99903a
contrib/poco (vendored submodule)
@@ -1 +1 @@
-Subproject commit 757d947235b307675cff964f29b19d388140a9eb
+Subproject commit f49c6ab8d3aa71828bd1b411485c21722e8c9d82
contrib/rocksdb (new vendored submodule)
@@ -0,0 +1 @@
+Subproject commit 963314ffd681596ef2738a95249fe4c1163ef87a
668
contrib/rocksdb-cmake/CMakeLists.txt
Normal file
668
contrib/rocksdb-cmake/CMakeLists.txt
Normal file
@ -0,0 +1,668 @@
|
|||||||
|
## this file is extracted from `contrib/rocksdb/CMakeLists.txt`
|
||||||
|
set(ROCKSDB_SOURCE_DIR "${ClickHouse_SOURCE_DIR}/contrib/rocksdb")
|
||||||
|
list(APPEND CMAKE_MODULE_PATH "${ROCKSDB_SOURCE_DIR}/cmake/modules/")
|
||||||
|
|
||||||
|
find_program(CCACHE_FOUND ccache)
|
||||||
|
if(CCACHE_FOUND)
|
||||||
|
set_property(GLOBAL PROPERTY RULE_LAUNCH_COMPILE ccache)
|
||||||
|
set_property(GLOBAL PROPERTY RULE_LAUNCH_LINK ccache)
|
||||||
|
endif(CCACHE_FOUND)
|
||||||
|
|
||||||
|
if (SANITIZE STREQUAL "undefined")
|
||||||
|
set(WITH_UBSAN ON)
|
||||||
|
elseif (SANITIZE STREQUAL "address")
|
||||||
|
set(WITH_ASAN ON)
|
||||||
|
elseif (SANITIZE STREQUAL "thread")
|
||||||
|
set(WITH_TSAN ON)
|
||||||
|
endif()
|
||||||
|
|
||||||
|
|
||||||
|
set(PORTABLE ON)
|
||||||
|
## always disable jemalloc for rocksdb by default
|
||||||
|
## because it introduces non-standard jemalloc APIs
|
||||||
|
option(WITH_JEMALLOC "build with JeMalloc" OFF)
|
||||||
|
option(WITH_SNAPPY "build with SNAPPY" ${USE_SNAPPY})
|
||||||
|
## lz4, zlib, zstd is enabled in ClickHouse by default
|
||||||
|
option(WITH_LZ4 "build with lz4" ON)
|
||||||
|
option(WITH_ZLIB "build with zlib" ON)
|
||||||
|
option(WITH_ZSTD "build with zstd" ON)
|
||||||
|
|
||||||
|
# third-party/folly is only validated to work on Linux and Windows for now.
|
||||||
|
# So only turn it on there by default.
|
||||||
|
if(CMAKE_SYSTEM_NAME MATCHES "Linux|Windows")
|
||||||
|
if(MSVC AND MSVC_VERSION LESS 1910)
|
||||||
|
# Folly does not compile with MSVC older than VS2017
|
||||||
|
option(WITH_FOLLY_DISTRIBUTED_MUTEX "build with folly::DistributedMutex" OFF)
|
||||||
|
else()
|
||||||
|
option(WITH_FOLLY_DISTRIBUTED_MUTEX "build with folly::DistributedMutex" ON)
|
||||||
|
endif()
|
||||||
|
else()
|
||||||
|
option(WITH_FOLLY_DISTRIBUTED_MUTEX "build with folly::DistributedMutex" OFF)
|
||||||
|
endif()
|
||||||
|
|
||||||
|
if( NOT DEFINED CMAKE_CXX_STANDARD )
|
||||||
|
set(CMAKE_CXX_STANDARD 11)
|
||||||
|
endif()
|
||||||
|
|
||||||
|
if(MSVC)
|
||||||
|
option(WITH_XPRESS "build with windows built in compression" OFF)
|
||||||
|
include(${ROCKSDB_SOURCE_DIR}/thirdparty.inc)
|
||||||
|
else()
|
||||||
|
if(CMAKE_SYSTEM_NAME MATCHES "FreeBSD" AND NOT CMAKE_SYSTEM_NAME MATCHES "kFreeBSD")
|
||||||
|
# FreeBSD has jemalloc as default malloc
|
||||||
|
# but it does not have all the jemalloc files in include/...
|
||||||
|
set(WITH_JEMALLOC ON)
|
||||||
|
else()
|
||||||
|
if(WITH_JEMALLOC)
|
||||||
|
add_definitions(-DROCKSDB_JEMALLOC -DJEMALLOC_NO_DEMANGLE)
|
||||||
|
list(APPEND THIRDPARTY_LIBS jemalloc)
|
||||||
|
endif()
|
||||||
|
endif()
|
||||||
|
|
||||||
|
if(WITH_SNAPPY)
|
||||||
|
add_definitions(-DSNAPPY)
|
||||||
|
list(APPEND THIRDPARTY_LIBS snappy)
|
||||||
|
endif()
|
||||||
|
|
||||||
|
if(WITH_ZLIB)
|
||||||
|
add_definitions(-DZLIB)
|
||||||
|
list(APPEND THIRDPARTY_LIBS zlib)
|
||||||
|
endif()
|
||||||
|
|
||||||
|
if(WITH_LZ4)
|
||||||
|
add_definitions(-DLZ4)
|
||||||
|
list(APPEND THIRDPARTY_LIBS lz4)
|
||||||
|
endif()
|
||||||
|
|
||||||
|
if(WITH_ZSTD)
|
||||||
|
add_definitions(-DZSTD)
|
||||||
|
include_directories(${ZSTD_INCLUDE_DIR})
|
||||||
|
include_directories(${ZSTD_INCLUDE_DIR}/common)
|
||||||
|
include_directories(${ZSTD_INCLUDE_DIR}/dictBuilder)
|
||||||
|
include_directories(${ZSTD_INCLUDE_DIR}/deprecated)
|
||||||
|
|
||||||
|
list(APPEND THIRDPARTY_LIBS zstd)
|
||||||
|
endif()
|
||||||
|
endif()
|
||||||
|
|
||||||
|
string(TIMESTAMP TS "%Y/%m/%d %H:%M:%S" UTC)
|
||||||
|
set(GIT_DATE_TIME "${TS}" CACHE STRING "the time we first built rocksdb")
|
||||||
|
|
||||||
|
find_package(Git)
|
||||||
|
|
||||||
|
if(GIT_FOUND AND EXISTS "${ROCKSDB_SOURCE_DIR}/.git")
|
||||||
|
if(WIN32)
|
||||||
|
execute_process(COMMAND $ENV{COMSPEC} /C ${GIT_EXECUTABLE} -C ${ROCKSDB_SOURCE_DIR} rev-parse HEAD OUTPUT_VARIABLE GIT_SHA)
|
||||||
|
else()
|
||||||
|
execute_process(COMMAND ${GIT_EXECUTABLE} -C ${ROCKSDB_SOURCE_DIR} rev-parse HEAD OUTPUT_VARIABLE GIT_SHA)
|
||||||
|
endif()
|
||||||
|
else()
|
||||||
|
set(GIT_SHA 0)
|
||||||
|
endif()
|
||||||
|
|
||||||
|
string(REGEX REPLACE "[^0-9a-f]+" "" GIT_SHA "${GIT_SHA}")
|
||||||
|
|
||||||
|
set(BUILD_VERSION_CC ${CMAKE_BINARY_DIR}/rocksdb_build_version.cc)
|
||||||
|
configure_file(${ROCKSDB_SOURCE_DIR}/util/build_version.cc.in ${BUILD_VERSION_CC} @ONLY)
|
||||||
|
add_library(rocksdb_build_version OBJECT ${BUILD_VERSION_CC})
|
||||||
|
target_include_directories(rocksdb_build_version PRIVATE
|
||||||
|
${ROCKSDB_SOURCE_DIR}/util)
|
||||||
|
if(MSVC)
|
||||||
|
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /Zi /nologo /EHsc /GS /Gd /GR /GF /fp:precise /Zc:wchar_t /Zc:forScope /errorReport:queue")
|
||||||
|
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /FC /d2Zi+ /W4 /wd4127 /wd4800 /wd4996 /wd4351 /wd4100 /wd4204 /wd4324")
|
||||||
|
else()
|
||||||
|
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -W -Wextra -Wall")
|
||||||
|
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wsign-compare -Wshadow -Wno-unused-parameter -Wno-unused-variable -Woverloaded-virtual -Wnon-virtual-dtor -Wno-missing-field-initializers -Wno-strict-aliasing")
|
||||||
|
if(MINGW)
|
||||||
|
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-format -fno-asynchronous-unwind-tables")
|
||||||
|
add_definitions(-D_POSIX_C_SOURCE=1)
|
||||||
|
endif()
|
||||||
|
if(NOT CMAKE_BUILD_TYPE STREQUAL "Debug")
|
||||||
|
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fno-omit-frame-pointer")
|
||||||
|
include(CheckCXXCompilerFlag)
|
||||||
|
CHECK_CXX_COMPILER_FLAG("-momit-leaf-frame-pointer" HAVE_OMIT_LEAF_FRAME_POINTER)
|
||||||
|
if(HAVE_OMIT_LEAF_FRAME_POINTER)
|
||||||
|
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -momit-leaf-frame-pointer")
|
||||||
|
endif()
|
||||||
|
endif()
|
||||||
|
endif()
|
||||||
|
|
||||||
|
include(CheckCCompilerFlag)
|
||||||
|
if(CMAKE_SYSTEM_PROCESSOR MATCHES "^(powerpc|ppc)64")
|
||||||
|
CHECK_C_COMPILER_FLAG("-mcpu=power9" HAS_POWER9)
|
||||||
|
if(HAS_POWER9)
|
||||||
|
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -mcpu=power9 -mtune=power9")
|
||||||
|
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -mcpu=power9 -mtune=power9")
|
||||||
|
else()
|
||||||
|
CHECK_C_COMPILER_FLAG("-mcpu=power8" HAS_POWER8)
|
||||||
|
if(HAS_POWER8)
|
||||||
|
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -mcpu=power8 -mtune=power8")
|
||||||
|
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -mcpu=power8 -mtune=power8")
|
||||||
|
endif(HAS_POWER8)
|
||||||
|
endif(HAS_POWER9)
|
||||||
|
CHECK_C_COMPILER_FLAG("-maltivec" HAS_ALTIVEC)
|
||||||
|
if(HAS_ALTIVEC)
|
||||||
|
message(STATUS " HAS_ALTIVEC yes")
|
||||||
|
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -maltivec")
|
||||||
|
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -maltivec")
|
||||||
|
endif(HAS_ALTIVEC)
|
||||||
|
endif(CMAKE_SYSTEM_PROCESSOR MATCHES "^(powerpc|ppc)64")
|
||||||
|
|
||||||
|
if(CMAKE_SYSTEM_PROCESSOR MATCHES "aarch64|AARCH64")
|
||||||
|
CHECK_C_COMPILER_FLAG("-march=armv8-a+crc+crypto" HAS_ARMV8_CRC)
|
||||||
|
if(HAS_ARMV8_CRC)
|
||||||
|
message(STATUS " HAS_ARMV8_CRC yes")
|
||||||
|
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -march=armv8-a+crc+crypto -Wno-unused-function")
|
||||||
|
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -march=armv8-a+crc+crypto -Wno-unused-function")
|
||||||
|
endif(HAS_ARMV8_CRC)
|
||||||
|
endif(CMAKE_SYSTEM_PROCESSOR MATCHES "aarch64|AARCH64")
|
||||||
|
|
||||||
|
|
||||||
|
include(CheckCXXSourceCompiles)
|
||||||
|
if(NOT MSVC)
|
||||||
|
set(CMAKE_REQUIRED_FLAGS "-msse4.2 -mpclmul")
|
||||||
|
endif()
|
||||||
|
|
||||||
|
CHECK_CXX_SOURCE_COMPILES("
|
||||||
|
#include <cstdint>
|
||||||
|
#include <nmmintrin.h>
|
||||||
|
#include <wmmintrin.h>
|
||||||
|
int main() {
|
||||||
|
volatile uint32_t x = _mm_crc32_u32(0, 0);
|
||||||
|
const auto a = _mm_set_epi64x(0, 0);
|
||||||
|
const auto b = _mm_set_epi64x(0, 0);
|
||||||
|
const auto c = _mm_clmulepi64_si128(a, b, 0x00);
|
||||||
|
auto d = _mm_cvtsi128_si64(c);
|
||||||
|
}
|
||||||
|
" HAVE_SSE42)
|
||||||
|
unset(CMAKE_REQUIRED_FLAGS)
|
||||||
|
if(HAVE_SSE42)
|
||||||
|
add_definitions(-DHAVE_SSE42)
|
||||||
|
add_definitions(-DHAVE_PCLMUL)
|
||||||
|
elseif(FORCE_SSE42)
|
||||||
|
message(FATAL_ERROR "FORCE_SSE42=ON but unable to compile with SSE4.2 enabled")
|
||||||
|
endif()
|
||||||
|
|
||||||
|
CHECK_CXX_SOURCE_COMPILES("
|
||||||
|
#if defined(_MSC_VER) && !defined(__thread)
|
||||||
|
#define __thread __declspec(thread)
|
||||||
|
#endif
|
||||||
|
int main() {
|
||||||
|
static __thread int tls;
|
||||||
|
}
|
||||||
|
" HAVE_THREAD_LOCAL)
|
||||||
|
if(HAVE_THREAD_LOCAL)
|
||||||
|
add_definitions(-DROCKSDB_SUPPORT_THREAD_LOCAL)
|
||||||
|
endif()
|
||||||
|
|
||||||
|
option(FAIL_ON_WARNINGS "Treat compile warnings as errors" ON)
|
||||||
|
if(FAIL_ON_WARNINGS)
|
||||||
|
if(MSVC)
|
||||||
|
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /WX")
|
||||||
|
else() # assume GCC
|
||||||
|
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Werror")
|
||||||
|
endif()
|
||||||
|
endif()
|
||||||
|
|
||||||
|
option(WITH_ASAN "build with ASAN" OFF)
|
||||||
|
if(WITH_ASAN)
|
||||||
|
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -fsanitize=address")
|
||||||
|
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fsanitize=address")
|
||||||
|
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fsanitize=address")
|
||||||
|
if(WITH_JEMALLOC)
|
||||||
|
message(FATAL "ASAN does not work well with JeMalloc")
|
||||||
|
endif()
|
||||||
|
endif()
|
||||||
|
|
||||||
|
option(WITH_TSAN "build with TSAN" OFF)
|
||||||
|
if(WITH_TSAN)
|
||||||
|
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -fsanitize=thread -pie")
|
||||||
|
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fsanitize=thread -fPIC")
|
||||||
|
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fsanitize=thread -fPIC")
|
||||||
|
if(WITH_JEMALLOC)
|
||||||
|
message(FATAL "TSAN does not work well with JeMalloc")
|
||||||
|
endif()
|
||||||
|
endif()
|
||||||
|
|
||||||
|
option(WITH_UBSAN "build with UBSAN" OFF)
|
||||||
|
if(WITH_UBSAN)
|
||||||
|
add_definitions(-DROCKSDB_UBSAN_RUN)
|
||||||
|
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -fsanitize=undefined")
|
||||||
|
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fsanitize=undefined")
|
||||||
|
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fsanitize=undefined")
|
||||||
|
if(WITH_JEMALLOC)
|
||||||
|
message(FATAL "UBSAN does not work well with JeMalloc")
|
||||||
|
endif()
|
||||||
|
endif()
|
||||||
|
|
||||||
|
|
||||||
|
if(CMAKE_SYSTEM_NAME MATCHES "Cygwin")
|
||||||
|
add_definitions(-fno-builtin-memcmp -DCYGWIN)
|
||||||
|
elseif(CMAKE_SYSTEM_NAME MATCHES "Darwin")
|
||||||
|
add_definitions(-DOS_MACOSX)
|
||||||
|
if(CMAKE_SYSTEM_PROCESSOR MATCHES arm)
|
||||||
|
add_definitions(-DIOS_CROSS_COMPILE -DROCKSDB_LITE)
|
||||||
|
# no debug info for IOS, that will make our library big
|
||||||
|
add_definitions(-DNDEBUG)
|
||||||
|
endif()
|
||||||
|
elseif(CMAKE_SYSTEM_NAME MATCHES "Linux")
|
||||||
|
add_definitions(-DOS_LINUX)
|
||||||
|
elseif(CMAKE_SYSTEM_NAME MATCHES "SunOS")
|
||||||
|
add_definitions(-DOS_SOLARIS)
|
||||||
|
elseif(CMAKE_SYSTEM_NAME MATCHES "kFreeBSD")
|
||||||
|
add_definitions(-DOS_GNU_KFREEBSD)
|
||||||
|
elseif(CMAKE_SYSTEM_NAME MATCHES "FreeBSD")
|
||||||
|
add_definitions(-DOS_FREEBSD)
|
||||||
|
elseif(CMAKE_SYSTEM_NAME MATCHES "NetBSD")
|
||||||
|
add_definitions(-DOS_NETBSD)
|
||||||
|
elseif(CMAKE_SYSTEM_NAME MATCHES "OpenBSD")
|
||||||
|
add_definitions(-DOS_OPENBSD)
|
||||||
|
elseif(CMAKE_SYSTEM_NAME MATCHES "DragonFly")
|
||||||
|
add_definitions(-DOS_DRAGONFLYBSD)
|
||||||
|
elseif(CMAKE_SYSTEM_NAME MATCHES "Android")
|
||||||
|
add_definitions(-DOS_ANDROID)
|
||||||
|
elseif(CMAKE_SYSTEM_NAME MATCHES "Windows")
|
||||||
|
add_definitions(-DWIN32 -DOS_WIN -D_MBCS -DWIN64 -DNOMINMAX)
|
||||||
|
if(MINGW)
|
||||||
|
add_definitions(-D_WIN32_WINNT=_WIN32_WINNT_VISTA)
|
||||||
|
endif()
|
||||||
|
endif()
|
||||||
|
|
||||||
|
if(NOT WIN32)
|
||||||
|
add_definitions(-DROCKSDB_PLATFORM_POSIX -DROCKSDB_LIB_IO_POSIX)
|
||||||
|
endif()
|
||||||
|
|
||||||
|
option(WITH_FALLOCATE "build with fallocate" ON)
|
||||||
|
if(WITH_FALLOCATE)
|
||||||
|
CHECK_CXX_SOURCE_COMPILES("
|
||||||
|
#include <fcntl.h>
|
||||||
|
#include <linux/falloc.h>
|
||||||
|
int main() {
|
||||||
|
int fd = open(\"/dev/null\", 0);
|
||||||
|
fallocate(fd, FALLOC_FL_KEEP_SIZE, 0, 1024);
|
||||||
|
}
|
||||||
|
" HAVE_FALLOCATE)
|
||||||
|
if(HAVE_FALLOCATE)
|
||||||
|
add_definitions(-DROCKSDB_FALLOCATE_PRESENT)
|
||||||
|
endif()
|
||||||
|
endif()
|
||||||
|
|
||||||
|
CHECK_CXX_SOURCE_COMPILES("
|
||||||
|
#include <fcntl.h>
|
||||||
|
int main() {
|
||||||
|
int fd = open(\"/dev/null\", 0);
|
||||||
|
sync_file_range(fd, 0, 1024, SYNC_FILE_RANGE_WRITE);
|
||||||
|
}
|
||||||
|
" HAVE_SYNC_FILE_RANGE_WRITE)
|
||||||
|
if(HAVE_SYNC_FILE_RANGE_WRITE)
|
||||||
|
add_definitions(-DROCKSDB_RANGESYNC_PRESENT)
|
||||||
|
endif()
|
||||||
|
|
||||||
|
CHECK_CXX_SOURCE_COMPILES("
|
||||||
|
#include <pthread.h>
|
||||||
|
int main() {
|
||||||
|
(void) PTHREAD_MUTEX_ADAPTIVE_NP;
|
||||||
|
}
|
||||||
|
" HAVE_PTHREAD_MUTEX_ADAPTIVE_NP)
|
||||||
|
if(HAVE_PTHREAD_MUTEX_ADAPTIVE_NP)
|
||||||
|
add_definitions(-DROCKSDB_PTHREAD_ADAPTIVE_MUTEX)
|
||||||
|
endif()
|
||||||
|
|
||||||
|
include(CheckCXXSymbolExists)
|
||||||
|
if(CMAKE_SYSTEM_NAME MATCHES "^FreeBSD")
|
||||||
|
check_cxx_symbol_exists(malloc_usable_size ${ROCKSDB_SOURCE_DIR}/malloc_np.h HAVE_MALLOC_USABLE_SIZE)
|
||||||
|
else()
|
||||||
|
check_cxx_symbol_exists(malloc_usable_size ${ROCKSDB_SOURCE_DIR}/malloc.h HAVE_MALLOC_USABLE_SIZE)
|
||||||
|
endif()
|
||||||
|
if(HAVE_MALLOC_USABLE_SIZE)
|
||||||
|
add_definitions(-DROCKSDB_MALLOC_USABLE_SIZE)
|
||||||
|
endif()
|
||||||
|
|
||||||
|
check_cxx_symbol_exists(sched_getcpu sched.h HAVE_SCHED_GETCPU)
|
||||||
|
if(HAVE_SCHED_GETCPU)
|
||||||
|
add_definitions(-DROCKSDB_SCHED_GETCPU_PRESENT)
|
||||||
|
endif()
|
||||||
|
|
||||||
|
check_cxx_symbol_exists(getauxval auvx.h HAVE_AUXV_GETAUXVAL)
|
||||||
|
if(HAVE_AUXV_GETAUXVAL)
|
||||||
|
add_definitions(-DROCKSDB_AUXV_GETAUXVAL_PRESENT)
|
||||||
|
endif()
|
||||||
|
|
||||||
|
include_directories(${ROCKSDB_SOURCE_DIR})
|
||||||
|
include_directories(${ROCKSDB_SOURCE_DIR}/include)
|
||||||
|
if(WITH_FOLLY_DISTRIBUTED_MUTEX)
|
||||||
|
include_directories(${ROCKSDB_SOURCE_DIR}/third-party/folly)
|
||||||
|
endif()
|
||||||
|
find_package(Threads REQUIRED)
|
||||||
|
|
||||||
|
# Main library source code
|
||||||
|
|
||||||
|
set(SOURCES
    ${ROCKSDB_SOURCE_DIR}/cache/cache.cc
    ${ROCKSDB_SOURCE_DIR}/cache/clock_cache.cc
    ${ROCKSDB_SOURCE_DIR}/cache/lru_cache.cc
    ${ROCKSDB_SOURCE_DIR}/cache/sharded_cache.cc
    ${ROCKSDB_SOURCE_DIR}/db/arena_wrapped_db_iter.cc
    ${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_addition.cc
    ${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_builder.cc
    ${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_garbage.cc
    ${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_meta.cc
    ${ROCKSDB_SOURCE_DIR}/db/blob/blob_log_format.cc
    ${ROCKSDB_SOURCE_DIR}/db/blob/blob_log_reader.cc
    ${ROCKSDB_SOURCE_DIR}/db/blob/blob_log_writer.cc
    ${ROCKSDB_SOURCE_DIR}/db/builder.cc
    ${ROCKSDB_SOURCE_DIR}/db/c.cc
    ${ROCKSDB_SOURCE_DIR}/db/column_family.cc
    ${ROCKSDB_SOURCE_DIR}/db/compacted_db_impl.cc
    ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction.cc
    ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_iterator.cc
    ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_picker.cc
    ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_job.cc
    ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_picker_fifo.cc
    ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_picker_level.cc
    ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_picker_universal.cc
    ${ROCKSDB_SOURCE_DIR}/db/compaction/sst_partitioner.cc
    ${ROCKSDB_SOURCE_DIR}/db/convenience.cc
    ${ROCKSDB_SOURCE_DIR}/db/db_filesnapshot.cc
    ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl.cc
    ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_write.cc
    ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_compaction_flush.cc
    ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_files.cc
    ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_open.cc
    ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_debug.cc
    ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_experimental.cc
    ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_readonly.cc
    ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_secondary.cc
    ${ROCKSDB_SOURCE_DIR}/db/db_info_dumper.cc
    ${ROCKSDB_SOURCE_DIR}/db/db_iter.cc
    ${ROCKSDB_SOURCE_DIR}/db/dbformat.cc
    ${ROCKSDB_SOURCE_DIR}/db/error_handler.cc
    ${ROCKSDB_SOURCE_DIR}/db/event_helpers.cc
    ${ROCKSDB_SOURCE_DIR}/db/experimental.cc
    ${ROCKSDB_SOURCE_DIR}/db/external_sst_file_ingestion_job.cc
    ${ROCKSDB_SOURCE_DIR}/db/file_indexer.cc
    ${ROCKSDB_SOURCE_DIR}/db/flush_job.cc
    ${ROCKSDB_SOURCE_DIR}/db/flush_scheduler.cc
    ${ROCKSDB_SOURCE_DIR}/db/forward_iterator.cc
    ${ROCKSDB_SOURCE_DIR}/db/import_column_family_job.cc
    ${ROCKSDB_SOURCE_DIR}/db/internal_stats.cc
    ${ROCKSDB_SOURCE_DIR}/db/logs_with_prep_tracker.cc
    ${ROCKSDB_SOURCE_DIR}/db/log_reader.cc
    ${ROCKSDB_SOURCE_DIR}/db/log_writer.cc
    ${ROCKSDB_SOURCE_DIR}/db/malloc_stats.cc
    ${ROCKSDB_SOURCE_DIR}/db/memtable.cc
    ${ROCKSDB_SOURCE_DIR}/db/memtable_list.cc
    ${ROCKSDB_SOURCE_DIR}/db/merge_helper.cc
    ${ROCKSDB_SOURCE_DIR}/db/merge_operator.cc
    ${ROCKSDB_SOURCE_DIR}/db/range_del_aggregator.cc
    ${ROCKSDB_SOURCE_DIR}/db/range_tombstone_fragmenter.cc
    ${ROCKSDB_SOURCE_DIR}/db/repair.cc
    ${ROCKSDB_SOURCE_DIR}/db/snapshot_impl.cc
    ${ROCKSDB_SOURCE_DIR}/db/table_cache.cc
    ${ROCKSDB_SOURCE_DIR}/db/table_properties_collector.cc
    ${ROCKSDB_SOURCE_DIR}/db/transaction_log_impl.cc
    ${ROCKSDB_SOURCE_DIR}/db/trim_history_scheduler.cc
    ${ROCKSDB_SOURCE_DIR}/db/version_builder.cc
    ${ROCKSDB_SOURCE_DIR}/db/version_edit.cc
    ${ROCKSDB_SOURCE_DIR}/db/version_edit_handler.cc
    ${ROCKSDB_SOURCE_DIR}/db/version_set.cc
    ${ROCKSDB_SOURCE_DIR}/db/wal_edit.cc
    ${ROCKSDB_SOURCE_DIR}/db/wal_manager.cc
    ${ROCKSDB_SOURCE_DIR}/db/write_batch.cc
    ${ROCKSDB_SOURCE_DIR}/db/write_batch_base.cc
    ${ROCKSDB_SOURCE_DIR}/db/write_controller.cc
    ${ROCKSDB_SOURCE_DIR}/db/write_thread.cc
    ${ROCKSDB_SOURCE_DIR}/env/env.cc
    ${ROCKSDB_SOURCE_DIR}/env/env_chroot.cc
    ${ROCKSDB_SOURCE_DIR}/env/env_encryption.cc
    ${ROCKSDB_SOURCE_DIR}/env/env_hdfs.cc
    ${ROCKSDB_SOURCE_DIR}/env/file_system.cc
    ${ROCKSDB_SOURCE_DIR}/env/file_system_tracer.cc
    ${ROCKSDB_SOURCE_DIR}/env/mock_env.cc
    ${ROCKSDB_SOURCE_DIR}/file/delete_scheduler.cc
    ${ROCKSDB_SOURCE_DIR}/file/file_prefetch_buffer.cc
    ${ROCKSDB_SOURCE_DIR}/file/file_util.cc
    ${ROCKSDB_SOURCE_DIR}/file/filename.cc
    ${ROCKSDB_SOURCE_DIR}/file/random_access_file_reader.cc
    ${ROCKSDB_SOURCE_DIR}/file/read_write_util.cc
    ${ROCKSDB_SOURCE_DIR}/file/readahead_raf.cc
    ${ROCKSDB_SOURCE_DIR}/file/sequence_file_reader.cc
    ${ROCKSDB_SOURCE_DIR}/file/sst_file_manager_impl.cc
    ${ROCKSDB_SOURCE_DIR}/file/writable_file_writer.cc
    ${ROCKSDB_SOURCE_DIR}/logging/auto_roll_logger.cc
    ${ROCKSDB_SOURCE_DIR}/logging/event_logger.cc
    ${ROCKSDB_SOURCE_DIR}/logging/log_buffer.cc
    ${ROCKSDB_SOURCE_DIR}/memory/arena.cc
    ${ROCKSDB_SOURCE_DIR}/memory/concurrent_arena.cc
    ${ROCKSDB_SOURCE_DIR}/memory/jemalloc_nodump_allocator.cc
    ${ROCKSDB_SOURCE_DIR}/memory/memkind_kmem_allocator.cc
    ${ROCKSDB_SOURCE_DIR}/memtable/alloc_tracker.cc
    ${ROCKSDB_SOURCE_DIR}/memtable/hash_linklist_rep.cc
    ${ROCKSDB_SOURCE_DIR}/memtable/hash_skiplist_rep.cc
    ${ROCKSDB_SOURCE_DIR}/memtable/skiplistrep.cc
    ${ROCKSDB_SOURCE_DIR}/memtable/vectorrep.cc
    ${ROCKSDB_SOURCE_DIR}/memtable/write_buffer_manager.cc
    ${ROCKSDB_SOURCE_DIR}/monitoring/histogram.cc
    ${ROCKSDB_SOURCE_DIR}/monitoring/histogram_windowing.cc
    ${ROCKSDB_SOURCE_DIR}/monitoring/in_memory_stats_history.cc
    ${ROCKSDB_SOURCE_DIR}/monitoring/instrumented_mutex.cc
    ${ROCKSDB_SOURCE_DIR}/monitoring/iostats_context.cc
    ${ROCKSDB_SOURCE_DIR}/monitoring/perf_context.cc
    ${ROCKSDB_SOURCE_DIR}/monitoring/perf_level.cc
    ${ROCKSDB_SOURCE_DIR}/monitoring/persistent_stats_history.cc
    ${ROCKSDB_SOURCE_DIR}/monitoring/statistics.cc
    ${ROCKSDB_SOURCE_DIR}/monitoring/stats_dump_scheduler.cc
    ${ROCKSDB_SOURCE_DIR}/monitoring/thread_status_impl.cc
    ${ROCKSDB_SOURCE_DIR}/monitoring/thread_status_updater.cc
    ${ROCKSDB_SOURCE_DIR}/monitoring/thread_status_util.cc
    ${ROCKSDB_SOURCE_DIR}/monitoring/thread_status_util_debug.cc
    ${ROCKSDB_SOURCE_DIR}/options/cf_options.cc
    ${ROCKSDB_SOURCE_DIR}/options/db_options.cc
    ${ROCKSDB_SOURCE_DIR}/options/options.cc
    ${ROCKSDB_SOURCE_DIR}/options/options_helper.cc
    ${ROCKSDB_SOURCE_DIR}/options/options_parser.cc
    ${ROCKSDB_SOURCE_DIR}/port/stack_trace.cc
    ${ROCKSDB_SOURCE_DIR}/table/adaptive/adaptive_table_factory.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/binary_search_index_reader.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/block.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_filter_block.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_table_builder.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_table_factory.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_table_iterator.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_table_reader.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/block_builder.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/block_prefetcher.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/block_prefix_index.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/data_block_hash_index.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/data_block_footer.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/filter_block_reader_common.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/filter_policy.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/flush_block_policy.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/full_filter_block.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/hash_index_reader.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/index_builder.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/index_reader_common.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/parsed_full_filter_block.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/partitioned_filter_block.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/partitioned_index_iterator.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/partitioned_index_reader.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/reader_common.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/uncompression_dict_reader.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_fetcher.cc
    ${ROCKSDB_SOURCE_DIR}/table/cuckoo/cuckoo_table_builder.cc
    ${ROCKSDB_SOURCE_DIR}/table/cuckoo/cuckoo_table_factory.cc
    ${ROCKSDB_SOURCE_DIR}/table/cuckoo/cuckoo_table_reader.cc
    ${ROCKSDB_SOURCE_DIR}/table/format.cc
    ${ROCKSDB_SOURCE_DIR}/table/get_context.cc
    ${ROCKSDB_SOURCE_DIR}/table/iterator.cc
    ${ROCKSDB_SOURCE_DIR}/table/merging_iterator.cc
    ${ROCKSDB_SOURCE_DIR}/table/meta_blocks.cc
    ${ROCKSDB_SOURCE_DIR}/table/persistent_cache_helper.cc
    ${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_bloom.cc
    ${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_builder.cc
    ${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_factory.cc
    ${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_index.cc
    ${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_key_coding.cc
    ${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_reader.cc
    ${ROCKSDB_SOURCE_DIR}/table/sst_file_dumper.cc
    ${ROCKSDB_SOURCE_DIR}/table/sst_file_reader.cc
    ${ROCKSDB_SOURCE_DIR}/table/sst_file_writer.cc
    ${ROCKSDB_SOURCE_DIR}/table/table_properties.cc
    ${ROCKSDB_SOURCE_DIR}/table/two_level_iterator.cc
    ${ROCKSDB_SOURCE_DIR}/test_util/sync_point.cc
    ${ROCKSDB_SOURCE_DIR}/test_util/sync_point_impl.cc
    ${ROCKSDB_SOURCE_DIR}/test_util/testutil.cc
    ${ROCKSDB_SOURCE_DIR}/test_util/transaction_test_util.cc
    ${ROCKSDB_SOURCE_DIR}/tools/block_cache_analyzer/block_cache_trace_analyzer.cc
    ${ROCKSDB_SOURCE_DIR}/tools/dump/db_dump_tool.cc
    ${ROCKSDB_SOURCE_DIR}/tools/ldb_cmd.cc
    ${ROCKSDB_SOURCE_DIR}/tools/ldb_tool.cc
    ${ROCKSDB_SOURCE_DIR}/tools/sst_dump_tool.cc
    ${ROCKSDB_SOURCE_DIR}/tools/trace_analyzer_tool.cc
    ${ROCKSDB_SOURCE_DIR}/trace_replay/trace_replay.cc
    ${ROCKSDB_SOURCE_DIR}/trace_replay/block_cache_tracer.cc
    ${ROCKSDB_SOURCE_DIR}/trace_replay/io_tracer.cc
    ${ROCKSDB_SOURCE_DIR}/util/coding.cc
    ${ROCKSDB_SOURCE_DIR}/util/compaction_job_stats_impl.cc
    ${ROCKSDB_SOURCE_DIR}/util/comparator.cc
    ${ROCKSDB_SOURCE_DIR}/util/compression_context_cache.cc
    ${ROCKSDB_SOURCE_DIR}/util/concurrent_task_limiter_impl.cc
    ${ROCKSDB_SOURCE_DIR}/util/crc32c.cc
    ${ROCKSDB_SOURCE_DIR}/util/dynamic_bloom.cc
    ${ROCKSDB_SOURCE_DIR}/util/hash.cc
    ${ROCKSDB_SOURCE_DIR}/util/murmurhash.cc
    ${ROCKSDB_SOURCE_DIR}/util/random.cc
    ${ROCKSDB_SOURCE_DIR}/util/rate_limiter.cc
    ${ROCKSDB_SOURCE_DIR}/util/slice.cc
    ${ROCKSDB_SOURCE_DIR}/util/file_checksum_helper.cc
    ${ROCKSDB_SOURCE_DIR}/util/status.cc
    ${ROCKSDB_SOURCE_DIR}/util/string_util.cc
    ${ROCKSDB_SOURCE_DIR}/util/thread_local.cc
    ${ROCKSDB_SOURCE_DIR}/util/threadpool_imp.cc
    ${ROCKSDB_SOURCE_DIR}/util/xxhash.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/backupable/backupable_db.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_compaction_filter.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_db.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_db_impl.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_db_impl_filesnapshot.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_dump_tool.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_file.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/cassandra/cassandra_compaction_filter.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/cassandra/format.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/cassandra/merge_operator.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/checkpoint/checkpoint_impl.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/compaction_filters/remove_emptyvalue_compactionfilter.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/debug.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/env_mirror.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/env_timed.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/fault_injection_env.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/fault_injection_fs.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/leveldb_options/leveldb_options.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/memory/memory_util.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/bytesxor.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/max.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/put.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/sortlist.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/string_append/stringappend.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/string_append/stringappend2.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/uint64add.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/object_registry.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/option_change_migration/option_change_migration.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/options/options_util.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/block_cache_tier.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/block_cache_tier_file.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/block_cache_tier_metadata.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/persistent_cache_tier.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/volatile_tier_impl.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/simulator_cache/cache_simulator.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/simulator_cache/sim_cache.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/table_properties_collectors/compact_on_deletion_collector.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/trace/file_trace_reader_writer.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/lock_tracker.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/point_lock_tracker.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/optimistic_transaction_db_impl.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/optimistic_transaction.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/pessimistic_transaction.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/pessimistic_transaction_db.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/snapshot_checker.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/transaction_base.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/transaction_db_mutex_impl.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/transaction_lock_mgr.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/transaction_util.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/write_prepared_txn.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/write_prepared_txn_db.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/write_unprepared_txn.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/write_unprepared_txn_db.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/ttl/db_ttl_impl.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/write_batch_with_index/write_batch_with_index.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/write_batch_with_index/write_batch_with_index_internal.cc
    $<TARGET_OBJECTS:rocksdb_build_version>)
if(HAVE_SSE42 AND NOT MSVC)
  set_source_files_properties(
    ${ROCKSDB_SOURCE_DIR}/util/crc32c.cc
    PROPERTIES COMPILE_FLAGS "-msse4.2 -mpclmul")
endif()

if(CMAKE_SYSTEM_PROCESSOR MATCHES "^(powerpc|ppc)64")
  list(APPEND SOURCES
    ${ROCKSDB_SOURCE_DIR}/util/crc32c_ppc.c
    ${ROCKSDB_SOURCE_DIR}/util/crc32c_ppc_asm.S)
endif(CMAKE_SYSTEM_PROCESSOR MATCHES "^(powerpc|ppc)64")

if(HAS_ARMV8_CRC)
  list(APPEND SOURCES
    ${ROCKSDB_SOURCE_DIR}/util/crc32c_arm64.cc)
endif(HAS_ARMV8_CRC)

if(WIN32)
  list(APPEND SOURCES
    ${ROCKSDB_SOURCE_DIR}/port/win/io_win.cc
    ${ROCKSDB_SOURCE_DIR}/port/win/env_win.cc
    ${ROCKSDB_SOURCE_DIR}/port/win/env_default.cc
    ${ROCKSDB_SOURCE_DIR}/port/win/port_win.cc
    ${ROCKSDB_SOURCE_DIR}/port/win/win_logger.cc)
  if(NOT MINGW)
    # Mingw only supports std::thread when using
    # posix threads.
    list(APPEND SOURCES
      ${ROCKSDB_SOURCE_DIR}/port/win/win_thread.cc)
  endif()
  if(WITH_XPRESS)
    list(APPEND SOURCES
      ${ROCKSDB_SOURCE_DIR}/port/win/xpress_win.cc)
  endif()

  if(WITH_JEMALLOC)
    list(APPEND SOURCES
      ${ROCKSDB_SOURCE_DIR}/port/win/win_jemalloc.cc)
  endif()

else()
  list(APPEND SOURCES
    ${ROCKSDB_SOURCE_DIR}/port/port_posix.cc
    ${ROCKSDB_SOURCE_DIR}/env/env_posix.cc
    ${ROCKSDB_SOURCE_DIR}/env/fs_posix.cc
    ${ROCKSDB_SOURCE_DIR}/env/io_posix.cc)
endif()

if(WITH_FOLLY_DISTRIBUTED_MUTEX)
  list(APPEND SOURCES
    ${ROCKSDB_SOURCE_DIR}/third-party/folly/folly/detail/Futex.cpp
    ${ROCKSDB_SOURCE_DIR}/third-party/folly/folly/synchronization/AtomicNotification.cpp
    ${ROCKSDB_SOURCE_DIR}/third-party/folly/folly/synchronization/DistributedMutex.cpp
    ${ROCKSDB_SOURCE_DIR}/third-party/folly/folly/synchronization/ParkingLot.cpp
    ${ROCKSDB_SOURCE_DIR}/third-party/folly/folly/synchronization/WaitOptions.cpp)
endif()

set(ROCKSDB_STATIC_LIB rocksdb)

if(WIN32)
  set(SYSTEM_LIBS ${SYSTEM_LIBS} shlwapi.lib rpcrt4.lib)
else()
  set(SYSTEM_LIBS ${CMAKE_THREAD_LIBS_INIT})
endif()

add_library(${ROCKSDB_STATIC_LIB} STATIC ${SOURCES})
target_link_libraries(${ROCKSDB_STATIC_LIB} PRIVATE
  ${THIRDPARTY_LIBS} ${SYSTEM_LIBS})
1 contrib/xz vendored Submodule
@ -0,0 +1 @@
+Subproject commit 869b9d1b4edd6df07f819d360d306251f8147353
4 debian/changelog vendored
@ -1,5 +1,5 @@
-clickhouse (20.11.1.1) unstable; urgency=low
+clickhouse (20.12.1.1) unstable; urgency=low

 * Modified source code

--- clickhouse-release <clickhouse-release@yandex-team.ru> Sat, 10 Oct 2020 18:39:55 +0300
+-- clickhouse-release <clickhouse-release@yandex-team.ru> Thu, 05 Nov 2020 21:52:47 +0300
@ -1,7 +1,7 @@
 FROM ubuntu:18.04

 ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
-ARG version=20.11.1.*
+ARG version=20.12.1.*

 RUN apt-get update \
     && apt-get install --yes --no-install-recommends \
@ -63,7 +63,7 @@ then
     mkdir -p /output/config
     cp ../programs/server/config.xml /output/config
     cp ../programs/server/users.xml /output/config
-    cp -r ../programs/server/config.d /output/config
+    cp -r --dereference ../programs/server/config.d /output/config
     tar -czvf "$COMBINED_OUTPUT.tgz" /output
     rm -r /output/*
     mv "$COMBINED_OUTPUT.tgz" /output
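The `--dereference` flag in the hunk above copies the files that symlinks point to, rather than the links themselves, presumably so the archived configs are real files instead of repo-relative links. A small Python sketch of the same distinction (hypothetical temp tree; `shutil.copytree`'s `symlinks` flag stands in for the `cp` option):

```python
import os
import shutil
import tempfile

# Build a tiny tree: one real config file plus a symlink to it
# (mirrors config.d directories, where configs are often symlinked).
src = os.path.join(tempfile.mkdtemp(), "config.d")
os.makedirs(src)
with open(os.path.join(src, "real.xml"), "w") as f:
    f.write("<yandex/>")
os.symlink("real.xml", os.path.join(src, "link.xml"))

# symlinks=True mimics plain `cp -r`: the link is copied as a link.
# symlinks=False mimics `cp -r --dereference`: the target is copied instead.
plain = shutil.copytree(src, src + "_plain", symlinks=True)
deref = shutil.copytree(src, src + "_deref", symlinks=False)

print(os.path.islink(os.path.join(plain, "link.xml")))   # True
print(os.path.islink(os.path.join(deref, "link.xml")))   # False
```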
@ -64,6 +64,8 @@ RUN apt-get update \
     libbz2-dev \
     libavro-dev \
     libfarmhash-dev \
+    librocksdb-dev \
+    libgflags-dev \
     libmysqlclient-dev \
     --yes --no-install-recommends

@ -1,7 +1,7 @@
 FROM ubuntu:20.04

 ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
-ARG version=20.11.1.*
+ARG version=20.12.1.*
 ARG gosu_ver=1.10

 RUN apt-get update \
@ -1,7 +1,7 @@
 FROM ubuntu:18.04

 ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
-ARG version=20.11.1.*
+ARG version=20.12.1.*

 RUN apt-get update && \
     apt-get install -y apt-transport-https dirmngr && \
@ -1,5 +1,5 @@
 # docker build -t yandex/clickhouse-fasttest .
-FROM ubuntu:19.10
+FROM ubuntu:20.04

 ENV DEBIAN_FRONTEND=noninteractive LLVM_VERSION=10

@ -127,7 +127,7 @@ function clone_submodules
 (
 cd "$FASTTEST_SOURCE"

-SUBMODULES_TO_UPDATE=(contrib/boost contrib/zlib-ng contrib/libxml2 contrib/poco contrib/libunwind contrib/ryu contrib/fmtlib contrib/base64 contrib/cctz contrib/libcpuid contrib/double-conversion contrib/libcxx contrib/libcxxabi contrib/libc-headers contrib/lz4 contrib/zstd contrib/fastops contrib/rapidjson contrib/re2 contrib/sparsehash-c11 contrib/croaring)
+SUBMODULES_TO_UPDATE=(contrib/boost contrib/zlib-ng contrib/libxml2 contrib/poco contrib/libunwind contrib/ryu contrib/fmtlib contrib/base64 contrib/cctz contrib/libcpuid contrib/double-conversion contrib/libcxx contrib/libcxxabi contrib/libc-headers contrib/lz4 contrib/zstd contrib/fastops contrib/rapidjson contrib/re2 contrib/sparsehash-c11 contrib/croaring contrib/miniselect contrib/xz)

 git submodule sync
 git submodule update --init --recursive "${SUBMODULES_TO_UPDATE[@]}"
@ -240,6 +240,10 @@ TESTS_TO_SKIP=(
 01354_order_by_tuple_collate_const
 01355_ilike
 01411_bayesian_ab_testing
+01532_collate_in_low_cardinality
+01533_collate_in_nullable
+01542_collate_in_array
+01543_collate_in_tuple
 _orc_
 arrow
 avro
@ -264,12 +268,16 @@ TESTS_TO_SKIP=(
 protobuf
 secure
 sha256
+xz

 # Not sure why these two fail even in sequential mode. Disabled for now
 # to make some progress.
 00646_url_engine
 00974_query_profiler

+# In fasttest, ENABLE_LIBRARIES=0, so rocksdb engine is not enabled by default
+01504_rocksdb
+
 # Look at DistributedFilesToInsert, so cannot run in parallel.
 01460_DistributedFilesToInsert

@ -277,6 +285,10 @@ TESTS_TO_SKIP=(

 # Require python libraries like scipy, pandas and numpy
 01322_ttest_scipy

+01545_system_errors
+# Checks system.errors
+01563_distributed_query_finish
 )

 time clickhouse-test -j 8 --order=random --no-long --testname --shard --zookeeper --skip "${TESTS_TO_SKIP[@]}" 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/test_log.txt"
@ -45,11 +45,11 @@ function configure
 {
 rm -rf db ||:
 mkdir db ||:
-cp -av "$repo_dir"/programs/server/config* db
+cp -av --dereference "$repo_dir"/programs/server/config* db
-cp -av "$repo_dir"/programs/server/user* db
+cp -av --dereference "$repo_dir"/programs/server/user* db
 # TODO figure out which ones are needed
-cp -av "$repo_dir"/tests/config/config.d/listen.xml db/config.d
+cp -av --dereference "$repo_dir"/tests/config/config.d/listen.xml db/config.d
-cp -av "$script_dir"/query-fuzzer-tweaks-users.xml db/users.d
+cp -av --dereference "$script_dir"/query-fuzzer-tweaks-users.xml db/users.d
 }

 function watchdog
@ -89,7 +89,7 @@ function fuzz
 > >(tail -10000 > fuzzer.log) \
 2>&1 \
 || fuzzer_exit_code=$?

 echo "Fuzzer exit code is $fuzzer_exit_code"

 ./clickhouse-client --query "select elapsed, query from system.processes" ||:
@ -1074,6 +1074,53 @@ wait
 unset IFS
 }

+function upload_results
+{
+    if ! [ -v CHPC_DATABASE_URL ]
+    then
+        echo Database for test results is not specified, will not upload them.
+        return 0
+    fi
+
+    # Surprisingly, clickhouse-client doesn't understand --host 127.0.0.1:9000
+    # so I have to extract host and port with clickhouse-local. I tried to use
+    # Poco URI parser to support this in the client, but it's broken and can't
+    # parse host:port.
+    set +x # Don't show password in the log
+    clickhouse-client \
+        $(clickhouse-local --query "with '${CHPC_DATABASE_URL}' as url select '--host ' || domain(url) || ' --port ' || toString(port(url)) format TSV") \
+        --secure \
+        --user "${CHPC_DATABASE_USER}" \
+        --password "${CHPC_DATABASE_PASSWORD}" \
+        --config "right/config/client_config.xml" \
+        --database perftest \
+        --date_time_input_format=best_effort \
+        --query "
+            insert into query_metrics_v2
+            select
+                toDate(event_time) event_date,
+                toDateTime('$(cd right/ch && git show -s --format=%ci "$SHA_TO_TEST" | cut -d' ' -f-2)') event_time,
+                $PR_TO_TEST pr_number,
+                '$REF_SHA' old_sha,
+                '$SHA_TO_TEST' new_sha,
+                test,
+                query_index,
+                query_display_name,
+                metric_name,
+                old_value,
+                new_value,
+                diff,
+                stat_threshold
+            from input('metric_name text, old_value float, new_value float, diff float,
+                ratio_display_text text, stat_threshold float,
+                test text, query_index int, query_display_name text')
+            settings date_time_input_format='best_effort'
+            format TSV
+            settings date_time_input_format='best_effort'
+" < report/all-query-metrics.tsv # Don't leave whitespace after INSERT: https://github.com/ClickHouse/ClickHouse/issues/16652
+    set -x
+}

 # Check that local and client are in PATH
 clickhouse-local --version > /dev/null
 clickhouse-client --version > /dev/null
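The comment in `upload_results` above notes that `clickhouse-client` cannot parse a `host:port` value in `--host`, which is why the script shells out to `clickhouse-local`. For comparison, the same host/port split in Python is a one-liner with `urllib.parse` (the URL below is a made-up stand-in for `CHPC_DATABASE_URL`):

```python
from urllib.parse import urlparse

# Hypothetical value of CHPC_DATABASE_URL.
url = "https://play.example.com:9440"

parsed = urlparse(url)
print("--host", parsed.hostname, "--port", parsed.port)
# --host play.example.com --port 9440
```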
@ -1145,6 +1192,9 @@ case "$stage" in
 time "$script_dir/report.py" --report=all-queries > all-queries.html 2> >(tee -a report/errors.log 1>&2) ||:
 time "$script_dir/report.py" > report.html
 ;&
+"upload_results")
+    time upload_results ||:
+    ;&
 esac

 # Print some final debug info to help debug Weirdness, of which there is plenty.
17 docker/test/performance-comparison/config/client_config.xml Normal file
@ -0,0 +1,17 @@
+<!--
+    This config is used to upload test results to a public ClickHouse instance.
+    It has bad certificates so we ignore them.
+-->
+<config>
+    <openSSL>
+        <client>
+            <loadDefaultCAFile>true</loadDefaultCAFile>
+            <cacheSessions>true</cacheSessions>
+            <disableProtocols>sslv2,sslv3</disableProtocols>
+            <preferServerCiphers>true</preferServerCiphers>
+            <invalidCertificateHandler>
+                <name>AcceptCertificateHandler</name> <!-- For tests only-->
+            </invalidCertificateHandler>
+        </client>
+    </openSSL>
+</config>
@ -16,7 +16,7 @@
 <max_execution_time>300</max_execution_time>

 <!-- One NUMA node w/o hyperthreading -->
-<max_threads>20</max_threads>
+<max_threads>12</max_threads>
 </default>
 </profiles>
 </yandex>
@ -121,6 +121,9 @@ set +e
 PATH="$(readlink -f right/)":"$PATH"
 export PATH

+export REF_PR
+export REF_SHA

 # Start the main comparison script.
 { \
 time ../download.sh "$REF_PR" "$REF_SHA" "$PR_TO_TEST" "$SHA_TO_TEST" && \
@ -29,7 +29,7 @@ def dowload_with_progress(url, path):
     logging.info("Downloading from %s to temp path %s", url, path)
     for i in range(RETRIES_COUNT):
         try:
-            with open(path, 'w') as f:
+            with open(path, 'wb') as f:
                 response = requests.get(url, stream=True)
                 response.raise_for_status()
                 total_length = response.headers.get('content-length')
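The `'w'` → `'wb'` fix above matters because a streamed `requests` response body is `bytes`: writing it to a text-mode file raises `TypeError` on Python 3, and newline translation could corrupt binary artifacts even where it didn't. A minimal sketch of the corrected pattern, with a hypothetical `chunks` iterable standing in for `response.iter_content()`:

```python
import io

def save_chunks(chunks, f):
    # Write an iterable of byte chunks to a binary-mode file object,
    # mirroring the corrected open(path, 'wb') code path above.
    written = 0
    for chunk in chunks:
        if chunk:  # skip empty keep-alive chunks
            f.write(chunk)
            written += len(chunk)
    return written

# A binary payload that is not valid UTF-8 text (a PNG signature):
chunks = [b"\x89PNG\r\n", b"\x1a\n"]
buf = io.BytesIO()
assert save_chunks(chunks, buf) == 8
assert buf.getvalue() == b"\x89PNG\r\n\x1a\n"
```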
@ -43,6 +43,8 @@ RUN apt-get --allow-unauthenticated update -y \
     libreadline-dev \
     libsasl2-dev \
     libzstd-dev \
+    librocksdb-dev \
+    libgflags-dev \
     lsof \
     moreutils \
     ncdu \
@ -1,7 +1,6 @@
 Allow to run simple ClickHouse stress test in Docker from debian packages.
-Actually it runs single copy of clickhouse-performance-test and multiple copies
-of clickhouse-test (functional tests). This allows to find problems like
-segmentation fault which cause shutdown of server.
+Actually it runs multiple copies of clickhouse-test (functional tests).
+This allows to find problems like segmentation fault which cause shutdown of server.

 Usage:
 ```
@ -17,13 +17,6 @@ def get_skip_list_cmd(path):
     return ''


-def run_perf_test(cmd, xmls_path, output_folder):
-    output_path = os.path.join(output_folder, "perf_stress_run.txt")
-    f = open(output_path, 'w')
-    p = Popen("{} --skip-tags=long --recursive --input-files {}".format(cmd, xmls_path), shell=True, stdout=f, stderr=f)
-    return p
-
-
 def get_options(i):
     options = ""
     if 0 < i:
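The removed `run_perf_test` helper above relied on `Popen` with `stdout`/`stderr` redirected into a log file. A minimal sketch of that pattern, using a plain `echo` in place of the real performance-test command (the command and file names here are illustrative assumptions):

```python
import os
import subprocess
import tempfile

def run_to_file(cmd, output_path):
    # Launch a shell command with stdout/stderr redirected into a log
    # file and return the Popen handle, like the removed helper did.
    f = open(output_path, 'w')
    return subprocess.Popen(cmd, shell=True, stdout=f, stderr=f)

log = os.path.join(tempfile.mkdtemp(), 'perf_stress_run.txt')
proc = run_to_file('echo hello', log)
proc.wait()  # the original kept the handle and waited for it later
assert open(log).read().strip() == 'hello'
```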
@ -68,8 +61,6 @@ if __name__ == "__main__":
     parser.add_argument("--test-cmd", default='/usr/bin/clickhouse-test')
     parser.add_argument("--skip-func-tests", default='')
     parser.add_argument("--client-cmd", default='clickhouse-client')
-    parser.add_argument("--perf-test-cmd", default='clickhouse-performance-test')
-    parser.add_argument("--perf-test-xml-path", default='/usr/share/clickhouse-test/performance/')
     parser.add_argument("--server-log-folder", default='/var/log/clickhouse-server')
     parser.add_argument("--output-folder")
     parser.add_argument("--global-time-limit", type=int, default=3600)
@ -77,8 +68,6 @@ if __name__ == "__main__":

     args = parser.parse_args()
     func_pipes = []
-    perf_process = None
-    perf_process = run_perf_test(args.perf_test_cmd, args.perf_test_xml_path, args.output_folder)
     func_pipes = run_func_test(args.test_cmd, args.output_folder, args.num_parallel, args.skip_func_tests, args.global_time_limit)

     logging.info("Will wait functests to finish")
@ -35,7 +35,7 @@ RUN apt-get update \
 ENV TZ=Europe/Moscow
 RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

-RUN pip3 install urllib3 testflows==1.6.59 docker-compose docker dicttoxml kazoo tzlocal
+RUN pip3 install urllib3 testflows==1.6.62 docker-compose docker dicttoxml kazoo tzlocal

 ENV DOCKER_CHANNEL stable
 ENV DOCKER_VERSION 17.09.1-ce
141 docs/en/development/adding_test_queries.md Normal file
@ -0,0 +1,141 @@
# How to add test queries to ClickHouse CI

ClickHouse has hundreds (or even thousands) of features. Every commit gets checked by a complex set of tests containing many thousands of test cases.

The core functionality is very well tested, but some corner cases and combinations of features can still be uncovered by ClickHouse CI.

Most of the bugs/regressions we see happen in that 'grey area' where test coverage is poor.

And we are very interested in covering most of the scenarios and feature combinations used in real life with tests.

## Why add tests

Why/when you should add a test case to the ClickHouse code:
1) you use some complicated scenarios / feature combinations / you have a corner case which is probably not widely used
2) you see that certain behavior changed between versions without a note in the changelog
3) you just want to help improve ClickHouse quality and ensure the features you use will not be broken in future releases
4) once the test is added/accepted, you can be sure the corner case you check will never be accidentally broken
5) you will be part of a great open-source community
6) your name will be visible in the `system.contributors` table!
7) you will make the world a bit better :)

### Steps to do

#### Prerequisite

I assume you are running some Linux machine (you can use docker / virtual machines on other OSes), have any modern browser / internet connection, and some basic Linux & SQL skills.

No highly specialized knowledge is needed (so you don't need to know C++ or anything about how ClickHouse CI works).

#### Preparation

1) [create a GitHub account](https://github.com/join) (if you don't have one yet)
2) [set up git](https://docs.github.com/en/free-pro-team@latest/github/getting-started-with-github/set-up-git)
```bash
# for Ubuntu
sudo apt-get update
sudo apt-get install git

git config --global user.name "John Doe" # fill with your name
git config --global user.email "email@example.com" # fill with your email
```
3) [fork the ClickHouse project](https://docs.github.com/en/free-pro-team@latest/github/getting-started-with-github/fork-a-repo) - just open [https://github.com/ClickHouse/ClickHouse](https://github.com/ClickHouse/ClickHouse) and press the fork button in the top right corner:
![fork repo](https://github-images.s3.amazonaws.com/help/bootcamp/Bootcamp-Fork.png)

4) clone your fork to some folder on your PC, for example, `~/workspace/ClickHouse`
```
mkdir ~/workspace && cd ~/workspace
git clone https://github.com/< your GitHub username>/ClickHouse
cd ClickHouse
git remote add upstream https://github.com/ClickHouse/ClickHouse
```

#### New branch for the test

1) create a new branch from the latest clickhouse master
```
cd ~/workspace/ClickHouse
git fetch upstream
git checkout -b name_for_a_branch_with_my_test upstream/master
```

#### Install & run clickhouse

1) install `clickhouse-server` (follow the [official docs](https://clickhouse.tech/docs/en/getting-started/install/))
2) install the test configurations (they use a Zookeeper mock implementation and adjust some settings)
```
cd ~/workspace/ClickHouse/tests/config
sudo ./install.sh
```
3) run clickhouse-server
```
sudo systemctl restart clickhouse-server
```

#### Creating the test file

1) find the number for your test - find the file with the biggest number in `tests/queries/0_stateless/`

```sh
$ cd ~/workspace/ClickHouse
$ ls tests/queries/0_stateless/[0-9]*.reference | tail -n 1
tests/queries/0_stateless/01520_client_print_query_id.reference
```
Currently, the last number for the test is `01520`, so my test will have the number `01521`

2) create an SQL file with the next number and the name of the feature you test

```sh
touch tests/queries/0_stateless/01521_dummy_test.sql
```

3) edit the SQL file with your favorite editor (see the hints on creating tests below)
```sh
vim tests/queries/0_stateless/01521_dummy_test.sql
```

4) run the test, and put its result into the reference file:
```
clickhouse-client -nmT < tests/queries/0_stateless/01521_dummy_test.sql | tee tests/queries/0_stateless/01521_dummy_test.reference
```

5) ensure everything is correct; if the test output is incorrect (for example, due to some bug), adjust the reference file using a text editor.

#### How to create a good test

- the test should be
    - minimal - create only tables related to the tested functionality, remove unrelated columns and parts of the query
    - fast - should not take longer than a few seconds (better: subseconds)
    - correct - fails when the feature is not working
    - deterministic
    - isolated / stateless
        - don't rely on the environment
        - don't rely on timing when possible
- try to cover corner cases (zeros / Nulls / empty sets / throwing exceptions)
- to test that a query returns an error, you can put a special comment after the query: `-- { serverError 60 }` or `-- { clientError 20 }`
- don't switch databases (unless necessary)
- you can create several table replicas on the same node if needed
- you can use one of the test cluster definitions when needed (see system.clusters)
- use `number` / `numbers_mt` / `zeros` / `zeros_mt` and similar for queries / to initialize data when applicable
- clean up the created objects after the test and before the test (DROP IF EXISTS) - in case of some dirty state
- prefer the sync mode of operations (mutations, merges, etc.)
- use other SQL files in the `0_stateless` folder as an example
- ensure the feature / feature combination you want to test is not yet covered by existing tests

#### Commit / push / create PR.

1) commit & push your changes
```sh
cd ~/workspace/ClickHouse
git add tests/queries/0_stateless/01521_dummy_test.sql
git add tests/queries/0_stateless/01521_dummy_test.reference
git commit # use some nice commit message when possible
git push origin HEAD
```
2) use the link shown during the push to create a PR into the main repo
3) adjust the PR title and contents; in `Changelog category (leave one)` keep
`Build/Testing/Packaging Improvement`, and fill the rest of the fields if you want.
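The "find the number for your test" step above can be sketched as a small script; the directory layout and file names below are hypothetical:

```python
import os
import re
import tempfile

def next_test_number(test_dir):
    # Collect the five-digit NNNNN_ prefixes of existing test files and
    # return the next free number, zero-padded like 0_stateless names.
    numbers = []
    for name in os.listdir(test_dir):
        m = re.match(r'(\d{5})_', name)
        if m:
            numbers.append(int(m.group(1)))
    return '%05d' % (max(numbers, default=0) + 1)

d = tempfile.mkdtemp()
for name in ('01519_foo.sql', '01520_client_print_query_id.reference'):
    open(os.path.join(d, name), 'w').close()
assert next_test_number(d) == '01521'
```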
@ -23,7 +23,7 @@ $ sudo apt-get install git cmake python ninja-build

 Or cmake3 instead of cmake on older systems.

-### Install GCC 9 {#install-gcc-9}
+### Install GCC 10 {#install-gcc-10}

 There are several ways to do this.
@ -32,7 +32,7 @@ There are several ways to do this.
 On Ubuntu 19.10 or newer:

     $ sudo apt-get update
-    $ sudo apt-get install gcc-9 g++-9
+    $ sudo apt-get install gcc-10 g++-10

 #### Install from a PPA Package {#install-from-a-ppa-package}
@ -42,18 +42,18 @@ On older Ubuntu:
 $ sudo apt-get install software-properties-common
 $ sudo apt-add-repository ppa:ubuntu-toolchain-r/test
 $ sudo apt-get update
-$ sudo apt-get install gcc-9 g++-9
+$ sudo apt-get install gcc-10 g++-10
 ```

 #### Install from Sources {#install-from-sources}

 See [utils/ci/build-gcc-from-sources.sh](https://github.com/ClickHouse/ClickHouse/blob/master/utils/ci/build-gcc-from-sources.sh)

-### Use GCC 9 for Builds {#use-gcc-9-for-builds}
+### Use GCC 10 for Builds {#use-gcc-10-for-builds}

 ``` bash
-$ export CC=gcc-9
-$ export CXX=g++-9
+$ export CC=gcc-10
+$ export CXX=g++-10
 ```

 ### Checkout ClickHouse Sources {#checkout-clickhouse-sources}
@ -88,7 +88,7 @@ The build requires the following components:
 - Git (is used only to checkout the sources, it’s not needed for the build)
 - CMake 3.10 or newer
 - Ninja (recommended) or Make
-- C++ compiler: gcc 9 or clang 8 or newer
+- C++ compiler: gcc 10 or clang 8 or newer
 - Linker: lld or gold (the classic GNU ld won’t work)
 - Python (is only used inside LLVM build and it is optional)
@ -131,13 +131,13 @@ ClickHouse uses several external libraries for building. All of them do not need

 ## C++ Compiler {#c-compiler}

-Compilers GCC starting from version 9 and Clang version 8 or above are supported for building ClickHouse.
+Compilers GCC starting from version 10 and Clang version 8 or above are supported for building ClickHouse.

 Official Yandex builds currently use GCC because it generates machine code of slightly better performance (yielding a difference of up to several percent according to our benchmarks). And Clang is more convenient for development usually. Though, our continuous integration (CI) platform runs checks for about a dozen of build combinations.

 To install GCC on Ubuntu run: `sudo apt install gcc g++`

-Check the version of gcc: `gcc --version`. If it is below 9, then follow the instruction here: https://clickhouse.tech/docs/en/development/build/#install-gcc-9.
+Check the version of gcc: `gcc --version`. If it is below 10, then follow the instruction here: https://clickhouse.tech/docs/en/development/build/#install-gcc-10.

 Mac OS X build is supported only for Clang. Just run `brew install llvm`

@ -152,11 +152,11 @@ Now that you are ready to build ClickHouse we recommend you to create a separate

 You can have several different directories (build_release, build_debug, etc.) for different types of build.

-While inside the `build` directory, configure your build by running CMake. Before the first run, you need to define environment variables that specify compiler (version 9 gcc compiler in this example).
+While inside the `build` directory, configure your build by running CMake. Before the first run, you need to define environment variables that specify compiler (version 10 gcc compiler in this example).

 Linux:

-    export CC=gcc-9 CXX=g++-9
+    export CC=gcc-10 CXX=g++-10
     cmake ..

 Mac OS X:
@ -74,9 +74,9 @@ It’s not necessarily to have unit tests if the code is already covered by func

 ## Performance Tests {#performance-tests}

-Performance tests allow to measure and compare performance of some isolated part of ClickHouse on synthetic queries. Tests are located at `tests/performance`. Each test is represented by `.xml` file with description of test case. Tests are run with `clickhouse performance-test` tool (that is embedded in `clickhouse` binary). See `--help` for invocation.
+Performance tests allow to measure and compare performance of some isolated part of ClickHouse on synthetic queries. Tests are located at `tests/performance`. Each test is represented by `.xml` file with description of test case. Tests are run with `docker/tests/performance-comparison` tool. See the readme file for invocation.

-Each test run one or multiple queries (possibly with combinations of parameters) in a loop with some conditions for stop (like “maximum execution speed is not changing in three seconds”) and measure some metrics about query performance (like “maximum execution speed”). Some tests can contain preconditions on preloaded test dataset.
+Each test run one or multiple queries (possibly with combinations of parameters) in a loop. Some tests can contain preconditions on preloaded test dataset.

 If you want to improve performance of ClickHouse in some scenario, and if improvements can be observed on simple queries, it is highly recommended to write a performance test. It always makes sense to use `perf top` or other perf tools during your tests.
@ -0,0 +1,45 @@
---
toc_priority: 6
toc_title: EmbeddedRocksDB
---

# EmbeddedRocksDB Engine {#EmbeddedRocksDB-engine}

This engine allows integrating ClickHouse with [rocksdb](http://rocksdb.org/).

`EmbeddedRocksDB` lets you:

## Creating a Table {#table_engine-EmbeddedRocksDB-creating-a-table}

``` sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
    name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
    name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
    ...
) ENGINE = EmbeddedRocksDB PRIMARY KEY(primary_key_name)
```

Required parameters:

- `primary_key_name` – any column name in the column list.

Example:

``` sql
CREATE TABLE test
(
    `key` String,
    `v1` UInt32,
    `v2` String,
    `v3` Float32,
)
ENGINE = EmbeddedRocksDB
PRIMARY KEY key
```

## Description {#description}

- a `primary key` must be specified, and it supports only one column. The primary key is serialized in binary as the rocksdb key.
- columns other than the primary key are serialized in binary as the rocksdb value, in the corresponding order.
- queries with key `equals` or `in` filtering are optimized into a multi-key lookup in rocksdb.
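The description above maps each row to a single key-value pair. A toy dict-backed sketch of that documented mapping (an illustration only, not the real RocksDB engine; the column names match the `test` table example):

```python
# Toy model: the primary key column becomes the store key; the
# remaining columns, in declaration order, become the value tuple.
columns = ('key', 'v1', 'v2', 'v3')
store = {}

def insert(row):
    # Serialize the "value" columns in declaration order under the key.
    store[row['key']] = tuple(row[c] for c in columns[1:])

def lookup(keys):
    # "WHERE key IN (...)" filtering becomes a multi-key lookup.
    return {k: store[k] for k in keys if k in store}

insert({'key': 'a', 'v1': 1, 'v2': 'x', 'v3': 0.5})
insert({'key': 'b', 'v1': 2, 'v2': 'y', 'v3': 1.5})
assert lookup(['a', 'missing']) == {'a': (1, 'x', 0.5)}
```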
@ -343,8 +343,8 @@ The `set` index can be used with all functions. Function subsets for other index
 |------------------------------------------------------------------------------------------------------------|-------------|--------|-------------|-------------|---------------|
 | [equals (=, ==)](../../../sql-reference/functions/comparison-functions.md#function-equals) | ✔ | ✔ | ✔ | ✔ | ✔ |
 | [notEquals(!=, \<\>)](../../../sql-reference/functions/comparison-functions.md#function-notequals) | ✔ | ✔ | ✔ | ✔ | ✔ |
-| [like](../../../sql-reference/functions/string-search-functions.md#function-like) | ✔ | ✔ | ✔ | ✔ | ✔ |
+| [like](../../../sql-reference/functions/string-search-functions.md#function-like) | ✔ | ✔ | ✔ | ✔ | ✗ |
-| [notLike](../../../sql-reference/functions/string-search-functions.md#function-notlike) | ✔ | ✔ | ✗ | ✗ | ✗ |
+| [notLike](../../../sql-reference/functions/string-search-functions.md#function-notlike) | ✔ | ✔ | ✔ | ✔ | ✗ |
 | [startsWith](../../../sql-reference/functions/string-functions.md#startswith) | ✔ | ✔ | ✔ | ✔ | ✗ |
 | [endsWith](../../../sql-reference/functions/string-functions.md#endswith) | ✗ | ✗ | ✔ | ✔ | ✗ |
 | [multiSearchAny](../../../sql-reference/functions/string-search-functions.md#function-multisearchany) | ✗ | ✗ | ✔ | ✗ | ✗ |
@ -98,6 +98,7 @@ When creating a table, the following settings are applied:
 - [max_bytes_in_join](../../../operations/settings/query-complexity.md#settings-max_bytes_in_join)
 - [join_overflow_mode](../../../operations/settings/query-complexity.md#settings-join_overflow_mode)
 - [join_any_take_last_row](../../../operations/settings/settings.md#settings-join_any_take_last_row)
+- [persistent](../../../operations/settings/settings.md#persistent)

 The `Join`-engine tables can’t be used in `GLOBAL JOIN` operations.
@ -14,4 +14,10 @@ Data is always located in RAM. For `INSERT`, the blocks of inserted data are als

 For a rough server restart, the block of data on the disk might be lost or damaged. In the latter case, you may need to manually delete the file with damaged data.

+### Limitations and Settings {#join-limitations-and-settings}
+
+When creating a table, the following settings are applied:
+
+- [persistent](../../../operations/settings/settings.md#persistent)
+
 [Original article](https://clickhouse.tech/docs/en/operations/table_engines/set/) <!--hide-->
@ -30,4 +30,4 @@ Instead of inserting data manually, you might consider to use one of [client lib
 - `input_format_import_nested_json` allows to insert nested JSON objects into columns of [Nested](../../sql-reference/data-types/nested-data-structures/nested.md) type.

 !!! note "Note"
     Settings are specified as `GET` parameters for the HTTP interface or as additional command-line arguments prefixed with `--` for the `CLI` interface.
@ -148,7 +148,7 @@ SETTINGS index_granularity = 8192;
 Loading data:

 ``` bash
-$ for i in *.zip; do echo $i; unzip -cq $i '*.csv' | sed 's/\.00//g' | clickhouse-client --host=example-perftest01j --query="INSERT INTO ontime FORMAT CSVWithNames"; done
+$ for i in *.zip; do echo $i; unzip -cq $i '*.csv' | sed 's/\.00//g' | clickhouse-client --input_format_with_names_use_header=0 --host=example-perftest01j --query="INSERT INTO ontime FORMAT CSVWithNames"; done
 ```

 ## Download of Prepared Partitions {#download-of-prepared-partitions}
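The loading pipeline above pipes each CSV through `sed 's/\.00//g'` before inserting. The same cleanup stage can be sketched in Python (the sample CSV fragment is hypothetical):

```python
def clean_ontime_line(line):
    # Equivalent of the sed 's/\.00//g' stage in the pipeline above:
    # drop the literal ".00" substrings that break numeric parsing.
    return line.replace('.00', '')

sample = '2017,1,1,"AA",1250.00,17.00\n'  # hypothetical CSV fragment
assert clean_ontime_line(sample) == '2017,1,1,"AA",1250,17\n'
```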
@ -123,6 +123,7 @@ You can pass parameters to `clickhouse-client` (all parameters have a default va
 - `--stacktrace` – If specified, also print the stack trace if an exception occurs.
 - `--config-file` – The name of the configuration file.
 - `--secure` – If specified, will connect to server over secure connection.
+- `--history_file` — Path to a file containing command history.
 - `--param_<name>` — Value for a [query with parameters](#cli-queries-with-parameters).

 ### Configuration Files {#configuration_files}
@ -26,6 +26,9 @@ toc_title: Client Libraries
 - [go-clickhouse](https://github.com/roistat/go-clickhouse)
 - [mailrugo-clickhouse](https://github.com/mailru/go-clickhouse)
 - [golang-clickhouse](https://github.com/leprosus/golang-clickhouse)
+- Swift
+    - [ClickHouseNIO](https://github.com/patrick-zippenfenig/ClickHouseNIO)
+    - [ClickHouseVapor ORM](https://github.com/patrick-zippenfenig/ClickHouseVapor)
 - NodeJs
     - [clickhouse (NodeJs)](https://github.com/TimonKK/clickhouse)
     - [node-clickhouse](https://github.com/apla/node-clickhouse)
@ -11,6 +11,7 @@ toc_title: Adopters
|
|||||||
| Company | Industry | Usecase | Cluster Size | (Un)Compressed Data Size<abbr title="of single replica"><sup>\*</sup></abbr> | Reference |
|
| Company | Industry | Usecase | Cluster Size | (Un)Compressed Data Size<abbr title="of single replica"><sup>\*</sup></abbr> | Reference |
|
||||||
|------------------------------------------------------------------------------------------------|---------------------------------|-----------------------|------------------------------------------------------------|------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
|------------------------------------------------------------------------------------------------|---------------------------------|-----------------------|------------------------------------------------------------|------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
||||||
| <a href="https://2gis.ru" class="favicon">2gis</a> | Maps | Monitoring | — | — | [Talk in Russian, July 2019](https://youtu.be/58sPkXfq6nw) |
|
| <a href="https://2gis.ru" class="favicon">2gis</a> | Maps | Monitoring | — | — | [Talk in Russian, July 2019](https://youtu.be/58sPkXfq6nw) |
|
||||||
|
| <a href="https://getadmiral.com/" class="favicon">Admiral</a> | Martech | Engagement Management | — | — | [Webinar Slides, June 2020](https://altinity.com/presentations/2020/06/16/big-data-in-real-time-how-clickhouse-powers-admirals-visitor-relationships-for-publishers) |
|
||||||
| <a href="https://cn.aliyun.com/" class="favicon">Alibaba Cloud</a> | Cloud | Managed Service | — | — | [Official Website](https://help.aliyun.com/product/144466.html) |
|
| <a href="https://cn.aliyun.com/" class="favicon">Alibaba Cloud</a> | Cloud | Managed Service | — | — | [Official Website](https://help.aliyun.com/product/144466.html) |
|
||||||
| <a href="https://alohabrowser.com/" class="favicon">Aloha Browser</a> | Mobile App | Browser backend | — | — | [Slides in Russian, May 2019](https://presentations.clickhouse.tech/meetup22/aloha.pdf) |
|
| <a href="https://alohabrowser.com/" class="favicon">Aloha Browser</a> | Mobile App | Browser backend | — | — | [Slides in Russian, May 2019](https://presentations.clickhouse.tech/meetup22/aloha.pdf) |
|
||||||
| <a href="https://amadeus.com/" class="favicon">Amadeus</a> | Travel | Analytics | — | — | [Press Release, April 2018](https://www.altinity.com/blog/2018/4/5/amadeus-technologies-launches-investment-and-insights-tool-based-on-machine-learning-and-strategy-algorithms) |
|
| <a href="https://amadeus.com/" class="favicon">Amadeus</a> | Travel | Analytics | — | — | [Press Release, April 2018](https://www.altinity.com/blog/2018/4/5/amadeus-technologies-launches-investment-and-insights-tool-based-on-machine-learning-and-strategy-algorithms) |
|
||||||
@@ -29,6 +30,7 @@ toc_title: Adopters
| <a href="https://www.citadelsecurities.com/" class="favicon">Citadel Securities</a> | Finance | — | — | — | [Contribution, March 2019](https://github.com/ClickHouse/ClickHouse/pull/4774) |
| <a href="https://city-mobil.ru" class="favicon">Citymobil</a> | Taxi | Analytics | — | — | [Blog Post in Russian, March 2020](https://habr.com/en/company/citymobil/blog/490660/) |
| <a href="https://cloudflare.com" class="favicon">Cloudflare</a> | CDN | Traffic analysis | 36 servers | — | [Blog post, May 2017](https://blog.cloudflare.com/how-cloudflare-analyzes-1m-dns-queries-per-second/), [Blog post, March 2018](https://blog.cloudflare.com/http-analytics-for-6m-requests-per-second-using-clickhouse/) |
| <a href="https://corporate.comcast.com/" class="favicon">Comcast</a> | Media | CDN Traffic Analysis | — | — | [ApacheCon 2019 Talk](https://www.youtube.com/watch?v=e9TZ6gFDjNg) |
| <a href="https://contentsquare.com" class="favicon">ContentSquare</a> | Web analytics | Main product | — | — | [Blog post in French, November 2018](http://souslecapot.net/2018/11/21/patrick-chatain-vp-engineering-chez-contentsquare-penser-davantage-amelioration-continue-que-revolution-constante/) |
| <a href="https://coru.net/" class="favicon">Corunet</a> | Analytics | Main product | — | — | [Slides in English, April 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup21/predictive_models.pdf) |
| <a href="https://www.creditx.com" class="favicon">CraiditX 氪信</a> | Finance AI | Analysis | — | — | [Slides in English, November 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup33/udf.pptx) |
@@ -36,6 +38,7 @@ toc_title: Adopters
| <a href="https://www.criteo.com/" class="favicon">Criteo</a> | Retail | Main product | — | — | [Slides in English, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup18/3_storetail.pptx) |
| <a href="https://www.chinatelecomglobal.com/" class="favicon">Dataliance for China Telecom</a> | Telecom | Analytics | — | — | [Slides in Chinese, January 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup12/telecom.pdf) |
| <a href="https://db.com" class="favicon">Deutsche Bank</a> | Finance | BI Analytics | — | — | [Slides in English, October 2019](https://bigdatadays.ru/wp-content/uploads/2019/10/D2-H3-3_Yakunin-Goihburg.pdf) |
| <a href="https://deeplay.io/eng/" class="favicon">Deeplay</a> | Gaming Analytics | — | — | — | [Job advertisement, 2020](https://career.habr.com/vacancies/1000062568) |
| <a href="https://www.diva-e.com" class="favicon">Diva-e</a> | Digital consulting | Main Product | — | — | [Slides in English, September 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup29/ClickHouse-MeetUp-Unusual-Applications-sd-2019-09-17.pdf) |
| <a href="https://www.ecwid.com/" class="favicon">Ecwid</a> | E-commerce SaaS | Metrics, Logging | — | — | [Slides in Russian, April 2019](https://nastachku.ru/var/files/1/presentation/backend/2_Backend_6.pdf) |
| <a href="https://www.ebay.com/" class="favicon">eBay</a> | E-commerce | Logs, Metrics and Events | — | — | [Official website, Sep 2020](https://tech.ebayinc.com/engineering/ou-online-analytical-processing/) |
@@ -45,6 +48,7 @@ toc_title: Adopters
| <a href="https://fun.co/rp" class="favicon">FunCorp</a> | Games | | — | — | [Article](https://www.altinity.com/blog/migrating-from-redshift-to-clickhouse) |
| <a href="https://geniee.co.jp" class="favicon">Geniee</a> | Ad network | Main product | — | — | [Blog post in Japanese, July 2017](https://tech.geniee.co.jp/entry/2017/07/20/160100) |
| <a href="https://www.huya.com/" class="favicon">HUYA</a> | Video Streaming | Analytics | — | — | [Slides in Chinese, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup19/7.%20ClickHouse万亿数据分析实践%20李本旺(sundy-li)%20虎牙.pdf) |
| <a href="https://www.the-ica.com/" class="favicon">ICA</a> | FinTech | Risk Management | — | — | [Blog Post in English, Sep 2020](https://altinity.com/blog/clickhouse-vs-redshift-performance-for-fintech-risk-management?utm_campaign=ClickHouse%20vs%20RedShift&utm_content=143520807&utm_medium=social&utm_source=twitter&hss_channel=tw-3894792263) |
| <a href="https://www.idealista.com" class="favicon">Idealista</a> | Real Estate | Analytics | — | — | [Blog Post in English, April 2019](https://clickhouse.tech/blog/en/clickhouse-meetup-in-madrid-on-april-2-2019) |
| <a href="https://www.infovista.com/" class="favicon">Infovista</a> | Networks | Analytics | — | — | [Slides in English, October 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup30/infovista.pdf) |
| <a href="https://www.innogames.com" class="favicon">InnoGames</a> | Games | Metrics, Logging | — | — | [Slides in Russian, September 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup28/graphite_and_clickHouse.pdf) |
@@ -62,12 +66,14 @@ toc_title: Adopters
| <a href="https://tech.mymarilyn.ru" class="favicon">Marilyn</a> | Advertising | Statistics | — | — | [Talk in Russian, June 2017](https://www.youtube.com/watch?v=iXlIgx2khwc) |
| <a href="https://mellodesign.ru/" class="favicon">Mello</a> | Marketing | Analytics | 1 server | — | [Article, Oct 2020](https://vc.ru/marketing/166180-razrabotka-tipovogo-otcheta-skvoznoy-analitiki) |
| <a href="https://www.messagebird.com" class="favicon">MessageBird</a> | Telecommunications | Statistics | — | — | [Slides in English, November 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup20/messagebird.pdf) |
| <a href="https://www.mindsdb.com/" class="favicon">MindsDB</a> | Machine Learning | Main Product | — | — | [Official Website](https://www.mindsdb.com/blog/machine-learning-models-as-tables-in-ch) |
| <a href="https://mux.com/" class="favicon">MUX</a> | Online Video | Video Analytics | — | — | [Talk in English, August 2019](https://altinity.com/presentations/2019/8/13/how-clickhouse-became-the-default-analytics-database-for-mux/) |
| <a href="https://www.mgid.com/" class="favicon">MGID</a> | Ad network | Web-analytics | — | — | [Blog post in Russian, April 2020](http://gs-studio.com/news-about-it/32777----clickhouse---c) |
| <a href="https://getnoc.com/" class="favicon">NOC Project</a> | Network Monitoring | Analytics | Main Product | — | [Official Website](https://getnoc.com/features/big-data/) |
| <a href="https://www.nuna.com/" class="favicon">Nuna Inc.</a> | Health Data Analytics | — | — | — | [Talk in English, July 2020](https://youtu.be/GMiXCMFDMow?t=170) |
| <a href="https://www.oneapm.com/" class="favicon">OneAPM</a> | Monitorings and Data Analysis | Main product | — | — | [Slides in Chinese, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup19/8.%20clickhouse在OneAPM的应用%20杜龙.pdf) |
| <a href="https://www.percent.cn/" class="favicon">Percent 百分点</a> | Analytics | Main Product | — | — | [Slides in Chinese, June 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup24/4.%20ClickHouse万亿数据双中心的设计与实践%20.pdf) |
| <a href="https://www.percona.com/" class="favicon">Percona</a> | Performance analysis | Percona Monitoring and Management | — | — | [Official website, Mar 2020](https://www.percona.com/blog/2020/03/30/advanced-query-analysis-in-percona-monitoring-and-management-with-direct-clickhouse-access/) |
| <a href="https://plausible.io/" class="favicon">Plausible</a> | Analytics | Main Product | — | — | [Blog post, June 2020](https://twitter.com/PlausibleHQ/status/1273889629087969280) |
| <a href="https://posthog.com/" class="favicon">PostHog</a> | Product Analytics | Main Product | — | — | [Release Notes, Oct 2020](https://posthog.com/blog/the-posthog-array-1-15-0) |
| <a href="https://postmates.com/" class="favicon">Postmates</a> | Delivery | — | — | — | [Talk in English, July 2020](https://youtu.be/GMiXCMFDMow?t=188) |
@@ -90,6 +96,7 @@ toc_title: Adopters
| <a href="https://www.splunk.com/" class="favicon">Splunk</a> | Business Analytics | Main product | — | — | [Slides in English, January 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup12/splunk.pdf) |
| <a href="https://www.spotify.com" class="favicon">Spotify</a> | Music | Experimentation | — | — | [Slides, July 2018](https://www.slideshare.net/glebus/using-clickhouse-for-experimentation-104247173) |
| <a href="https://www.staffcop.ru/" class="favicon">Staffcop</a> | Information Security | Main Product | — | — | [Official website, Documentation](https://www.staffcop.ru/sce43) |
| <a href="https://www.teralytics.net/" class="favicon">Teralytics</a> | Mobility | Analytics | — | — | [Tech blog](https://www.teralytics.net/knowledge-hub/visualizing-mobility-data-the-scalability-challenge) |
| <a href="https://www.tencent.com" class="favicon">Tencent</a> | Big Data | Data processing | — | — | [Slides in Chinese, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup19/5.%20ClickHouse大数据集群应用_李俊飞腾讯网媒事业部.pdf) |
| <a href="https://www.tencent.com" class="favicon">Tencent</a> | Messaging | Logging | — | — | [Talk in Chinese, November 2019](https://youtu.be/T-iVQRuw-QY?t=5050) |
| <a href="https://trafficstars.com/" class="favicon">Traffic Stars</a> | AD network | — | — | — | [Slides in Russian, May 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup15/lightning/ninja.pdf) |
@@ -479,6 +479,26 @@ The maximum number of simultaneously processed requests.

<max_concurrent_queries>100</max_concurrent_queries>
```

## max_concurrent_queries_for_all_users {#max-concurrent-queries-for-all-users}

Throws an exception if the value of this setting is less than or equal to the current number of simultaneously processed queries.

Example: `max_concurrent_queries_for_all_users` can be set to 99 for all users, and the database administrator can set it to 100 for themselves to run queries for investigation even when the server is overloaded.

Modifying the setting for one query or user does not affect other queries.

Default value: `0` (no limit).

**Example**

``` xml
<max_concurrent_queries_for_all_users>99</max_concurrent_queries_for_all_users>
```

**See Also**

- [max_concurrent_queries](#max-concurrent-queries)
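The 99/100 scenario described above could be expressed through settings profiles in `users.xml` — a hedged sketch; the `admin` profile name is illustrative:

``` xml
<profiles>
    <default>
        <!-- Shared cap for regular users. -->
        <max_concurrent_queries_for_all_users>99</max_concurrent_queries_for_all_users>
    </default>
    <admin>
        <!-- One extra slot so the administrator can investigate an overloaded server. -->
        <max_concurrent_queries_for_all_users>100</max_concurrent_queries_for_all_users>
    </admin>
</profiles>
```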
## max_connections {#max-connections}

The maximum number of inbound connections.
@@ -551,7 +571,7 @@ For more information, see the MergeTreeSettings.h header file.

Fine tuning for tables in the [ReplicatedMergeTree](../../engines/table-engines/mergetree-family/mergetree.md).

This setting has a higher priority.

For more information, see the MergeTreeSettings.h header file.
@@ -1061,4 +1081,45 @@ Default value: `/var/lib/clickhouse/access/`.

- [Access Control and Account Management](../../operations/access-rights.md#access-control)

## user_directories {#user_directories}

Section of the configuration file that contains settings:
- Path to the configuration file with predefined users.
- Path to the folder where users created by SQL commands are stored.

If this section is specified, the paths from [users_config](../../operations/server-configuration-parameters/settings.md#users-config) and [access_control_path](../../operations/server-configuration-parameters/settings.md#access_control_path) won't be used.

The `user_directories` section can contain any number of items; the order of the items determines their precedence (the higher the item, the higher the precedence).

**Example**

``` xml
<user_directories>
    <users_xml>
        <path>/etc/clickhouse-server/users.xml</path>
    </users_xml>
    <local_directory>
        <path>/var/lib/clickhouse/access/</path>
    </local_directory>
</user_directories>
```

You can also specify the `memory` setting — meaning that information is stored only in memory, without writing to disk — and the `ldap` setting — meaning that information is stored on an LDAP server.

To add an LDAP server as a remote user directory for users that are not defined locally, define a single `ldap` section with the following parameters:
- `server` — one of the LDAP server names defined in the `ldap_servers` config section. This parameter is mandatory and cannot be empty.
- `roles` — section with a list of locally defined roles that will be assigned to each user retrieved from the LDAP server. If no roles are specified, the user will not be able to perform any actions after authentication. If any of the listed roles is not defined locally at the time of authentication, the authentication attempt will fail as if the provided password was incorrect.

**Example**

``` xml
<ldap>
    <server>my_ldap_server</server>
    <roles>
        <my_local_role1 />
        <my_local_role2 />
    </roles>
</ldap>
```

[Original article](https://clickhouse.tech/docs/en/operations/server_configuration_parameters/settings/) <!--hide-->
@@ -307,7 +307,51 @@ Disabled by default.

## input_format_tsv_enum_as_number {#settings-input_format_tsv_enum_as_number}

Enables or disables parsing enum values as enum IDs for the TSV input format.

Possible values:

- 0 — Enum values are parsed as values.
- 1 — Enum values are parsed as enum IDs.

Default value: 0.

**Example**

Consider the table:

```sql
CREATE TABLE table_with_enum_column_for_tsv_insert (Id Int32, Value Enum('first' = 1, 'second' = 2)) ENGINE=Memory();
```

When the `input_format_tsv_enum_as_number` setting is enabled:

```sql
SET input_format_tsv_enum_as_number = 1;
INSERT INTO table_with_enum_column_for_tsv_insert FORMAT TSV 102 2;
INSERT INTO table_with_enum_column_for_tsv_insert FORMAT TSV 103 1;
SELECT * FROM table_with_enum_column_for_tsv_insert;
```

Result:

```text
┌──Id─┬─Value──┐
│ 102 │ second │
└─────┴────────┘
┌──Id─┬─Value──┐
│ 103 │ first  │
└─────┴────────┘
```

When the `input_format_tsv_enum_as_number` setting is disabled, the `INSERT` query:

```sql
SET input_format_tsv_enum_as_number = 0;
INSERT INTO table_with_enum_column_for_tsv_insert FORMAT TSV 102 2;
```

throws an exception.
## input_format_null_as_default {#settings-input-format-null-as-default}

@@ -384,7 +428,7 @@ Possible values:

- `'basic'` — Use basic parser.

ClickHouse can parse only the basic `YYYY-MM-DD HH:MM:SS` or `YYYY-MM-DD` format. For example, `'2019-08-20 10:18:56'` or `'2019-08-20'`.

Default value: `'basic'`.
@@ -680,6 +724,21 @@ Example:

log_queries=1
```

## log_queries_min_query_duration_ms {#settings-log-queries-min-query-duration-ms}

Minimal time a query must run to get into the following tables:

- `system.query_log`
- `system.query_thread_log`

Only queries of the following types get into the log:

- `QUERY_FINISH`
- `EXCEPTION_WHILE_PROCESSING`

- Type: milliseconds
- Default value: 0 (any query)
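For instance, with a one-second threshold (the value is illustrative), only sufficiently slow queries reach `system.query_log`:

```sql
SET log_queries = 1;
SET log_queries_min_query_duration_ms = 1000;  -- log only queries running for at least 1 second
```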

## log_queries_min_type {#settings-log-queries-min-type}

`query_log` minimal type to log.
@@ -1167,7 +1226,47 @@ For CSV input format enables or disables parsing of unquoted `NULL` as literal (

## input_format_csv_enum_as_number {#settings-input_format_csv_enum_as_number}

Enables or disables parsing enum values as enum IDs for the CSV input format.

Possible values:

- 0 — Enum values are parsed as values.
- 1 — Enum values are parsed as enum IDs.

Default value: 0.

**Examples**

Consider the table:

```sql
CREATE TABLE table_with_enum_column_for_csv_insert (Id Int32, Value Enum('first' = 1, 'second' = 2)) ENGINE=Memory();
```

When the `input_format_csv_enum_as_number` setting is enabled:

```sql
SET input_format_csv_enum_as_number = 1;
INSERT INTO table_with_enum_column_for_csv_insert FORMAT CSV 102,2;
SELECT * FROM table_with_enum_column_for_csv_insert;
```

Result:

```text
┌──Id─┬─Value──┐
│ 102 │ second │
└─────┴────────┘
```

When the `input_format_csv_enum_as_number` setting is disabled, the `INSERT` query:

```sql
SET input_format_csv_enum_as_number = 0;
INSERT INTO table_with_enum_column_for_csv_insert FORMAT CSV 102,2;
```

throws an exception.

## output_format_csv_crlf_end_of_line {#settings-output-format-csv-crlf-end-of-line}
@@ -1750,6 +1849,23 @@ Default value: `0`.

- [Distributed Table Engine](../../engines/table-engines/special/distributed.md#distributed)
- [Managing Distributed Tables](../../sql-reference/statements/system.md#query-language-system-distributed)

## use_compact_format_in_distributed_parts_names {#use_compact_format_in_distributed_parts_names}

Uses a compact format for storing blocks during asynchronous (see `insert_distributed_sync`) `INSERT` into tables with the `Distributed` engine.

Possible values:

- 0 — Uses the `user[:password]@host:port#default_database` directory format.
- 1 — Uses the `[shard{shard_index}[_replica{replica_index}]]` directory format.

Default value: `1`.

!!! note "Note"
    - With `use_compact_format_in_distributed_parts_names=0`, changes to the cluster definition are not applied to asynchronous `INSERT`s.
    - With `use_compact_format_in_distributed_parts_names=1`, changing the order of the nodes in the cluster definition changes the `shard_index`/`replica_index`, so be aware.
## background_buffer_flush_schedule_pool_size {#background_buffer_flush_schedule_pool_size}

Sets the number of threads performing background flush in [Buffer](../../engines/table-engines/special/buffer.md)-engine tables. This setting is applied at the ClickHouse server start and can’t be changed in a user session.
@@ -2188,4 +2304,17 @@ Possible values:

Default value: `0`.

## persistent {#persistent}

Enables or disables persistency for the [Set](../../engines/table-engines/special/set.md#set) and [Join](../../engines/table-engines/special/join.md#join) table engines.

Disabling persistency reduces the I/O overhead. Suitable for scenarios that pursue performance and do not require persistence.

Possible values:

- 1 — Enabled.
- 0 — Disabled.

Default value: `1`.
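A minimal sketch of a non-persistent `Set` table (the table name is illustrative); with `persistent = 0` its data lives only in memory and is lost on server restart:

```sql
SET persistent = 0;
-- This Set table will not be written to disk.
CREATE TABLE temp_id_set (id UInt64) ENGINE = Set();
INSERT INTO temp_id_set VALUES (1), (2), (3);
```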

[Original article](https://clickhouse.tech/docs/en/operations/settings/settings/) <!-- hide -->
@@ -1,6 +1,6 @@

## system.asynchronous_metric_log {#system-tables-async-log}

Contains the historical values for `system.asynchronous_metrics`, which are saved once per minute. Enabled by default.

Columns:

@@ -33,7 +33,7 @@ SELECT * FROM system.asynchronous_metric_log LIMIT 10

**See Also**

- [system.asynchronous_metrics](../system-tables/asynchronous_metrics.md) — Contains metrics that are calculated periodically in the background.
- [system.metric_log](../system-tables/metric_log.md) — Contains history of metrics values from tables `system.metrics` and `system.events`, periodically flushed to disk.

[Original article](https://clickhouse.tech/docs/en/operations/system_tables/asynchronous_metric_log) <!--hide-->

docs/en/operations/system-tables/errors.md (new file)
@@ -0,0 +1,23 @@

# system.errors {#system_tables-errors}

Contains error codes with the number of times they have been triggered.

Columns:

- `name` ([String](../../sql-reference/data-types/string.md)) — name of the error (`errorCodeToName`).
- `code` ([Int32](../../sql-reference/data-types/int-uint.md)) — code number of the error.
- `value` ([UInt64](../../sql-reference/data-types/int-uint.md)) — number of times this error happened.

**Example**

``` sql
SELECT *
FROM system.errors
WHERE value > 0
ORDER BY code ASC
LIMIT 1

┌─name─────────────┬─code─┬─value─┐
│ CANNOT_OPEN_FILE │   76 │     1 │
└──────────────────┴──────┴───────┘
```
@ -1,6 +1,7 @@
# system.metric_log {#system_tables-metric_log}

Contains history of metrics values from tables `system.metrics` and `system.events`, periodically flushed to disk.

To turn on metrics history collection on `system.metric_log`, create `/etc/clickhouse-server/config.d/metric_log.xml` with the following content:

``` xml
@ -14,6 +15,11 @@ To turn on metrics history collection on `system.metric_log`, create `/etc/click
</yandex>
```

Columns:
- `event_date` ([Date](../../sql-reference/data-types/date.md)) — Event date.
- `event_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — Event time.
- `event_time_microseconds` ([DateTime64](../../sql-reference/data-types/datetime64.md)) — Event time with microseconds resolution.

**Example**
@ -7,6 +7,9 @@ toc_title: clickhouse-copier

Copies data from the tables in one cluster to tables in another (or the same) cluster.

!!! warning "Warning"
    To get a consistent copy, the data in the source tables and partitions should not change during the entire process.

You can run multiple `clickhouse-copier` instances on different servers to perform the same job. ZooKeeper is used for syncing the processes.

After starting, `clickhouse-copier`:
@ -50,8 +50,6 @@ ClickHouse-specific aggregate functions:
- [skewPop](../../../sql-reference/aggregate-functions/reference/skewpop.md)
- [kurtSamp](../../../sql-reference/aggregate-functions/reference/kurtsamp.md)
- [kurtPop](../../../sql-reference/aggregate-functions/reference/kurtpop.md)
- [uniq](../../../sql-reference/aggregate-functions/reference/uniq.md)
- [uniqExact](../../../sql-reference/aggregate-functions/reference/uniqexact.md)
- [uniqCombined](../../../sql-reference/aggregate-functions/reference/uniqcombined.md)
@ -53,13 +53,13 @@ Result:

Similar to `quantileExact`, this computes the exact [quantile](https://en.wikipedia.org/wiki/Quantile) of a numeric data sequence.

To get the exact value, all the passed values are combined into an array, which is then fully sorted. The sorting [algorithm's](https://en.cppreference.com/w/cpp/algorithm/sort) complexity is `O(N·log(N))`, where `N = std::distance(first, last)` comparisons.

The return value depends on the quantile level and the number of elements in the selection, i.e. if the level is 0.5, then the function returns the lower median value for an even number of elements and the middle median value for an odd number of elements. Median is calculated similarly to the [median_low](https://docs.python.org/3/library/statistics.html#statistics.median_low) implementation which is used in Python.

For all other levels, the element at the index corresponding to the value of `level * size_of_array` is returned. For example:

``` sql
SELECT quantileExactLow(0.1)(number) FROM numbers(10)
```

``` text
┌─quantileExactLow(0.1)(number)─┐
```
@ -111,9 +111,10 @@ Result:

Similar to `quantileExact`, this computes the exact [quantile](https://en.wikipedia.org/wiki/Quantile) of a numeric data sequence.

To get the exact value, all the passed values are combined into an array, which is then fully sorted. The sorting [algorithm's](https://en.cppreference.com/w/cpp/algorithm/sort) complexity is `O(N·log(N))`, where `N = std::distance(first, last)` comparisons.

The return value depends on the quantile level and the number of elements in the selection, i.e. if the level is 0.5, then the function returns the higher median value for an even number of elements and the middle median value for an odd number of elements. Median is calculated similarly to the [median_high](https://docs.python.org/3/library/statistics.html#statistics.median_high) implementation which is used in Python. For all other levels, the element at the index corresponding to the value of `level * size_of_array` is returned.

This implementation behaves exactly like the current `quantileExact` implementation.
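The index rules described above mirror Python's `statistics.median_low` and `median_high`. A minimal sketch of the lower-median selection (an illustration of the documented rule, not ClickHouse's implementation):

```python
from statistics import median_low

def quantile_exact_low(values, level):
    """Exact quantile: sort all values, then pick by index.

    At level 0.5 this returns the lower median (index (n - 1) // 2),
    matching Python's statistics.median_low; for other levels it returns
    the element at index level * n, per the rule stated above."""
    data = sorted(values)
    n = len(data)
    if level == 0.5:
        return data[(n - 1) // 2]
    return data[int(level * n)]

values = list(range(10))  # 0..9, as in the numbers(10) example
assert quantile_exact_low(values, 0.5) == median_low(values) == 4
assert quantile_exact_low(values, 0.1) == 1
```

For an odd number of elements `(n - 1) // 2` is the middle index, so the same expression covers both cases of the rule.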
@ -4,6 +4,6 @@ toc_priority: 140

# sumWithOverflow {#sumwithoverflowx}

Computes the sum of the numbers, using the same data type for the result as for the input parameters. If the sum exceeds the maximum value for this data type, it is calculated with overflow.

Only works for numbers.
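Keeping the input type means the sum wraps around instead of widening. A sketch of that behavior for a hypothetical UInt8 column (modular arithmetic stands in for the fixed-width type):

```python
def sum_with_overflow_uint8(values):
    """Sum UInt8 values while keeping the UInt8 result type:
    the running total wraps modulo 2**8 instead of being promoted."""
    total = 0
    for v in values:
        total = (total + v) % 256  # keep only the low 8 bits, like overflow would
    return total

assert sum_with_overflow_uint8([200, 100]) == 44  # 300 wraps to 300 - 256
assert sum([200, 100]) == 300                     # a widening sum would return 300
```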
@ -1,16 +0,0 @@
---
toc_priority: 171
---

# timeSeriesGroupRateSum {#agg-function-timeseriesgroupratesum}

Syntax: `timeSeriesGroupRateSum(uid, ts, val)`

Similarly to [timeSeriesGroupSum](../../../sql-reference/aggregate-functions/reference/timeseriesgroupsum.md), `timeSeriesGroupRateSum` calculates the rate of each time series and then sums the rates together.
Also, the timestamps should be in ascending order before using this function.

Applying this function to the data from the `timeSeriesGroupSum` example, you get the following result:

``` text
[(2,0),(3,0.1),(7,0.3),(8,0.3),(12,0.3),(17,0.3),(18,0.3),(24,0.3),(25,0.1)]
```
@ -1,57 +0,0 @@
---
toc_priority: 170
---

# timeSeriesGroupSum {#agg-function-timeseriesgroupsum}

Syntax: `timeSeriesGroupSum(uid, timestamp, value)`

`timeSeriesGroupSum` can aggregate different time series whose sample timestamps are not aligned.
It uses linear interpolation between two sample timestamps and then sums the time series together.

- `uid` is the time series unique id, `UInt64`.
- `timestamp` is Int64 type in order to support millisecond or microsecond resolution.
- `value` is the metric.

The function returns an array of tuples with `(timestamp, aggregated_value)` pairs.

Before using this function make sure `timestamp` is in ascending order.

Example:

``` text
┌─uid─┬─timestamp─┬─value─┐
│ 1   │     2     │   0.2 │
│ 1   │     7     │   0.7 │
│ 1   │    12     │   1.2 │
│ 1   │    17     │   1.7 │
│ 1   │    25     │   2.5 │
│ 2   │     3     │   0.6 │
│ 2   │     8     │   1.6 │
│ 2   │    12     │   2.4 │
│ 2   │    18     │   3.6 │
│ 2   │    24     │   4.8 │
└─────┴───────────┴───────┘
```

``` sql
CREATE TABLE time_series(
    uid UInt64,
    timestamp Int64,
    value Float64
) ENGINE = Memory;
INSERT INTO time_series VALUES
    (1,2,0.2),(1,7,0.7),(1,12,1.2),(1,17,1.7),(1,25,2.5),
    (2,3,0.6),(2,8,1.6),(2,12,2.4),(2,18,3.6),(2,24,4.8);

SELECT timeSeriesGroupSum(uid, timestamp, value)
FROM (
    SELECT * FROM time_series ORDER BY timestamp ASC
);
```

And the result will be:

``` text
[(2,0.2),(3,0.9),(7,2.1),(8,2.4),(12,3.6),(17,5.1),(18,5.4),(24,7.2),(25,2.5)]
```
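The interpolate-and-sum behavior described above can be sketched in Python; this is an illustration of the documented semantics (interpolate each series at the union of timestamps, skipping series whose time range does not cover a timestamp), not ClickHouse's implementation:

```python
from bisect import bisect_left

def interpolate(series, t):
    """Linearly interpolate a sorted [(timestamp, value), ...] series at t.
    Returns None when t is outside the series' time range."""
    ts = [p[0] for p in series]
    if t < ts[0] or t > ts[-1]:
        return None
    i = bisect_left(ts, t)
    if ts[i] == t:
        return series[i][1]
    (t0, v0), (t1, v1) = series[i - 1], series[i]
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

def time_series_group_sum(all_series):
    """Sum several time series with unaligned timestamps by interpolating
    every series at the union of all sample timestamps."""
    stamps = sorted({t for s in all_series for t, _ in s})
    result = []
    for t in stamps:
        vals = [v for v in (interpolate(s, t) for s in all_series) if v is not None]
        result.append((t, round(sum(vals), 6)))
    return result

s1 = [(2, 0.2), (7, 0.7), (12, 1.2), (17, 1.7), (25, 2.5)]
s2 = [(3, 0.6), (8, 1.6), (12, 2.4), (18, 3.6), (24, 4.8)]
assert time_series_group_sum([s1, s2]) == [
    (2, 0.2), (3, 0.9), (7, 2.1), (8, 2.4), (12, 3.6),
    (17, 5.1), (18, 5.4), (24, 7.2), (25, 2.5),
]
```

The final assertion reproduces the documented result for the example data above.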
@ -3,10 +3,45 @@ toc_priority: 47
toc_title: Date
---

# Date {#data_type-date}

A date. Stored in two bytes as the number of days since 1970-01-01 (unsigned). Allows storing values from just after the beginning of the Unix Epoch to the upper threshold defined by a constant at the compilation stage (currently, this is until the year 2106, but the final fully-supported year is 2105).

The date value is stored without the time zone.

## Examples {#examples}

**1.** Creating a table with a `Date`-type column and inserting data into it:

``` sql
CREATE TABLE dt
(
    `timestamp` Date,
    `event_id` UInt8
)
ENGINE = TinyLog;
```

``` sql
INSERT INTO dt Values (1546300800, 1), ('2019-01-01', 2);
```

``` sql
SELECT * FROM dt;
```

``` text
┌──timestamp─┬─event_id─┐
│ 2019-01-01 │        1 │
│ 2019-01-01 │        2 │
└────────────┴──────────┘
```

## See Also {#see-also}

- [Functions for working with dates and times](../../sql-reference/functions/date-time-functions.md)
- [Operators for working with dates and times](../../sql-reference/operators/index.md#operators-datetime)
- [`DateTime` data type](../../sql-reference/data-types/datetime.md)

[Original article](https://clickhouse.tech/docs/en/data_types/date/) <!--hide-->
@ -59,7 +59,8 @@ LAYOUT(LAYOUT_TYPE(param value)) -- layout settings
- [range_hashed](#range-hashed)
- [complex_key_hashed](#complex-key-hashed)
- [complex_key_cache](#complex-key-cache)
- [ssd_cache](#ssd-cache)
- [ssd_complex_key_cache](#complex-key-ssd-cache)
- [complex_key_direct](#complex-key-direct)
- [ip_trie](#ip-trie)
@ -0,0 +1,91 @@
---
toc_priority: 46
toc_title: Polygon Dictionaries With Grids
---

# Polygon dictionaries {#polygon-dictionaries}

Polygon dictionaries allow you to efficiently search for the polygon containing specified points.
For example: defining a city area by geographical coordinates.

Example configuration:

``` xml
<dictionary>
    <structure>
        <key>
            <name>key</name>
            <type>Array(Array(Array(Array(Float64))))</type>
        </key>

        <attribute>
            <name>name</name>
            <type>String</type>
            <null_value></null_value>
        </attribute>

        <attribute>
            <name>value</name>
            <type>UInt64</type>
            <null_value>0</null_value>
        </attribute>

    </structure>

    <layout>
        <polygon />
    </layout>

</dictionary>
```

The corresponding [DDL-query](../../../sql-reference/statements/create/dictionary.md#create-dictionary-query):
``` sql
CREATE DICTIONARY polygon_dict_name (
    key Array(Array(Array(Array(Float64)))),
    name String,
    value UInt64
)
PRIMARY KEY key
LAYOUT(POLYGON())
...
```

When configuring the polygon dictionary, the key must have one of two types:
- A simple polygon. It is an array of points.
- MultiPolygon. It is an array of polygons. Each polygon is a two-dimensional array of points. The first element of this array is the outer boundary of the polygon, and subsequent elements specify areas to be excluded from it.

Points can be specified as an array or a tuple of their coordinates. In the current implementation, only two-dimensional points are supported.

The user can [upload their own data](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md) in all formats supported by ClickHouse.

There are 3 types of [in-memory storage](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-layout.md) available:

- POLYGON_SIMPLE. This is a naive implementation, where a linear pass through all polygons is made for each query, and membership is checked for each one without using additional indexes.

- POLYGON_INDEX_EACH. A separate index is built for each polygon, which allows you to quickly check whether a point belongs to it in most cases (optimized for geographical regions).
  Also, a grid is superimposed on the area under consideration, which significantly narrows the number of polygons under consideration.
  The grid is created by recursively dividing the cell into 16 equal parts and is configured with two parameters.
  The division stops when the recursion depth reaches MAX_DEPTH or when the cell crosses no more than MIN_INTERSECTIONS polygons.
  To respond to the query, there is a corresponding cell, and the index for the polygons stored in it is accessed alternately.

- POLYGON_INDEX_CELL. This placement also creates the grid described above. The same options are available. For each sheet cell, an index is built on all pieces of polygons that fall into it, which allows you to quickly respond to a request.

- POLYGON. Synonym to POLYGON_INDEX_CELL.

Dictionary queries are carried out using standard [functions](../../../sql-reference/functions/ext-dict-functions.md) for working with external dictionaries.
An important difference is that here the keys will be the points for which you want to find the polygon containing them.

Example of working with the dictionary defined above:
``` sql
CREATE TABLE points (
    x Float64,
    y Float64
)
...
SELECT tuple(x, y) AS key, dictGet(dict_name, 'name', key), dictGet(dict_name, 'value', key) FROM points ORDER BY x, y;
```

As a result of executing the last command for each point in the 'points' table, a minimum area polygon containing this point will be found, and the requested attributes will be output.
@ -89,7 +89,7 @@ If the index falls outside of the bounds of an array, it returns some default va
## has(arr, elem) {#hasarr-elem}

Checks whether the ‘arr’ array has the ‘elem’ element.
Returns 0 if the element is not in the array, or 1 if it is.

`NULL` is processed as a value.
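Treating `NULL` as a value means `has` does not follow the usual NULL-propagation rules; a minimal sketch of that semantics (with Python's `None` standing in for `NULL`):

```python
def has(arr, elem):
    """Return 1 if elem is in arr, else 0. None (standing in for NULL)
    is treated as an ordinary value, so searching for None can succeed."""
    return 1 if any(x == elem or (x is None and elem is None) for x in arr) else 0

assert has([1, 2, None], None) == 1
assert has([1, 2, None], 3) == 0
```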
@ -337,26 +337,124 @@ SELECT toDate('2016-12-27') AS date, toYearWeek(date) AS yearWeek0, toYearWeek(d
└────────────┴───────────┴───────────┴───────────┘
```

## date_trunc {#date_trunc}

Truncates date and time data to the specified part of the date.

**Syntax**

``` sql
date_trunc(unit, value[, timezone])
```

Alias: `dateTrunc`.

**Parameters**

- `unit` — Part of date. [String](../syntax.md#syntax-string-literal).
    Possible values:

    - `second`
    - `minute`
    - `hour`
    - `day`
    - `week`
    - `month`
    - `quarter`
    - `year`

- `value` — Date and time. [DateTime](../../sql-reference/data-types/datetime.md) or [DateTime64](../../sql-reference/data-types/datetime64.md).
- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) for the returned value (optional). If not specified, the function uses the timezone of the `value` parameter. [String](../../sql-reference/data-types/string.md).

**Returned value**

- Value, truncated to the specified part of the date.

Type: [Datetime](../../sql-reference/data-types/datetime.md).

**Example**

Query without timezone:

``` sql
SELECT now(), date_trunc('hour', now());
```

Result:

``` text
┌───────────────now()─┬─date_trunc('hour', now())─┐
│ 2020-09-28 10:40:45 │       2020-09-28 10:00:00 │
└─────────────────────┴───────────────────────────┘
```

Query with the specified timezone:

``` sql
SELECT now(), date_trunc('hour', now(), 'Europe/Moscow');
```

Result:

``` text
┌───────────────now()─┬─date_trunc('hour', now(), 'Europe/Moscow')─┐
│ 2020-09-28 10:46:26 │                        2020-09-28 13:00:00 │
└─────────────────────┴────────────────────────────────────────────┘
```

**See also**

- [toStartOfInterval](#tostartofintervaltime-or-data-interval-x-unit-time-zone)
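The truncation rule of `date_trunc` can be sketched with Python's `datetime`: everything below the chosen unit is zeroed out. An illustrative sketch covering a few of the listed units (not ClickHouse's implementation, and without timezone handling):

```python
from datetime import datetime

def date_trunc(unit, value):
    """Truncate a datetime to the given unit by zeroing all smaller fields."""
    if unit == 'second':
        return value.replace(microsecond=0)
    if unit == 'minute':
        return value.replace(second=0, microsecond=0)
    if unit == 'hour':
        return value.replace(minute=0, second=0, microsecond=0)
    if unit == 'day':
        return value.replace(hour=0, minute=0, second=0, microsecond=0)
    if unit == 'month':
        return value.replace(day=1, hour=0, minute=0, second=0, microsecond=0)
    if unit == 'year':
        return value.replace(month=1, day=1, hour=0, minute=0, second=0, microsecond=0)
    raise ValueError(f'unsupported unit: {unit}')

ts = datetime(2020, 9, 28, 10, 40, 45)
assert date_trunc('hour', ts) == datetime(2020, 9, 28, 10, 0, 0)   # as in the example above
assert date_trunc('month', ts) == datetime(2020, 9, 1, 0, 0, 0)
```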
## now {#now}

Returns the current date and time.

**Syntax**

``` sql
now([timezone])
```

**Parameters**

- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) for the returned value (optional). [String](../../sql-reference/data-types/string.md).

**Returned value**

- Current date and time.

Type: [Datetime](../../sql-reference/data-types/datetime.md).

**Example**

Query without timezone:

``` sql
SELECT now();
```

Result:

``` text
┌───────────────now()─┐
│ 2020-10-17 07:42:09 │
└─────────────────────┘
```

Query with the specified timezone:

``` sql
SELECT now('Europe/Moscow');
```

Result:

``` text
┌─now('Europe/Moscow')─┐
│ 2020-10-17 10:42:23  │
└──────────────────────┘
```
## today {#today}
@ -153,15 +153,18 @@ A fast, decent-quality non-cryptographic hash function for a string obtained fro
`URLHash(s, N)` – Calculates a hash from a string up to the N level in the URL hierarchy, without one of the trailing symbols `/`,`?` or `#` at the end, if present.
Levels are the same as in URLHierarchy. This function is specific to Yandex.Metrica.

## farmFingerprint64 {#farmfingerprint64}

## farmHash64 {#farmhash64}

Produces a 64-bit [FarmHash](https://github.com/google/farmhash) or Fingerprint value. Prefer `farmFingerprint64` for a stable and portable value.

``` sql
farmFingerprint64(par1, ...)
farmHash64(par1, ...)
```

These functions use the `Fingerprint64` and `Hash64` methods respectively from all [available methods](https://github.com/google/farmhash/blob/master/src/farmhash.h).

**Parameters**
@ -306,3 +306,67 @@ execute_native_thread_routine
start_thread
clone
```

## tid {#tid}

Returns the id of the thread in which the current [Block](https://clickhouse.tech/docs/en/development/architecture/#block) is processed.

**Syntax**

``` sql
tid()
```

**Returned value**

- Current thread id. [UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges).

**Example**

Query:

``` sql
SELECT tid();
```

Result:

``` text
┌─tid()─┐
│  3878 │
└───────┘
```

## logTrace {#logtrace}

Emits a trace log message to the server log for each [Block](https://clickhouse.tech/docs/en/development/architecture/#block).

**Syntax**

``` sql
logTrace('message')
```

**Parameters**

- `message` — Message that is emitted to the server log. [String](../../sql-reference/data-types/string.md#string).

**Returned value**

- Always returns 0.

**Example**

Query:

``` sql
SELECT logTrace('logTrace message');
```

Result:

``` text
┌─logTrace('logTrace message')─┐
│                            0 │
└──────────────────────────────┘
```

[Original article](https://clickhouse.tech/docs/en/query_language/functions/introspection/) <!--hide-->
@ -1657,4 +1657,24 @@ Result:
10  10  19 19 39 39
```

## errorCodeToName {#error-code-to-name}

**Syntax**

``` sql
errorCodeToName(1)
```

**Returned value**

- Variable name for the error code.

Type: [LowCardinality(String)](../../sql-reference/data-types/lowcardinality.md).

Result:

``` text
UNSUPPORTED_METHOD
```

[Original article](https://clickhouse.tech/docs/en/query_language/functions/other_functions/) <!--hide-->
@ -323,6 +323,62 @@ This function accepts a number or date or date with time, and returns a string c

This function accepts a number or date or date with time, and returns a FixedString containing bytes representing the corresponding value in host order (little endian). Null bytes are dropped from the end. For example, a UInt32 type value of 255 is a FixedString that is one byte long.

## reinterpretAsUUID {#reinterpretasuuid}

This function accepts a 16-byte string and returns a UUID containing bytes representing the corresponding value in network byte order (big-endian). If the string isn't long enough, the function works as if the string is padded with the necessary number of null bytes to the end. If the string is longer than 16 bytes, the extra bytes at the end are ignored.

**Syntax**

``` sql
reinterpretAsUUID(fixed_string)
```

**Parameters**

- `fixed_string` — Big-endian byte string. [FixedString](../../sql-reference/data-types/fixedstring.md#fixedstring).

**Returned value**

- The UUID type value. [UUID](../../sql-reference/data-types/uuid.md#uuid-data-type).

**Examples**

String to UUID.

Query:

``` sql
SELECT reinterpretAsUUID(reverse(unhex('000102030405060708090a0b0c0d0e0f')))
```

Result:

``` text
┌─reinterpretAsUUID(reverse(unhex('000102030405060708090a0b0c0d0e0f')))─┐
│ 08090a0b-0c0d-0e0f-0001-020304050607                                  │
└───────────────────────────────────────────────────────────────────────┘
```

Going back and forth from String to UUID.

Query:

``` sql
WITH
    generateUUIDv4() AS uuid,
    identity(lower(hex(reverse(reinterpretAsString(uuid))))) AS str,
    reinterpretAsUUID(reverse(unhex(str))) AS uuid2
SELECT uuid = uuid2;
```

Result:

``` text
┌─equals(uuid, uuid2)─┐
│                   1 │
└─────────────────────┘
```

## CAST(x, T) {#type_conversion_function-cast}

Converts ‘x’ to the ‘t’ data type. The syntax CAST(x AS t) is also supported.
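The byte order in the `reinterpretAsUUID` examples above can be reproduced with a short Python sketch. This is an illustration inferred from the documented example output (it assumes each 64-bit half of the input is taken as little-endian, so the bytes within each half are reversed), not ClickHouse's implementation:

```python
import uuid

def reinterpret_as_uuid(data: bytes) -> uuid.UUID:
    """Build a UUID from up to 16 bytes: shorter inputs are right-padded
    with null bytes, longer inputs are truncated, and the bytes within
    each 64-bit half are reversed (assumed little-endian halves)."""
    padded = data[:16].ljust(16, b'\x00')
    return uuid.UUID(bytes=padded[:8][::-1] + padded[8:][::-1])

# reverse(unhex('000102030405060708090a0b0c0d0e0f')) from the example above
raw = bytes(range(16))[::-1]
assert str(reinterpret_as_uuid(raw)) == '08090a0b-0c0d-0e0f-0001-020304050607'
```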
@ -151,21 +151,43 @@ Types of intervals:

- `QUARTER`
- `YEAR`

You can also use a string literal when setting the `INTERVAL` value. For example, `INTERVAL 1 HOUR` is identical to the `INTERVAL '1 hour'` or `INTERVAL '1' hour`.

!!! warning "Warning"
    Intervals with different types can’t be combined. You can’t use expressions like `INTERVAL 4 DAY 1 HOUR`. Specify intervals in units that are smaller or equal to the smallest unit of the interval, for example, `INTERVAL 25 HOUR`. You can use consecutive operations, like in the example below.

Examples:

``` sql
SELECT now() AS current_date_time, current_date_time + INTERVAL 4 DAY + INTERVAL 3 HOUR;
```

``` text
┌───current_date_time─┬─plus(plus(now(), toIntervalDay(4)), toIntervalHour(3))─┐
│ 2020-11-03 22:09:50 │                                     2020-11-08 01:09:50 │
└─────────────────────┴────────────────────────────────────────────────────────┘
```

``` sql
SELECT now() AS current_date_time, current_date_time + INTERVAL '4 day' + INTERVAL '3 hour';
```

``` text
┌───current_date_time─┬─plus(plus(now(), toIntervalDay(4)), toIntervalHour(3))─┐
│ 2020-11-03 22:12:10 │                                     2020-11-08 01:12:10 │
└─────────────────────┴────────────────────────────────────────────────────────┘
```

``` sql
SELECT now() AS current_date_time, current_date_time + INTERVAL '4' day + INTERVAL '3' hour;
```

``` text
┌───current_date_time─┬─plus(plus(now(), toIntervalDay('4')), toIntervalHour('3'))─┐
│ 2020-11-03 22:33:19 │                                         2020-11-08 01:33:19 │
└─────────────────────┴────────────────────────────────────────────────────────────┘
```
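The interval arithmetic in the first example maps directly onto ordinary date-time arithmetic. As a rough cross-check (the starting timestamp is taken from the first example's output):

```python
from datetime import datetime, timedelta

# Mirror `now() + INTERVAL 4 DAY + INTERVAL 3 HOUR` with a fixed timestamp
t = datetime(2020, 11, 3, 22, 9, 50)
result = t + timedelta(days=4) + timedelta(hours=3)
print(result)  # 2020-11-08 01:09:50
```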
**See Also**

- [Interval](../../sql-reference/data-types/special-data-types/interval.md) data type
@ -221,3 +221,85 @@ returns

│ 1970-03-12 │ 1970-01-08 │ original │
└────────────┴────────────┴──────────┘
```

## OFFSET FETCH Clause {#offset-fetch}

`OFFSET` and `FETCH` allow you to retrieve data by portions. They specify a row block which you want to get by a single query.

``` sql
OFFSET offset_row_count {ROW | ROWS} [FETCH {FIRST | NEXT} fetch_row_count {ROW | ROWS} {ONLY | WITH TIES}]
```

The `offset_row_count` or `fetch_row_count` value can be a number or a literal constant. You can omit `fetch_row_count`; by default, it equals 1.

`OFFSET` specifies the number of rows to skip before starting to return rows from the query.

`FETCH` specifies the maximum number of rows that can be in the result of a query.

The `ONLY` option is used to return rows that immediately follow the rows omitted by the `OFFSET`. In this case the `FETCH` is an alternative to the [LIMIT](../../../sql-reference/statements/select/limit.md) clause. For example, the following query

``` sql
SELECT * FROM test_fetch ORDER BY a OFFSET 1 ROW FETCH FIRST 3 ROWS ONLY;
```

is identical to the query

``` sql
SELECT * FROM test_fetch ORDER BY a LIMIT 3 OFFSET 1;
```

The `WITH TIES` option is used to return any additional rows that tie for the last place in the result set according to the `ORDER BY` clause. For example, if `fetch_row_count` is set to 5 but two additional rows match the values of the `ORDER BY` columns in the fifth row, the result set will contain seven rows.

!!! note "Note"
    According to the standard, the `OFFSET` clause must come before the `FETCH` clause if both are present.

### Examples {#examples}

Input table:

``` text
┌─a─┬─b─┐
│ 1 │ 1 │
│ 2 │ 1 │
│ 3 │ 4 │
│ 1 │ 3 │
│ 5 │ 4 │
│ 0 │ 6 │
│ 5 │ 7 │
└───┴───┘
```

Usage of the `ONLY` option:

``` sql
SELECT * FROM test_fetch ORDER BY a OFFSET 3 ROW FETCH FIRST 3 ROWS ONLY;
```

Result:

``` text
┌─a─┬─b─┐
│ 2 │ 1 │
│ 3 │ 4 │
│ 5 │ 4 │
└───┴───┘
```

Usage of the `WITH TIES` option:

``` sql
SELECT * FROM test_fetch ORDER BY a OFFSET 3 ROW FETCH FIRST 3 ROWS WITH TIES;
```

Result:

``` text
┌─a─┬─b─┐
│ 2 │ 1 │
│ 3 │ 4 │
│ 5 │ 4 │
│ 5 │ 7 │
└───┴───┘
```
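The `ONLY` and `WITH TIES` examples above can be sketched with plain Python list operations — an illustration of the semantics, not of ClickHouse internals:

```python
# Rows mirror the example input table; sort by column `a` (ORDER BY a)
rows = [(1, 1), (2, 1), (3, 4), (1, 3), (5, 4), (0, 6), (5, 7)]
ordered = sorted(rows, key=lambda r: r[0])
offset, fetch = 3, 3

# OFFSET 3 ROW FETCH FIRST 3 ROWS ONLY: skip 3 rows, take the next 3
only = ordered[offset:offset + fetch]

# WITH TIES: additionally keep following rows that tie on the ORDER BY key
with_ties = list(only)
i = offset + fetch
while i < len(ordered) and ordered[i][0] == with_ties[-1][0]:
    with_ties.append(ordered[i])
    i += 1

print(only)       # [(2, 1), (3, 4), (5, 4)]
print(with_ties)  # [(2, 1), (3, 4), (5, 4), (5, 7)]
```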
[Original article](https://clickhouse.tech/docs/en/sql-reference/statements/select/order-by/) <!--hide-->
@ -204,7 +204,7 @@ SYSTEM STOP MOVES [[db.]merge_tree_family_table_name]

## Managing ReplicatedMergeTree Tables {#query-language-system-replicated}

ClickHouse can manage background replication related processes in [ReplicatedMergeTree](../../engines/table-engines/mergetree-family/replication/#table_engines-replication) tables.

### STOP FETCHES {#query_language-system-stop-fetches}
@ -57,7 +57,7 @@ Identifiers are:

Identifiers can be quoted or non-quoted. The latter is preferred.

Non-quoted identifiers must match the regex `^[0-9a-zA-Z_]*[a-zA-Z_][0-9a-zA-Z_]*$` and can not be equal to [keywords](#syntax-keywords). Examples: `x`, `_1`, `X_y__Z123_`.

If you want to use identifiers the same as keywords or you want to use other symbols in identifiers, quote it using double quotes or backticks, for example, `"id"`, `` `id` ``.
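As a rough self-check of the identifier rule, the sketch below validates the documented examples with Python's `re` module. The pattern here is an assumption chosen to accept all three documented examples (`x`, `_1`, `X_y__Z123_`) — i.e., the token must contain at least one letter or underscore — while rejecting all-digit tokens, which are numbers rather than identifiers:

```python
import re

# Assumed rule: any run of word characters containing at least one letter/underscore
IDENT = re.compile(r'^[0-9a-zA-Z_]*[a-zA-Z_][0-9a-zA-Z_]*$')

for ident in ('x', '_1', 'X_y__Z123_'):
    assert IDENT.fullmatch(ident)      # all documented examples are valid
assert not IDENT.fullmatch('123')      # an all-digit token is a number, not an identifier
```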
docs/en/sql-reference/table-functions/null.md (new file, 43 lines)
@ -0,0 +1,43 @@

---
toc_priority: 53
toc_title: null function
---

# null {#null-function}

Creates a temporary table of the specified structure with the [Null](../../engines/table-engines/special/null.md) table engine. According to the `Null`-engine properties, the table data is ignored and the table itself is dropped immediately after the query execution. The function is used for the convenience of test writing and demonstrations.

**Syntax**

``` sql
null('structure')
```

**Parameter**

- `structure` — A list of columns and column types. [String](../../sql-reference/data-types/string.md).

**Returned value**

A temporary `Null`-engine table with the specified structure.

**Example**

Query with the `null` function:

``` sql
INSERT INTO function null('x UInt64') SELECT * FROM numbers_mt(1000000000);
```

can replace three queries:

```sql
CREATE TABLE t (x UInt64) ENGINE = Null;
INSERT INTO t SELECT * FROM numbers_mt(1000000000);
DROP TABLE IF EXISTS t;
```

See also:

- [Null table engine](../../engines/table-engines/special/null.md)

[Original article](https://clickhouse.tech/docs/en/sql-reference/table-functions/null/) <!--hide-->
@ -19,7 +19,7 @@ $ sudo apt-get install git cmake python ninja-build

Or cmake3 instead of cmake on older systems.

## Install GCC 10 {#install-gcc-10}

There are several ways to do this.

@ -29,18 +29,18 @@ There are several ways to do this.

``` bash
$ sudo apt-get install software-properties-common
$ sudo apt-add-repository ppa:ubuntu-toolchain-r/test
$ sudo apt-get update
$ sudo apt-get install gcc-10 g++-10
```

### Install from Sources {#install-from-sources}

See [utils/ci/build-gcc-from-sources.sh](https://github.com/ClickHouse/ClickHouse/blob/master/utils/ci/build-gcc-from-sources.sh)

## Use GCC 10 for Builds {#use-gcc-10-for-builds}

``` bash
$ export CC=gcc-10
$ export CXX=g++-10
```

## Checkout ClickHouse Sources {#checkout-clickhouse-sources}

@ -76,7 +76,7 @@ The build requires the following components:

- Git (used only to checkout the sources, not needed for the build)
- CMake 3.10 or newer
- Ninja (recommended) or Make
- C++ compiler: gcc 10 or clang 8 or newer
- Linker: lld or gold (the classic GNU ld won't work)
- Python (only used inside the LLVM build and is optional)

@ -135,13 +135,13 @@ ClickHouse uses several external libraries for building. All of them

# C++ Compiler {#c-compiler}

GCC compilers starting from version 10 and Clang version 8 or above are supported for building ClickHouse.

Official Yandex builds currently use GCC because it generates machine code of slightly better performance (yielding a difference of up to several percent according to our benchmarks). And Clang is usually more convenient for development. Though, our continuous integration (CI) platform runs checks for about a dozen of build combinations.

To install GCC on Ubuntu run: `sudo apt install gcc g++`

Check the version of gcc: `gcc --version`. If it is below 10, then follow the instructions here: https://clickhouse.tech/docs/es/development/build/#install-gcc-10.

Mac OS X build is supported only for Clang. Just run `brew install llvm`

@ -156,11 +156,11 @@ Now that you are ready to build ClickHouse, we recommend you to create a separate directory

You can have several different directories (build_release, build_debug, etc.) for different types of builds.

While inside the `build` directory, configure your build by running CMake. Before the first run, you need to define environment variables that specify the compiler (gcc version 10 compiler in this example).

Linux:

    export CC=gcc-10 CXX=g++-10
    cmake ..

Mac OS X:
@ -1,261 +0,0 @@
|
|||||||
---
|
|
||||||
machine_translated: true
|
|
||||||
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
|
|
||||||
toc_priority: 69
|
|
||||||
toc_title: "C\xF3mo ejecutar pruebas de ClickHouse"
|
|
||||||
---
|
|
||||||
|
|
||||||
# Pruebas de ClickHouse {#clickhouse-testing}
|
|
||||||
|
|
||||||
## Pruebas funcionales {#functional-tests}
|
|
||||||
|
|
||||||
Las pruebas funcionales son las más simples y cómodas de usar. La mayoría de las características de ClickHouse se pueden probar con pruebas funcionales y son obligatorias para cada cambio en el código de ClickHouse que se puede probar de esa manera.
|
|
||||||
|
|
||||||
Cada prueba funcional envía una o varias consultas al servidor ClickHouse en ejecución y compara el resultado con la referencia.
|
|
||||||
|
|
||||||
Las pruebas se encuentran en `queries` directorio. Hay dos subdirectorios: `stateless` y `stateful`. Las pruebas sin estado ejecutan consultas sin datos de prueba precargados: a menudo crean pequeños conjuntos de datos sintéticos sobre la marcha, dentro de la prueba misma. Las pruebas estatales requieren datos de prueba precargados de Yandex.Métrica y no está disponible para el público en general. Tendemos a usar sólo `stateless` pruebas y evitar la adición de nuevos `stateful` prueba.
|
|
||||||
|
|
||||||
Cada prueba puede ser de dos tipos: `.sql` y `.sh`. `.sql` test es el script SQL simple que se canaliza a `clickhouse-client --multiquery --testmode`. `.sh` test es un script que se ejecuta por sí mismo.
|
|
||||||
|
|
||||||
Para ejecutar todas las pruebas, use `clickhouse-test` herramienta. Mira `--help` para la lista de posibles opciones. Simplemente puede ejecutar todas las pruebas o ejecutar un subconjunto de pruebas filtradas por subcadena en el nombre de la prueba: `./clickhouse-test substring`.
|
|
||||||
|
|
||||||
La forma más sencilla de invocar pruebas funcionales es copiar `clickhouse-client` a `/usr/bin/`, ejecutar `clickhouse-server` y luego ejecutar `./clickhouse-test` de su propio directorio.
|
|
||||||
|
|
||||||
Para agregar una nueva prueba, cree un `.sql` o `.sh` archivo en `queries/0_stateless` directorio, compruébelo manualmente y luego genere `.reference` archivo de la siguiente manera: `clickhouse-client -n --testmode < 00000_test.sql > 00000_test.reference` o `./00000_test.sh > ./00000_test.reference`.
|
|
||||||
|
|
||||||
Las pruebas deben usar (crear, soltar, etc.) solo tablas en `test` base de datos que se supone que se crea de antemano; también las pruebas pueden usar tablas temporales.
|
|
||||||
|
|
||||||
Si desea utilizar consultas distribuidas en pruebas funcionales, puede aprovechar `remote` función de la tabla con `127.0.0.{1..2}` direcciones para que el servidor se consulte; o puede usar clústeres de prueba predefinidos en el archivo de configuración del servidor como `test_shard_localhost`.
|
|
||||||
|
|
||||||
Algunas pruebas están marcadas con `zookeeper`, `shard` o `long` en sus nombres.
|
|
||||||
`zookeeper` es para pruebas que están usando ZooKeeper. `shard` es para pruebas que
|
|
||||||
requiere servidor para escuchar `127.0.0.*`; `distributed` o `global` tienen el mismo
|
|
||||||
significado. `long` es para pruebas que duran un poco más de un segundo. Usted puede
|
|
||||||
deshabilitar estos grupos de pruebas utilizando `--no-zookeeper`, `--no-shard` y
|
|
||||||
`--no-long` opciones, respectivamente.
|
|
||||||
|
|
||||||
## Bugs Conocidos {#known-bugs}
|
|
||||||
|
|
||||||
Si conocemos algunos errores que se pueden reproducir fácilmente mediante pruebas funcionales, colocamos pruebas funcionales preparadas en `tests/queries/bugs` directorio. Estas pruebas se moverán a `tests/queries/0_stateless` cuando se corrigen errores.
|
|
||||||
|
|
||||||
## Pruebas de integración {#integration-tests}
|
|
||||||
|
|
||||||
Las pruebas de integración permiten probar ClickHouse en la configuración agrupada y la interacción de ClickHouse con otros servidores como MySQL, Postgres, MongoDB. Son útiles para emular divisiones de red, caídas de paquetes, etc. Estas pruebas se ejecutan bajo Docker y crean múltiples contenedores con varios software.
|
|
||||||
|
|
||||||
Ver `tests/integration/README.md` sobre cómo ejecutar estas pruebas.
|
|
||||||
|
|
||||||
Tenga en cuenta que la integración de ClickHouse con controladores de terceros no se ha probado. Además, actualmente no tenemos pruebas de integración con nuestros controladores JDBC y ODBC.
|
|
||||||
|
|
||||||
## Pruebas unitarias {#unit-tests}
|
|
||||||
|
|
||||||
Las pruebas unitarias son útiles cuando desea probar no ClickHouse como un todo, sino una sola biblioteca o clase aislada. Puede habilitar o deshabilitar la compilación de pruebas con `ENABLE_TESTS` Opción CMake. Las pruebas unitarias (y otros programas de prueba) se encuentran en `tests` subdirectorios en todo el código. Para ejecutar pruebas unitarias, escriba `ninja test`. Algunas pruebas usan `gtest`, pero algunos son solo programas que devuelven un código de salida distinto de cero en caso de fallo de prueba.
|
|
||||||
|
|
||||||
No es necesariamente tener pruebas unitarias si el código ya está cubierto por pruebas funcionales (y las pruebas funcionales suelen ser mucho más simples de usar).
|
|
||||||
|
|
||||||
## Pruebas de rendimiento {#performance-tests}
|
|
||||||
|
|
||||||
Las pruebas de rendimiento permiten medir y comparar el rendimiento de alguna parte aislada de ClickHouse en consultas sintéticas. Las pruebas se encuentran en `tests/performance`. Cada prueba está representada por `.xml` archivo con la descripción del caso de prueba. Las pruebas se ejecutan con `clickhouse performance-test` herramienta (que está incrustada en `clickhouse` binario). Ver `--help` para la invocación.
|
|
||||||
|
|
||||||
Cada prueba ejecuta una o varias consultas (posiblemente con combinaciones de parámetros) en un bucle con algunas condiciones para detener (como “maximum execution speed is not changing in three seconds”) y medir algunas métricas sobre el rendimiento de las consultas (como “maximum execution speed”). Algunas pruebas pueden contener condiciones previas en el conjunto de datos de pruebas precargado.
|
|
||||||
|
|
||||||
Si desea mejorar el rendimiento de ClickHouse en algún escenario, y si se pueden observar mejoras en consultas simples, se recomienda encarecidamente escribir una prueba de rendimiento. Siempre tiene sentido usar `perf top` u otras herramientas de perf durante sus pruebas.
|
|
||||||
|
|
||||||
## Herramientas de prueba y secuencias de comandos {#test-tools-and-scripts}
|
|
||||||
|
|
||||||
Algunos programas en `tests` directorio no son pruebas preparadas, pero son herramientas de prueba. Por ejemplo, para `Lexer` hay una herramienta `src/Parsers/tests/lexer` que solo hacen la tokenización de stdin y escriben el resultado coloreado en stdout. Puede usar este tipo de herramientas como ejemplos de código y para exploración y pruebas manuales.
|
|
||||||
|
|
||||||
También puede colocar un par de archivos `.sh` y `.reference` junto con la herramienta para ejecutarlo en alguna entrada predefinida, entonces el resultado del script se puede comparar con `.reference` file. Este tipo de pruebas no están automatizadas.
|
|
||||||
|
|
||||||
## Pruebas diversas {#miscellaneous-tests}
|
|
||||||
|
|
||||||
Hay pruebas para diccionarios externos ubicados en `tests/external_dictionaries` y para modelos aprendidos a máquina en `tests/external_models`. Estas pruebas no se actualizan y deben transferirse a pruebas de integración.
|
|
||||||
|
|
||||||
Hay una prueba separada para inserciones de quórum. Esta prueba ejecuta el clúster ClickHouse en servidores separados y emula varios casos de fallas: división de red, caída de paquetes (entre nodos ClickHouse, entre ClickHouse y ZooKeeper, entre el servidor ClickHouse y el cliente, etc.), `kill -9`, `kill -STOP` y `kill -CONT` , como [Jepsen](https://aphyr.com/tags/Jepsen). A continuación, la prueba comprueba que todas las inserciones reconocidas se escribieron y todas las inserciones rechazadas no.
|
|
||||||
|
|
||||||
La prueba de quórum fue escrita por un equipo separado antes de que ClickHouse fuera de código abierto. Este equipo ya no trabaja con ClickHouse. La prueba fue escrita accidentalmente en Java. Por estas razones, la prueba de quórum debe reescribirse y trasladarse a pruebas de integración.
|
|
||||||
|
|
||||||
## Pruebas manuales {#manual-testing}
|
|
||||||
|
|
||||||
Cuando desarrolla una nueva característica, es razonable probarla también manualmente. Puede hacerlo con los siguientes pasos:
|
|
||||||
|
|
||||||
Construir ClickHouse. Ejecute ClickHouse desde el terminal: cambie el directorio a `programs/clickhouse-server` y ejecutarlo con `./clickhouse-server`. Se utilizará la configuración (`config.xml`, `users.xml` y archivos dentro de `config.d` y `users.d` directorios) desde el directorio actual de forma predeterminada. Para conectarse al servidor ClickHouse, ejecute `programs/clickhouse-client/clickhouse-client`.
|
|
||||||
|
|
||||||
Tenga en cuenta que todas las herramientas de clickhouse (servidor, cliente, etc.) son solo enlaces simbólicos a un único binario llamado `clickhouse`. Puede encontrar este binario en `programs/clickhouse`. Todas las herramientas también se pueden invocar como `clickhouse tool` en lugar de `clickhouse-tool`.
|
|
||||||
|
|
||||||
Alternativamente, puede instalar el paquete ClickHouse: ya sea una versión estable del repositorio de Yandex o puede crear un paquete para usted con `./release` en la raíz de fuentes de ClickHouse. Luego inicie el servidor con `sudo service clickhouse-server start` (o detener para detener el servidor). Busque registros en `/etc/clickhouse-server/clickhouse-server.log`.
|
|
||||||
|
|
||||||
Cuando ClickHouse ya está instalado en su sistema, puede crear un nuevo `clickhouse` binario y reemplazar el binario existente:
|
|
||||||
|
|
||||||
``` bash
|
|
||||||
$ sudo service clickhouse-server stop
|
|
||||||
$ sudo cp ./clickhouse /usr/bin/
|
|
||||||
$ sudo service clickhouse-server start
|
|
||||||
```
|
|
||||||
|
|
||||||
También puede detener el servidor de clickhouse del sistema y ejecutar el suyo propio con la misma configuración pero con el registro en la terminal:
|
|
||||||
|
|
||||||
``` bash
|
|
||||||
$ sudo service clickhouse-server stop
|
|
||||||
$ sudo -u clickhouse /usr/bin/clickhouse server --config-file /etc/clickhouse-server/config.xml
|
|
||||||
```
|
|
||||||
|
|
||||||
Ejemplo con gdb:
|
|
||||||
|
|
||||||
``` bash
|
|
||||||
$ sudo -u clickhouse gdb --args /usr/bin/clickhouse server --config-file /etc/clickhouse-server/config.xml
|
|
||||||
```
|
|
||||||
|
|
||||||
Si el servidor de clickhouse del sistema ya se está ejecutando y no desea detenerlo, puede cambiar los números de `config.xml` (o anularlos en un archivo en `config.d` directorio), proporcione la ruta de datos adecuada y ejecútela.
|
|
||||||
|
|
||||||
`clickhouse` binary casi no tiene dependencias y funciona en una amplia gama de distribuciones de Linux. Para probar rápidamente y sucio sus cambios en un servidor, simplemente puede `scp` su fresco construido `clickhouse` binario a su servidor y luego ejecútelo como en los ejemplos anteriores.
|
|
||||||
|
|
||||||
## Entorno de prueba {#testing-environment}
|
|
||||||
|
|
||||||
Antes de publicar la versión como estable, la implementamos en el entorno de prueba. El entorno de prueba es un clúster que procesa 1/39 parte de [El Yandex.Métrica](https://metrica.yandex.com/) datos. Compartimos nuestro entorno de pruebas con Yandex.Equipo de Metrica. ClickHouse se actualiza sin tiempo de inactividad sobre los datos existentes. Nos fijamos en un primer momento que los datos se procesan con éxito sin retraso de tiempo real, la replicación continúan trabajando y no hay problemas visibles para Yandex.Equipo de Metrica. La primera comprobación se puede hacer de la siguiente manera:
|
|
||||||
|
|
||||||
``` sql
|
|
||||||
SELECT hostName() AS h, any(version()), any(uptime()), max(UTCEventTime), count() FROM remote('example01-01-{1..3}t', merge, hits) WHERE EventDate >= today() - 2 GROUP BY h ORDER BY h;
|
|
||||||
```
|
|
||||||
|
|
||||||
En algunos casos también implementamos en el entorno de prueba de nuestros equipos de amigos en Yandex: Market, Cloud, etc. También tenemos algunos servidores de hardware que se utilizan con fines de desarrollo.
|
|
||||||
|
|
||||||
## Pruebas de carga {#load-testing}
|
|
||||||
|
|
||||||
Después de implementar en el entorno de prueba, ejecutamos pruebas de carga con consultas del clúster de producción. Esto se hace manualmente.
|
|
||||||
|
|
||||||
Asegúrese de que ha habilitado `query_log` en su clúster de producción.
|
|
||||||
|
|
||||||
Recopilar el registro de consultas para un día o más:
|
|
||||||
|
|
||||||
``` bash
|
|
||||||
$ clickhouse-client --query="SELECT DISTINCT query FROM system.query_log WHERE event_date = today() AND query LIKE '%ym:%' AND query NOT LIKE '%system.query_log%' AND type = 2 AND is_initial_query" > queries.tsv
|
|
||||||
```
|
|
||||||
|
|
||||||
Este es un ejemplo complicado. `type = 2` filtrará las consultas que se ejecutan correctamente. `query LIKE '%ym:%'` es seleccionar consultas relevantes de Yandex.Métrica. `is_initial_query` es seleccionar solo las consultas iniciadas por el cliente, no por ClickHouse (como partes del procesamiento de consultas distribuidas).
|
|
||||||
|
|
||||||
`scp` este registro en su clúster de prueba y ejecútelo de la siguiente manera:
|
|
||||||
|
|
||||||
``` bash
|
|
||||||
$ clickhouse benchmark --concurrency 16 < queries.tsv
|
|
||||||
```
|
|
||||||
|
|
||||||
(probablemente también desee especificar un `--user`)
|
|
||||||
|
|
||||||
Luego déjalo por una noche o un fin de semana e ir a tomar un descanso.
|
|
||||||
|
|
||||||
Usted debe comprobar que `clickhouse-server` no se bloquea, la huella de memoria está limitada y el rendimiento no se degrada con el tiempo.
|
|
||||||
|
|
||||||
Los tiempos de ejecución de consultas precisos no se registran y no se comparan debido a la alta variabilidad de las consultas y el entorno.
|
|
||||||
|
|
||||||
## Pruebas de construcción {#build-tests}
|
|
||||||
|
|
||||||
Las pruebas de compilación permiten verificar que la compilación no esté rota en varias configuraciones alternativas y en algunos sistemas extranjeros. Las pruebas se encuentran en `ci` directorio. Ejecutan compilación desde la fuente dentro de Docker, Vagrant y, a veces, con `qemu-user-static` dentro de Docker. Estas pruebas están en desarrollo y las ejecuciones de pruebas no están automatizadas.
|
|
||||||
|
|
||||||
Motivación:
|
|
||||||
|
|
||||||
Normalmente lanzamos y ejecutamos todas las pruebas en una sola variante de compilación ClickHouse. Pero hay variantes de construcción alternativas que no se prueban a fondo. Ejemplos:
|
|
||||||
|
|
||||||
- construir en FreeBSD;
|
|
||||||
- construir en Debian con bibliotecas de paquetes del sistema;
|
|
||||||
- construir con enlaces compartidos de bibliotecas;
|
|
||||||
- construir en la plataforma AArch64;
|
|
||||||
- construir en la plataforma PowerPc.
|
|
||||||
|
|
||||||
Por ejemplo, construir con paquetes del sistema es una mala práctica, porque no podemos garantizar qué versión exacta de paquetes tendrá un sistema. Pero esto es realmente necesario para los mantenedores de Debian. Por esta razón, al menos tenemos que admitir esta variante de construcción. Otro ejemplo: la vinculación compartida es una fuente común de problemas, pero es necesaria para algunos entusiastas.
|
|
||||||
|
|
||||||
Aunque no podemos ejecutar todas las pruebas en todas las variantes de compilaciones, queremos verificar al menos que varias variantes de compilación no estén rotas. Para este propósito utilizamos pruebas de construcción.
|
|
||||||
|
|
||||||
## Pruebas de Compatibilidad de protocolos {#testing-for-protocol-compatibility}
|
|
||||||
|
|
||||||
Cuando ampliamos el protocolo de red ClickHouse, probamos manualmente que el antiguo clickhouse-client funciona con el nuevo clickhouse-server y el nuevo clickhouse-client funciona con el antiguo clickhouse-server (simplemente ejecutando binarios de los paquetes correspondientes).
|
|
||||||
|
|
||||||
## Ayuda del compilador {#help-from-the-compiler}
|
|
||||||
|
|
||||||
Código principal de ClickHouse (que se encuentra en `dbms` directorio) se construye con `-Wall -Wextra -Werror` y con algunas advertencias habilitadas adicionales. Aunque estas opciones no están habilitadas para bibliotecas de terceros.
|
|
||||||
|
|
||||||
Clang tiene advertencias aún más útiles: puedes buscarlas con `-Weverything` y elige algo para la compilación predeterminada.
|
|
||||||
|
|
||||||
Para las compilaciones de producción, se usa gcc (todavía genera un código ligeramente más eficiente que clang). Para el desarrollo, el clang suele ser más conveniente de usar. Puede construir en su propia máquina con el modo de depuración (para ahorrar batería de su computadora portátil), pero tenga en cuenta que el compilador puede generar más advertencias con `-O3` debido a un mejor flujo de control y análisis entre procedimientos. Al construir con clang con el modo de depuración, la versión de depuración de `libc++` se utiliza que permite detectar más errores en tiempo de ejecución.
|
|
||||||
|
|
||||||
## Sanitizers {#sanitizers}

**Address sanitizer**.
We run functional and integration tests under ASan on a per-commit basis.

**Valgrind (Memcheck)**.
We run functional tests under Valgrind overnight. It takes multiple hours. Currently there is one known false positive in the `re2` library, see [this article](https://research.swtch.com/sparse).

**Undefined behaviour sanitizer.**
We run functional and integration tests under UBSan on a per-commit basis.

**Thread sanitizer**.
We run functional tests under TSan on a per-commit basis. We still don't run integration tests under TSan per commit.

**Memory sanitizer**.
Currently we still don't use MSan.

**Debug allocator.**
The debug version of `jemalloc` is used for debug builds.
## Fuzzing {#fuzzing}

ClickHouse fuzzing is implemented both with [libFuzzer](https://llvm.org/docs/LibFuzzer.html) and with random SQL queries.
All fuzz testing should be performed with sanitizers (Address and Undefined).

LibFuzzer is used for isolated fuzz testing of library code. Fuzzers are implemented as part of the test code and have "_fuzzer" name postfixes.
A fuzzer example can be found at `src/Parsers/tests/lexer_fuzzer.cpp`. LibFuzzer-specific configs, dictionaries and corpus are stored in `tests/fuzz`.
We encourage you to write fuzz tests for every functionality that handles user input.

Fuzzers are not built by default. To build fuzzers, both the `-DENABLE_FUZZING=1` and `-DENABLE_TESTS=1` options must be set.
We recommend disabling Jemalloc while building fuzzers. The configuration used to integrate ClickHouse fuzzing into
Google OSS-Fuzz can be found in `docker/fuzz`.

We also use a simple fuzz test to generate random SQL queries and check that the server does not die executing them.
You can find it in `00746_sql_fuzzy.pl`. This test should be run continuously (overnight and longer).
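The random-query idea behind `00746_sql_fuzzy.pl` can be illustrated with a toy generator (a sketch only; the expression and function lists here are made up, and the real script is far richer):

```python
import random

def random_query(rng: random.Random) -> str:
    # toy generator: nest a few function calls around a random expression
    exprs = ["1", "NULL", "'abc'", "number", "[1, 2, 3]"]
    funcs = ["toString", "length", "reverse"]
    expr = rng.choice(exprs)
    for _ in range(rng.randrange(3)):
        expr = f"{rng.choice(funcs)}({expr})"
    return f"SELECT {expr} FROM system.numbers LIMIT {rng.randrange(1, 10)}"

rng = random.Random(0)  # fixed seed so the run is reproducible
queries = [random_query(rng) for _ in range(5)]
```

A harness would feed each generated query to a test server and only check that the server stays alive, ignoring individual query errors.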
## Security Audit {#security-audit}

People from the Yandex Security Team do a basic overview of ClickHouse capabilities from a security standpoint.
## Static Analyzers {#static-analyzers}

We run `PVS-Studio` on a per-commit basis. We have evaluated `clang-tidy`, `Coverity`, `cppcheck`, `PVS-Studio`, `tscancode`. You will find usage instructions in the `tests/instructions/` directory. You can also read [the article in Russian](https://habr.com/company/yandex/blog/342018/).

If you use `CLion` as an IDE, you can leverage some `clang-tidy` checks out of the box.
## Hardening {#hardening}

`FORTIFY_SOURCE` is used by default. It is almost useless, but it still makes sense in rare cases, so we don't disable it.
## Code Style {#code-style}

Code style rules are described [here](https://clickhouse.tech/docs/en/development/style/).

To check for some common style violations, you can use the `utils/check-style` script.

To force the proper style of your code, you can use `clang-format`. The file `.clang-format` is located at the sources root. It mostly corresponds to our actual code style. But it is not recommended to apply `clang-format` to existing files because it makes the formatting worse. You can use the `clang-format-diff` tool, which you can find in the clang source repository.

Alternatively you can try the `uncrustify` tool to reformat your code. The configuration is in `uncrustify.cfg` at the sources root. It is less tested than `clang-format`.

`CLion` has its own code formatter that has to be tuned for our code style.
## Metrica B2B Tests {#metrica-b2b-tests}

Each ClickHouse release is tested with the Yandex Metrica and AppMetrica engines. Testing and stable versions of ClickHouse are deployed on VMs and run with a small copy of the Metrica engine that processes a fixed sample of input data. Then the results of the two instances of the Metrica engine are compared.

These tests are automated by a separate team. Due to the high number of moving parts, the tests fail most of the time for completely unrelated reasons that are very difficult to figure out. Most likely these tests have negative value for us. Nevertheless, they have proved useful in about one or two cases out of hundreds.
## Test Coverage {#test-coverage}

As of July 2018, we don't track test coverage.
## Test Automation {#test-automation}

We run tests with the Yandex internal CI and job automation system named "Sandbox".

Build jobs and tests are run in Sandbox on a per-commit basis. The resulting packages and test results are published on GitHub and can be downloaded via direct links. Artifacts are stored eternally. When you send a pull request on GitHub, we tag it as "can be tested" and our CI system will build ClickHouse packages (release, debug, with address sanitizer, etc.) for you.

We don't use Travis CI due to its limits on time and computational power.
We don't use Jenkins. It was used before, and now we are happy we are not using Jenkins.

[Original article](https://clickhouse.tech/docs/en/development/tests/) <!--hide-->
docs/es/development/tests.md (symbolic link)
@ -0,0 +1 @@
../../en/development/tests.md
@ -464,69 +464,6 @@ The kurtosis of the given distribution. Type — [Float64](../../sql-reference/d
SELECT kurtSamp(value) FROM series_with_value_column
```
## timeSeriesGroupSum(uid, timestamp, value) {#agg-function-timeseriesgroupsum}

`timeSeriesGroupSum` can aggregate different time series whose sample timestamps are not aligned.
It uses linear interpolation between two sample timestamps and then sums the time series together.

- `uid` is the time series unique id, `UInt64`.
- `timestamp` is of Int64 type, in order to support millisecond or microsecond precision.
- `value` is the metric.

The function returns an array of tuples with `(timestamp, aggregated_value)` pairs.

Before using this function, make sure `timestamp` is in ascending order.

Example:

``` text
┌─uid─┬─timestamp─┬─value─┐
│   1 │         2 │   0.2 │
│   1 │         7 │   0.7 │
│   1 │        12 │   1.2 │
│   1 │        17 │   1.7 │
│   1 │        25 │   2.5 │
│   2 │         3 │   0.6 │
│   2 │         8 │   1.6 │
│   2 │        12 │   2.4 │
│   2 │        18 │   3.6 │
│   2 │        24 │   4.8 │
└─────┴───────────┴───────┘
```

``` sql
CREATE TABLE time_series(
    uid       UInt64,
    timestamp Int64,
    value     Float64
) ENGINE = Memory;
INSERT INTO time_series VALUES
    (1,2,0.2),(1,7,0.7),(1,12,1.2),(1,17,1.7),(1,25,2.5),
    (2,3,0.6),(2,8,1.6),(2,12,2.4),(2,18,3.6),(2,24,4.8);

SELECT timeSeriesGroupSum(uid, timestamp, value)
FROM (
    SELECT * FROM time_series order by timestamp ASC
);
```

And the result will be:

``` text
[(2,0.2),(3,0.9),(7,2.1),(8,2.4),(12,3.6),(17,5.1),(18,5.4),(24,7.2),(25,2.5)]
```
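The interpolate-then-sum semantics can be sketched in Python to check the result above (a toy model of the function's behavior, not its actual implementation; a series contributes nothing outside its own time range):

```python
from collections import defaultdict

def time_series_group_sum(rows):
    # rows: (uid, timestamp, value) with ascending timestamps per series
    series = defaultdict(list)
    for uid, ts, val in rows:
        series[uid].append((ts, val))

    result = []
    for t in sorted({ts for _, ts, _ in rows}):
        total = 0.0
        for pts in series.values():
            if not pts[0][0] <= t <= pts[-1][0]:
                continue  # outside this series' range: no contribution
            for (t0, v0), (t1, v1) in zip(pts, pts[1:]):
                if t0 <= t <= t1:
                    # linear interpolation on the segment containing t
                    total += v0 + (v1 - v0) * (t - t0) / (t1 - t0)
                    break
        result.append((t, round(total, 10)))  # round away float noise
    return result

rows = [(1, 2, 0.2), (1, 7, 0.7), (1, 12, 1.2), (1, 17, 1.7), (1, 25, 2.5),
        (2, 3, 0.6), (2, 8, 1.6), (2, 12, 2.4), (2, 18, 3.6), (2, 24, 4.8)]
```

Running `time_series_group_sum(rows)` on the example data reproduces the array shown above.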
## timeSeriesGroupRateSum(uid, timestamp, value) {#agg-function-timeseriesgroupratesum}

Similarly to `timeSeriesGroupSum`, `timeSeriesGroupRateSum` calculates the rate of each time series and then sums the rates together.
Likewise, the timestamp should be in ascending order before using this function.

Applying this function to the data from the `timeSeriesGroupSum` example, you get the following result:

``` text
[(2,0),(3,0.1),(7,0.3),(8,0.3),(12,0.3),(17,0.3),(18,0.3),(24,0.3),(25,0.1)]
```
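The rate semantics can likewise be sketched in Python (again a toy model, not the actual implementation): at each point, a series contributes the slope from its latest earlier sample to the interpolated value at `t`, and contributes nothing at or before its first sample or after its last.

```python
from collections import defaultdict

def time_series_group_rate_sum(rows):
    # rows: (uid, timestamp, value) with ascending timestamps per series
    series = defaultdict(list)
    for uid, ts, val in rows:
        series[uid].append((ts, val))

    def value_at(pts, t):
        # linear interpolation inside the series' time range
        for (t0, v0), (t1, v1) in zip(pts, pts[1:]):
            if t0 <= t <= t1:
                return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

    result = []
    for t in sorted({ts for _, ts, _ in rows}):
        total = 0.0
        for pts in series.values():
            if not pts[0][0] < t <= pts[-1][0]:
                continue  # no rate at or before the first sample, none past the last
            prev_t, prev_v = max(p for p in pts if p[0] < t)
            total += (value_at(pts, t) - prev_v) / (t - prev_t)
        result.append((t, round(total, 10)))  # round away float noise
    return result

rows = [(1, 2, 0.2), (1, 7, 0.7), (1, 12, 1.2), (1, 17, 1.7), (1, 25, 2.5),
        (2, 3, 0.6), (2, 8, 1.6), (2, 12, 2.4), (2, 18, 3.6), (2, 24, 4.8)]
```

Running `time_series_group_rate_sum(rows)` on the example data reproduces the result array shown above.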
## avg(x) {#agg_function-avg}

Calculates the average.
@ -20,7 +20,7 @@ $ sudo apt-get install git cmake python ninja-build

Or cmake3 instead of cmake on older systems.

## Install GCC 10 {#install-gcc-10}

There are several ways to do this.
@ -30,18 +30,18 @@ $ sudo apt-get install git cmake python ninja-build
$ sudo apt-get install software-properties-common
$ sudo apt-add-repository ppa:ubuntu-toolchain-r/test
$ sudo apt-get update
$ sudo apt-get install gcc-10 g++-10
```

### Install from Sources {#install-from-sources}

Look at [utils/ci/build-gcc-from-sources.sh](https://github.com/ClickHouse/ClickHouse/blob/master/utils/ci/build-gcc-from-sources.sh)

## Use GCC 10 for Builds {#use-gcc-10-for-builds}

``` bash
$ export CC=gcc-10
$ export CXX=g++-10
```

## Checkout ClickHouse Sources {#checkout-clickhouse-sources}
@ -77,7 +77,7 @@ $ cd ..
- Git (used only to checkout the sources needed for the build)
- CMake 3.10 or newer
- Ninja (recommended) or Make
- C++ compiler: gcc 10 or clang 8 or newer
- Linker: lld or gold (the classic GNU ld won't work)
- Python (only used inside the LLVM build, and optional)
@ -137,13 +137,13 @@ toc_title: "\u062F\u0633\u062A\u0648\u0631\u0627\u0644\u0639\u0645\u0644 \u062A\

# C++ Compiler {#c-compiler}

GCC compilers starting from version 10 and Clang version 8 or above are supported for building ClickHouse.

Official Yandex builds currently use GCC because it generates machine code of slightly better performance (yielding a difference of up to several percent according to our benchmarks). Clang is usually more convenient for development. Though, our continuous integration (CI) platform runs checks for about a dozen build combinations.

To install GCC on Ubuntu, run: `sudo apt install gcc g++`

Check the version of gcc: `gcc --version`. If it is below 10, then follow the instructions here: https://clickhouse.tech/docs/fa/development/build/#install-gcc-10.

Mac OS X builds are supported only for Clang. Just run `brew install llvm`
@ -158,11 +158,11 @@ toc_title: "\u062F\u0633\u062A\u0648\u0631\u0627\u0644\u0639\u0645\u0644 \u062A\

You can have several different directories (build_release, build_debug, etc.) for different types of build.

While inside the `build` directory, configure your build by running CMake. Before the first run, you need to define environment variables that specify the compiler (the version 10 gcc compiler in this example).

Linux:

    export CC=gcc-10 CXX=g++-10
    cmake ..

Mac OS X:
@ -1,262 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 69
toc_title: "How to Run ClickHouse Tests"
---
# ClickHouse Testing {#clickhouse-testing}

## Functional Tests {#functional-tests}

Functional tests are the simplest and most convenient to use. Most ClickHouse features can be tested with functional tests, and they are mandatory for every change in ClickHouse code that can be tested that way.

Each functional test sends one or multiple queries to the running ClickHouse server and compares the result with a reference.

Tests are located in the `queries` directory. There are two subdirectories: `stateless` and `stateful`. Stateless tests run queries without any preloaded test data - they often create small synthetic datasets on the fly, within the test itself. Stateful tests require preloaded test data from Yandex.Metrica that is not available to the general public. We tend to use only `stateless` tests and avoid adding new `stateful` tests.

Each test can be one of two types: `.sql` and `.sh`. A `.sql` test is a simple SQL script that is piped to `clickhouse-client --multiquery --testmode`. A `.sh` test is a script that is run by itself.

To run all tests, use the `clickhouse-test` tool. Look at `--help` for the list of possible options. You can simply run all tests or run a subset of tests filtered by a substring in the test name: `./clickhouse-test substring`.

The simplest way to invoke functional tests is to copy `clickhouse-client` to `/usr/bin/`, run `clickhouse-server`, and then run `./clickhouse-test` from its own directory.

To add a new test, create a `.sql` or `.sh` file in the `queries/0_stateless` directory, check it manually, and then generate the `.reference` file in the following way: `clickhouse-client -n --testmode < 00000_test.sql > 00000_test.reference` or `./00000_test.sh > ./00000_test.reference`.

Tests should use (create, drop, etc.) only tables in the `test` database, which is assumed to be created beforehand; tests can also use temporary tables.

If you want to use distributed queries in functional tests, you can leverage the `remote` table function with `127.0.0.{1..2}` addresses, or you can use predefined test clusters from the server configuration file such as `test_shard_localhost`.

Some tests are marked with `zookeeper`, `shard` or `long` in their names.
`zookeeper` is for tests that use ZooKeeper. `shard` is for tests that
require the server to listen on `127.0.0.*`; `distributed` or `global` have the same
meaning. `long` is for tests that run slightly longer than one second. You can
disable these groups of tests using the `--no-zookeeper`, `--no-shard` and
`--no-long` options, respectively.
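The compare-against-reference step that functional tests rely on can be sketched as follows (greatly simplified; the test name is hypothetical, and the real `clickhouse-test` runner does far more):

```python
from pathlib import Path
import tempfile

def passes(actual_output: str, reference_file: Path) -> bool:
    # a functional test passes iff the query output matches the .reference file exactly
    return actual_output == reference_file.read_text()

# toy demonstration with a temporary "reference" file
with tempfile.TemporaryDirectory() as d:
    ref = Path(d) / "00000_test.reference"  # hypothetical test name
    ref.write_text("1\n")
    ok_match = passes("1\n", ref)
    ok_mismatch = passes("2\n", ref)
```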
## Known Bugs {#known-bugs}

If we know of bugs that can be easily reproduced by functional tests, we place prepared functional tests in the `tests/queries/bugs` directory. These tests are moved to `tests/queries/0_stateless` when the bugs are fixed.
## Integration Tests {#integration-tests}

Integration tests allow testing ClickHouse in a clustered configuration and ClickHouse interaction with other servers like MySQL, Postgres, MongoDB. They are useful for emulating network splits, packet drops, etc. These tests run under Docker and create multiple containers with various software.

See `tests/integration/README.md` on how to run these tests.

Note that integration of ClickHouse with third-party drivers is not tested. We also currently don't have integration tests with our JDBC and ODBC drivers.
## Unit Tests {#unit-tests}

Unit tests are useful when you want to test not ClickHouse as a whole, but a single isolated library or class. You can enable or disable the build of tests with the `ENABLE_TESTS` CMake option. Unit tests (and other test programs) are located in `tests` subdirectories across the code. To run unit tests, type `ninja test`. Some tests use `gtest`, but some are just programs that return a non-zero exit code on test failure.

It is not necessary to have unit tests if the code is already covered by functional tests (and functional tests are usually much simpler to use).
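The second flavor above, a plain program that signals failure through its exit code, can be sketched like this (Python for illustration; the helper under test is hypothetical, not an actual ClickHouse unit):

```python
def trim_left(s: str) -> str:
    # stand-in for a unit under test (hypothetical helper)
    return s.lstrip(" ")

failures = 0
for inp, expected in [("  abc", "abc"), ("   ", ""), ("x", "x")]:
    if trim_left(inp) != expected:
        print(f"trim_left({inp!r}) failed")
        failures += 1

# a test runner like `ninja test` treats a non-zero exit code as failure;
# a real script would end with `sys.exit(1 if failures else 0)`.
exit_code = 1 if failures else 0
```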
## Performance Tests {#performance-tests}

Performance tests allow measuring and comparing the performance of some isolated part of ClickHouse on synthetic queries. Tests are located at `tests/performance`. Each test is represented by a `.xml` file with a description of the test case. Tests are run with the `clickhouse performance-test` tool (which is embedded in the `clickhouse` binary). See `--help` for invocation.

Each test runs one or multiple queries (possibly with combinations of parameters) in a loop with some stop condition (like "maximum execution speed is not changing in three seconds") and measures some metrics about query performance (like "maximum execution speed"). Some tests can contain preconditions on a preloaded test dataset.

If you want to improve the performance of ClickHouse in some scenario, and if improvements can be observed on simple queries, it is highly recommended to write a performance test. It always makes sense to use `perf top` or other perf tools during your tests.
## Test Tools and Scripts {#test-tools-and-scripts}

Some programs in the `tests` directory are not prepared tests, but test tools. For example, for `Lexer` there is a tool `src/Parsers/tests/lexer` that just does tokenization of stdin and writes the colorized result to stdout. You can use these kinds of tools as code examples and for exploration and manual testing.

You can also place a pair of files `.sh` and `.reference` along with a tool to run it on some predefined input - then the script result can be compared to the `.reference` file. These kinds of tests are not automated.
## Miscellaneous Tests {#miscellaneous-tests}

There are tests for external dictionaries located in `tests/external_dictionaries` and for machine-learned models in `tests/external_models`. These tests are not updated and must be transferred to integration tests.

There is a separate test for quorum inserts. This test runs a ClickHouse cluster on separate servers and emulates various failure cases: network split, packet drop (between ClickHouse nodes, between ClickHouse and ZooKeeper, between ClickHouse server and client, etc.), `kill -9`, `kill -STOP` and `kill -CONT`, like [Jepsen](https://aphyr.com/tags/Jepsen). The test then checks that all acknowledged inserts were written and all rejected inserts were not.

The quorum test was written by a separate team before ClickHouse was open-sourced. That team no longer works with ClickHouse. The test was accidentally written in Java. For these reasons, the quorum test must be rewritten and moved to integration tests.
## Manual Testing {#manual-testing}

When you develop a new feature, it is reasonable to also test it manually. You can do it with the following steps:

Build ClickHouse. Run ClickHouse from the terminal: change directory to `programs/clickhouse-server` and run it with `./clickhouse-server`. By default it will use the configuration (`config.xml`, `users.xml`, and files within the `config.d` and `users.d` directories) from the current directory. To connect to the ClickHouse server, run `programs/clickhouse-client/clickhouse-client`.

Note that all clickhouse tools (server, client, etc.) are just symlinks to a single binary named `clickhouse`. You can find this binary at `programs/clickhouse`. All tools can also be invoked as `clickhouse tool` instead of `clickhouse-tool`.

Alternatively you can install a ClickHouse package: either a stable release from the Yandex repository, or a package you build yourself with `./release` in the ClickHouse sources root. Then start the server with `sudo service clickhouse-server start` (or stop, to stop the server). Look for logs at `/etc/clickhouse-server/clickhouse-server.log`.

When ClickHouse is already installed on your system, you can build a new `clickhouse` binary and replace the existing binary:

``` bash
$ sudo service clickhouse-server stop
$ sudo cp ./clickhouse /usr/bin/
$ sudo service clickhouse-server start
```

Also you can stop the system clickhouse-server and run your own with the same configuration but with logging to the terminal:

``` bash
$ sudo service clickhouse-server stop
$ sudo -u clickhouse /usr/bin/clickhouse server --config-file /etc/clickhouse-server/config.xml
```

Example with gdb:

``` bash
$ sudo -u clickhouse gdb --args /usr/bin/clickhouse server --config-file /etc/clickhouse-server/config.xml
```

If the system clickhouse-server is already running and you don't want to stop it, you can change the port numbers in your `config.xml` (or override them in a file in the `config.d` directory), provide an appropriate data path, and run it.

The `clickhouse` binary has almost no dependencies and works across a wide range of Linux distributions. For quick and dirty testing of your changes on a server, you can simply `scp` your freshly built `clickhouse` binary to your server and then run it as in the examples above.
## Testing Environment {#testing-environment}

Before publishing a release as stable, we deploy it on a testing environment. The testing environment is a cluster that processes a 1/39 part of the [Yandex.Metrica](https://metrica.yandex.com/) data. We share our testing environment with the Yandex.Metrica team. ClickHouse is upgraded without downtime on top of existing data. We first check that the data is processed successfully without lagging behind realtime, that replication continues to work, and that there are no issues visible to the Yandex.Metrica team. The first check can be done in the following way:

``` sql
SELECT hostName() AS h, any(version()), any(uptime()), max(UTCEventTime), count() FROM remote('example01-01-{1..3}t', merge, hits) WHERE EventDate >= today() - 2 GROUP BY h ORDER BY h;
```

In some cases we also deploy to the testing environments of our friend teams in Yandex: Market, Cloud, etc. We also have some hardware servers that are used for development purposes.
## Load Testing {#load-testing}

After deploying to the testing environment, we run load testing with queries from the production cluster. This is done manually.

Make sure you have enabled `query_log` on your production cluster.

Collect the query log for a day or more:

``` bash
$ clickhouse-client --query="SELECT DISTINCT query FROM system.query_log WHERE event_date = today() AND query LIKE '%ym:%' AND query NOT LIKE '%system.query_log%' AND type = 2 AND is_initial_query" > queries.tsv
```

This is a rather complicated example. `type = 2` filters queries that were executed successfully. `query LIKE '%ym:%'` selects the relevant queries from Yandex.Metrica. `is_initial_query` selects only queries that were initiated by the client, not by ClickHouse itself (as parts of distributed query processing).

`scp` this log to your testing cluster and run it as follows:

``` bash
$ clickhouse benchmark --concurrency 16 < queries.tsv
```

(probably you also want to specify a `--user`)

Then leave it for a night or a weekend and go take a rest.

You should check that `clickhouse-server` doesn't crash, that the memory footprint is bounded, and that performance does not degrade over time.

Precise query execution timings are not recorded and not compared due to the high variability of queries and environment.
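The "performance does not degrade over time" check can be automated with a simple trend test over sampled latencies (a sketch; the slope threshold is made up, and the real check is done by eyeballing the run):

```python
def degrading(latencies_ms, max_slope=0.01):
    # least-squares slope of latency vs. sample index; a positive slope
    # above the threshold suggests degradation over the run
    n = len(latencies_ms)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(latencies_ms) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, latencies_ms))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var > max_slope

stable = [10.0, 10.1, 9.9, 10.0] * 25          # noisy but flat latencies
creeping = [10.0 + 0.05 * i for i in range(100)]  # steadily growing latencies
```

Here `degrading(stable)` is false while `degrading(creeping)` is true.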
## Build Tests {#build-tests}

Build tests allow checking that the build is not broken in various alternative configurations and on some external systems. Tests are located in the `ci` directory. They run builds from source inside Docker, Vagrant, and sometimes with `qemu-user-static` inside Docker. These tests are under development and the test runs are not automated.

Motivation:

Normally we release and run all tests on a single variant of the ClickHouse build. But there are alternative build variants that are not thoroughly tested. Examples:

- build on FreeBSD;
- build on Debian with libraries from system packages;
- build with shared linking of libraries;
- build on the AArch64 platform;
- build on the PowerPC platform.

For example, a build using system packages is bad practice, because we cannot guarantee what exact version of packages a system will have. But this is really needed by Debian maintainers, so we at least have to support this build variant. Another example: shared linking is a common source of trouble, but it is needed for some enthusiasts.

Though we cannot run all tests on all build variants, we want to check at least that the various build variants are not broken. For this purpose we use build tests.
## Testing for Protocol Compatibility {#testing-for-protocol-compatibility}

When we extend the ClickHouse network protocol, we test manually that the old clickhouse-client works with the new clickhouse-server and that the new clickhouse-client works with the old clickhouse-server (simply by running binaries from the corresponding packages).
## Help from the Compiler {#help-from-the-compiler}

The main ClickHouse code (located in the `dbms` directory) is built with `-Wall -Wextra -Werror` and with some additional warnings enabled. These options are not enabled for third-party libraries.

Clang has even more useful warnings: you can look for them with `-Weverything` and pick some for the default build.

For production builds, gcc is used (it still generates slightly more efficient code than clang). For development, clang is usually more convenient to use. You can build on your own machine in debug mode (to save your laptop's battery), but note that the compiler is able to generate more warnings with `-O3` due to better control-flow and interprocedural analysis. When building with clang, `libc++` is used instead of `libstdc++`, and when building in debug mode, the debug version of `libc++` is used, which allows catching more errors at runtime.
## Sanitizers {#sanitizers}

**Address sanitizer**.
We run functional and integration tests under ASan on a per-commit basis.

**Valgrind (Memcheck)**.
We run functional tests under Valgrind overnight. It takes multiple hours. Currently there is one known false positive in the `re2` library, see [this article](https://research.swtch.com/sparse).

**Undefined behaviour sanitizer.**
We run functional and integration tests under UBSan on a per-commit basis.

**Thread sanitizer**.
We run functional tests under TSan on a per-commit basis. We still don't run integration tests under TSan per commit.

**Memory sanitizer**.
Currently we still don't use MSan.

**Debug allocator.**
The debug version of `jemalloc` is used for debug builds.
## Fuzzing {#fuzzing}

ClickHouse fuzzing is implemented both with [libFuzzer](https://llvm.org/docs/LibFuzzer.html) and with random SQL queries.
All fuzz testing should be performed with sanitizers (Address and Undefined).

LibFuzzer is used for isolated fuzz testing of library code. Fuzzers are implemented as part of the test code and have "_fuzzer" name postfixes.
A fuzzer example can be found at `src/Parsers/tests/lexer_fuzzer.cpp`. LibFuzzer-specific configs, dictionaries and corpus are stored in `tests/fuzz`.
We encourage you to write fuzz tests for every functionality that handles user input.

Fuzzers are not built by default. To build fuzzers, both the `-DENABLE_FUZZING=1` and `-DENABLE_TESTS=1` options must be set.
We recommend disabling Jemalloc while building fuzzers. The configuration used to integrate ClickHouse fuzzing into
Google OSS-Fuzz can be found in `docker/fuzz`.

We also use a simple fuzz test to generate random SQL queries and check that the server does not die executing them.
You can find it in `00746_sql_fuzzy.pl`. This test should be run continuously (overnight and longer).
## Security Audit {#security-audit}

People from the Yandex Security Team do some basic overview of ClickHouse capabilities from the security standpoint.
## Static Analyzers {#static-analyzers}

We run `PVS-Studio` on per-commit basis. We have evaluated `clang-tidy`, `Coverity`, `cppcheck`, `PVS-Studio`, `tscancode`. You will find instructions for usage in the `tests/instructions/` directory. You can also read [the article in Russian](https://habr.com/company/yandex/blog/342018/).

If you use `CLion` as an IDE, you can leverage some `clang-tidy` checks out of the box.
## Hardening {#hardening}

`FORTIFY_SOURCE` is used by default. It is almost useless, but it still makes sense in rare cases and we don't disable it.
## Code Style {#code-style}

Code style rules are described [here](https://clickhouse.tech/docs/en/development/style/).

To check for some common style violations, you can use the `utils/check-style` script.

To force the proper style of your code, you can use `clang-format`. The file `.clang-format` is located at the sources root. It mostly corresponds with our actual code style. But it's not recommended to apply `clang-format` to existing files because it makes the formatting worse. You can use the `clang-format-diff` tool that you can find in the clang source repository.

Alternatively you can try the `uncrustify` tool to reformat your code. Configuration is in `uncrustify.cfg` in the sources root. It is less tested than `clang-format`.

`CLion` has its own code formatter that has to be tuned for our code style.
## Metrica B2B Tests {#metrica-b2b-tests}

Each ClickHouse release is tested with the Yandex Metrica and AppMetrica engines. Testing and stable versions of ClickHouse are deployed on VMs and run with a small copy of the Metrica engine that processes a fixed sample of input data. Then the results of the two instances of the Metrica engine are compared together.

These tests are automated by a separate team. Due to the high number of moving parts, tests fail most of the time for completely unrelated reasons that are very difficult to figure out. Most likely these tests have negative value for us. Nevertheless, these tests proved to be useful in about one or two cases out of hundreds.
## Test Coverage {#test-coverage}

As of July 2018 we don't track test coverage.
## Test Automation {#test-automation}

We run tests with the Yandex internal CI and job automation system named “Sandbox”.

Build jobs and tests are run in Sandbox on per-commit basis. The resulting packages and test results are published on GitHub and can be downloaded via direct links. Artifacts are stored eternally. When you send a pull request on GitHub, we tag it as “can be tested” and our CI system will build ClickHouse packages (release, debug, with address sanitizer, etc) for you.

We don't use Travis CI due to the limit on time and computational power.
We don't use Jenkins. It was used before and now we are happy we are not using Jenkins.

[Original article](https://clickhouse.tech/docs/en/development/tests/) <!--hide-->
1
docs/fa/development/tests.md
Symbolic link
@ -0,0 +1 @@
../../en/development/tests.md
@ -464,69 +464,6 @@ The kurtosis of the given distribution. Type — [Float64](../../sql

SELECT kurtSamp(value) FROM series_with_value_column
```
## timeSeriesGroupSum {#agg-function-timeseriesgroupsum}

`timeSeriesGroupSum` can aggregate different time series whose sample timestamps are not aligned.
It uses linear interpolation between two sample timestamps and then sums the time series together.

- `uid` is the time series unique id, `UInt64`.
- `timestamp` is of Int64 type in order to support millisecond or microsecond precision.
- `value` is the metric.

The function returns an array of tuples with `(timestamp, aggregated_value)` pairs.

Before using this function, make sure `timestamp` is in ascending order.

Example:
``` text
┌─uid─┬─timestamp─┬─value─┐
│ 1 │ 2 │ 0.2 │
│ 1 │ 7 │ 0.7 │
│ 1 │ 12 │ 1.2 │
│ 1 │ 17 │ 1.7 │
│ 1 │ 25 │ 2.5 │
│ 2 │ 3 │ 0.6 │
│ 2 │ 8 │ 1.6 │
│ 2 │ 12 │ 2.4 │
│ 2 │ 18 │ 3.6 │
│ 2 │ 24 │ 4.8 │
└─────┴───────────┴───────┘
```
``` sql
CREATE TABLE time_series(
    uid UInt64,
    timestamp Int64,
    value Float64
) ENGINE = Memory;
INSERT INTO time_series VALUES
    (1,2,0.2),(1,7,0.7),(1,12,1.2),(1,17,1.7),(1,25,2.5),
    (2,3,0.6),(2,8,1.6),(2,12,2.4),(2,18,3.6),(2,24,4.8);

SELECT timeSeriesGroupSum(uid, timestamp, value)
FROM (
    SELECT * FROM time_series order by timestamp ASC
);
```
And the result will be:
``` text
[(2,0.2),(3,0.9),(7,2.1),(8,2.4),(12,3.6),(17,5.1),(18,5.4),(24,7.2),(25,2.5)]
```
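The interpolate-then-sum semantics described above can be sketched in Python. This is a minimal model of the documented behaviour, not the actual ClickHouse implementation; all function and variable names are illustrative:

```python
def time_series_group_sum(rows):
    """Model of timeSeriesGroupSum: linearly interpolate every series at
    each observed timestamp (no extrapolation outside a series' own
    timestamp range), then sum the aligned values."""
    series = {}
    for uid, ts, value in rows:
        series.setdefault(uid, []).append((ts, value))
    for points in series.values():
        points.sort()

    def value_at(points, t):
        # Interpolated value of one series at t, or None if t lies
        # outside the series' own timestamp range.
        if t < points[0][0] or t > points[-1][0]:
            return None
        for (t0, v0), (t1, v1) in zip(points, points[1:]):
            if t0 <= t <= t1:
                return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
        return points[-1][1]

    timestamps = sorted({ts for _, ts, _ in rows})
    return [(t, round(sum(v for pts in series.values()
                          if (v := value_at(pts, t)) is not None), 6))
            for t in timestamps]

# The data from the example above.
rows = [(1, 2, 0.2), (1, 7, 0.7), (1, 12, 1.2), (1, 17, 1.7), (1, 25, 2.5),
        (2, 3, 0.6), (2, 8, 1.6), (2, 12, 2.4), (2, 18, 3.6), (2, 24, 4.8)]
result = time_series_group_sum(rows)
print(result)
```

Running this on the example data reproduces the documented result, including the endpoints (timestamps 2 and 25) where only one series is in range and contributes.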
## timeSeriesGroupRateSum {#agg-function-timeseriesgroupratesum}

Similarly to `timeSeriesGroupSum`, `timeSeriesGroupRateSum` calculates the rate of each time series and then sums the rates together.
Also, the timestamps should be in ascending order before using this function.

Applying this function to the data from the `timeSeriesGroupSum` example gives the following result:
``` text
[(2,0),(3,0.1),(7,0.3),(8,0.3),(12,0.3),(17,0.3),(18,0.3),(24,0.3),(25,0.1)]
```
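The rate-then-sum behaviour can be modelled the same way. This sketch assumes the rate at a sample point is the backward difference over the preceding segment (with rate 0 at a series' first point), which reproduces the documented output; it is not the actual implementation:

```python
def time_series_group_rate_sum(rows):
    """Model of timeSeriesGroupRateSum: a series' rate at sample point
    t_i is (v_i - v_{i-1}) / (t_i - t_{i-1}); the first point has rate
    0; between sample points the covering segment's rate is used;
    outside a series' range it contributes nothing. Rates are summed
    across series at every observed timestamp."""
    series = {}
    for uid, ts, value in rows:
        series.setdefault(uid, []).append((ts, value))
    for points in series.values():
        points.sort()

    def rate_at(points, t):
        if t < points[0][0] or t > points[-1][0]:
            return None
        if t == points[0][0]:
            return 0.0
        for (t0, v0), (t1, v1) in zip(points, points[1:]):
            if t0 < t <= t1:
                return (v1 - v0) / (t1 - t0)
        return None

    timestamps = sorted({ts for _, ts, _ in rows})
    return [(t, round(sum(r for pts in series.values()
                          if (r := rate_at(pts, t)) is not None), 6))
            for t in timestamps]

# The data from the timeSeriesGroupSum example.
rows = [(1, 2, 0.2), (1, 7, 0.7), (1, 12, 1.2), (1, 17, 1.7), (1, 25, 2.5),
        (2, 3, 0.6), (2, 8, 1.6), (2, 12, 2.4), (2, 18, 3.6), (2, 24, 4.8)]
rate_result = time_series_group_rate_sum(rows)
print(rate_result)
```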
## avg {#agg_function-avg}

Calculates the average.
@ -19,7 +19,7 @@ $ sudo apt-get install git cmake python ninja-build

Or cmake3 instead of cmake on older systems.

-## Install GCC 9 {#install-gcc-9}
+## Install GCC 10 {#install-gcc-10}

There are several ways to do this.
@ -29,18 +29,18 @@ There are several ways to do this.

$ sudo apt-get install software-properties-common
$ sudo apt-add-repository ppa:ubuntu-toolchain-r/test
$ sudo apt-get update
-$ sudo apt-get install gcc-9 g++-9
+$ sudo apt-get install gcc-10 g++-10
```

### Install from Sources {#install-from-sources}

See [utils/ci/build-gcc-from-sources.sh](https://github.com/ClickHouse/ClickHouse/blob/master/utils/ci/build-gcc-from-sources.sh)

-## Use GCC 9 for Builds {#use-gcc-9-for-builds}
+## Use GCC 10 for Builds {#use-gcc-10-for-builds}

``` bash
-$ export CC=gcc-9
+$ export CC=gcc-10
-$ export CXX=g++-9
+$ export CXX=g++-10
```

## Checkout ClickHouse Sources {#checkout-clickhouse-sources}
@ -76,7 +76,7 @@ The build requires the following components:

- Git (used only to check out the sources, not needed for the build itself)
- CMake 3.10 or newer
- Ninja (recommended) or Make
-- C++ compiler: gcc 9 or clang 8 or newer
+- C++ compiler: gcc 10 or clang 8 or newer
- Linker: lld or gold (the classic GNU LD won't work)
- Python (only used in the LLVM build, and it is optional)
@ -135,13 +135,13 @@ ClickHouse uses several external libraries for building. All

# C++ Compiler {#c-compiler}

-GCC compilers starting from version 9 and Clang version 8 or above are supported for building ClickHouse.
+GCC compilers starting from version 10 and Clang version 8 or above are supported for building ClickHouse.

Official Yandex builds currently use GCC because it generates machine code of slightly better performance (yielding a difference of up to several percent according to our benchmarks). And Clang is usually more convenient for development. Still, our continuous integration (CI) platform checks about a dozen build combinations.

To install GCC on Ubuntu, run: `sudo apt install gcc g++`

-Check the version of gcc: `gcc --version`. If it is below 9, follow the instructions here: https://clickhouse.tech/docs/fr/development/build/#install-gcc-9.
+Check the version of gcc: `gcc --version`. If it is below 10, follow the instructions here: https://clickhouse.tech/docs/fr/development/build/#install-gcc-10.

Mac OS X build is supported only for Clang. Just run `brew install llvm`

@ -156,11 +156,11 @@ Now that you are ready to build ClickHouse we recommend you to create a separate

You can have several different directories (build_release, build_debug, etc.) for different build types.

-While inside the `build` directory, configure your build by running CMake. Before the first run, you need to define environment variables that specify the compiler (gcc version 9 compiler in this example).
+While inside the `build` directory, configure your build by running CMake. Before the first run, you need to define environment variables that specify the compiler (gcc version 10 compiler in this example).

Linux:

-export CC=gcc-9 CXX=g++-9
+export CC=gcc-10 CXX=g++-10
cmake ..

Mac OS X:
@ -1,261 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 69
toc_title: "How to Run ClickHouse Tests"
---
# ClickHouse Testing {#clickhouse-testing}

## Functional Tests {#functional-tests}

Functional tests are the most simple and convenient to use. Most of ClickHouse features can be tested with functional tests, and they are mandatory to use for every change in ClickHouse code that can be tested that way.

Each functional test sends one or multiple queries to the running ClickHouse server and compares the result with a reference.

Tests are located in the `queries` directory. There are two subdirectories: `stateless` and `stateful`. Stateless tests run queries without any preloaded test data - they often create small synthetic datasets on the fly, within the test itself. Stateful tests require preloaded test data from Yandex.Metrica that is not available to the general public. We tend to use only `stateless` tests and avoid adding new `stateful` tests.

Each test can be one of two types: `.sql` and `.sh`. A `.sql` test is a simple SQL script that is piped to `clickhouse-client --multiquery --testmode`. A `.sh` test is a script that is run by itself.

To run all tests, use the `clickhouse-test` tool. Look at `--help` for the list of possible options. You can simply run all tests or run a subset of tests filtered by substring in the test name: `./clickhouse-test substring`.

The most simple way to invoke functional tests is to copy `clickhouse-client` to `/usr/bin/`, run `clickhouse-server` and then run `./clickhouse-test` from its own directory.

To add a new test, create a `.sql` or `.sh` file in the `queries/0_stateless` directory, check it manually, and then generate the `.reference` file in the following way: `clickhouse-client -n --testmode < 00000_test.sql > 00000_test.reference` or `./00000_test.sh > ./00000_test.reference`.

Tests should use (create, drop, etc) only tables in the `test` database that is assumed to be created beforehand; tests can also use temporary tables.

If you want to use distributed queries in functional tests, you can leverage the `remote` table function with `127.0.0.{1..2}` addresses, or you can use predefined test clusters in the server configuration file like `test_shard_localhost`.

Some tests are marked with `zookeeper`, `shard` or `long` in their names. `zookeeper` is for tests that are using ZooKeeper. `shard` is for tests that require the server to listen on `127.0.0.*`; `distributed` or `global` have the same meaning. `long` is for tests that run slightly longer than one second. You can disable these groups of tests using the `--no-zookeeper`, `--no-shard` and `--no-long` options, respectively.
## Known Bugs {#known-bugs}

If we know some bugs that can be easily reproduced by functional tests, we place prepared functional tests in the `tests/queries/bugs` directory. These tests will be moved to `tests/queries/0_stateless` when the bugs are fixed.

## Integration Tests {#integration-tests}

Integration tests allow testing ClickHouse in a clustered configuration and ClickHouse interaction with other servers like MySQL, Postgres, MongoDB. They are useful to emulate network splits, packet drops, etc. These tests are run under Docker and create multiple containers with various software.

See `tests/integration/README.md` on how to run these tests.

Note that integration of ClickHouse with third-party drivers is not tested. Also, we currently don't have integration tests with our JDBC and ODBC drivers.
## Unit Tests {#unit-tests}

Unit tests are useful when you want to test not ClickHouse as a whole, but a single isolated library or class. You can enable or disable the build of tests with the `ENABLE_TESTS` CMake option. Unit tests (and other test programs) are located in `tests` subdirectories across the code. To run unit tests, type `ninja test`. Some tests use `gtest`, but some are just programs that return a non-zero exit code on test failure.

It's not necessary to have unit tests if the code is already covered by functional tests (and functional tests are usually much simpler to use).
## Performance Tests {#performance-tests}

Performance tests allow measuring and comparing the performance of some isolated part of ClickHouse on synthetic queries. Tests are located at `tests/performance`. Each test is represented by an `.xml` file with a description of the test case. Tests are run with the `clickhouse performance-test` tool (which is embedded in the `clickhouse` binary). See `--help` for invocation.

Each test runs one or multiple queries (possibly with combinations of parameters) in a loop with some conditions for stop (like “maximum execution speed is not changing in three seconds”) and measures some metrics about query performance (like “maximum execution speed”). Some tests can contain preconditions on a preloaded test dataset.

If you want to improve the performance of ClickHouse in some scenario, and if improvements can be observed on simple queries, it is highly recommended to write a performance test. It always makes sense to use `perf top` or other perf tools during your tests.
## Test Tools and Scripts {#test-tools-and-scripts}

Some programs in the `tests` directory are not prepared tests, but test tools. For example, for `Lexer` there is a tool `src/Parsers/tests/lexer` that just does tokenization of stdin and writes the colorized result to stdout. You can use these kinds of tools as code examples and for exploration and manual testing.

You can also place a pair of files `.sh` and `.reference` along with the tool to run it on some predefined input - then the script result can be compared to the `.reference` file. These kinds of tests are not automated.
## Miscellaneous Tests {#miscellaneous-tests}

There are tests for external dictionaries located at `tests/external_dictionaries` and for machine-learned models in `tests/external_models`. These tests are not updated and must be transferred to integration tests.

There is a separate test for quorum inserts. This test runs a ClickHouse cluster on separate servers and emulates various failure cases: network split, packet drop (between ClickHouse nodes, between ClickHouse and ZooKeeper, between ClickHouse server and client, etc.), `kill -9`, `kill -STOP` and `kill -CONT`, like [Jepsen](https://aphyr.com/tags/Jepsen). Then the test checks that all acknowledged inserts were written and all rejected inserts were not.

The quorum test was written by a separate team before ClickHouse was open-sourced. This team no longer works with ClickHouse. The test was accidentally written in Java. For these reasons, the quorum test must be rewritten and moved to integration tests.
## Manual Testing {#manual-testing}

When you develop a new feature, it is reasonable to also test it manually. You can do it with the following steps:

Build ClickHouse. Run ClickHouse from the terminal: change directory to `programs/clickhouse-server` and run it with `./clickhouse-server`. It will use configuration (`config.xml`, `users.xml` and files within the `config.d` and `users.d` directories) from the current directory by default. To connect to the ClickHouse server, run `programs/clickhouse-client/clickhouse-client`.

Note that all clickhouse tools (server, client, etc) are just symlinks to a single binary named `clickhouse`. You can find this binary at `programs/clickhouse`. All tools can also be invoked as `clickhouse tool` instead of `clickhouse-tool`.

Alternatively you can install the ClickHouse package: either a stable release from the Yandex repository, or you can build a package for yourself with `./release` in the ClickHouse sources root. Then start the server with `sudo service clickhouse-server start` (or stop to stop the server). Look for logs at `/etc/clickhouse-server/clickhouse-server.log`.

When ClickHouse is already installed on your system, you can build a new `clickhouse` binary and replace the existing binary:
``` bash
$ sudo service clickhouse-server stop
$ sudo cp ./clickhouse /usr/bin/
$ sudo service clickhouse-server start
```
You can also stop the system clickhouse-server and run your own with the same configuration but with logging to the terminal:
``` bash
$ sudo service clickhouse-server stop
$ sudo -u clickhouse /usr/bin/clickhouse server --config-file /etc/clickhouse-server/config.xml
```
Example with gdb:
``` bash
$ sudo -u clickhouse gdb --args /usr/bin/clickhouse server --config-file /etc/clickhouse-server/config.xml
```
If the system clickhouse-server is already running and you don't want to stop it, you can change the port numbers in your `config.xml` (or override them in a file in the `config.d` directory), provide the appropriate data path, and run it.

The `clickhouse` binary has almost no dependencies and works across a wide range of Linux distributions. To quick and dirty test your changes on a server, you can simply `scp` your freshly built `clickhouse` binary to your server and then run it as in the examples above.

## Testing Environment {#testing-environment}

Before publishing a release as stable, we deploy it on the testing environment. The testing environment is a cluster that processes a 1/39 part of [Yandex.Metrica](https://metrica.yandex.com/) data. We share our testing environment with the Yandex.Metrica team. ClickHouse is upgraded without downtime on top of existing data. We look at first that the data is processed successfully without lagging from realtime, replication continues to work and there are no issues visible to the Yandex.Metrica team. The first check can be done in the following way:
``` sql
SELECT hostName() AS h, any(version()), any(uptime()), max(UTCEventTime), count() FROM remote('example01-01-{1..3}t', merge, hits) WHERE EventDate >= today() - 2 GROUP BY h ORDER BY h;
```
In some cases we also deploy to the testing environment of our friend teams in Yandex: Market, Cloud, etc. We also have some hardware servers that are used for development purposes.

## Load Testing {#load-testing}

After deploying to the testing environment, we run load testing with queries from the production cluster. This is done manually.

Make sure you have enabled `query_log` on your production cluster.

Collect the query log for a day or more:
``` bash
$ clickhouse-client --query="SELECT DISTINCT query FROM system.query_log WHERE event_date = today() AND query LIKE '%ym:%' AND query NOT LIKE '%system.query_log%' AND type = 2 AND is_initial_query" > queries.tsv
```
This is a complicated example. `type = 2` will filter queries that are executed successfully. `query LIKE '%ym:%'` is to select the relevant queries from Yandex.Metrica. `is_initial_query` is to select only queries that are initiated by the client, not by ClickHouse itself (as part of distributed query processing).

`scp` this log to your testing cluster and run it as follows:
``` bash
$ clickhouse benchmark --concurrency 16 < queries.tsv
```
(probably you also want to specify a `--user`)

Then leave it for a night or a weekend and go take a rest.

You should check that `clickhouse-server` doesn't crash, the memory footprint is bounded and the performance is not degrading over time.

Precise query execution timings are not recorded and not compared due to the high variability of queries and environment.
## Build Tests {#build-tests}

Build tests allow checking that the build is not broken on various alternative configurations and on some foreign systems. Tests are located in the `ci` directory. They run a build from source inside Docker, Vagrant, and sometimes with `qemu-user-static` inside Docker. These tests are under development and test runs are not automated.

Motivation:

Normally we release and run all tests on a single variant of the ClickHouse build. But there are alternative build variants that are not thoroughly tested. Examples:

- build on FreeBSD;
- build on Debian with libraries from system packages;
- build with shared linking of libraries;
- build on the AArch64 platform;
- build on the PowerPc platform.

For example, the build with system packages is bad practice, because we cannot guarantee what exact version of packages a system will have. But this is really needed by Debian maintainers. For this reason, we at least have to support this variant of the build. Another example: shared linking is a common source of trouble, but it is needed for some enthusiasts.

Though we cannot run all tests on all variants of builds, we want to check at least that the various build variants are not broken. For this purpose we use build tests.
## Testing for Protocol Compatibility {#testing-for-protocol-compatibility}

When we extend the ClickHouse network protocol, we test manually that the old clickhouse-client works with the new clickhouse-server and that the new clickhouse-client works with the old clickhouse-server (simply by running binaries from the corresponding packages).
## Help from the Compiler {#help-from-the-compiler}

Main ClickHouse code (located in the `dbms` directory) is built with `-Wall -Wextra -Werror` and with some additional warnings enabled. Although these options are not enabled for third-party libraries.

Clang has even more useful warnings - you can look for them with `-Weverything` and pick something for the default build.

For production builds, gcc is used (it still generates slightly more efficient code than clang). For development, clang is usually more convenient to use. You can build on your own machine in debug mode (to save the battery of your laptop), but please note that the compiler is able to generate more warnings with `-O3` due to better control flow and inter-procedure analysis. When building with clang in debug mode, the debug version of `libc++` is used, which allows catching more errors at runtime.
## Sanitizers {#sanitizers}

**Address sanitizer**.
We run functional and integration tests under ASan on per-commit basis.

**Valgrind (Memcheck)**.
We run functional tests under Valgrind overnight. It takes multiple hours. Currently there is one known false positive in the `re2` library, see [this article](https://research.swtch.com/sparse).

**Undefined behaviour sanitizer.**
We run functional and integration tests under UBSan on per-commit basis.

**Thread sanitizer**.
We run functional tests under TSan on per-commit basis. We still don't run integration tests under TSan on per-commit basis.

**Memory sanitizer**.
Currently we still don't use MSan.

**Debug allocator.**
The debug version of `jemalloc` is used for debug builds.
## Fuzzing {#fuzzing}
|
|
||||||
|
|
||||||
Clickhouse fuzzing est implémenté à la fois en utilisant [libFuzzer](https://llvm.org/docs/LibFuzzer.html) et des requêtes SQL aléatoires.
|
|
||||||
Tous les tests de fuzz doivent être effectués avec des désinfectants (adresse et indéfini).
|
|
||||||
|
|
||||||
LibFuzzer est utilisé pour les tests de fuzz isolés du code de la bibliothèque. Les Fuzzers sont implémentés dans le cadre du code de test et ont “_fuzzer” nom postfixes.
|
|
||||||
Exemple Fuzzer peut être trouvé à `src/Parsers/tests/lexer_fuzzer.cpp`. Les configs, dictionnaires et corpus spécifiques à LibFuzzer sont stockés à `tests/fuzz`.
|
|
||||||
Nous vous encourageons à écrire des tests fuzz pour chaque fonctionnalité qui gère l'entrée de l'utilisateur.
|
|
||||||
|
|
||||||
Fuzzers ne sont pas construits par défaut. Pour construire fuzzers à la fois `-DENABLE_FUZZING=1` et `-DENABLE_TESTS=1` options doivent être définies.
|
|
||||||
Nous vous recommandons de désactiver Jemalloc lors de la construction de fuzzers. Configuration utilisée pour intégrer clickhouse fuzzing à
|
|
||||||
Google OSS-Fuzz peut être trouvé à `docker/fuzz`.
|
|
||||||
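A build configuration for the fuzzers might look like the following sketch. The `-DENABLE_FUZZING` and `-DENABLE_TESTS` flag names come from the text above; the `ENABLE_JEMALLOC` option name and the plain `ninja` invocation are assumptions and may differ in the actual CMake files:

```shell
# Hypothetical out-of-source fuzzer build; verify option names against CMakeLists.txt.
mkdir -p build_fuzz && cd build_fuzz
cmake .. -DENABLE_FUZZING=1 -DENABLE_TESTS=1 -DENABLE_JEMALLOC=0
ninja
```

The resulting `*_fuzzer` binaries can then be pointed at the corpus directories under `tests/fuzz`.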
Also we use a simple fuzz test to generate random SQL queries and to check that the server doesn't die executing them.
You can find it in `00746_sql_fuzzy.pl`. This test should be run continuously (overnight and longer).

## Security Audit {#security-audit}

People from the Yandex Security Team do a basic overview of ClickHouse capabilities from the security standpoint.

## Static Analyzers {#static-analyzers}

We run `PVS-Studio` on a per-commit basis. We have evaluated `clang-tidy`, `Coverity`, `cppcheck`, `PVS-Studio`, `tscancode`. You will find instructions for usage in the `tests/instructions/` directory. Also you can read [the article in Russian](https://habr.com/company/yandex/blog/342018/).

If you use `CLion` as an IDE, you can leverage some `clang-tidy` checks out of the box.

## Hardening {#hardening}

`FORTIFY_SOURCE` is used by default. It is almost useless, but still makes sense in rare cases and we don't disable it.

## Code Style {#code-style}

Code style rules are described [here](https://clickhouse.tech/docs/en/development/style/).

To check for some common style violations, you can use the `utils/check-style` script.

To force the proper style of your code, you can use `clang-format`. The file `.clang-format` is located at the sources root. It mostly corresponds to our actual code style. But it's not recommended to apply `clang-format` to existing files because it makes formatting worse. You can use the `clang-format-diff` tool that you can find in the clang source repository.
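For example, one common (unverified here) way to reformat only the lines your branch touched with `clang-format-diff`, assuming the `clang-format-diff.py` script from the clang repository is on your `PATH` and your base branch is `master`:

```shell
# Reformat in place only the lines changed relative to master under src/;
# the script name and -p1 prefix-strip value may vary by clang version.
git diff -U0 --no-color master -- src/ | clang-format-diff.py -p1 -i
```

This avoids the wholesale reformatting of existing files that the paragraph above warns against.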
Alternatively you can try the `uncrustify` tool to reformat your code. Configuration is in `uncrustify.cfg` in the sources root. It is less tested than `clang-format`.

`CLion` has its own code formatter that has to be tuned for our code style.

## Metrica B2B Tests {#metrica-b2b-tests}

Each ClickHouse release is tested with the Yandex Metrica and AppMetrica engines. Testing and stable versions of ClickHouse are deployed on VMs and run with a small copy of the Metrica engine that processes a fixed sample of input data. Then the results of the two instances of the Metrica engine are compared.

These tests are automated by a separate team. Due to the high number of moving parts, the tests fail most of the time for completely unrelated reasons that are very difficult to figure out. Most likely these tests have negative value for us. Nevertheless, these tests have proved to be useful in about one or two cases out of hundreds.

## Test Coverage {#test-coverage}

As of July 2018, we don't track test coverage.

## Test Automation {#test-automation}

We run tests with the Yandex internal CI and job automation system named “Sandbox”.

Build jobs and tests are run in Sandbox on a per-commit basis. The resulting packages and test results are published on GitHub and can be downloaded by direct links. Artifacts are stored eternally. When you send a pull request on GitHub, we tag it as “can be tested” and our CI system will build ClickHouse packages (release, debug, with address sanitizer, etc) for you.

We don't use Travis CI due to the limit on time and computational power.
We don't use Jenkins. It was used before and now we are happy we are not using Jenkins.

[Original article](https://clickhouse.tech/docs/en/development/tests/) <!--hide-->
docs/fr/development/tests.md (Symbolic link)
@ -0,0 +1 @@
../../en/development/tests.md
@ -464,69 +464,6 @@ The kurtosis of the given distribution. Type — [Float64](../../sql-reference/d

SELECT kurtSamp(value) FROM series_with_value_column
```

## timeSeriesGroupSum(uid, timestamp, value) {#agg-function-timeseriesgroupsum}

`timeSeriesGroupSum` can aggregate different time series whose sample timestamps are not aligned.
It will use linear interpolation between two sample timestamps and then sum the time series together.

- `uid` is the time series unique id, `UInt64`.
- `timestamp` is of Int64 type in order to support millisecond or microsecond precision.
- `value` is the metric.

The function returns an array of tuples with `(timestamp, aggregated_value)` pairs.

Before using this function, make sure `timestamp` is in ascending order.

Example:

``` text
┌─uid─┬─timestamp─┬─value─┐
│   1 │         2 │   0.2 │
│   1 │         7 │   0.7 │
│   1 │        12 │   1.2 │
│   1 │        17 │   1.7 │
│   1 │        25 │   2.5 │
│   2 │         3 │   0.6 │
│   2 │         8 │   1.6 │
│   2 │        12 │   2.4 │
│   2 │        18 │   3.6 │
│   2 │        24 │   4.8 │
└─────┴───────────┴───────┘
```

``` sql
CREATE TABLE time_series(
    uid       UInt64,
    timestamp Int64,
    value     Float64
) ENGINE = Memory;
INSERT INTO time_series VALUES
    (1,2,0.2),(1,7,0.7),(1,12,1.2),(1,17,1.7),(1,25,2.5),
    (2,3,0.6),(2,8,1.6),(2,12,2.4),(2,18,3.6),(2,24,4.8);

SELECT timeSeriesGroupSum(uid, timestamp, value)
FROM (
    SELECT * FROM time_series order by timestamp ASC
);
```

And the result will be:

``` text
[(2,0.2),(3,0.9),(7,2.1),(8,2.4),(12,3.6),(17,5.1),(18,5.4),(24,7.2),(25,2.5)]
```

## timeSeriesGroupRateSum(uid, ts, val) {#agg-function-timeseriesgroupratesum}

Similarly to `timeSeriesGroupSum`, `timeSeriesGroupRateSum` calculates the rate of the time series and then sums the rates together.
Also, the timestamp should be in ascending order before using this function.

Applying this function to the data from the `timeSeriesGroupSum` example, you get the following result:

``` text
[(2,0),(3,0.1),(7,0.3),(8,0.3),(12,0.3),(17,0.3),(18,0.3),(24,0.3),(25,0.1)]
```

## avg (x) {#agg_function-avg}

Calculates the average.
@ -19,7 +19,7 @@ $ sudo apt-get install git cmake python ninja-build

Or cmake3 instead of cmake on older systems.

-## Install GCC 9 {#install-gcc-9}
+## Install GCC 10 {#install-gcc-10}

There are several ways to do this.

@ -29,18 +29,18 @@ $ sudo apt-get install git cmake python ninja-build
 $ sudo apt-get install software-properties-common
 $ sudo apt-add-repository ppa:ubuntu-toolchain-r/test
 $ sudo apt-get update
-$ sudo apt-get install gcc-9 g++-9
+$ sudo apt-get install gcc-10 g++-10
```

### Install from Sources {#install-from-sources}

Look at [utils/ci/build-gcc-from-sources.sh](https://github.com/ClickHouse/ClickHouse/blob/master/utils/ci/build-gcc-from-sources.sh)

-## Use GCC 9 for Builds {#use-gcc-9-for-builds}
+## Use GCC 10 for Builds {#use-gcc-10-for-builds}

``` bash
-$ export CC=gcc-9
+$ export CC=gcc-10
-$ export CXX=g++-9
+$ export CXX=g++-10
```

## Checkout ClickHouse Sources {#checkout-clickhouse-sources}

@ -141,7 +141,7 @@ To build ClickHouse you need GCC version 9 or later and a recent Clang version.

To install GCC on Ubuntu: `sudo apt install gcc g++`

-Check the version of gcc: `gcc --version`. If it is below 9, follow the instructions here: https://clickhouse.tech/docs/ja/development/build/#install-gcc-9.
+Check the version of gcc: `gcc --version`. If it is below 10, follow the instructions here: https://clickhouse.tech/docs/ja/development/build/#install-gcc-10.

Mac OS X build is supported only for Clang. Just run `brew install llvm`

@ -160,7 +160,7 @@ Now that you are ready to build ClickHouse, create a separate directory

Linux:

-export CC=gcc-9 CXX=g++-9
+export CC=gcc-10 CXX=g++-10
cmake ..

Mac OS X:
@ -1,261 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 69
toc_title: "How to Run ClickHouse Tests"
---
# ClickHouse Testing {#clickhouse-testing}

## Functional Tests {#functional-tests}

Functional tests are the most simple and convenient to use. Most ClickHouse features can be tested with functional tests, and they are mandatory to use for every change in ClickHouse code that can be tested that way.

Each functional test sends one or multiple queries to the running ClickHouse server and compares the result with a reference.

Tests are located in the `queries` directory. There are two subdirectories: `stateless` and `stateful`. Stateless tests run queries without any preloaded test data. Stateful tests require preloaded test data from Yandex.Metrica that is not available to the general public. We tend to use only `stateless` tests and avoid adding new `stateful` tests.

Each test can be one of two types: `.sql` and `.sh`. A `.sql` test is a simple SQL script that is piped to `clickhouse-client --multiquery --testmode`. A `.sh` test is a script that is run by itself.

To run all tests, use the `clickhouse-test` tool. Look at `--help` for the list of possible options. You can simply run all tests or run a subset of tests filtered by a substring in the test name: `./clickhouse-test substring`.

The most simple way to invoke functional tests is to copy `clickhouse-client` to `/usr/bin/`, run `clickhouse-server` and then run `./clickhouse-test` from its own directory.

To add a new test, create a `.sql` or `.sh` file in the `queries/0_stateless` directory, check it manually and then generate the `.reference` file in the following way: `clickhouse-client -n --testmode < 00000_test.sql > 00000_test.reference` or `./00000_test.sh > ./00000_test.reference`.

Tests should use (create, drop, etc) only tables in the `test` database; tests can also use temporary tables.

If you want to use distributed queries in functional tests, you can leverage the `remote` table function with `127.0.0.{1..2}` addresses, or you can use predefined test clusters defined in the server configuration file, like `test_shard_localhost`.

Some tests are marked with `zookeeper`, `shard` or `long` in their names.
`zookeeper` is for tests that are using ZooKeeper. `shard` is for tests that
require the server to listen on `127.0.0.*`; `distributed` or `global` have the same
meaning. `long` is for tests that run slightly longer. You can
disable these groups of tests using the `--no-zookeeper`, `--no-shard` and
`--no-long` options, respectively.

## Known Bugs {#known-bugs}

If we know of some bugs that can be easily reproduced by functional tests, we place the prepared functional tests in the `tests/queries/bugs` directory. These tests will be moved to `tests/queries/0_stateless` when the bugs are fixed.

## Integration Tests {#integration-tests}

Integration tests allow testing ClickHouse in a clustered configuration, as well as ClickHouse interaction with other servers like MySQL, Postgres, MongoDB. They are useful to emulate network splits, packet drops, etc. These tests are run under Docker and create multiple containers with various software.

See `tests/integration/README.md` on how to run these tests.

Note that integration of ClickHouse with third-party drivers is not tested. Also, we currently don't have integration tests with our JDBC and ODBC drivers.

## Unit Tests {#unit-tests}

Unit tests are useful when you want to test not ClickHouse as a whole, but a single isolated library or class. You can enable or disable the build of tests with the `ENABLE_TESTS` CMake option. Unit tests (and other test programs) are located in `tests` subdirectories across the code. To run unit tests, type `ninja test`. Some tests use `gtest`, but some are just programs that return a non-zero exit code on test failure.

It's not necessary to have unit tests if the code is already covered by functional tests (and functional tests are usually much simpler to use).

## Performance Tests {#performance-tests}

Performance tests are located at `tests/performance`. Each test is represented by an `.xml` file with a description of the test case. Tests are run with the `clickhouse performance-test` tool (that is embedded in the `clickhouse` binary). See `--help` for invocation.

Each test runs one or multiple queries (possibly with combinations of parameters) in a loop with some conditions for stopping (like “maximum execution speed is not changing in three seconds”) and measures some metrics about query performance (like “maximum execution speed”). Some tests can contain preconditions on a preloaded test dataset.

If you want to improve the performance of ClickHouse in some scenario, and if improvements can be observed on simple queries, it is highly recommended to write a performance test. It always makes sense to use `perf top` or other perf tools during your tests.

## Test Tools and Scripts {#test-tools-and-scripts}

Some programs in the `tests` directory are not prepared tests, but test tools. For example, for `Lexer` there is a tool `src/Parsers/tests/lexer` that just does tokenization of stdin and writes the colorized result to stdout. You can use these kinds of tools as code examples and for exploration and manual testing.

You can also place a pair of files `.sh` and `.reference` along with the tool to run it on some predefined input - then the script result can be compared to the `.reference` file. These kinds of tests are not automated.

## Miscellaneous Tests {#miscellaneous-tests}

There are tests for external dictionaries located at `tests/external_dictionaries` and for machine learned models in `tests/external_models`. These tests are not updated and must be transferred to integration tests.

There is a separate test for quorum inserts. This test emulates various failure cases: network split, packet drop (between ClickHouse nodes, between ClickHouse and ZooKeeper, between ClickHouse server and client, etc.), `kill -9`, `kill -STOP` and `kill -CONT`, like [Jepsen](https://aphyr.com/tags/Jepsen). Then the test checks that all acknowledged inserts were written and all rejected inserts were not.

The quorum test was written by a separate team before ClickHouse was open-sourced. This team no longer works with ClickHouse. The test was accidentally written in Java. For these reasons, the quorum test must be rewritten and moved to integration tests.

## Manual Testing {#manual-testing}

When you develop a new feature, it is reasonable to also test it manually. You can do it with the following steps:

Build ClickHouse. Run ClickHouse from the terminal: change directory to `programs/clickhouse-server` and run it with `./clickhouse-server`. It will use configuration (`config.xml`, `users.xml` and files within the `config.d` and `users.d` directories) from the current directory by default. To connect to the ClickHouse server, run `programs/clickhouse-client/clickhouse-client`.

Note that all clickhouse tools (server, client, etc) are just symlinks to a single binary named `clickhouse`. You can find this binary at `programs/clickhouse`. All tools can also be invoked as `clickhouse tool` instead of `clickhouse-tool`.

Alternatively you can install a ClickHouse package: either a stable release from the Yandex repository, or you can build a package yourself with `./release` in the ClickHouse sources root. Then start the server with `sudo service clickhouse-server start` (or stop it with stop). Look for logs at `/etc/clickhouse-server/clickhouse-server.log`.

When ClickHouse is already installed on your system, you can build a new `clickhouse` binary and replace the existing binary:

``` bash
$ sudo service clickhouse-server stop
$ sudo cp ./clickhouse /usr/bin/
$ sudo service clickhouse-server start
```

Also you can stop the system clickhouse-server and run your own with the same configuration but with logging to the terminal:

``` bash
$ sudo service clickhouse-server stop
$ sudo -u clickhouse /usr/bin/clickhouse server --config-file /etc/clickhouse-server/config.xml
```

Example with gdb:

``` bash
$ sudo -u clickhouse gdb --args /usr/bin/clickhouse server --config-file /etc/clickhouse-server/config.xml
```

If the system clickhouse-server is already running and you don't want to stop it, you can change the port numbers in your `config.xml` (or override them in a file in the `config.d` directory), provide an appropriate data path, and run it.
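To run a second server next to the system one without stopping it, a minimal sketch of a `config.d` override file might look like this. The element names mirror `config.xml`; the port values, data paths and file name here are arbitrary choices for a development instance:

```shell
# Write a hypothetical config.d override that moves the ports and data path,
# so a second server can run alongside the system one.
CONF_DIR=$(mktemp -d)
mkdir -p "$CONF_DIR/config.d"
cat > "$CONF_DIR/config.d/dev-overrides.xml" <<'EOF'
<yandex>
    <tcp_port>19000</tcp_port>
    <http_port>18123</http_port>
    <path>/tmp/clickhouse-dev/</path>
    <tmp_path>/tmp/clickhouse-dev/tmp/</tmp_path>
</yandex>
EOF
echo "wrote $CONF_DIR/config.d/dev-overrides.xml"
```

The server would then be started with `--config-file` pointing at a config that includes this `config.d` directory.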
The `clickhouse` binary has almost no dependencies and works across a wide range of Linux distributions. To quickly and dirtily test your changes on a server, you can simply `scp` your freshly built `clickhouse` binary to your server and then run it as in the examples above.

## Testing Environment {#testing-environment}

Before publishing a release as stable, we deploy it to a testing environment. The testing environment is a cluster that processes a 1/39 part of [Yandex.Metrica](https://metrica.yandex.com/) data. We share our testing environment with the Yandex.Metrica team. ClickHouse is upgraded without downtime on top of existing data. We look at first that data is processed successfully without lagging behind realtime, the replication continues to work and there are no issues visible to the Yandex.Metrica team. The first check can be done in the following way:

``` sql
SELECT hostName() AS h, any(version()), any(uptime()), max(UTCEventTime), count() FROM remote('example01-01-{1..3}t', merge, hits) WHERE EventDate >= today() - 2 GROUP BY h ORDER BY h;
```

In some cases we also deploy to the testing environment of our friend teams in Yandex: Market, Cloud, etc. Also we have some hardware servers that are used for development purposes.

## Load Testing {#load-testing}

After deploying to the testing environment, we run load testing with queries from the production cluster. This is done manually.

Make sure you have enabled `query_log` on your production cluster.

Collect the query log for a day or more:

``` bash
$ clickhouse-client --query="SELECT DISTINCT query FROM system.query_log WHERE event_date = today() AND query LIKE '%ym:%' AND query NOT LIKE '%system.query_log%' AND type = 2 AND is_initial_query" > queries.tsv
```

This is a complicated example. `type = 2` will filter queries that were executed successfully. `query LIKE '%ym:%'` is to select relevant queries from Yandex.Metrica. `is_initial_query` is to select only queries that were initiated by a client, not by ClickHouse itself (as part of distributed query processing).

`scp` this log to your testing cluster and run it as follows:

``` bash
$ clickhouse benchmark --concurrency 16 < queries.tsv
```

(probably you also want to specify a `--user`)

Then leave it for a night or weekend and go take a rest.

You should check that `clickhouse-server` doesn't crash, the memory footprint is bounded and performance is not degrading over time.
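One way to sketch the "memory footprint is bounded" part of that check is to sample the server's resident set size over time. In this sketch the target is a stand-in process; a real run would use `$(pidof clickhouse-server)`, many more samples and a much longer interval:

```shell
# Sample a process's RSS (in KB) a few times and print it; compare the first
# and last samples by eye (or in a script) to spot unbounded growth.
monitor_rss() {
    pid=$1; samples=$2
    i=0
    while [ "$i" -lt "$samples" ]; do
        rss_kb=$(ps -o rss= -p "$pid" | tr -d ' ')
        echo "rss_kb=$rss_kb"
        sleep 1
        i=$((i + 1))
    done
}
sleep 10 & target=$!
monitor_rss "$target" 3
kill "$target" 2>/dev/null
```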
Precise query execution timings are not recorded and not compared, due to the high variability of queries and environment.

## Build Tests {#build-tests}

Build tests allow checking that the build is not broken on various alternative configurations and on some foreign systems. Tests are located in the `ci` directory. They run builds from source inside Docker, Vagrant, and sometimes with `qemu-user-static` inside Docker. These tests are under development and test runs are not automated.

Motivation:

Normally we release and run all tests on a single variant of the ClickHouse build. But there are alternative build variants that are not thoroughly tested. Examples:

- build on FreeBSD;
- build on Debian with libraries from system packages;
- build with shared linking of libraries;
- build on the AArch64 platform;
- build on the PowerPC platform.

For example, a build with system packages is bad practice. But this is really needed by Debian maintainers. For this reason, we at least have to support this variant of the build. Another example: shared linking is a common source of trouble, but it is needed for some enthusiasts.

Though we cannot run all tests on all variants of builds, we want to check at least that various build variants are not broken. For this purpose we use build tests.

## Testing for Protocol Compatibility {#testing-for-protocol-compatibility}

When we extend the ClickHouse network protocol, we test manually that the old clickhouse-client works with the new clickhouse-server, and that the new clickhouse-client works with the old clickhouse-server (simply by running binaries from the corresponding packages).
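The manual check amounts to a 2×2 client/server matrix, which can be sketched as a loop. `check_pair` below is a hypothetical placeholder: in a real run it would start the given server binary and run a trivial query with the given client, with `old`/`new` pointing at binaries extracted from the corresponding packages:

```shell
# Hypothetical sketch of the 2x2 compatibility matrix; a real check_pair would
# launch ./$2/clickhouse-server and run ./$1/clickhouse-client --query "SELECT 1".
check_pair() {
    client=$1; server=$2
    echo "client=$client server=$server: would run SELECT 1"
}
for client in old new; do
    for server in old new; do
        check_pair "$client" "$server"
    done
done
```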
## Help from the Compiler {#help-from-the-compiler}

The main ClickHouse code (located in the `dbms` directory) is built with `-Wall -Wextra -Werror` and with some additional enabled warnings. Although these options are not enabled for third-party libraries.

Clang has even more useful warnings - you can look for them with `-Weverything` and pick something for the default build.

For production builds, gcc is used (it still generates slightly more efficient code than clang). For development, clang is usually more convenient to use. You can build on your own machine with debug mode (to save the battery of your laptop), but please note that the compiler is able to generate more warnings with `-O3` due to better control flow and inter-procedure analysis. When building with clang, `libc++` is used instead of `libstdc++` and, when building in debug mode, the debug version of `libc++` is used, which allows catching more errors at runtime.

## Sanitizers {#sanitizers}

**Address sanitizer**.
We run functional and integration tests under ASan on a per-commit basis.

**Valgrind (Memcheck)**.
We run functional tests under Valgrind overnight. It takes multiple hours. Currently there is one known false positive in the `re2` library, see [this article](https://research.swtch.com/sparse).

**Undefined behaviour sanitizer.**
We run functional and integration tests under UBSan on a per-commit basis.

**Thread sanitizer**.
We run functional tests under TSan on a per-commit basis. We still do not run integration tests under TSan on a per-commit basis.

**Memory sanitizer**.
Currently we still don't use MSan.

**Debug allocator.**
The debug version of `jemalloc` is used for debug builds.

## Fuzzing {#fuzzing}

ClickHouse fuzzing is implemented both with [libFuzzer](https://llvm.org/docs/LibFuzzer.html) and with random SQL queries.
All fuzz testing should be performed with sanitizers (Address and Undefined).

LibFuzzer is used for isolated fuzz testing of library code. Fuzzers are implemented as part of the test code and have “_fuzzer” name postfixes.
A fuzzer example can be found at `src/Parsers/tests/lexer_fuzzer.cpp`. LibFuzzer-specific configs, dictionaries and corpus are stored at `tests/fuzz`.
We encourage you to write fuzz tests for every functionality that handles user input.

Fuzzers are not built by default. To build fuzzers, both the `-DENABLE_FUZZING=1` and `-DENABLE_TESTS=1` options should be set.
We recommend disabling Jemalloc while building fuzzers. The configuration used to integrate ClickHouse fuzzing into Google OSS-Fuzz can be found at `docker/fuzz`.

Also we use a simple fuzz test to generate random SQL queries and to check that the server doesn't die executing them.
You can find it in `00746_sql_fuzzy.pl`. This test should be run continuously (overnight and longer).

## Security Audit {#security-audit}

People from the Yandex Security Team do a basic overview of ClickHouse capabilities from the security standpoint.

## Static Analyzers {#static-analyzers}

We run `PVS-Studio` on a per-commit basis. We have evaluated `clang-tidy`, `Coverity`, `cppcheck`, `PVS-Studio`, `tscancode`. You will find instructions for usage in the `tests/instructions/` directory. Also you can read [the article in Russian](https://habr.com/company/yandex/blog/342018/).

If you use `CLion` as an IDE, you can leverage some `clang-tidy` checks out of the box.

## Hardening {#hardening}

`FORTIFY_SOURCE` is used by default. It is almost useless, but still makes sense in rare cases and we don't disable it.

## Code Style {#code-style}

Code style rules are described [here](https://clickhouse.tech/docs/en/development/style/).

To check for some common style violations, you can use the `utils/check-style` script.

To force the proper style of your code, you can use `clang-format`. The file `.clang-format` is located at the sources root. It mostly corresponds to our actual code style. But it's not recommended to apply `clang-format` to existing files because it makes formatting worse. You can use the `clang-format-diff` tool that you can find in the clang source repository.

Alternatively you can try the `uncrustify` tool to reformat your code. Configuration is in `uncrustify.cfg` in the sources root. It is less tested than `clang-format`.

`CLion` has its own code formatter that has to be tuned for our code style.

## Metrica B2B Tests {#metrica-b2b-tests}

Each ClickHouse release is tested with the Yandex Metrica and AppMetrica engines. Testing and stable versions of ClickHouse are deployed on VMs and run with a small copy of the Metrica engine that processes a fixed sample of input data. Then the results of the two instances of the Metrica engine are compared.

These tests are automated by a separate team. Due to the high number of moving parts, the tests fail most of the time for completely unrelated reasons that are very difficult to figure out. Most likely these tests have negative value for us. Nevertheless, these tests have proved to be useful in about one or two cases out of hundreds.

## Test Coverage {#test-coverage}

As of July 2018, we don't track test coverage.

## Test Automation {#test-automation}

We run tests with the Yandex internal CI and job automation system named “Sandbox”.

Build jobs and tests are run in Sandbox on a per-commit basis. The resulting packages and test results are published on GitHub and can be downloaded by direct links. Artifacts are stored eternally. When you send a pull request on GitHub, we tag it as “can be tested” and our CI system will build ClickHouse packages (release, debug, with address sanitizer, etc) for you.

We don't use Travis CI due to the limit on time and computational power.
We don't use Jenkins. It was used before and now we are happy we are not using Jenkins.

[Original article](https://clickhouse.tech/docs/en/development/tests/) <!--hide-->