Merge remote-tracking branch 'upstream/master' into HEAD
commit 245c395a68

.gitmodules (vendored)
@ -186,3 +186,7 @@
	path = contrib/cyrus-sasl
	url = https://github.com/cyrusimap/cyrus-sasl
	branch = cyrus-sasl-2.1
[submodule "contrib/croaring"]
	path = contrib/croaring
	url = https://github.com/RoaringBitmap/CRoaring
	branch = v0.2.66
CHANGELOG.md
@ -1,6 +1,218 @@
## ClickHouse release 20.10
### ClickHouse release v20.10.3.30, 2020-10-28
#### Backward Incompatible Change
* Make `multiple_joins_rewriter_version` obsolete. Remove first version of joins rewriter. [#15472](https://github.com/ClickHouse/ClickHouse/pull/15472) ([Artem Zuikov](https://github.com/4ertus2)).
* Change default value of `format_regexp_escaping_rule` setting (it's related to `Regexp` format) to `Raw` (it means: read the whole subpattern as a value) to make the behaviour closer to what users expect. [#15426](https://github.com/ClickHouse/ClickHouse/pull/15426) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add support for nested multiline comments `/* comment /* comment */ */` in SQL. This conforms to the SQL standard. [#14655](https://github.com/ClickHouse/ClickHouse/pull/14655) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Added MergeTree settings (`max_replicated_merges_with_ttl_in_queue` and `max_number_of_merges_with_ttl_in_pool`) to control the number of merges with TTL in the background pool and replicated queue. This change breaks compatibility with older versions only if you use delete TTL. Otherwise, replication will stay compatible. You can avoid incompatibility issues if you update all shard replicas at once or execute `SYSTEM STOP TTL MERGES` until you finish the update of all replicas. If you get an incompatible entry in the replication queue, first execute `SYSTEM STOP TTL MERGES`, then `ALTER TABLE ... DETACH PARTITION ...` for the partition where the incompatible TTL merge was assigned, and attach it back on a single replica (a sketch of this procedure follows this list). [#14490](https://github.com/ClickHouse/ClickHouse/pull/14490) ([alesapin](https://github.com/alesapin)).
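A minimal sketch of the recovery procedure from the item above; the table name `t` and the partition ID `202010` are hypothetical placeholders.

```sql
-- Stop assigning new TTL merges, detach the partition that holds the
-- incompatible entry, then attach it back on a single replica.
SYSTEM STOP TTL MERGES;
ALTER TABLE t DETACH PARTITION 202010;
ALTER TABLE t ATTACH PARTITION 202010;
```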
#### New Feature
* Background data recompression. Add the ability to specify `TTL ... RECOMPRESS codec_name` for MergeTree table engines family (see the example after this list). [#14494](https://github.com/ClickHouse/ClickHouse/pull/14494) ([alesapin](https://github.com/alesapin)).
* Add parallel quorum inserts. This closes [#15601](https://github.com/ClickHouse/ClickHouse/issues/15601). [#15601](https://github.com/ClickHouse/ClickHouse/pull/15601) ([Latysheva Alexandra](https://github.com/alexelex)).
* Settings for additional enforcement of data durability. Useful for non-replicated setups. [#11948](https://github.com/ClickHouse/ClickHouse/pull/11948) ([Anton Popov](https://github.com/CurtizJ)).
* When a duplicate block is written to a replica where it does not exist locally (i.e. it has not been fetched from replicas), don't ignore it; write it locally to achieve the same effect as if it were successfully replicated. [#11684](https://github.com/ClickHouse/ClickHouse/pull/11684) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Now we support `WITH <identifier> AS (subquery) ... ` to introduce named subqueries in the query context (see the example after this list). This closes [#2416](https://github.com/ClickHouse/ClickHouse/issues/2416). This closes [#4967](https://github.com/ClickHouse/ClickHouse/issues/4967). [#14771](https://github.com/ClickHouse/ClickHouse/pull/14771) ([Amos Bird](https://github.com/amosbird)).
* Introduce `enable_global_with_statement` setting which propagates the first select's `WITH` statements to other select queries at the same level, and makes aliases in `WITH` statements visible to subqueries. [#15451](https://github.com/ClickHouse/ClickHouse/pull/15451) ([Amos Bird](https://github.com/amosbird)).
* Secure inter-cluster query execution (with initial_user as current query user). [#13156](https://github.com/ClickHouse/ClickHouse/pull/13156) ([Azat Khuzhin](https://github.com/azat)). [#15551](https://github.com/ClickHouse/ClickHouse/pull/15551) ([Azat Khuzhin](https://github.com/azat)).
* Add the ability to remove column properties and table TTLs. Introduced queries `ALTER TABLE MODIFY COLUMN col_name REMOVE what_to_remove` and `ALTER TABLE REMOVE TTL`. Both operations are lightweight and executed at the metadata level (see the example after this list). [#14742](https://github.com/ClickHouse/ClickHouse/pull/14742) ([alesapin](https://github.com/alesapin)).
* Added format `RawBLOB`. It is intended for input or output of a single value without any escaping and delimiters. This closes [#15349](https://github.com/ClickHouse/ClickHouse/issues/15349). [#15364](https://github.com/ClickHouse/ClickHouse/pull/15364) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add the `reinterpretAsUUID` function that allows to convert a big-endian byte string to UUID. [#15480](https://github.com/ClickHouse/ClickHouse/pull/15480) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Implement `force_data_skipping_indices` setting. [#15642](https://github.com/ClickHouse/ClickHouse/pull/15642) ([Azat Khuzhin](https://github.com/azat)).
* Add a setting `output_format_pretty_row_numbers` to number the rows of the result in Pretty formats. This closes [#15350](https://github.com/ClickHouse/ClickHouse/issues/15350). [#15443](https://github.com/ClickHouse/ClickHouse/pull/15443) ([flynn](https://github.com/ucasFL)).
* Added query obfuscation tool. It allows to share more queries for better testing. This closes [#15268](https://github.com/ClickHouse/ClickHouse/issues/15268). [#15321](https://github.com/ClickHouse/ClickHouse/pull/15321) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add table function `null('structure')`. [#14797](https://github.com/ClickHouse/ClickHouse/pull/14797) ([vxider](https://github.com/Vxider)).
* Added `formatReadableQuantity` function. It is useful for reading big numbers by humans. [#14725](https://github.com/ClickHouse/ClickHouse/pull/14725) ([Artem Hnilov](https://github.com/BooBSD)).
* Add format `LineAsString` that accepts a sequence of lines separated by newlines; every line is parsed as a whole into a single String field. [#14703](https://github.com/ClickHouse/ClickHouse/pull/14703) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)), [#13846](https://github.com/ClickHouse/ClickHouse/pull/13846) ([hexiaoting](https://github.com/hexiaoting)).
* Add `JSONStrings` format which outputs data in arrays of strings. [#14333](https://github.com/ClickHouse/ClickHouse/pull/14333) ([hcz](https://github.com/hczhcz)).
* Add support for "Raw" column format for `Regexp` format. It allows to simply extract subpatterns as a whole without any escaping rules. [#15363](https://github.com/ClickHouse/ClickHouse/pull/15363) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Allow configurable `NULL` representation for `TSV` output format. It is controlled by the setting `output_format_tsv_null_representation` which is `\N` by default. This closes [#9375](https://github.com/ClickHouse/ClickHouse/issues/9375). Note that the setting only controls output format and `\N` is the only supported `NULL` representation for `TSV` input format. [#14586](https://github.com/ClickHouse/ClickHouse/pull/14586) ([Kruglov Pavel](https://github.com/Avogar)).
* Support Decimal data type for `MaterializedMySQL`. `MaterializedMySQL` is an experimental feature. [#14535](https://github.com/ClickHouse/ClickHouse/pull/14535) ([Winter Zhang](https://github.com/zhang2014)).
* Add new feature: `SHOW DATABASES LIKE 'xxx'`. [#14521](https://github.com/ClickHouse/ClickHouse/pull/14521) ([hexiaoting](https://github.com/hexiaoting)).
* Added a script to import (arbitrary) git repository to ClickHouse as a sample dataset. [#14471](https://github.com/ClickHouse/ClickHouse/pull/14471) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Now insert statements can have asterisk (or variants) with column transformers in the column list. [#14453](https://github.com/ClickHouse/ClickHouse/pull/14453) ([Amos Bird](https://github.com/amosbird)).
* New query complexity limit settings `max_rows_to_read_leaf`, `max_bytes_to_read_leaf` for distributed queries to limit max rows/bytes read on the leaf nodes. Limit is applied for local reads only, *excluding* the final merge stage on the root node. [#14221](https://github.com/ClickHouse/ClickHouse/pull/14221) ([Roman Khavronenko](https://github.com/hagen1778)).
* Allow user to specify settings for `ReplicatedMergeTree*` storage in `<replicated_merge_tree>` section of config file. It works similarly to `<merge_tree>` section. For `ReplicatedMergeTree*` storages, settings from `<merge_tree>` and `<replicated_merge_tree>` are applied together, but settings from `<replicated_merge_tree>` have higher priority. Added `system.replicated_merge_tree_settings` table. [#13573](https://github.com/ClickHouse/ClickHouse/pull/13573) ([Amos Bird](https://github.com/amosbird)).
* Add `mapPopulateSeries` function (see the example after this list). [#13166](https://github.com/ClickHouse/ClickHouse/pull/13166) ([Ildus Kurbangaliev](https://github.com/ildus)).
* Supporting MySQL types: `decimal` (as ClickHouse `Decimal`) and `datetime` with sub-second precision (as `DateTime64`). [#11512](https://github.com/ClickHouse/ClickHouse/pull/11512) ([Vasily Nemkov](https://github.com/Enmk)).
* Introduce `event_time_microseconds` field to `system.text_log`, `system.trace_log`, `system.query_log` and `system.query_thread_log` tables. [#14760](https://github.com/ClickHouse/ClickHouse/pull/14760) ([Bharat Nallan](https://github.com/bharatnc)).
* Add `event_time_microseconds` to `system.asynchronous_metric_log` & `system.metric_log` tables. [#14514](https://github.com/ClickHouse/ClickHouse/pull/14514) ([Bharat Nallan](https://github.com/bharatnc)).
* Add `query_start_time_microseconds` field to `system.query_log` & `system.query_thread_log` tables. [#14252](https://github.com/ClickHouse/ClickHouse/pull/14252) ([Bharat Nallan](https://github.com/bharatnc)).
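A short example of `TTL ... RECOMPRESS` from the list above; the table, column names, and codec choice are hypothetical.

```sql
-- Parts older than one month are rewritten in the background with a heavier codec.
CREATE TABLE hits
(
    event_date Date,
    payload String
)
ENGINE = MergeTree
ORDER BY event_date
TTL event_date + INTERVAL 1 MONTH RECOMPRESS CODEC(ZSTD(10));
```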
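A minimal example of a named `WITH` subquery, using only the built-in `system.numbers` table.

```sql
-- `first_ten` can be referenced like a table in the enclosing query.
WITH first_ten AS (SELECT number FROM system.numbers LIMIT 10)
SELECT sum(number) FROM first_ten;
```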
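A sketch of the metadata-level `REMOVE` queries from the list above; `hits` and `payload` are hypothetical names.

```sql
ALTER TABLE hits MODIFY COLUMN payload REMOVE CODEC; -- drop a per-column codec
ALTER TABLE hits MODIFY COLUMN payload REMOVE TTL;   -- drop a column TTL
ALTER TABLE hits REMOVE TTL;                         -- drop the table-level TTL
```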
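A sketch of `mapPopulateSeries`, which fills gaps in an integer key series up to a given maximum key; the result shown in the comment is an assumption based on the function's description.

```sql
SELECT mapPopulateSeries([1, 2, 4], [11, 22, 44], 5) AS filled;
-- Expected: filled = ([1, 2, 3, 4, 5], [11, 22, 0, 44, 0])
```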
#### Bug Fix
* Fix the case when memory can be overallocated regardless of the limit. This closes [#14560](https://github.com/ClickHouse/ClickHouse/issues/14560). [#16206](https://github.com/ClickHouse/ClickHouse/pull/16206) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix `executable` dictionary source hang. In previous versions, when using some formats (e.g. `JSONEachRow`) data was not fed to the child process before it output at least something. This closes [#1697](https://github.com/ClickHouse/ClickHouse/issues/1697). This closes [#2455](https://github.com/ClickHouse/ClickHouse/issues/2455). [#14525](https://github.com/ClickHouse/ClickHouse/pull/14525) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix double free in case of exception in function `dictGet`. It could have happened if dictionary was loaded with error. [#16429](https://github.com/ClickHouse/ClickHouse/pull/16429) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix group by with totals/rollup/cube modifiers and min/max functions over group by keys. Fixes [#16393](https://github.com/ClickHouse/ClickHouse/issues/16393). [#16397](https://github.com/ClickHouse/ClickHouse/pull/16397) ([Anton Popov](https://github.com/CurtizJ)).
* Fix async Distributed INSERT with prefer_localhost_replica=0 and internal_replication. [#16358](https://github.com/ClickHouse/ClickHouse/pull/16358) ([Azat Khuzhin](https://github.com/azat)).
* Fix incorrect code in the TwoLevelStringHashTable implementation, which might lead to a memory leak. [#16264](https://github.com/ClickHouse/ClickHouse/pull/16264) ([Amos Bird](https://github.com/amosbird)).
* Fix segfault in some cases of wrong aggregation in lambdas. [#16082](https://github.com/ClickHouse/ClickHouse/pull/16082) ([Anton Popov](https://github.com/CurtizJ)).
* Fix `ALTER MODIFY ... ORDER BY` query hang for `ReplicatedVersionedCollapsingMergeTree`. This fixes [#15980](https://github.com/ClickHouse/ClickHouse/issues/15980). [#16011](https://github.com/ClickHouse/ClickHouse/pull/16011) ([alesapin](https://github.com/alesapin)).
* `MaterializedMySQL` (experimental feature): Fix collate name & charset name parser and support `length = 0` for string type. [#16008](https://github.com/ClickHouse/ClickHouse/pull/16008) ([Winter Zhang](https://github.com/zhang2014)).
* Allow to use `direct` layout for dictionaries with complex keys. [#16007](https://github.com/ClickHouse/ClickHouse/pull/16007) ([Anton Popov](https://github.com/CurtizJ)).
* Prevent replica hang for 5-10 mins when replication error happens after a period of inactivity. [#15987](https://github.com/ClickHouse/ClickHouse/pull/15987) ([filimonov](https://github.com/filimonov)).
* Fix rare segfaults when inserting into or selecting from MaterializedView and concurrently dropping target table (for Atomic database engine). [#15984](https://github.com/ClickHouse/ClickHouse/pull/15984) ([tavplubix](https://github.com/tavplubix)).
* Fix ambiguity in parsing of settings profiles: `CREATE USER ... SETTINGS profile readonly` is now considered as using a profile named `readonly`, not a setting named `profile` with the readonly constraint (see the example after this list). This fixes [#15628](https://github.com/ClickHouse/ClickHouse/issues/15628). [#15982](https://github.com/ClickHouse/ClickHouse/pull/15982) ([Vitaly Baranov](https://github.com/vitlibar)).
* `MaterializedMySQL` (experimental feature): Fix crash on create database failure. [#15954](https://github.com/ClickHouse/ClickHouse/pull/15954) ([Winter Zhang](https://github.com/zhang2014)).
* Fixed `DROP TABLE IF EXISTS` failure with `Table ... doesn't exist` error when table is concurrently renamed (for Atomic database engine). Fixed rare deadlock when concurrently executing some DDL queries with multiple tables (like `DROP DATABASE` and `RENAME TABLE`). Fixed `DROP/DETACH DATABASE` failure with `Table ... doesn't exist` when concurrently executing `DROP/DETACH TABLE`. [#15934](https://github.com/ClickHouse/ClickHouse/pull/15934) ([tavplubix](https://github.com/tavplubix)).
* Fix incorrect empty result for query from `Distributed` table if query has `WHERE`, `PREWHERE` and `GLOBAL IN`. Fixes [#15792](https://github.com/ClickHouse/ClickHouse/issues/15792). [#15933](https://github.com/ClickHouse/ClickHouse/pull/15933) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixes [#12513](https://github.com/ClickHouse/ClickHouse/issues/12513): differing expressions with the same alias when a query is re-analyzed. [#15886](https://github.com/ClickHouse/ClickHouse/pull/15886) ([Winter Zhang](https://github.com/zhang2014)).
* Fix possible very rare deadlocks in RBAC implementation. [#15875](https://github.com/ClickHouse/ClickHouse/pull/15875) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix exception `Block structure mismatch` in `SELECT ... ORDER BY DESC` queries which were executed after `ALTER MODIFY COLUMN` query. Fixes [#15800](https://github.com/ClickHouse/ClickHouse/issues/15800). [#15852](https://github.com/ClickHouse/ClickHouse/pull/15852) ([alesapin](https://github.com/alesapin)).
* `MaterializedMySQL` (experimental feature): Fix `select count()` inaccuracy. [#15767](https://github.com/ClickHouse/ClickHouse/pull/15767) ([tavplubix](https://github.com/tavplubix)).
* Fix some cases of queries, in which only virtual columns are selected. Previously `Not found column _nothing in block` exception may be thrown. Fixes [#12298](https://github.com/ClickHouse/ClickHouse/issues/12298). [#15756](https://github.com/ClickHouse/ClickHouse/pull/15756) ([Anton Popov](https://github.com/CurtizJ)).
* Fix drop of materialized view with inner table in Atomic database (hangs all subsequent DROP TABLE due to hang of the worker thread, due to recursive DROP TABLE for inner table of MV). [#15743](https://github.com/ClickHouse/ClickHouse/pull/15743) ([Azat Khuzhin](https://github.com/azat)).
* Possibility to move part to another disk/volume if the first attempt failed. [#15723](https://github.com/ClickHouse/ClickHouse/pull/15723) ([Pavel Kovalenko](https://github.com/Jokser)).
* Fix error `Cannot find column` which may happen at insertion into `MATERIALIZED VIEW` in case the query for the `MV` contains `ARRAY JOIN`. [#15717](https://github.com/ClickHouse/ClickHouse/pull/15717) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed too low default value of `max_replicated_logs_to_keep` setting, which might cause replicas to become lost too often. Improve lost replica recovery process by choosing the most up-to-date replica to clone. Also do not remove old parts from lost replica, detach them instead. [#15701](https://github.com/ClickHouse/ClickHouse/pull/15701) ([tavplubix](https://github.com/tavplubix)).
* Fix rare race condition in dictionaries and tables from MySQL. [#15686](https://github.com/ClickHouse/ClickHouse/pull/15686) ([alesapin](https://github.com/alesapin)).
* Fix (benign) race condition in AMQP-CPP. [#15667](https://github.com/ClickHouse/ClickHouse/pull/15667) ([alesapin](https://github.com/alesapin)).
* Fix error `Cannot add simple transform to empty Pipe` which happened while reading from a `Buffer` table which has a different structure than the destination table. It was possible if the destination table returned an empty result for the query. Fixes [#15529](https://github.com/ClickHouse/ClickHouse/issues/15529). [#15662](https://github.com/ClickHouse/ClickHouse/pull/15662) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Proper error handling during insert into MergeTree with S3. MergeTree over S3 is an experimental feature. [#15657](https://github.com/ClickHouse/ClickHouse/pull/15657) ([Pavel Kovalenko](https://github.com/Jokser)).
* Fixed bug with S3 table function: region from URL was not applied to S3 client configuration. [#15646](https://github.com/ClickHouse/ClickHouse/pull/15646) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Fix the order of destruction for resources in `ReadFromStorage` step of query plan. It might cause crashes in rare cases. Possibly connected with [#15610](https://github.com/ClickHouse/ClickHouse/issues/15610). [#15645](https://github.com/ClickHouse/ClickHouse/pull/15645) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Subtract `ReadonlyReplica` metric when detaching read-only tables. [#15592](https://github.com/ClickHouse/ClickHouse/pull/15592) ([sundyli](https://github.com/sundy-li)).
* Fixed `Element ... is not a constant expression` error when using `JSON*` function result in `VALUES`, `LIMIT` or right side of `IN` operator. [#15589](https://github.com/ClickHouse/ClickHouse/pull/15589) ([tavplubix](https://github.com/tavplubix)).
* Query will finish faster in case of exception. Cancel execution on remote replicas if exception happens. [#15578](https://github.com/ClickHouse/ClickHouse/pull/15578) ([Azat Khuzhin](https://github.com/azat)).
* Prevent the possibility of error message `Could not calculate available disk space (statvfs), errno: 4, strerror: Interrupted system call`. This fixes [#15541](https://github.com/ClickHouse/ClickHouse/issues/15541). [#15557](https://github.com/ClickHouse/ClickHouse/pull/15557) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix `Database <db> doesn't exist.` in queries with IN and Distributed table when there's no database on initiator. [#15538](https://github.com/ClickHouse/ClickHouse/pull/15538) ([Artem Zuikov](https://github.com/4ertus2)).
* Mutation might hang waiting for some non-existent part after `MOVE` or `REPLACE PARTITION` or, in rare cases, after `DETACH` or `DROP PARTITION`. It's fixed. [#15537](https://github.com/ClickHouse/ClickHouse/pull/15537) ([tavplubix](https://github.com/tavplubix)).
* Fix bug when `ILIKE` operator stops being case insensitive if `LIKE` with the same pattern was executed. [#15536](https://github.com/ClickHouse/ClickHouse/pull/15536) ([alesapin](https://github.com/alesapin)).
* Fix `Missing columns` errors when selecting columns which are absent in data, but depend on other columns which are also absent in data. Fixes [#15530](https://github.com/ClickHouse/ClickHouse/issues/15530). [#15532](https://github.com/ClickHouse/ClickHouse/pull/15532) ([alesapin](https://github.com/alesapin)).
* Throw an error when a single parameter is passed to ReplicatedMergeTree instead of ignoring it. [#15516](https://github.com/ClickHouse/ClickHouse/pull/15516) ([nvartolomei](https://github.com/nvartolomei)).
* Fix bug with event subscription in DDLWorker which rarely may lead to query hangs in `ON CLUSTER`. Introduced in [#13450](https://github.com/ClickHouse/ClickHouse/issues/13450). [#15477](https://github.com/ClickHouse/ClickHouse/pull/15477) ([alesapin](https://github.com/alesapin)).
* Report proper error when the second argument of `boundingRatio` aggregate function has a wrong type. [#15407](https://github.com/ClickHouse/ClickHouse/pull/15407) ([detailyang](https://github.com/detailyang)).
* Fixes [#15365](https://github.com/ClickHouse/ClickHouse/issues/15365): attach a database with MySQL engine throws exception (no query context). [#15384](https://github.com/ClickHouse/ClickHouse/pull/15384) ([Winter Zhang](https://github.com/zhang2014)).
* Fix the case of multiple occurrences of column transformers in a select query. [#15378](https://github.com/ClickHouse/ClickHouse/pull/15378) ([Amos Bird](https://github.com/amosbird)).
* Fixed compression in `S3` storage. [#15376](https://github.com/ClickHouse/ClickHouse/pull/15376) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Fix bug where queries like `SELECT toStartOfDay(today())` fail complaining about empty time_zone argument. [#15319](https://github.com/ClickHouse/ClickHouse/pull/15319) ([Bharat Nallan](https://github.com/bharatnc)).
* Fix race condition during MergeTree table rename and background cleanup. [#15304](https://github.com/ClickHouse/ClickHouse/pull/15304) ([alesapin](https://github.com/alesapin)).
* Fix rare race condition on server startup when system logs are enabled. [#15300](https://github.com/ClickHouse/ClickHouse/pull/15300) ([alesapin](https://github.com/alesapin)).
* Fix hang of queries with a lot of subqueries to the same table of `MySQL` engine. Previously, if there were more than 16 subqueries to the same `MySQL` table in a query, it hung forever. [#15299](https://github.com/ClickHouse/ClickHouse/pull/15299) ([Anton Popov](https://github.com/CurtizJ)).
* Fix MSan report in QueryLog. Uninitialized memory can be used for the field `memory_usage`. [#15258](https://github.com/ClickHouse/ClickHouse/pull/15258) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix 'Unknown identifier' in GROUP BY when query has JOIN over Merge table. [#15242](https://github.com/ClickHouse/ClickHouse/pull/15242) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix instance crash when using `joinGet` with `LowCardinality` types. This fixes https://github.com/ClickHouse/ClickHouse/issues/15214. [#15220](https://github.com/ClickHouse/ClickHouse/pull/15220) ([Amos Bird](https://github.com/amosbird)).
* Fix bug in table engine `Buffer` which doesn't allow to insert data of new structure into `Buffer` after `ALTER` query. Fixes [#15117](https://github.com/ClickHouse/ClickHouse/issues/15117). [#15192](https://github.com/ClickHouse/ClickHouse/pull/15192) ([alesapin](https://github.com/alesapin)).
* Adjust Decimal field size in MySQL column definition packet. [#15152](https://github.com/ClickHouse/ClickHouse/pull/15152) ([maqroll](https://github.com/maqroll)).
* Fixes `Data compressed with different methods` in `join_algorithm='auto'`. Keep LowCardinality as type for left table join key in `join_algorithm='partial_merge'`. [#15088](https://github.com/ClickHouse/ClickHouse/pull/15088) ([Artem Zuikov](https://github.com/4ertus2)).
* Update `jemalloc` to fix `percpu_arena` with affinity mask. [#15035](https://github.com/ClickHouse/ClickHouse/pull/15035) ([Azat Khuzhin](https://github.com/azat)). [#14957](https://github.com/ClickHouse/ClickHouse/pull/14957) ([Azat Khuzhin](https://github.com/azat)).
* We already use padded comparison between String and FixedString (https://github.com/ClickHouse/ClickHouse/blob/master/src/Functions/FunctionsComparison.h#L333). This PR applies the same logic to field comparison which corrects the usage of FixedString as primary keys. This fixes https://github.com/ClickHouse/ClickHouse/issues/14908. [#15033](https://github.com/ClickHouse/ClickHouse/pull/15033) ([Amos Bird](https://github.com/amosbird)).
* If function `bar` was called with specifically crafted arguments, buffer overflow was possible. This closes [#13926](https://github.com/ClickHouse/ClickHouse/issues/13926). [#15028](https://github.com/ClickHouse/ClickHouse/pull/15028) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed `Cannot rename ... errno: 22, strerror: Invalid argument` error on DDL query execution in Atomic database when running clickhouse-server in Docker on Mac OS. [#15024](https://github.com/ClickHouse/ClickHouse/pull/15024) ([tavplubix](https://github.com/tavplubix)).
* Fix crash in RIGHT or FULL JOIN with `join_algorithm='auto'` when the memory limit was exceeded and HashJoin should be switched to MergeJoin. [#15002](https://github.com/ClickHouse/ClickHouse/pull/15002) ([Artem Zuikov](https://github.com/4ertus2)).
* Now settings `number_of_free_entries_in_pool_to_execute_mutation` and `number_of_free_entries_in_pool_to_lower_max_size_of_merge` can be equal to `background_pool_size`. [#14975](https://github.com/ClickHouse/ClickHouse/pull/14975) ([alesapin](https://github.com/alesapin)).
* Fix to make predicate push down work when subquery contains `finalizeAggregation` function. Fixes [#14847](https://github.com/ClickHouse/ClickHouse/issues/14847). [#14937](https://github.com/ClickHouse/ClickHouse/pull/14937) ([filimonov](https://github.com/filimonov)).
* Publish CPU frequencies per logical core in `system.asynchronous_metrics`. This fixes https://github.com/ClickHouse/ClickHouse/issues/14923. [#14924](https://github.com/ClickHouse/ClickHouse/pull/14924) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* `MaterializedMySQL` (experimental feature): Fixed `.metadata.tmp File exists` error. [#14898](https://github.com/ClickHouse/ClickHouse/pull/14898) ([Winter Zhang](https://github.com/zhang2014)).
* Fix the issue when some invocations of `extractAllGroups` function may trigger "Memory limit exceeded" error. This fixes [#13383](https://github.com/ClickHouse/ClickHouse/issues/13383). [#14889](https://github.com/ClickHouse/ClickHouse/pull/14889) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix SIGSEGV for an attempt to INSERT into StorageFile with file descriptor. [#14887](https://github.com/ClickHouse/ClickHouse/pull/14887) ([Azat Khuzhin](https://github.com/azat)).
* Fixed segfault in `cache` dictionary [#14837](https://github.com/ClickHouse/ClickHouse/issues/14837). [#14879](https://github.com/ClickHouse/ClickHouse/pull/14879) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* `MaterializedMySQL` (experimental feature): Fixed bug in parsing MySQL binlog events, which causes `Attempt to read after eof` and `Packet payload is not fully read` in `MaterializeMySQL` database engine. [#14852](https://github.com/ClickHouse/ClickHouse/pull/14852) ([Winter Zhang](https://github.com/zhang2014)).
* Fix rare error in `SELECT` queries when the queried column has a `DEFAULT` expression which depends on another column which also has a `DEFAULT` expression, is not present in the select query, and does not exist on disk. Partially fixes [#14531](https://github.com/ClickHouse/ClickHouse/issues/14531). [#14845](https://github.com/ClickHouse/ClickHouse/pull/14845) ([alesapin](https://github.com/alesapin)).
* Fix a problem where the server may get stuck on startup while talking to ZooKeeper, if the configuration files have to be fetched from ZK (using the `from_zk` include option). This fixes [#14814](https://github.com/ClickHouse/ClickHouse/issues/14814). [#14843](https://github.com/ClickHouse/ClickHouse/pull/14843) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Fix wrong monotonicity detection for shrunk `Int -> Int` cast of signed types. It might lead to incorrect query result. This bug is unveiled in [#14513](https://github.com/ClickHouse/ClickHouse/issues/14513). [#14783](https://github.com/ClickHouse/ClickHouse/pull/14783) ([Amos Bird](https://github.com/amosbird)).
* `Replace` column transformer should replace identifiers with cloned ASTs. This fixes https://github.com/ClickHouse/ClickHouse/issues/14695 . [#14734](https://github.com/ClickHouse/ClickHouse/pull/14734) ([Amos Bird](https://github.com/amosbird)).
* Fixed missed default database name in metadata of materialized view when executing `ALTER ... MODIFY QUERY`. [#14664](https://github.com/ClickHouse/ClickHouse/pull/14664) ([tavplubix](https://github.com/tavplubix)).
* Fix bug when `ALTER UPDATE` mutation with `Nullable` column in assignment expression and constant value (like `UPDATE x = 42`) leads to incorrect value in column or segfault. Fixes [#13634](https://github.com/ClickHouse/ClickHouse/issues/13634), [#14045](https://github.com/ClickHouse/ClickHouse/issues/14045). [#14646](https://github.com/ClickHouse/ClickHouse/pull/14646) ([alesapin](https://github.com/alesapin)).
* Fix wrong Decimal multiplication result that caused a wrong decimal scale of the result column. [#14603](https://github.com/ClickHouse/ClickHouse/pull/14603) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix function `has` with `LowCardinality` of `Nullable`. [#14591](https://github.com/ClickHouse/ClickHouse/pull/14591) ([Mike](https://github.com/myrrc)).
* Clean up the data directory after ZooKeeper exceptions during CreateQuery for the StorageReplicatedMergeTree engine. [#14563](https://github.com/ClickHouse/ClickHouse/pull/14563) ([Bharat Nallan](https://github.com/bharatnc)).
* Fix rare segfaults in functions with combinator `-Resample`, which could appear in result of overflow with very large parameters. [#14562](https://github.com/ClickHouse/ClickHouse/pull/14562) ([Anton Popov](https://github.com/CurtizJ)).
* Fix a bug when converting `Nullable(String)` to Enum. Introduced by https://github.com/ClickHouse/ClickHouse/pull/12745. This fixes https://github.com/ClickHouse/ClickHouse/issues/14435. [#14530](https://github.com/ClickHouse/ClickHouse/pull/14530) ([Amos Bird](https://github.com/amosbird)).
* Fixed the incorrect sorting order of `Nullable` column. This fixes [#14344](https://github.com/ClickHouse/ClickHouse/issues/14344). [#14495](https://github.com/ClickHouse/ClickHouse/pull/14495) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix `currentDatabase()` function cannot be used in `ON CLUSTER` ddl query. [#14211](https://github.com/ClickHouse/ClickHouse/pull/14211) ([Winter Zhang](https://github.com/zhang2014)).
* `MaterializedMySQL` (experimental feature): Fixed `Packet payload is not fully read` error in `MaterializeMySQL` database engine. [#14696](https://github.com/ClickHouse/ClickHouse/pull/14696) ([BohuTANG](https://github.com/BohuTANG)).
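A sketch of the disambiguated `CREATE USER` syntax from the settings-profile fix above; user names and values are hypothetical.

```sql
-- Assign the settings profile named 'readonly' to a user:
CREATE USER alice SETTINGS PROFILE 'readonly';
-- A constrained individual setting is spelled out explicitly:
CREATE USER bob SETTINGS max_threads = 4 READONLY;
```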
#### Improvement
* Enable `Atomic` database engine by default for newly created databases. [#15003](https://github.com/ClickHouse/ClickHouse/pull/15003) ([tavplubix](https://github.com/tavplubix)).
* Add the ability to specify specialized codecs like `Delta`, `T64`, etc. for columns with subtypes. Implements [#12551](https://github.com/ClickHouse/ClickHouse/issues/12551), fixes [#11397](https://github.com/ClickHouse/ClickHouse/issues/11397), fixes [#4609](https://github.com/ClickHouse/ClickHouse/issues/4609). [#15089](https://github.com/ClickHouse/ClickHouse/pull/15089) ([alesapin](https://github.com/alesapin)).
* Dynamic reload of ZooKeeper config. [#14678](https://github.com/ClickHouse/ClickHouse/pull/14678) ([sundyli](https://github.com/sundy-li)).
* Now it's allowed to execute `ALTER ... ON CLUSTER` queries regardless of the `<internal_replication>` setting in cluster config. [#16075](https://github.com/ClickHouse/ClickHouse/pull/16075) ([alesapin](https://github.com/alesapin)).
* Now `joinGet` supports multi-key lookup. Continuation of [#12418](https://github.com/ClickHouse/ClickHouse/issues/12418). [#13015](https://github.com/ClickHouse/ClickHouse/pull/13015) ([Amos Bird](https://github.com/amosbird)).
* Wait for `DROP/DETACH TABLE` to actually finish if `NO DELAY` or `SYNC` is specified for `Atomic` database. [#15448](https://github.com/ClickHouse/ClickHouse/pull/15448) ([tavplubix](https://github.com/tavplubix)).
* Now it's possible to change the type of version column for `VersionedCollapsingMergeTree` with `ALTER` query. [#15442](https://github.com/ClickHouse/ClickHouse/pull/15442) ([alesapin](https://github.com/alesapin)).
* Unfold `{database}`, `{table}` and `{uuid}` macros in `zookeeper_path` on replicated table creation (see the example after this list). Do not allow `RENAME TABLE` if it may break `zookeeper_path` after server restart. Fixes [#6917](https://github.com/ClickHouse/ClickHouse/issues/6917). [#15348](https://github.com/ClickHouse/ClickHouse/pull/15348) ([tavplubix](https://github.com/tavplubix)).
* The function `now` allows an argument with timezone. This closes [#15264](https://github.com/ClickHouse/ClickHouse/issues/15264). [#15285](https://github.com/ClickHouse/ClickHouse/pull/15285) ([flynn](https://github.com/ucasFL)).
* Do not allow connections to ClickHouse server until all scripts in `/docker-entrypoint-initdb.d/` are executed. [#15244](https://github.com/ClickHouse/ClickHouse/pull/15244) ([Aleksei Kozharin](https://github.com/alekseik1)).
* Added `optimize` setting to `EXPLAIN PLAN` query. If enabled, query plan level optimisations are applied. Enabled by default (see the example after this list). [#15201](https://github.com/ClickHouse/ClickHouse/pull/15201) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Proper exception message for wrong number of arguments of CAST. This closes [#13992](https://github.com/ClickHouse/ClickHouse/issues/13992). [#15029](https://github.com/ClickHouse/ClickHouse/pull/15029) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add option to disable TTL move on data part insert. [#15000](https://github.com/ClickHouse/ClickHouse/pull/15000) ([Pavel Kovalenko](https://github.com/Jokser)).
* Ignore key constraints when doing mutations. Without this pull request, it's not possible to do mutations when `force_index_by_date = 1` or `force_primary_key = 1`. [#14973](https://github.com/ClickHouse/ClickHouse/pull/14973) ([Amos Bird](https://github.com/amosbird)).
* Allow to drop Replicated table if a previous drop attempt failed due to ZooKeeper session expiration. This fixes [#11891](https://github.com/ClickHouse/ClickHouse/issues/11891). [#14926](https://github.com/ClickHouse/ClickHouse/pull/14926) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed excessive settings constraint violation when running SELECT with SETTINGS from a distributed table. [#14876](https://github.com/ClickHouse/ClickHouse/pull/14876) ([Amos Bird](https://github.com/amosbird)).
* Provide a `load_balancing_first_offset` query setting to explicitly state what the first replica is. It's used together with `FIRST_OR_RANDOM` load balancing strategy, which allows to control replicas workload. [#14867](https://github.com/ClickHouse/ClickHouse/pull/14867) ([Amos Bird](https://github.com/amosbird)).
* Show subqueries for `SET` and `JOIN` in `EXPLAIN` result. [#14856](https://github.com/ClickHouse/ClickHouse/pull/14856) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Allow using multi-volume storage configuration in storage `Distributed`. [#14839](https://github.com/ClickHouse/ClickHouse/pull/14839) ([Pavel Kovalenko](https://github.com/Jokser)).
* Construct `query_start_time` and `query_start_time_microseconds` from the same timespec. [#14831](https://github.com/ClickHouse/ClickHouse/pull/14831) ([Bharat Nallan](https://github.com/bharatnc)).
* Support for disabling persistency for `StorageJoin` and `StorageSet`, controlled by the setting `disable_set_and_join_persistency`. This solves issue [#6318](https://github.com/ClickHouse/ClickHouse/issues/6318). [#14776](https://github.com/ClickHouse/ClickHouse/pull/14776) ([vxider](https://github.com/Vxider)).
* Now `COLUMNS` can be used to wrap over a list of columns and apply column transformers afterwards. [#14775](https://github.com/ClickHouse/ClickHouse/pull/14775) ([Amos Bird](https://github.com/amosbird)).
* Add `merge_algorithm` to `system.merges` table to improve merging inspections. [#14705](https://github.com/ClickHouse/ClickHouse/pull/14705) ([Amos Bird](https://github.com/amosbird)).
* Fix potential memory leak caused by a ZooKeeper `exists` watch. [#14693](https://github.com/ClickHouse/ClickHouse/pull/14693) ([hustnn](https://github.com/hustnn)).
* Allow parallel execution of distributed DDL. [#14684](https://github.com/ClickHouse/ClickHouse/pull/14684) ([Azat Khuzhin](https://github.com/azat)).
* Add `QueryMemoryLimitExceeded` event counter. This closes [#14589](https://github.com/ClickHouse/ClickHouse/issues/14589). [#14647](https://github.com/ClickHouse/ClickHouse/pull/14647) ([fastio](https://github.com/fastio)).
* Fix some trailing whitespaces in query formatting. [#14595](https://github.com/ClickHouse/ClickHouse/pull/14595) ([Azat Khuzhin](https://github.com/azat)).
* ClickHouse treats partition expr and key expr differently. Partition expr is used to construct a minmax index containing related columns, while primary key expr is stored as an expr. Sometimes users might partition a table at coarser levels, such as `partition by i / 1000`. However, binary operators are not monotonic and this PR tries to fix that. It might also benefit other use cases. [#14513](https://github.com/ClickHouse/ClickHouse/pull/14513) ([Amos Bird](https://github.com/amosbird)).
* Add an option to skip access checks for `DiskS3`. `s3` disk is an experimental feature. [#14497](https://github.com/ClickHouse/ClickHouse/pull/14497) ([Pavel Kovalenko](https://github.com/Jokser)).
* Speed up server shutdown process if there are ongoing S3 requests. [#14496](https://github.com/ClickHouse/ClickHouse/pull/14496) ([Pavel Kovalenko](https://github.com/Jokser)).
* `SYSTEM RELOAD CONFIG` now throws an exception if it fails to reload and continues using the previous users.xml. The background periodic reloading also continues using the previous users.xml if reloading fails. [#14492](https://github.com/ClickHouse/ClickHouse/pull/14492) ([Vitaly Baranov](https://github.com/vitlibar)).
* For INSERTs with inline data in VALUES format in the script mode of `clickhouse-client`, support semicolon as the data terminator, in addition to the new line. Closes https://github.com/ClickHouse/ClickHouse/issues/12288. [#13192](https://github.com/ClickHouse/ClickHouse/pull/13192) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Support custom codecs in compact parts. [#12183](https://github.com/ClickHouse/ClickHouse/pull/12183) ([Anton Popov](https://github.com/CurtizJ)).
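An example of macro unfolding in `zookeeper_path`; the database, table, and cluster layout are hypothetical.

```sql
-- `{database}` and `{table}` are expanded once at CREATE time, so the literal
-- path stored in metadata stays valid across server restarts.
CREATE TABLE db.events (d Date, x UInt64)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{database}/{table}', '{replica}')
ORDER BY d;
```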
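A sketch of the `optimize` setting of `EXPLAIN PLAN`; the query itself is illustrative.

```sql
-- Show the query plan without plan-level optimisations (they are ON by default):
EXPLAIN PLAN optimize = 0
SELECT sum(x) FROM db.events GROUP BY d;
```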
#### Performance Improvement
* Enable compact parts by default for small parts. This will allow to process frequent inserts slightly more efficiently (4..100 times) (a hedged example follows this list). [#11913](https://github.com/ClickHouse/ClickHouse/pull/11913) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Improve `quantileTDigest` performance. This fixes [#2668](https://github.com/ClickHouse/ClickHouse/issues/2668). [#15542](https://github.com/ClickHouse/ClickHouse/pull/15542) ([Kruglov Pavel](https://github.com/Avogar)).
* Significantly reduce memory usage in AggregatingInOrderTransform/optimize_aggregation_in_order. [#15543](https://github.com/ClickHouse/ClickHouse/pull/15543) ([Azat Khuzhin](https://github.com/azat)).
* Faster 256-bit multiplication. [#15418](https://github.com/ClickHouse/ClickHouse/pull/15418) ([Artem Zuikov](https://github.com/4ertus2)).
* Improve performance of 256-bit types using (u)int64_t as base type for wide integers. Original wide integers use 8-bit types as base. [#14859](https://github.com/ClickHouse/ClickHouse/pull/14859) ([Artem Zuikov](https://github.com/4ertus2)).
* Explicitly use a temporary disk to store vertical merge temporary data. [#15639](https://github.com/ClickHouse/ClickHouse/pull/15639) ([Grigory Pervakov](https://github.com/GrigoryPervakov)).
* Use one S3 DeleteObjects request instead of multiple DeleteObject in a loop. No functional changes, so it is covered by existing tests like integration/test_log_family_s3. [#15238](https://github.com/ClickHouse/ClickHouse/pull/15238) ([ianton-ru](https://github.com/ianton-ru)).
* Fix `DateTime <op> DateTime` mistakenly choosing the slow generic implementation. This fixes https://github.com/ClickHouse/ClickHouse/issues/15153. [#15178](https://github.com/ClickHouse/ClickHouse/pull/15178) ([Amos Bird](https://github.com/amosbird)).
* Improve performance of GROUP BY key of type `FixedString`. [#15034](https://github.com/ClickHouse/ClickHouse/pull/15034) ([Amos Bird](https://github.com/amosbird)).
* Only `mlock` the code segment when starting clickhouse-server. In previous versions, all mapped regions were locked in memory, including debug info. Debug info is usually split into a separate file, but if it isn't, it led to +2..3 GiB memory usage. [#14929](https://github.com/ClickHouse/ClickHouse/pull/14929) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* The ClickHouse binary became smaller due to link-time optimization.
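A hedged sketch of tuning the compact/wide threshold per table; it assumes the `min_bytes_for_wide_part` MergeTree setting controls the format choice, and the value is illustrative.

```sql
-- Parts below ~10 MB are stored in the compact format (all columns in one file).
CREATE TABLE small_inserts (k UInt64, v String)
ENGINE = MergeTree
ORDER BY k
SETTINGS min_bytes_for_wide_part = 10485760;
```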
#### Build/Testing/Packaging Improvement
* Now we use clang-11 for production ClickHouse build. [#15239](https://github.com/ClickHouse/ClickHouse/pull/15239) ([alesapin](https://github.com/alesapin)).
* Now we use clang-11 to build ClickHouse in CI. [#14846](https://github.com/ClickHouse/ClickHouse/pull/14846) ([alesapin](https://github.com/alesapin)).
* Switch binary builds (Linux, Darwin, AArch64, FreeBSD) to clang-11. [#15622](https://github.com/ClickHouse/ClickHouse/pull/15622) ([Ilya Yatsishin](https://github.com/qoega)).
* Now all test images use `llvm-symbolizer-11`. [#15069](https://github.com/ClickHouse/ClickHouse/pull/15069) ([alesapin](https://github.com/alesapin)).
* Allow to build with llvm-11. [#15366](https://github.com/ClickHouse/ClickHouse/pull/15366) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Switch from `clang-tidy-10` to `clang-tidy-11`. [#14922](https://github.com/ClickHouse/ClickHouse/pull/14922) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Use LLVM's experimental pass manager by default. [#15608](https://github.com/ClickHouse/ClickHouse/pull/15608) ([Danila Kutenin](https://github.com/danlark1)).
* Don't allow any C++ translation unit to take more than 10 minutes to build or to use more than 10 GB of memory. This fixes [#14925](https://github.com/ClickHouse/ClickHouse/issues/14925). [#15060](https://github.com/ClickHouse/ClickHouse/pull/15060) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Make performance test more stable and representative by splitting test runs and profile runs. [#15027](https://github.com/ClickHouse/ClickHouse/pull/15027) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Attempt to make performance test more reliable. It is done by remapping the executable memory of the process on the fly with `madvise` to use transparent huge pages - it can lower the number of iTLB misses which is the main source of instabilities in performance tests. [#14685](https://github.com/ClickHouse/ClickHouse/pull/14685) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Convert to python3. This closes [#14886](https://github.com/ClickHouse/ClickHouse/issues/14886). [#15007](https://github.com/ClickHouse/ClickHouse/pull/15007) ([Azat Khuzhin](https://github.com/azat)).
* Fail early in functional tests if server failed to respond. This closes [#15262](https://github.com/ClickHouse/ClickHouse/issues/15262). [#15267](https://github.com/ClickHouse/ClickHouse/pull/15267) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Allow to run AArch64 version of clickhouse-server without configs. This facilitates [#15174](https://github.com/ClickHouse/ClickHouse/issues/15174). [#15266](https://github.com/ClickHouse/ClickHouse/pull/15266) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Improvements in CI docker images: get rid of ZooKeeper and single script for test configs installation. [#15215](https://github.com/ClickHouse/ClickHouse/pull/15215) ([alesapin](https://github.com/alesapin)).
* Fix CMake options forwarding in fast test script. Fixes error in [#14711](https://github.com/ClickHouse/ClickHouse/issues/14711). [#15155](https://github.com/ClickHouse/ClickHouse/pull/15155) ([alesapin](https://github.com/alesapin)).
* Added a script to perform hardware benchmark in a single command. [#15115](https://github.com/ClickHouse/ClickHouse/pull/15115) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Split the huge test `test_dictionaries_all_layouts_and_sources` into smaller ones. [#15110](https://github.com/ClickHouse/ClickHouse/pull/15110) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Maybe fix MSan report in base64 (on servers with AVX-512). This fixes [#14006](https://github.com/ClickHouse/ClickHouse/issues/14006). [#15030](https://github.com/ClickHouse/ClickHouse/pull/15030) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Reformat and cleanup code in all integration test *.py files. [#14864](https://github.com/ClickHouse/ClickHouse/pull/14864) ([Bharat Nallan](https://github.com/bharatnc)).
* Fix MaterializeMySQL empty transaction unstable test case found in CI. [#14854](https://github.com/ClickHouse/ClickHouse/pull/14854) ([Winter Zhang](https://github.com/zhang2014)).
* Attempt to speed up build a little. [#14808](https://github.com/ClickHouse/ClickHouse/pull/14808) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Speed up build a little by removing unused headers. [#14714](https://github.com/ClickHouse/ClickHouse/pull/14714) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix build failure in OSX. [#14761](https://github.com/ClickHouse/ClickHouse/pull/14761) ([Winter Zhang](https://github.com/zhang2014)).
* Enable ccache by default in cmake if it's found in OS. [#14575](https://github.com/ClickHouse/ClickHouse/pull/14575) ([alesapin](https://github.com/alesapin)).
* Control CI builds configuration from the ClickHouse repository. [#14547](https://github.com/ClickHouse/ClickHouse/pull/14547) ([alesapin](https://github.com/alesapin)).
* In CMake files: - Moved some options' descriptions' parts to comments above. - Replaced 0 -> `OFF`, 1 -> `ON` in `option`s default values. - Added some descriptions and links to docs to the options. - Replaced `FUZZER` option (there is another option `ENABLE_FUZZING` which also enables the same functionality). - Removed `ENABLE_GTEST_LIBRARY` option as there is `ENABLE_TESTS`. See the full description in PR: [#14711](https://github.com/ClickHouse/ClickHouse/pull/14711) ([Mike](https://github.com/myrrc)).
* Make binary a bit smaller (~50 Mb for debug version). [#14555](https://github.com/ClickHouse/ClickHouse/pull/14555) ([Artem Zuikov](https://github.com/4ertus2)).
* Use std::filesystem::path in ConfigProcessor for concatenating file paths. [#14558](https://github.com/ClickHouse/ClickHouse/pull/14558) ([Bharat Nallan](https://github.com/bharatnc)).
* Fix debug assertion in `bitShiftLeft()` when called with negative big integer. [#14697](https://github.com/ClickHouse/ClickHouse/pull/14697) ([Artem Zuikov](https://github.com/4ertus2)).
## ClickHouse release 20.9
### ClickHouse release v20.9.2.20, 2020-09-22
#### New Feature
@ -84,7 +296,6 @@
#### New Feature
* ClickHouse can work as MySQL replica - it is implemented by `MaterializeMySQL` database engine. Implements [#4006](https://github.com/ClickHouse/ClickHouse/issues/4006). [#10851](https://github.com/ClickHouse/ClickHouse/pull/10851) ([Winter Zhang](https://github.com/zhang2014)).
* Add the ability to specify `Default` compression codec for columns that correspond to settings specified in `config.xml`. Implements: [#9074](https://github.com/ClickHouse/ClickHouse/issues/9074). [#14049](https://github.com/ClickHouse/ClickHouse/pull/14049) ([alesapin](https://github.com/alesapin)).
* Support Kerberos authentication in Kafka, using `krb5` and `cyrus-sasl` libraries. [#12771](https://github.com/ClickHouse/ClickHouse/pull/12771) ([Ilya Golshtein](https://github.com/ilejn)).
* Add function `normalizeQuery` that replaces literals, sequences of literals and complex aliases with placeholders. Add function `normalizedQueryHash` that returns identical 64bit hash values for similar queries. It helps to analyze query log (see the example below). This closes [#11271](https://github.com/ClickHouse/ClickHouse/issues/11271). [#13816](https://github.com/ClickHouse/ClickHouse/pull/13816) ([alexey-milovidov](https://github.com/alexey-milovidov)).
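A sketch of query normalization; the exact placeholder output in the comment is an assumption based on the function's description.

```sql
SELECT
    normalizeQuery('SELECT 1 + 2') AS normalized,
    normalizedQueryHash('SELECT 1 + 2') = normalizedQueryHash('SELECT 3 + 4') AS same_hash;
-- Expected: normalized = 'SELECT ? + ?', same_hash = 1
```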
@ -184,6 +395,7 @@
#### Experimental Feature
* ClickHouse can work as MySQL replica - it is implemented by `MaterializeMySQL` database engine. Implements [#4006](https://github.com/ClickHouse/ClickHouse/issues/4006). [#10851](https://github.com/ClickHouse/ClickHouse/pull/10851) ([Winter Zhang](https://github.com/zhang2014)).
* Add types `Int128`, `Int256`, `UInt256` and related functions for them. Extend Decimals with Decimal256 (precision up to 76 digits). New types are under the setting `allow_experimental_bigint_types`. It is working extremely slow and bad. The implementation is incomplete. Please don't use this feature. [#13097](https://github.com/ClickHouse/ClickHouse/pull/13097) ([Artem Zuikov](https://github.com/4ertus2)).
#### Build/Testing/Packaging Improvement
@ -409,7 +621,7 @@
## ClickHouse release 20.6
### ClickHouse release v20.6.3.28-stable
#### New Feature
@ -2362,7 +2574,7 @@ No changes compared to v20.4.3.16-stable.
* `Live View` table engine refactoring. [#8519](https://github.com/ClickHouse/ClickHouse/pull/8519) ([vzakaznikov](https://github.com/vzakaznikov))
* Add additional checks for external dictionaries created from DDL-queries. [#8127](https://github.com/ClickHouse/ClickHouse/pull/8127) ([alesapin](https://github.com/alesapin))
* Fix error `Column ... already exists` while using `FINAL` and `SAMPLE` together, e.g. `select count() from table final sample 1/2`. Fixes [#5186](https://github.com/ClickHouse/ClickHouse/issues/5186). [#7907](https://github.com/ClickHouse/ClickHouse/pull/7907) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Now the first argument of the `joinGet` function can be a table identifier. [#7707](https://github.com/ClickHouse/ClickHouse/pull/7707) ([Amos Bird](https://github.com/amosbird))
* Allow using `MaterializedView` with subqueries above `Kafka` tables. [#8197](https://github.com/ClickHouse/ClickHouse/pull/8197) ([filimonov](https://github.com/filimonov))
* Now background moves between disks run in a separate thread pool. [#7670](https://github.com/ClickHouse/ClickHouse/pull/7670) ([Vladimir Chebotarev](https://github.com/excitoon))
* `SYSTEM RELOAD DICTIONARY` now executes synchronously. [#8240](https://github.com/ClickHouse/ClickHouse/pull/8240) ([Vitaly Baranov](https://github.com/vitlibar))
|
* `SYSTEM RELOAD DICTIONARY` now executes synchronously. [#8240](https://github.com/ClickHouse/ClickHouse/pull/8240) ([Vitaly Baranov](https://github.com/vitlibar))
|
||||||
|
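A hedged illustration of the `joinGet` change above (the table and column names are hypothetical):

```sql
-- Hypothetical sketch: a Join-engine table queried via joinGet.
CREATE TABLE id_val (id UInt32, val String)
ENGINE = Join(ANY, LEFT, id);

INSERT INTO id_val VALUES (1, 'one'), (2, 'two');

-- The first argument can now be a table identifier, not only a string with the table name.
SELECT joinGet(id_val, 'val', toUInt32(2));
```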
@ -59,25 +59,6 @@ set(CMAKE_DEBUG_POSTFIX "d" CACHE STRING "Generate debug library name with a pos

# For more info see https://cmake.org/cmake/help/latest/prop_gbl/USE_FOLDERS.html
set_property(GLOBAL PROPERTY USE_FOLDERS ON)

-# cmake 3.9+ needed.
-# Usually impractical.
-# See also ${ENABLE_THINLTO}
-option(ENABLE_IPO "Full link time optimization")
-
-if(ENABLE_IPO)
-    cmake_policy(SET CMP0069 NEW)
-    include(CheckIPOSupported)
-    check_ipo_supported(RESULT IPO_SUPPORTED OUTPUT IPO_NOT_SUPPORTED)
-    if(IPO_SUPPORTED)
-        message(STATUS "IPO/LTO is supported, enabling")
-        set(CMAKE_INTERPROCEDURAL_OPTIMIZATION TRUE)
-    else()
-        message (${RECONFIGURE_MESSAGE_LEVEL} "IPO/LTO is not supported: <${IPO_NOT_SUPPORTED}>")
-    endif()
-else()
-    message(STATUS "IPO/LTO not enabled.")
-endif()
-
# Check that submodules are present only if source was downloaded with git
if (EXISTS "${CMAKE_CURRENT_SOURCE_DIR}/.git" AND NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/boost/boost")
    message (FATAL_ERROR "Submodules are not initialized. Run\n\tgit submodule update --init --recursive")
@ -17,4 +17,6 @@ ClickHouse is an open-source column-oriented database management system that all

## Upcoming Events

-* [ClickHouse virtual office hours](https://www.eventbrite.com/e/clickhouse-october-virtual-meetup-office-hours-tickets-123129500651) on October 22, 2020.
+* [The Second ClickHouse Meetup East (online)](https://www.eventbrite.com/e/the-second-clickhouse-meetup-east-tickets-126787955187) on October 31, 2020.
+* [ClickHouse for Enterprise Meetup (online in Russian)](https://arenadata-events.timepad.ru/event/1465249/) on November 10, 2020.
@ -51,7 +51,7 @@ struct StringRef
};

/// Here constexpr doesn't implicate inline, see https://www.viva64.com/en/w/v1043/
-/// nullptr can't be used because the StringRef values are used in SipHash's pointer arithmetics
+/// nullptr can't be used because the StringRef values are used in SipHash's pointer arithmetic
/// and the UBSan thinks that something like nullptr + 8 is UB.
constexpr const inline char empty_string_ref_addr{};
constexpr const inline StringRef EMPTY_STRING_REF{&empty_string_ref_addr, 0};
@ -35,25 +35,25 @@ PEERDIR(
CFLAGS(-g0)

SRCS(
-    argsToConfig.cpp
-    coverage.cpp
    DateLUT.cpp
    DateLUTImpl.cpp
+    JSON.cpp
+    LineReader.cpp
+    StringRef.cpp
+    argsToConfig.cpp
+    coverage.cpp
    demangle.cpp
    errnoToString.cpp
    getFQDNOrHostName.cpp
    getMemoryAmount.cpp
    getResource.cpp
    getThreadId.cpp
-    JSON.cpp
-    LineReader.cpp
    mremap.cpp
    phdr_cache.cpp
    preciseExp10.cpp
    setTerminalEcho.cpp
    shift10.cpp
    sleep.cpp
-    StringRef.cpp
    terminalColors.cpp
)
339 base/glibc-compatibility/musl/lgammal.c Normal file
@ -0,0 +1,339 @@
/* origin: OpenBSD /usr/src/lib/libm/src/ld80/e_lgammal.c */
/*
 * ====================================================
 * Copyright (C) 1993 by Sun Microsystems, Inc. All rights reserved.
 *
 * Developed at SunPro, a Sun Microsystems, Inc. business.
 * Permission to use, copy, modify, and distribute this
 * software is freely granted, provided that this notice
 * is preserved.
 * ====================================================
 */
/*
 * Copyright (c) 2008 Stephen L. Moshier <steve@moshier.net>
 *
 * Permission to use, copy, modify, and distribute this software for any
 * purpose with or without fee is hereby granted, provided that the above
 * copyright notice and this permission notice appear in all copies.
 *
 * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
 * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
 * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
 * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
 * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
 * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
 * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
 */
/* lgammal(x)
 * Reentrant version of the logarithm of the Gamma function
 * with user provide pointer for the sign of Gamma(x).
 *
 * Method:
 *   1. Argument Reduction for 0 < x <= 8
 *      Since gamma(1+s)=s*gamma(s), for x in [0,8], we may
 *      reduce x to a number in [1.5,2.5] by
 *              lgamma(1+s) = log(s) + lgamma(s)
 *      for example,
 *              lgamma(7.3) = log(6.3) + lgamma(6.3)
 *                          = log(6.3*5.3) + lgamma(5.3)
 *                          = log(6.3*5.3*4.3*3.3*2.3) + lgamma(2.3)
 *   2. Polynomial approximation of lgamma around its
 *      minimun ymin=1.461632144968362245 to maintain monotonicity.
 *      On [ymin-0.23, ymin+0.27] (i.e., [1.23164,1.73163]), use
 *              Let z = x-ymin;
 *              lgamma(x) = -1.214862905358496078218 + z^2*poly(z)
 *   2. Rational approximation in the primary interval [2,3]
 *      We use the following approximation:
 *              s = x-2.0;
 *              lgamma(x) = 0.5*s + s*P(s)/Q(s)
 *      Our algorithms are based on the following observation
 *
 *                             zeta(2)-1    2    zeta(3)-1    3
 * lgamma(2+s) = s*(1-Euler) + --------- * s  -  --------- * s  + ...
 *                                 2                 3
 *
 *      where Euler = 0.5771... is the Euler constant, which is very
 *      close to 0.5.
 *
 *   3. For x>=8, we have
 *      lgamma(x)~(x-0.5)log(x)-x+0.5*log(2pi)+1/(12x)-1/(360x**3)+....
 *      (better formula:
 *         lgamma(x)~(x-0.5)*(log(x)-1)-.5*(log(2pi)-1) + ...)
 *      Let z = 1/x, then we approximation
 *              f(z) = lgamma(x) - (x-0.5)(log(x)-1)
 *      by
 *                                  3       5             11
 *              w = w0 + w1*z + w2*z  + w3*z  + ... + w6*z
 *
 *   4. For negative x, since (G is gamma function)
 *              -x*G(-x)*G(x) = pi/sin(pi*x),
 *      we have
 *              G(x) = pi/(sin(pi*x)*(-x)*G(-x))
 *      since G(-x) is positive, sign(G(x)) = sign(sin(pi*x)) for x<0
 *      Hence, for x<0, signgam = sign(sin(pi*x)) and
 *              lgamma(x) = log(|Gamma(x)|)
 *                        = log(pi/(|x*sin(pi*x)|)) - lgamma(-x);
 *      Note: one should avoid compute pi*(-x) directly in the
 *            computation of sin(pi*(-x)).
 *
 *   5. Special Cases
 *              lgamma(2+s) ~ s*(1-Euler) for tiny s
 *              lgamma(1)=lgamma(2)=0
 *              lgamma(x) ~ -log(x) for tiny x
 *              lgamma(0) = lgamma(inf) = inf
 *              lgamma(-integer) = +-inf
 *
 */

#include <stdint.h>
#include <math.h>
#include "libm.h"

#if LDBL_MANT_DIG == 53 && LDBL_MAX_EXP == 1024
double lgamma_r(double x, int *sg);

long double lgammal_r(long double x, int *sg)
{
    return lgamma_r(x, sg);
}
#elif LDBL_MANT_DIG == 64 && LDBL_MAX_EXP == 16384

static const long double pi = 3.14159265358979323846264L,

/* lgam(1+x) = 0.5 x + x a(x)/b(x)
   -0.268402099609375 <= x <= 0
   peak relative error 6.6e-22 */
a0 = -6.343246574721079391729402781192128239938E2L,
a1 =  1.856560238672465796768677717168371401378E3L,
a2 =  2.404733102163746263689288466865843408429E3L,
a3 =  8.804188795790383497379532868917517596322E2L,
a4 =  1.135361354097447729740103745999661157426E2L,
a5 =  3.766956539107615557608581581190400021285E0L,

b0 =  8.214973713960928795704317259806842490498E3L,
b1 =  1.026343508841367384879065363925870888012E4L,
b2 =  4.553337477045763320522762343132210919277E3L,
b3 =  8.506975785032585797446253359230031874803E2L,
b4 =  6.042447899703295436820744186992189445813E1L,
/* b5 = 1.000000000000000000000000000000000000000E0 */

tc =  1.4616321449683623412626595423257213284682E0L,
tf = -1.2148629053584961146050602565082954242826E-1, /* double precision */
/* tt = (tail of tf), i.e. tf + tt has extended precision. */
tt =  3.3649914684731379602768989080467587736363E-18L,
/* lgam ( 1.4616321449683623412626595423257213284682E0 ) =
-1.2148629053584960809551455717769158215135617312999903886372437313313530E-1 */

/* lgam (x + tc) = tf + tt + x g(x)/h(x)
   -0.230003726999612341262659542325721328468 <= x
      <= 0.2699962730003876587373404576742786715318
   peak relative error 2.1e-21 */
g0 =  3.645529916721223331888305293534095553827E-18L,
g1 =  5.126654642791082497002594216163574795690E3L,
g2 =  8.828603575854624811911631336122070070327E3L,
g3 =  5.464186426932117031234820886525701595203E3L,
g4 =  1.455427403530884193180776558102868592293E3L,
g5 =  1.541735456969245924860307497029155838446E2L,
g6 =  4.335498275274822298341872707453445815118E0L,

h0 =  1.059584930106085509696730443974495979641E4L,
h1 =  2.147921653490043010629481226937850618860E4L,
h2 =  1.643014770044524804175197151958100656728E4L,
h3 =  5.869021995186925517228323497501767586078E3L,
h4 =  9.764244777714344488787381271643502742293E2L,
h5 =  6.442485441570592541741092969581997002349E1L,
/* h6 = 1.000000000000000000000000000000000000000E0 */

/* lgam (x+1) = -0.5 x + x u(x)/v(x)
   -0.100006103515625 <= x <= 0.231639862060546875
   peak relative error 1.3e-21 */
u0 = -8.886217500092090678492242071879342025627E1L,
u1 =  6.840109978129177639438792958320783599310E2L,
u2 =  2.042626104514127267855588786511809932433E3L,
u3 =  1.911723903442667422201651063009856064275E3L,
u4 =  7.447065275665887457628865263491667767695E2L,
u5 =  1.132256494121790736268471016493103952637E2L,
u6 =  4.484398885516614191003094714505960972894E0L,

v0 =  1.150830924194461522996462401210374632929E3L,
v1 =  3.399692260848747447377972081399737098610E3L,
v2 =  3.786631705644460255229513563657226008015E3L,
v3 =  1.966450123004478374557778781564114347876E3L,
v4 =  4.741359068914069299837355438370682773122E2L,
v5 =  4.508989649747184050907206782117647852364E1L,
/* v6 = 1.000000000000000000000000000000000000000E0 */

/* lgam (x+2) = .5 x + x s(x)/r(x)
   0 <= x <= 1
   peak relative error 7.2e-22 */
s0 =  1.454726263410661942989109455292824853344E6L,
s1 = -3.901428390086348447890408306153378922752E6L,
s2 = -6.573568698209374121847873064292963089438E6L,
s3 = -3.319055881485044417245964508099095984643E6L,
s4 = -7.094891568758439227560184618114707107977E5L,
s5 = -6.263426646464505837422314539808112478303E4L,
s6 = -1.684926520999477529949915657519454051529E3L,

r0 = -1.883978160734303518163008696712983134698E7L,
r1 = -2.815206082812062064902202753264922306830E7L,
r2 = -1.600245495251915899081846093343626358398E7L,
r3 = -4.310526301881305003489257052083370058799E6L,
r4 = -5.563807682263923279438235987186184968542E5L,
r5 = -3.027734654434169996032905158145259713083E4L,
r6 = -4.501995652861105629217250715790764371267E2L,
/* r6 = 1.000000000000000000000000000000000000000E0 */

/* lgam(x) = ( x - 0.5 ) * log(x) - x + LS2PI + 1/x w(1/x^2)
   x >= 8
   Peak relative error 1.51e-21
   w0 = LS2PI - 0.5 */
w0 =  4.189385332046727417803e-1L,
w1 =  8.333333333333331447505E-2L,
w2 = -2.777777777750349603440E-3L,
w3 =  7.936507795855070755671E-4L,
w4 = -5.952345851765688514613E-4L,
w5 =  8.412723297322498080632E-4L,
w6 = -1.880801938119376907179E-3L,
w7 =  4.885026142432270781165E-3L;

long double lgammal_r(long double x, int *sg) {
    long double t, y, z, nadj, p, p1, p2, q, r, w;
    union ldshape u = {x};
    uint32_t ix = (u.i.se & 0x7fffU)<<16 | u.i.m>>48;
    int sign = u.i.se >> 15;
    int i;

    *sg = 1;

    /* purge off +-inf, NaN, +-0, tiny and negative arguments */
    if (ix >= 0x7fff0000)
        return x * x;
    if (ix < 0x3fc08000) {  /* |x|<2**-63, return -log(|x|) */
        if (sign) {
            *sg = -1;
            x = -x;
        }
        return -logl(x);
    }
    if (sign) {
        x = -x;
        t = sin(pi * x);
        if (t == 0.0)
            return 1.0 / (x-x); /* -integer */
        if (t > 0.0)
            *sg = -1;
        else
            t = -t;
        nadj = logl(pi / (t * x));
    }

    /* purge off 1 and 2 (so the sign is ok with downward rounding) */
    if ((ix == 0x3fff8000 || ix == 0x40008000) && u.i.m == 0) {
        r = 0;
    } else if (ix < 0x40008000) {  /* x < 2.0 */
        if (ix <= 0x3ffee666) {  /* 8.99993896484375e-1 */
            /* lgamma(x) = lgamma(x+1) - log(x) */
            r = -logl(x);
            if (ix >= 0x3ffebb4a) {  /* 7.31597900390625e-1 */
                y = x - 1.0;
                i = 0;
            } else if (ix >= 0x3ffced33) {  /* 2.31639862060546875e-1 */
                y = x - (tc - 1.0);
                i = 1;
            } else { /* x < 0.23 */
                y = x;
                i = 2;
            }
        } else {
            r = 0.0;
            if (ix >= 0x3fffdda6) {  /* 1.73162841796875 */
                /* [1.7316,2] */
                y = x - 2.0;
                i = 0;
            } else if (ix >= 0x3fff9da6) {  /* 1.23162841796875 */
                /* [1.23,1.73] */
                y = x - tc;
                i = 1;
            } else {
                /* [0.9, 1.23] */
                y = x - 1.0;
                i = 2;
            }
        }
        switch (i) {
        case 0:
            p1 = a0 + y * (a1 + y * (a2 + y * (a3 + y * (a4 + y * a5))));
            p2 = b0 + y * (b1 + y * (b2 + y * (b3 + y * (b4 + y))));
            r += 0.5 * y + y * p1/p2;
            break;
        case 1:
            p1 = g0 + y * (g1 + y * (g2 + y * (g3 + y * (g4 + y * (g5 + y * g6)))));
            p2 = h0 + y * (h1 + y * (h2 + y * (h3 + y * (h4 + y * (h5 + y)))));
            p = tt + y * p1/p2;
            r += (tf + p);
            break;
        case 2:
            p1 = y * (u0 + y * (u1 + y * (u2 + y * (u3 + y * (u4 + y * (u5 + y * u6))))));
            p2 = v0 + y * (v1 + y * (v2 + y * (v3 + y * (v4 + y * (v5 + y)))));
            r += (-0.5 * y + p1 / p2);
        }
    } else if (ix < 0x40028000) {  /* 8.0 */
        /* x < 8.0 */
        i = (int)x;
        y = x - (double)i;
        p = y * (s0 + y * (s1 + y * (s2 + y * (s3 + y * (s4 + y * (s5 + y * s6))))));
        q = r0 + y * (r1 + y * (r2 + y * (r3 + y * (r4 + y * (r5 + y * (r6 + y))))));
        r = 0.5 * y + p / q;
        z = 1.0;
        /* lgamma(1+s) = log(s) + lgamma(s) */
        switch (i) {
        case 7:
            z *= (y + 6.0); /* FALLTHRU */
        case 6:
            z *= (y + 5.0); /* FALLTHRU */
        case 5:
            z *= (y + 4.0); /* FALLTHRU */
        case 4:
            z *= (y + 3.0); /* FALLTHRU */
        case 3:
            z *= (y + 2.0); /* FALLTHRU */
            r += logl(z);
            break;
        }
    } else if (ix < 0x40418000) {  /* 2^66 */
        /* 8.0 <= x < 2**66 */
        t = logl(x);
        z = 1.0 / x;
        y = z * z;
        w = w0 + z * (w1 + y * (w2 + y * (w3 + y * (w4 + y * (w5 + y * (w6 + y * w7))))));
        r = (x - 0.5) * (t - 1.0) + w;
    } else /* 2**66 <= x <= inf */
        r = x * (logl(x) - 1.0);
    if (sign)
        r = nadj - r;
    return r;
}
#elif LDBL_MANT_DIG == 113 && LDBL_MAX_EXP == 16384
// TODO: broken implementation to make things compile
double lgamma_r(double x, int *sg);

long double lgammal_r(long double x, int *sg)
{
    return lgamma_r(x, sg);
}
#endif

int signgam_lgammal;

long double lgammal(long double x)
{
    return lgammal_r(x, &signgam_lgammal);
}
@ -16,8 +16,4 @@ endif ()

if (CMAKE_SYSTEM_PROCESSOR MATCHES "^(ppc64le.*|PPC64LE.*)")
    set (ARCH_PPC64LE 1)
-    # FIXME: move this check into tools.cmake
-    if (COMPILER_CLANG OR (COMPILER_GCC AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS 8))
-        message(FATAL_ERROR "Only gcc-8 or higher is supported for powerpc architecture")
-    endif ()
endif ()
@ -1,9 +1,9 @@
# This strings autochanged from release_lib.sh:
-SET(VERSION_REVISION 54442)
+SET(VERSION_REVISION 54443)
SET(VERSION_MAJOR 20)
-SET(VERSION_MINOR 11)
+SET(VERSION_MINOR 12)
SET(VERSION_PATCH 1)
-SET(VERSION_GITHASH 76a04fb4b4f6cd27ad999baf6dc9a25e88851c42)
+SET(VERSION_GITHASH c53725fb1f846fda074347607ab582fbb9c6f7a1)
-SET(VERSION_DESCRIBE v20.11.1.1-prestable)
+SET(VERSION_DESCRIBE v20.12.1.1-prestable)
-SET(VERSION_STRING 20.11.1.1)
+SET(VERSION_STRING 20.12.1.1)
# end of autochange
@ -84,3 +84,9 @@ if (LINKER_NAME)
    message(STATUS "Using custom linker by name: ${LINKER_NAME}")
endif ()

+if (ARCH_PPC64LE)
+    if (COMPILER_CLANG OR (COMPILER_GCC AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS 8))
+        message(FATAL_ERROR "Only gcc-8 or higher is supported for powerpc architecture")
+    endif ()
+endif ()
@ -11,11 +11,11 @@ CFLAGS (GLOBAL -DDBMS_VERSION_MAJOR=${VERSION_MAJOR})
CFLAGS (GLOBAL -DDBMS_VERSION_MINOR=${VERSION_MINOR})
CFLAGS (GLOBAL -DDBMS_VERSION_PATCH=${VERSION_PATCH})
CFLAGS (GLOBAL -DVERSION_FULL=\"\\\"${VERSION_FULL}\\\"\")
CFLAGS (GLOBAL -DVERSION_MAJOR=${VERSION_MAJOR})
CFLAGS (GLOBAL -DVERSION_MINOR=${VERSION_MINOR})
CFLAGS (GLOBAL -DVERSION_PATCH=${VERSION_PATCH})

-# TODO: not supported yet, not sure if ya.make supports arithmetics.
+# TODO: not supported yet, not sure if ya.make supports arithmetic.
CFLAGS (GLOBAL -DVERSION_INTEGER=0)

CFLAGS (GLOBAL -DVERSION_NAME=\"\\\"${VERSION_NAME}\\\"\")
2 contrib/CMakeLists.txt vendored
@ -20,7 +20,6 @@ add_subdirectory (boost-cmake)
add_subdirectory (cctz-cmake)
add_subdirectory (consistent-hashing-sumbur)
add_subdirectory (consistent-hashing)
-add_subdirectory (croaring)
add_subdirectory (FastMemcpy)
add_subdirectory (hyperscan-cmake)
add_subdirectory (jemalloc-cmake)
@ -34,6 +33,7 @@ add_subdirectory (ryu-cmake)
add_subdirectory (unixodbc-cmake)

add_subdirectory (poco-cmake)
+add_subdirectory (croaring-cmake)

# TODO: refactor the contrib libraries below this comment.
2 contrib/aws vendored
@ -1 +1 @@
-Subproject commit 17e10c0fc77f22afe890fa6d1b283760e5edaa56
+Subproject commit a220591e335923ce1c19bbf9eb925787f7ab6c13
1 contrib/croaring vendored Submodule
@ -0,0 +1 @@
+Subproject commit 5f20740ec0de5e153e8f4cb2ab91814e8b291a14
25 contrib/croaring-cmake/CMakeLists.txt Normal file
@ -0,0 +1,25 @@
set(LIBRARY_DIR ${ClickHouse_SOURCE_DIR}/contrib/croaring)

set(SRCS
    ${LIBRARY_DIR}/src/array_util.c
    ${LIBRARY_DIR}/src/bitset_util.c
    ${LIBRARY_DIR}/src/containers/array.c
    ${LIBRARY_DIR}/src/containers/bitset.c
    ${LIBRARY_DIR}/src/containers/containers.c
    ${LIBRARY_DIR}/src/containers/convert.c
    ${LIBRARY_DIR}/src/containers/mixed_intersection.c
    ${LIBRARY_DIR}/src/containers/mixed_union.c
    ${LIBRARY_DIR}/src/containers/mixed_equal.c
    ${LIBRARY_DIR}/src/containers/mixed_subset.c
    ${LIBRARY_DIR}/src/containers/mixed_negation.c
    ${LIBRARY_DIR}/src/containers/mixed_xor.c
    ${LIBRARY_DIR}/src/containers/mixed_andnot.c
    ${LIBRARY_DIR}/src/containers/run.c
    ${LIBRARY_DIR}/src/roaring.c
    ${LIBRARY_DIR}/src/roaring_priority_queue.c
    ${LIBRARY_DIR}/src/roaring_array.c)

add_library(roaring ${SRCS})

target_include_directories(roaring PRIVATE ${LIBRARY_DIR}/include/roaring)
target_include_directories(roaring SYSTEM BEFORE PUBLIC ${LIBRARY_DIR}/include)
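CRoaring is the library behind ClickHouse's bitmap functions, so the freshly vendored build can be exercised end to end from SQL; a minimal sketch:

```sql
-- Minimal sketch: bitmap functions are implemented on top of the roaring library built above.
SELECT bitmapToArray(bitmapAnd(bitmapBuild([1, 2, 3]), bitmapBuild([3, 4, 5]))) AS intersection;
-- Expected result: [3]
```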
@ -1,6 +0,0 @@
add_library(roaring
    roaring.c
    roaring/roaring.h
    roaring/roaring.hh)

target_include_directories (roaring SYSTEM PUBLIC ${CMAKE_CURRENT_SOURCE_DIR})
@ -1,202 +0,0 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "{}"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright 2016 The CRoaring authors

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
@ -1,2 +0,0 @@
-download from https://github.com/RoaringBitmap/CRoaring/archive/v0.2.57.tar.gz
-and use ./amalgamation.sh generate
File diff suppressed because it is too large
File diff suppressed because it is too large
File diff suppressed because it is too large
@ -192,7 +192,7 @@ set(SRCS
    ${HDFS3_SOURCE_DIR}/common/FileWrapper.h
)

-# old kernels (< 3.17) doens't have SYS_getrandom. Always use POSIX implementation to have better compatibility
+# old kernels (< 3.17) doesn't have SYS_getrandom. Always use POSIX implementation to have better compatibility
set_source_files_properties(${HDFS3_SOURCE_DIR}/rpc/RpcClient.cpp PROPERTIES COMPILE_FLAGS "-DBOOST_UUID_RANDOM_PROVIDER_FORCE_POSIX=1")

# target
2 contrib/mariadb-connector-c vendored
@ -1 +1 @@
-Subproject commit f5638e954a79f50bac7c7a5deaa5a241e0ce8b5f
+Subproject commit 1485b0de3eaa1508dfe49a5ba1e4aa2a71fd8335
4 debian/changelog vendored
@ -1,5 +1,5 @@
-clickhouse (20.11.1.1) unstable; urgency=low
+clickhouse (20.12.1.1) unstable; urgency=low

 * Modified source code

- -- clickhouse-release <clickhouse-release@yandex-team.ru> Sat, 10 Oct 2020 18:39:55 +0300
+ -- clickhouse-release <clickhouse-release@yandex-team.ru> Thu, 05 Nov 2020 21:52:47 +0300
@ -1,7 +1,7 @@
FROM ubuntu:18.04

ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
-ARG version=20.11.1.*
+ARG version=20.12.1.*

RUN apt-get update \
    && apt-get install --yes --no-install-recommends \
@ -63,7 +63,7 @@ then
    mkdir -p /output/config
    cp ../programs/server/config.xml /output/config
    cp ../programs/server/users.xml /output/config
-    cp -r ../programs/server/config.d /output/config
+    cp -r --dereference ../programs/server/config.d /output/config
    tar -czvf "$COMBINED_OUTPUT.tgz" /output
    rm -r /output/*
    mv "$COMBINED_OUTPUT.tgz" /output
@ -31,10 +31,6 @@ RUN curl -O https://clickhouse-builds.s3.yandex.net/utils/1/dpkg-deb \
    && chmod +x dpkg-deb \
    && cp dpkg-deb /usr/bin

-RUN export CODENAME="$(lsb_release --codename --short | tr 'A-Z' 'a-z')" \
-    && wget -nv -O /tmp/arrow-keyring.deb "https://apache.bintray.com/arrow/ubuntu/apache-arrow-archive-keyring-latest-${CODENAME}.deb" \
-    && dpkg -i /tmp/arrow-keyring.deb
-
# Libraries from OS are only needed to test the "unbundled" build (this is not used in production).
RUN apt-get update \
    && apt-get install \
@ -1,6 +1,10 @@
# docker build -t yandex/clickhouse-unbundled-builder .
FROM yandex/clickhouse-deb-builder

+RUN export CODENAME="$(lsb_release --codename --short | tr 'A-Z' 'a-z')" \
+    && wget -nv -O /tmp/arrow-keyring.deb "https://apache.bintray.com/arrow/ubuntu/apache-arrow-archive-keyring-latest-${CODENAME}.deb" \
+    && dpkg -i /tmp/arrow-keyring.deb
+
# Libraries from OS are only needed to test the "unbundled" build (that is not used in production).
RUN apt-get update \
    && apt-get install \
8 docker/server/.dockerignore Normal file
@ -0,0 +1,8 @@
# post / preinstall scripts (not needed, we do it in Dockerfile)
alpine-root/install/*

# docs (looks useless)
alpine-root/usr/share/doc/*

# packages, etc. (used by prepare.sh)
alpine-root/tgz-packages/*
1 docker/server/.gitignore vendored Normal file
@ -0,0 +1 @@
alpine-root/*
@ -1,7 +1,7 @@
FROM ubuntu:20.04

ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
-ARG version=20.11.1.*
+ARG version=20.12.1.*
ARG gosu_ver=1.10

RUN apt-get update \
26 docker/server/Dockerfile.alpine Normal file
@ -0,0 +1,26 @@
FROM alpine

ENV LANG=en_US.UTF-8 \
    LANGUAGE=en_US:en \
    LC_ALL=en_US.UTF-8 \
    TZ=UTC \
    CLICKHOUSE_CONFIG=/etc/clickhouse-server/config.xml

COPY alpine-root/ /

# from https://github.com/ClickHouse/ClickHouse/blob/master/debian/clickhouse-server.postinst
RUN addgroup clickhouse \
    && adduser -S -H -h /nonexistent -s /bin/false -G clickhouse -g "ClickHouse server" clickhouse \
    && chown clickhouse:clickhouse /var/lib/clickhouse \
    && chmod 700 /var/lib/clickhouse \
    && chown root:clickhouse /var/log/clickhouse-server \
    && chmod 775 /var/log/clickhouse-server \
    && chmod +x /entrypoint.sh \
    && apk add --no-cache su-exec

EXPOSE 9000 8123 9009

VOLUME /var/lib/clickhouse \
       /var/log/clickhouse-server

ENTRYPOINT ["/entrypoint.sh"]
59 docker/server/alpine-build.sh Executable file
@ -0,0 +1,59 @@
#!/bin/bash
set -x

REPO_CHANNEL="${REPO_CHANNEL:-stable}" # lts / testing / prestable / etc
REPO_URL="${REPO_URL:-"https://repo.yandex.ru/clickhouse/tgz/${REPO_CHANNEL}"}"
VERSION="${VERSION:-20.9.3.45}"

# where original files live
DOCKER_BUILD_FOLDER="${BASH_SOURCE%/*}"

# we will create root for our image here
CONTAINER_ROOT_FOLDER="${DOCKER_BUILD_FOLDER}/alpine-root"

# where to put downloaded tgz
TGZ_PACKAGES_FOLDER="${CONTAINER_ROOT_FOLDER}/tgz-packages"

# clean up the root from old runs
rm -rf "$CONTAINER_ROOT_FOLDER"

mkdir -p "$TGZ_PACKAGES_FOLDER"

PACKAGES=( "clickhouse-client" "clickhouse-server" "clickhouse-common-static" )

# download tars from the repo
for package in "${PACKAGES[@]}"
do
    wget -q --show-progress "${REPO_URL}/${package}-${VERSION}.tgz" -O "${TGZ_PACKAGES_FOLDER}/${package}-${VERSION}.tgz"
done

# unpack tars
for package in "${PACKAGES[@]}"
do
    tar xvzf "${TGZ_PACKAGES_FOLDER}/${package}-${VERSION}.tgz" --strip-components=2 -C "$CONTAINER_ROOT_FOLDER"
done

# prepare few more folders
mkdir -p "${CONTAINER_ROOT_FOLDER}/etc/clickhouse-server/users.d" \
         "${CONTAINER_ROOT_FOLDER}/etc/clickhouse-server/config.d" \
         "${CONTAINER_ROOT_FOLDER}/var/log/clickhouse-server" \
         "${CONTAINER_ROOT_FOLDER}/var/lib/clickhouse" \
         "${CONTAINER_ROOT_FOLDER}/docker-entrypoint-initdb.d" \
         "${CONTAINER_ROOT_FOLDER}/lib64"

cp "${DOCKER_BUILD_FOLDER}/docker_related_config.xml" "${CONTAINER_ROOT_FOLDER}/etc/clickhouse-server/config.d/"
cp "${DOCKER_BUILD_FOLDER}/entrypoint.alpine.sh" "${CONTAINER_ROOT_FOLDER}/entrypoint.sh"

## get glibc components from ubuntu 20.04 and put them to expected place
docker pull ubuntu:20.04
ubuntu20image=$(docker create --rm ubuntu:20.04)
docker cp -L ${ubuntu20image}:/lib/x86_64-linux-gnu/libc.so.6 "${CONTAINER_ROOT_FOLDER}/lib"
docker cp -L ${ubuntu20image}:/lib/x86_64-linux-gnu/libdl.so.2 "${CONTAINER_ROOT_FOLDER}/lib"
docker cp -L ${ubuntu20image}:/lib/x86_64-linux-gnu/libm.so.6 "${CONTAINER_ROOT_FOLDER}/lib"
docker cp -L ${ubuntu20image}:/lib/x86_64-linux-gnu/libpthread.so.0 "${CONTAINER_ROOT_FOLDER}/lib"
docker cp -L ${ubuntu20image}:/lib/x86_64-linux-gnu/librt.so.1 "${CONTAINER_ROOT_FOLDER}/lib"
docker cp -L ${ubuntu20image}:/lib/x86_64-linux-gnu/libnss_dns.so.2 "${CONTAINER_ROOT_FOLDER}/lib"
docker cp -L ${ubuntu20image}:/lib/x86_64-linux-gnu/libresolv.so.2 "${CONTAINER_ROOT_FOLDER}/lib"
docker cp -L ${ubuntu20image}:/lib64/ld-linux-x86-64.so.2 "${CONTAINER_ROOT_FOLDER}/lib64"

docker build "$DOCKER_BUILD_FOLDER" -f Dockerfile.alpine -t "yandex/clickhouse-server:${VERSION}-alpine" --pull
152 docker/server/entrypoint.alpine.sh Executable file
@ -0,0 +1,152 @@
#!/bin/sh
#set -x

DO_CHOWN=1
if [ "$CLICKHOUSE_DO_NOT_CHOWN" = 1 ]; then
    DO_CHOWN=0
fi

CLICKHOUSE_UID="${CLICKHOUSE_UID:-"$(id -u clickhouse)"}"
CLICKHOUSE_GID="${CLICKHOUSE_GID:-"$(id -g clickhouse)"}"

# support --user
if [ "$(id -u)" = "0" ]; then
    USER=$CLICKHOUSE_UID
    GROUP=$CLICKHOUSE_GID
    # busybox has setuidgid & chpst buildin
    gosu="su-exec $USER:$GROUP"
else
    USER="$(id -u)"
    GROUP="$(id -g)"
    gosu=""
    DO_CHOWN=0
fi

# set some vars
CLICKHOUSE_CONFIG="${CLICKHOUSE_CONFIG:-/etc/clickhouse-server/config.xml}"

# port is needed to check if clickhouse-server is ready for connections
HTTP_PORT="$(clickhouse extract-from-config --config-file $CLICKHOUSE_CONFIG --key=http_port)"

# get CH directories locations
DATA_DIR="$(clickhouse extract-from-config --config-file $CLICKHOUSE_CONFIG --key=path || true)"
TMP_DIR="$(clickhouse extract-from-config --config-file $CLICKHOUSE_CONFIG --key=tmp_path || true)"
USER_PATH="$(clickhouse extract-from-config --config-file $CLICKHOUSE_CONFIG --key=user_files_path || true)"
LOG_PATH="$(clickhouse extract-from-config --config-file $CLICKHOUSE_CONFIG --key=logger.log || true)"
LOG_DIR="$(dirname $LOG_PATH || true)"
ERROR_LOG_PATH="$(clickhouse extract-from-config --config-file $CLICKHOUSE_CONFIG --key=logger.errorlog || true)"
ERROR_LOG_DIR="$(dirname $ERROR_LOG_PATH || true)"
FORMAT_SCHEMA_PATH="$(clickhouse extract-from-config --config-file $CLICKHOUSE_CONFIG --key=format_schema_path || true)"

CLICKHOUSE_USER="${CLICKHOUSE_USER:-default}"
CLICKHOUSE_PASSWORD="${CLICKHOUSE_PASSWORD:-}"
CLICKHOUSE_DB="${CLICKHOUSE_DB:-}"

for dir in "$DATA_DIR" \
  "$ERROR_LOG_DIR" \
  "$LOG_DIR" \
  "$TMP_DIR" \
  "$USER_PATH" \
  "$FORMAT_SCHEMA_PATH"
do
    # check if variable not empty
    [ -z "$dir" ] && continue
    # ensure directories exist
    if ! mkdir -p "$dir"; then
        echo "Couldn't create necessary directory: $dir"
        exit 1
    fi

    if [ "$DO_CHOWN" = "1" ]; then
        # ensure proper directories permissions
        chown -R "$USER:$GROUP" "$dir"
    elif [ "$(stat -c %u "$dir")" != "$USER" ]; then
        echo "Necessary directory '$dir' isn't owned by user with id '$USER'"
        exit 1
    fi
done

# if clickhouse user is defined - create it (user "default" already exists out of box)
if [ -n "$CLICKHOUSE_USER" ] && [ "$CLICKHOUSE_USER" != "default" ] || [ -n "$CLICKHOUSE_PASSWORD" ]; then
    echo "$0: create new user '$CLICKHOUSE_USER' instead 'default'"
    cat <<EOT > /etc/clickhouse-server/users.d/default-user.xml
    <yandex>
      <!-- Docs: <https://clickhouse.tech/docs/en/operations/settings/settings_users/> -->
      <users>
        <!-- Remove default user -->
        <default remove="remove">
        </default>

        <${CLICKHOUSE_USER}>
          <profile>default</profile>
          <networks>
            <ip>::/0</ip>
          </networks>
          <password>${CLICKHOUSE_PASSWORD}</password>
          <quota>default</quota>
        </${CLICKHOUSE_USER}>
      </users>
    </yandex>
EOT
fi

if [ -n "$(ls /docker-entrypoint-initdb.d/)" ] || [ -n "$CLICKHOUSE_DB" ]; then
    # Listen only on localhost until the initialization is done
    $gosu /usr/bin/clickhouse-server --config-file=$CLICKHOUSE_CONFIG -- --listen_host=127.0.0.1 &
    pid="$!"

    # check if clickhouse is ready to accept connections
    # will try to send ping clickhouse via http_port (max 6 retries, with 1 sec timeout and 1 sec delay between retries)
    tries=6
    while ! wget --spider -T 1 -q "http://localhost:$HTTP_PORT/ping" 2>/dev/null; do
        if [ "$tries" -le "0" ]; then
            echo >&2 'ClickHouse init process failed.'
            exit 1
        fi
        tries=$(( tries-1 ))
        sleep 1
    done

    if [ ! -z "$CLICKHOUSE_PASSWORD" ]; then
        printf -v WITH_PASSWORD '%s %q' "--password" "$CLICKHOUSE_PASSWORD"
    fi

    clickhouseclient="clickhouse-client --multiquery -u $CLICKHOUSE_USER $WITH_PASSWORD "

    # create default database, if defined
    if [ -n "$CLICKHOUSE_DB" ]; then
        echo "$0: create database '$CLICKHOUSE_DB'"
        "$clickhouseclient" -q "CREATE DATABASE IF NOT EXISTS $CLICKHOUSE_DB";
    fi

    for f in /docker-entrypoint-initdb.d/*; do
        case "$f" in
            *.sh)
                if [ -x "$f" ]; then
                    echo "$0: running $f"
                    "$f"
                else
                    echo "$0: sourcing $f"
                    . "$f"
                fi
                ;;
            *.sql)    echo "$0: running $f"; cat "$f" | "$clickhouseclient" ; echo ;;
            *.sql.gz) echo "$0: running $f"; gunzip -c "$f" | "$clickhouseclient"; echo ;;
            *)        echo "$0: ignoring $f" ;;
        esac
        echo
    done

    if ! kill -s TERM "$pid" || ! wait "$pid"; then
        echo >&2 'Finishing of ClickHouse init process failed.'
        exit 1
    fi
fi

# if no args passed to `docker run` or first argument start with `--`, then the user is passing clickhouse-server arguments
if [[ $# -lt 1 ]] || [[ "$1" == "--"* ]]; then
    exec $gosu /usr/bin/clickhouse-server --config-file=$CLICKHOUSE_CONFIG "$@"
fi

# Otherwise, we assume the user want to run his own process, for example a `bash` shell to explore this image
exec "$@"
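For reference, a minimal init script that the loop above would pick up from /docker-entrypoint-initdb.d (a hypothetical example, not shipped with the image):

```sql
-- Hypothetical /docker-entrypoint-initdb.d/01_schema.sql
CREATE DATABASE IF NOT EXISTS app;

CREATE TABLE IF NOT EXISTS app.events
(
    ts DateTime,
    message String
)
ENGINE = MergeTree
ORDER BY ts;
```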
@ -1,7 +1,7 @@
FROM ubuntu:18.04

ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
-ARG version=20.11.1.*
+ARG version=20.12.1.*

RUN apt-get update && \
    apt-get install -y apt-transport-https dirmngr && \
@ -53,6 +53,7 @@ RUN apt-get update \
    ninja-build \
    psmisc \
    python3 \
+    python3-pip \
    python3-lxml \
    python3-requests \
    python3-termcolor \
@ -62,6 +63,8 @@ RUN apt-get update \
    unixodbc \
    --yes --no-install-recommends

+RUN pip3 install numpy scipy pandas
+
# This symlink required by gcc to find lld compiler
RUN ln -s /usr/bin/lld-${LLVM_VERSION} /usr/bin/ld.lld
@ -79,6 +82,7 @@ RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

ENV COMMIT_SHA=''
ENV PULL_REQUEST_NUMBER=''
+ENV COPY_CLICKHOUSE_BINARY_TO_OUTPUT=0

COPY run.sh /
CMD ["/bin/bash", "/run.sh"]
@ -127,7 +127,7 @@ function clone_submodules
(
cd "$FASTTEST_SOURCE"

-SUBMODULES_TO_UPDATE=(contrib/boost contrib/zlib-ng contrib/libxml2 contrib/poco contrib/libunwind contrib/ryu contrib/fmtlib contrib/base64 contrib/cctz contrib/libcpuid contrib/double-conversion contrib/libcxx contrib/libcxxabi contrib/libc-headers contrib/lz4 contrib/zstd contrib/fastops contrib/rapidjson contrib/re2 contrib/sparsehash-c11)
+SUBMODULES_TO_UPDATE=(contrib/boost contrib/zlib-ng contrib/libxml2 contrib/poco contrib/libunwind contrib/ryu contrib/fmtlib contrib/base64 contrib/cctz contrib/libcpuid contrib/double-conversion contrib/libcxx contrib/libcxxabi contrib/libc-headers contrib/lz4 contrib/zstd contrib/fastops contrib/rapidjson contrib/re2 contrib/sparsehash-c11 contrib/croaring)

git submodule sync
git submodule update --init --recursive "${SUBMODULES_TO_UPDATE[@]}"
@ -172,6 +172,9 @@ function build
(
cd "$FASTTEST_BUILD"
time ninja clickhouse-bundle | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/build_log.txt"
+if [ "$COPY_CLICKHOUSE_BINARY_TO_OUTPUT" -eq "1" ]; then
+    cp programs/clickhouse "$FASTTEST_OUTPUT/clickhouse"
+fi
ccache --show-stats ||:
)
}
@ -268,7 +271,14 @@ TESTS_TO_SKIP=(
    00974_query_profiler

    # Look at DistributedFilesToInsert, so cannot run in parallel.
-    01457_DistributedFilesToInsert
+    01460_DistributedFilesToInsert
+
+    01541_max_memory_usage_for_user
+
+    # Require python libraries like scipy, pandas and numpy
+    01322_ttest_scipy
+
+    01545_system_errors
)

time clickhouse-test -j 8 --order=random --no-long --testname --shard --zookeeper --skip "${TESTS_TO_SKIP[@]}" 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/test_log.txt"
@@ -45,11 +45,11 @@ function configure
 {
 rm -rf db ||:
 mkdir db ||:
-cp -av "$repo_dir"/programs/server/config* db
-cp -av "$repo_dir"/programs/server/user* db
+cp -av --dereference "$repo_dir"/programs/server/config* db
+cp -av --dereference "$repo_dir"/programs/server/user* db
 # TODO figure out which ones are needed
-cp -av "$repo_dir"/tests/config/config.d/listen.xml db/config.d
-cp -av "$script_dir"/query-fuzzer-tweaks-users.xml db/users.d
+cp -av --dereference "$repo_dir"/tests/config/config.d/listen.xml db/config.d
+cp -av --dereference "$script_dir"/query-fuzzer-tweaks-users.xml db/users.d
 }

 function watchdog
@@ -89,7 +89,7 @@ function fuzz
 > >(tail -10000 > fuzzer.log) \
 2>&1 \
 || fuzzer_exit_code=$?

 echo "Fuzzer exit code is $fuzzer_exit_code"

 ./clickhouse-client --query "select elapsed, query from system.processes" ||:
@@ -164,7 +164,7 @@ case "$stage" in
 # Lost connection to the server. This probably means that the server died
 # with abort.
 echo "failure" > status.txt
-if ! grep -ao "Received signal.*\|Logical error.*\|Assertion.*failed" server.log > description.txt
+if ! grep -ao "Received signal.*\|Logical error.*\|Assertion.*failed\|Failed assertion.*" server.log > description.txt
 then
 echo "Lost connection to server. See the logs" > description.txt
 fi
@@ -17,7 +17,8 @@ RUN apt-get update \
 sqlite3 \
 curl \
 tar \
-krb5-user
+krb5-user \
+iproute2
 RUN rm -rf \
 /var/lib/apt/lists/* \
 /var/cache/debconf \
@@ -63,7 +63,7 @@ function configure
 # Make copies of the original db for both servers. Use hardlinks instead
 # of copying to save space. Before that, remove preprocessed configs and
 # system tables, because sharing them between servers with hardlinks may
 # lead to weird effects.
 rm -r left/db ||:
 rm -r right/db ||:
 rm -r db0/preprocessed_configs ||:
@@ -77,15 +77,12 @@ function restart
 while killall clickhouse-server; do echo . ; sleep 1 ; done
 echo all killed

-# Disable percpu arenas because they segfault when the process is bound to
-# a particular NUMA node: https://github.com/jemalloc/jemalloc/pull/1939
-#
-# About the jemalloc settings:
+# Change the jemalloc settings here.
 # https://github.com/jemalloc/jemalloc/wiki/Getting-Started
-export MALLOC_CONF="percpu_arena:disabled,confirm_conf:true"
+export MALLOC_CONF="confirm_conf:true"

 set -m # Spawn servers in their own process groups

 left/clickhouse-server --config-file=left/config/config.xml \
 -- --path left/db --user_files_path left/db/user_files \
 &>> left-server-log.log &
@@ -211,7 +208,7 @@ function run_tests
 echo test "$test_name"

 # Don't profile if we're past the time limit.
-# Use awk because bash doesn't support floating point arithmetics.
+# Use awk because bash doesn't support floating point arithmetic.
 profile_seconds=$(awk "BEGIN { print ($profile_seconds_left > 0 ? 10 : 0) }")

 TIMEFORMAT=$(printf "$test_name\t%%3R\t%%3U\t%%3S\n")
@@ -544,10 +541,10 @@ create table queries engine File(TSVWithNamesAndTypes, 'report/queries.tsv')
 as select
 abs(diff) > report_threshold and abs(diff) > stat_threshold as changed_fail,
 abs(diff) > report_threshold - 0.05 and abs(diff) > stat_threshold as changed_show,

 not changed_fail and stat_threshold > report_threshold + 0.10 as unstable_fail,
 not changed_show and stat_threshold > report_threshold - 0.05 as unstable_show,

 left, right, diff, stat_threshold,
 if(report_threshold > 0, report_threshold, 0.10) as report_threshold,
 query_metric_stats.test test, query_metric_stats.query_index query_index,
@@ -770,7 +767,7 @@ create table all_tests_report engine File(TSV, 'report/all-queries.tsv') as
 -- The threshold for 2) is significantly larger than the threshold for 1), to
 -- avoid jitter.
 create view shortness
 as select
 (test, query_index) in
 (select * from file('analyze/marked-short-queries.tsv', TSV,
 'test text, query_index int'))
@@ -1077,6 +1074,53 @@ wait
 unset IFS
 }

+function upload_results
+{
+if ! [ -v CHPC_DATABASE_URL ]
+then
+echo Database for test results is not specified, will not upload them.
+return 0
+fi
+
+# Surprisingly, clickhouse-client doesn't understand --host 127.0.0.1:9000
+# so I have to extract host and port with clickhouse-local. I tried to use
+# Poco URI parser to support this in the client, but it's broken and can't
+# parse host:port.
+set +x # Don't show password in the log
+clickhouse-client \
+$(clickhouse-local --query "with '${CHPC_DATABASE_URL}' as url select '--host ' || domain(url) || ' --port ' || toString(port(url)) format TSV") \
+--secure \
+--user "${CHPC_DATABASE_USER}" \
+--password "${CHPC_DATABASE_PASSWORD}" \
+--config "right/config/client_config.xml" \
+--database perftest \
+--date_time_input_format=best_effort \
+--query "
+insert into query_metrics_v2
+select
+toDate(event_time) event_date,
+toDateTime('$(cd right/ch && git show -s --format=%ci "$SHA_TO_TEST" | cut -d' ' -f-2)') event_time,
+$PR_TO_TEST pr_number,
+'$REF_SHA' old_sha,
+'$SHA_TO_TEST' new_sha,
+test,
+query_index,
+query_display_name,
+metric_name,
+old_value,
+new_value,
+diff,
+stat_threshold
+from input('metric_name text, old_value float, new_value float, diff float,
+ratio_display_text text, stat_threshold float,
+test text, query_index int, query_display_name text')
+settings date_time_input_format='best_effort'
+format TSV
+settings date_time_input_format='best_effort'
+" < report/all-query-metrics.tsv # Don't leave whitespace after INSERT: https://github.com/ClickHouse/ClickHouse/issues/16652
+set -x
+}
+
 # Check that local and client are in PATH
 clickhouse-local --version > /dev/null
 clickhouse-client --version > /dev/null
@@ -1148,6 +1192,9 @@ case "$stage" in
 time "$script_dir/report.py" --report=all-queries > all-queries.html 2> >(tee -a report/errors.log 1>&2) ||:
 time "$script_dir/report.py" > report.html
 ;&
+"upload_results")
+time upload_results ||:
+;&
 esac

 # Print some final debug info to help debug Weirdness, of which there is plenty.
docker/test/performance-comparison/config/client_config.xml (new file, 17 lines)
@@ -0,0 +1,17 @@
+<!--
+This config is used to upload test results to a public ClickHouse instance.
+It has bad certificates so we ignore them.
+-->
+<config>
+<openSSL>
+<client>
+<loadDefaultCAFile>true</loadDefaultCAFile>
+<cacheSessions>true</cacheSessions>
+<disableProtocols>sslv2,sslv3</disableProtocols>
+<preferServerCiphers>true</preferServerCiphers>
+<invalidCertificateHandler>
+<name>AcceptCertificateHandler</name> <!-- For tests only-->
+</invalidCertificateHandler>
+</client>
+</openSSL>
+</config>
@@ -121,6 +121,9 @@ set +e
 PATH="$(readlink -f right/)":"$PATH"
 export PATH

+export REF_PR
+export REF_SHA
+
 # Start the main comparison script.
 { \
 time ../download.sh "$REF_PR" "$REF_SHA" "$PR_TO_TEST" "$SHA_TO_TEST" && \
@@ -16,6 +16,7 @@ RUN apt-get update -y \
 python3-lxml \
 python3-requests \
 python3-termcolor \
+python3-pip \
 qemu-user-static \
 sudo \
 telnet \
@@ -23,6 +24,8 @@ RUN apt-get update -y \
 unixodbc \
 wget

+RUN pip3 install numpy scipy pandas
+
 RUN mkdir -p /tmp/clickhouse-odbc-tmp \
 && wget -nv -O - ${odbc_driver_url} | tar --strip-components=1 -xz -C /tmp/clickhouse-odbc-tmp \
 && cp /tmp/clickhouse-odbc-tmp/lib64/*.so /usr/local/lib/ \
@@ -17,14 +17,24 @@ service clickhouse-server start && sleep 5
 if grep -q -- "--use-skip-list" /usr/bin/clickhouse-test; then
 SKIP_LIST_OPT="--use-skip-list"
 fi
-# We can have several additional options so we path them as array because it's
-# more idiologically correct.
-read -ra ADDITIONAL_OPTIONS <<< "${ADDITIONAL_OPTIONS:-}"

 function run_tests()
 {
+# We can have several additional options so we path them as array because it's
+# more idiologically correct.
+read -ra ADDITIONAL_OPTIONS <<< "${ADDITIONAL_OPTIONS:-}"
+
+# Skip these tests, because they fail when we rerun them multiple times
+if [ "$NUM_TRIES" -gt "1" ]; then
+ADDITIONAL_OPTIONS+=('--skip')
+ADDITIONAL_OPTIONS+=('00000_no_tests_to_skip')
+fi
+
 for i in $(seq 1 $NUM_TRIES); do
-clickhouse-test --testname --shard --zookeeper --hung-check --print-time "$SKIP_LIST_OPT" "${ADDITIONAL_OPTIONS[@]}" "$SKIP_TESTS_OPTION" 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee -a test_output/test_result.txt
+clickhouse-test --testname --shard --zookeeper --hung-check --print-time "$SKIP_LIST_OPT" "${ADDITIONAL_OPTIONS[@]}" 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee -a test_output/test_result.txt
+if [ ${PIPESTATUS[0]} -ne "0" ]; then
+break;
+fi
 done
 }

@@ -58,6 +58,7 @@ RUN apt-get --allow-unauthenticated update -y \
 python3-lxml \
 python3-requests \
 python3-termcolor \
+python3-pip \
 qemu-user-static \
 sudo \
 telnet \
@@ -68,6 +69,8 @@ RUN apt-get --allow-unauthenticated update -y \
 wget \
 zlib1g-dev

+RUN pip3 install numpy scipy pandas
+
 RUN mkdir -p /tmp/clickhouse-odbc-tmp \
 && wget -nv -O - ${odbc_driver_url} | tar --strip-components=1 -xz -C /tmp/clickhouse-odbc-tmp \
 && cp /tmp/clickhouse-odbc-tmp/lib64/*.so /usr/local/lib/ \
@@ -1,7 +1,6 @@
 Allow to run simple ClickHouse stress test in Docker from debian packages.
-Actually it runs single copy of clickhouse-performance-test and multiple copies
-of clickhouse-test (functional tests). This allows to find problems like
-segmentation fault which cause shutdown of server.
+Actually it runs multiple copies of clickhouse-test (functional tests).
+This allows to find problems like segmentation fault which cause shutdown of server.

 Usage:
 ```
@ -68,8 +68,6 @@ if __name__ == "__main__":
|
|||||||
parser.add_argument("--test-cmd", default='/usr/bin/clickhouse-test')
|
parser.add_argument("--test-cmd", default='/usr/bin/clickhouse-test')
|
||||||
parser.add_argument("--skip-func-tests", default='')
|
parser.add_argument("--skip-func-tests", default='')
|
||||||
parser.add_argument("--client-cmd", default='clickhouse-client')
|
parser.add_argument("--client-cmd", default='clickhouse-client')
|
||||||
parser.add_argument("--perf-test-cmd", default='clickhouse-performance-test')
|
|
||||||
parser.add_argument("--perf-test-xml-path", default='/usr/share/clickhouse-test/performance/')
|
|
||||||
parser.add_argument("--server-log-folder", default='/var/log/clickhouse-server')
|
parser.add_argument("--server-log-folder", default='/var/log/clickhouse-server')
|
||||||
parser.add_argument("--output-folder")
|
parser.add_argument("--output-folder")
|
||||||
parser.add_argument("--global-time-limit", type=int, default=3600)
|
parser.add_argument("--global-time-limit", type=int, default=3600)
|
||||||
|
@@ -35,7 +35,7 @@ RUN apt-get update \
 ENV TZ=Europe/Moscow
 RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

-RUN pip3 install urllib3 testflows==1.6.57 docker-compose docker dicttoxml kazoo tzlocal
+RUN pip3 install urllib3 testflows==1.6.62 docker-compose docker dicttoxml kazoo tzlocal

 ENV DOCKER_CHANNEL stable
 ENV DOCKER_VERSION 17.09.1-ce
@@ -195,7 +195,7 @@ Templates:

 - [Function](_description_templates/template-function.md)
 - [Setting](_description_templates/template-setting.md)
-- [Table engine](_description_templates/template-table-engine.md)
+- [Database or Table engine](_description_templates/template-engine.md)
 - [System table](_description_templates/template-system-table.md)

@@ -1,8 +1,14 @@
 # EngineName {#enginename}

-- What the engine does.
+- What the Database/Table engine does.
 - Relations with other engines if they exist.

+## Creating a Database {#creating-a-database}
+``` sql
+CREATE DATABASE ...
+```
+or
+
 ## Creating a Table {#creating-a-table}
 ``` sql
 CREATE TABLE ...
@@ -10,12 +16,19 @@

 **Engine Parameters**

-**Query Clauses**
+**Query Clauses** (for Table engines only)

-## Virtual columns {#virtual-columns}
+## Virtual columns {#virtual-columns} (for Table engines only)

 List and virtual columns with description, if they exist.

+## Data Types Support {#data_types-support} (for Database engines only)
+
+| EngineName | ClickHouse |
+|-----------------------|------------------------------------|
+| NativeDataTypeName | [ClickHouseDataTypeName](link#) |
+
 ## Specifics and recommendations {#specifics-and-recommendations}

 Algorithms
@@ -18,4 +18,14 @@ toc_title: Cloud
 - Encryption and isolation
 - Automated maintenance

+## Altinity.Cloud {#altinity.cloud}
+
+[Altinity.Cloud](https://altinity.com/cloud-database/) is a fully managed ClickHouse-as-a-Service for the Amazon public cloud.
+- Fast deployment of ClickHouse clusters on Amazon resources
+- Easy scale-out/scale-in as well as vertical scaling of nodes
+- Isolated per-tenant VPCs with public endpoint or VPC peering
+- Configurable storage types and volume configurations
+- Cross-AZ scaling for performance and high availability
+- Built-in monitoring and SQL query editor
+
 {## [Original article](https://clickhouse.tech/docs/en/commercial/cloud/) ##}
@@ -189,7 +189,7 @@ Replication is implemented in the `ReplicatedMergeTree` storage engine. The path

 Replication uses an asynchronous multi-master scheme. You can insert data into any replica that has a session with `ZooKeeper`, and data is replicated to all other replicas asynchronously. Because ClickHouse doesn’t support UPDATEs, replication is conflict-free. As there is no quorum acknowledgment of inserts, just-inserted data might be lost if one node fails.

-Metadata for replication is stored in ZooKeeper. There is a replication log that lists what actions to do. Actions are: get part; merge parts; drop a partition, and so on. Each replica copies the replication log to its queue and then executes the actions from the queue. For example, on insertion, the “get the part” action is created in the log, and every replica downloads that part. Merges are coordinated between replicas to get byte-identical results. All parts are merged in the same way on all replicas. It is achieved by electing one replica as the leader, and that replica initiates merges and writes “merge parts” actions to the log.
+Metadata for replication is stored in ZooKeeper. There is a replication log that lists what actions to do. Actions are: get part; merge parts; drop a partition, and so on. Each replica copies the replication log to its queue and then executes the actions from the queue. For example, on insertion, the “get the part” action is created in the log, and every replica downloads that part. Merges are coordinated between replicas to get byte-identical results. All parts are merged in the same way on all replicas. One of the leaders initiates a new merge first and writes “merge parts” actions to the log. Multiple replicas (or all) can be leaders at the same time. A replica can be prevented from becoming a leader using the `merge_tree` setting `replicated_can_become_leader`. The leaders are responsible for scheduling background merges.

 Replication is physical: only compressed parts are transferred between nodes, not queries. Merges are processed on each replica independently in most cases to lower the network costs by avoiding network amplification. Large merged parts are sent over the network only in cases of significant replication lag.
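As a hedged illustration of the `replicated_can_become_leader` setting named in the new paragraph, a minimal sketch; the table name, schema, and ZooKeeper path are hypothetical and not part of this commit:

```sql
-- Sketch: a replicated table whose local replica is excluded from leader
-- election, so it never schedules background merges itself.
CREATE TABLE events
(
    event_date Date,
    event_id UInt64
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/events', '{replica}')
PARTITION BY toYYYYMM(event_date)
ORDER BY event_id
SETTINGS replicated_can_become_leader = 0;
```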
@@ -23,7 +23,7 @@ $ sudo apt-get install git cmake python ninja-build

 Or cmake3 instead of cmake on older systems.

-### Install GCC 9 {#install-gcc-9}
+### Install GCC 10 {#install-gcc-10}

 There are several ways to do this.

@@ -32,7 +32,7 @@ There are several ways to do this.
 On Ubuntu 19.10 or newer:

 $ sudo apt-get update
-$ sudo apt-get install gcc-9 g++-9
+$ sudo apt-get install gcc-10 g++-10

 #### Install from a PPA Package {#install-from-a-ppa-package}

@@ -42,18 +42,18 @@ On older Ubuntu:
 $ sudo apt-get install software-properties-common
 $ sudo apt-add-repository ppa:ubuntu-toolchain-r/test
 $ sudo apt-get update
-$ sudo apt-get install gcc-9 g++-9
+$ sudo apt-get install gcc-10 g++-10
 ```

 #### Install from Sources {#install-from-sources}

 See [utils/ci/build-gcc-from-sources.sh](https://github.com/ClickHouse/ClickHouse/blob/master/utils/ci/build-gcc-from-sources.sh)

-### Use GCC 9 for Builds {#use-gcc-9-for-builds}
+### Use GCC 10 for Builds {#use-gcc-10-for-builds}

 ``` bash
-$ export CC=gcc-9
-$ export CXX=g++-9
+$ export CC=gcc-10
+$ export CXX=g++-10
 ```

 ### Checkout ClickHouse Sources {#checkout-clickhouse-sources}
@@ -88,7 +88,7 @@ The build requires the following components:
 - Git (is used only to checkout the sources, it’s not needed for the build)
 - CMake 3.10 or newer
 - Ninja (recommended) or Make
-- C++ compiler: gcc 9 or clang 8 or newer
+- C++ compiler: gcc 10 or clang 8 or newer
 - Linker: lld or gold (the classic GNU ld won’t work)
 - Python (is only used inside LLVM build and it is optional)

@@ -131,13 +131,13 @@ ClickHouse uses several external libraries for building. All of them do not need

 ## C++ Compiler {#c-compiler}

-Compilers GCC starting from version 9 and Clang version 8 or above are supported for building ClickHouse.
+Compilers GCC starting from version 10 and Clang version 8 or above are supported for building ClickHouse.

 Official Yandex builds currently use GCC because it generates machine code of slightly better performance (yielding a difference of up to several percent according to our benchmarks). And Clang is more convenient for development usually. Though, our continuous integration (CI) platform runs checks for about a dozen of build combinations.

 To install GCC on Ubuntu run: `sudo apt install gcc g++`

-Check the version of gcc: `gcc --version`. If it is below 9, then follow the instruction here: https://clickhouse.tech/docs/en/development/build/#install-gcc-9.
+Check the version of gcc: `gcc --version`. If it is below 10, then follow the instruction here: https://clickhouse.tech/docs/en/development/build/#install-gcc-10.

 Mac OS X build is supported only for Clang. Just run `brew install llvm`

@@ -152,11 +152,11 @@ Now that you are ready to build ClickHouse we recommend you to create a separate

 You can have several different directories (build_release, build_debug, etc.) for different types of build.

-While inside the `build` directory, configure your build by running CMake. Before the first run, you need to define environment variables that specify compiler (version 9 gcc compiler in this example).
+While inside the `build` directory, configure your build by running CMake. Before the first run, you need to define environment variables that specify compiler (version 10 gcc compiler in this example).

 Linux:

-export CC=gcc-9 CXX=g++-9
+export CC=gcc-10 CXX=g++-10
 cmake ..

 Mac OS X:
@@ -74,9 +74,9 @@ It’s not necessarily to have unit tests if the code is already covered by func

 ## Performance Tests {#performance-tests}

-Performance tests allow to measure and compare performance of some isolated part of ClickHouse on synthetic queries. Tests are located at `tests/performance`. Each test is represented by `.xml` file with description of test case. Tests are run with `clickhouse performance-test` tool (that is embedded in `clickhouse` binary). See `--help` for invocation.
+Performance tests allow to measure and compare performance of some isolated part of ClickHouse on synthetic queries. Tests are located at `tests/performance`. Each test is represented by `.xml` file with description of test case. Tests are run with `docker/tests/performance-comparison` tool. See the readme file for invocation.

-Each test run one or multiple queries (possibly with combinations of parameters) in a loop with some conditions for stop (like “maximum execution speed is not changing in three seconds”) and measure some metrics about query performance (like “maximum execution speed”). Some tests can contain preconditions on preloaded test dataset.
+Each test run one or multiple queries (possibly with combinations of parameters) in a loop. Some tests can contain preconditions on preloaded test dataset.

 If you want to improve performance of ClickHouse in some scenario, and if improvements can be observed on simple queries, it is highly recommended to write a performance test. It always makes sense to use `perf top` or other perf tools during your tests.

@@ -51,7 +51,7 @@ Optional parameters:
 - `rabbitmq_row_delimiter` – Delimiter character, which ends the message.
 - `rabbitmq_schema` – Parameter that must be used if the format requires a schema definition. For example, [Cap’n Proto](https://capnproto.org/) requires the path to the schema file and the name of the root `schema.capnp:Message` object.
 - `rabbitmq_num_consumers` – The number of consumers per table. Default: `1`. Specify more consumers if the throughput of one consumer is insufficient.
-- `rabbitmq_num_queues` – The number of queues per consumer. Default: `1`. Specify more queues if the capacity of one queue per consumer is insufficient.
+- `rabbitmq_num_queues` – Total number of queues. Default: `1`. Increasing this number can significantly improve performance.
 - `rabbitmq_queue_base` - Specify a hint for queue names. Use cases of this setting are described below.
 - `rabbitmq_deadletter_exchange` - Specify name for a [dead letter exchange](https://www.rabbitmq.com/dlx.html). You can create another table with this exchange name and collect messages in cases when they are republished to dead letter exchange. By default dead letter exchange is not specified.
 - `rabbitmq_persistent` - If set to 1 (true), in insert query delivery mode will be set to 2 (marks messages as 'persistent'). Default: `0`.
@@ -148,4 +148,5 @@ Example:
 - `_channel_id` - ChannelID, on which consumer, who received the message, was declared.
 - `_delivery_tag` - DeliveryTag of the received message. Scoped per channel.
 - `_redelivered` - `redelivered` flag of the message.
-- `_message_id` - MessageID of the received message; non-empty if was set, when message was published.
+- `_message_id` - messageID of the received message; non-empty if was set, when message was published.
+- `_timestamp` - timestamp of the received message; non-empty if was set, when message was published.
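For orientation, a sketch of how the optional parameters listed above combine in a table definition; the host, exchange, format, and column names are hypothetical placeholders, not part of this commit:

```sql
CREATE TABLE rabbitmq_queue
(
    key UInt64,
    value String
)
ENGINE = RabbitMQ
SETTINGS rabbitmq_host_port = 'localhost:5672',
         rabbitmq_exchange_name = 'exchange1',
         rabbitmq_format = 'JSONEachRow',
         rabbitmq_num_consumers = 2,
         -- total number of queues, per the new wording above
         rabbitmq_num_queues = 4;
```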
@@ -88,7 +88,7 @@ For a description of parameters, see the [CREATE query description](../../../sql

 - `index_granularity` — Maximum number of data rows between the marks of an index. Default value: 8192. See [Data Storage](#mergetree-data-storage).
 - `index_granularity_bytes` — Maximum size of data granules in bytes. Default value: 10Mb. To restrict the granule size only by number of rows, set to 0 (not recommended). See [Data Storage](#mergetree-data-storage).
-- `min_index_granularity_bytes` — Min allowed size of data granules in bytes. Default value: 1024b. To provide safeguard against accidentally creating tables with very low index_granularity_bytes. See [Data Storage](#mergetree-data-storage).
+- `min_index_granularity_bytes` — Min allowed size of data granules in bytes. Default value: 1024b. To provide a safeguard against accidentally creating tables with very low index_granularity_bytes. See [Data Storage](#mergetree-data-storage).
 - `enable_mixed_granularity_parts` — Enables or disables transitioning to control the granule size with the `index_granularity_bytes` setting. Before version 19.11, there was only the `index_granularity` setting for restricting granule size. The `index_granularity_bytes` setting improves ClickHouse performance when selecting data from tables with big rows (tens and hundreds of megabytes). If you have tables with big rows, you can enable this setting for the tables to improve the efficiency of `SELECT` queries.
 - `use_minimalistic_part_header_in_zookeeper` — Storage method of the data parts headers in ZooKeeper. If `use_minimalistic_part_header_in_zookeeper=1`, then ZooKeeper stores less data. For more information, see the [setting description](../../../operations/server-configuration-parameters/settings.md#server-settings-use_minimalistic_part_header_in_zookeeper) in “Server configuration parameters”.
 - `min_merge_bytes_to_use_direct_io` — The minimum data volume for merge operation that is required for using direct I/O access to the storage disk. When merging data parts, ClickHouse calculates the total storage volume of all the data to be merged. If the volume exceeds `min_merge_bytes_to_use_direct_io` bytes, ClickHouse reads and writes the data to the storage disk using the direct I/O interface (`O_DIRECT` option). If `min_merge_bytes_to_use_direct_io = 0`, then direct I/O is disabled. Default value: `10 * 1024 * 1024 * 1024` bytes.
|
|||||||
|------------------------------------------------------------------------------------------------------------|-------------|--------|-------------|-------------|---------------|
|
|------------------------------------------------------------------------------------------------------------|-------------|--------|-------------|-------------|---------------|
|
||||||
| [equals (=, ==)](../../../sql-reference/functions/comparison-functions.md#function-equals) | ✔ | ✔ | ✔ | ✔ | ✔ |
|
| [equals (=, ==)](../../../sql-reference/functions/comparison-functions.md#function-equals) | ✔ | ✔ | ✔ | ✔ | ✔ |
|
||||||
| [notEquals(!=, \<\>)](../../../sql-reference/functions/comparison-functions.md#function-notequals) | ✔ | ✔ | ✔ | ✔ | ✔ |
|
| [notEquals(!=, \<\>)](../../../sql-reference/functions/comparison-functions.md#function-notequals) | ✔ | ✔ | ✔ | ✔ | ✔ |
|
||||||
| [like](../../../sql-reference/functions/string-search-functions.md#function-like) | ✔ | ✔ | ✔ | ✔ | ✔ |
|
| [like](../../../sql-reference/functions/string-search-functions.md#function-like) | ✔ | ✔ | ✔ | ✔ | ✗ |
|
||||||
| [notLike](../../../sql-reference/functions/string-search-functions.md#function-notlike) | ✔ | ✔ | ✗ | ✗ | ✗ |
|
| [notLike](../../../sql-reference/functions/string-search-functions.md#function-notlike) | ✔ | ✔ | ✔ | ✔ | ✗ |
|
||||||
| [startsWith](../../../sql-reference/functions/string-functions.md#startswith) | ✔ | ✔ | ✔ | ✔ | ✗ |
|
| [startsWith](../../../sql-reference/functions/string-functions.md#startswith) | ✔ | ✔ | ✔ | ✔ | ✗ |
|
||||||
| [endsWith](../../../sql-reference/functions/string-functions.md#endswith) | ✗ | ✗ | ✔ | ✔ | ✗ |
|
| [endsWith](../../../sql-reference/functions/string-functions.md#endswith) | ✗ | ✗ | ✔ | ✔ | ✗ |
|
||||||
| [multiSearchAny](../../../sql-reference/functions/string-search-functions.md#function-multisearchany) | ✗ | ✗ | ✔ | ✗ | ✗ |
|
| [multiSearchAny](../../../sql-reference/functions/string-search-functions.md#function-multisearchany) | ✗ | ✗ | ✔ | ✗ | ✗ |
|
||||||
|
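To make the support matrix concrete, a hedged sketch of a `like` predicate served by an `ngrambf_v1` skip index; the table, column, and index parameters are illustrative, not from this commit:

```sql
CREATE TABLE logs
(
    ts DateTime,
    message String,
    -- n-gram Bloom filter skip index over the string column
    INDEX message_idx message TYPE ngrambf_v1(3, 256, 2, 0) GRANULARITY 4
)
ENGINE = MergeTree
ORDER BY ts;

-- Per the matrix above, `like` can be accelerated by the ngrambf_v1 index.
SELECT count() FROM logs WHERE message LIKE '%timeout%';
```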
@@ -148,6 +148,31 @@ You can define the parameters explicitly instead of using substitutions. This mi

 When working with large clusters, we recommend using substitutions because they reduce the probability of error.

+You can specify default arguments for `Replicated` table engine in the server configuration file. For instance:
+
+```xml
+<default_replica_path>/clickhouse/tables/{shard}/{database}/{table}</default_replica_path>
+<default_replica_name>{replica}</default_replica_name>
+```
+
+In this case, you can omit arguments when creating tables:
+
+``` sql
+CREATE TABLE table_name (
+x UInt32
+) ENGINE = ReplicatedMergeTree
+ORDER BY x;
+```
+
+It is equivalent to:
+
+``` sql
+CREATE TABLE table_name (
+x UInt32
+) ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/{database}/table_name', '{replica}')
+ORDER BY x;
+```
+
 Run the `CREATE TABLE` query on each replica. This query creates a new replicated table, or adds a new replica to an existing one.

 If you add a new replica after the table already contains some data on other replicas, the data will be copied from the other replicas to the new one after running the query. In other words, the new replica syncs itself with the others.
@@ -98,6 +98,7 @@ When creating a table, the following settings are applied:
 - [max_bytes_in_join](../../../operations/settings/query-complexity.md#settings-max_bytes_in_join)
 - [join_overflow_mode](../../../operations/settings/query-complexity.md#settings-join_overflow_mode)
 - [join_any_take_last_row](../../../operations/settings/settings.md#settings-join_any_take_last_row)
+- [persistent](../../../operations/settings/settings.md#persistent)

 The `Join`-engine tables can’t be used in `GLOBAL JOIN` operations.

@@ -14,4 +14,10 @@ Data is always located in RAM. For `INSERT`, the blocks of inserted data are als

 For a rough server restart, the block of data on the disk might be lost or damaged. In the latter case, you may need to manually delete the file with damaged data.

+### Limitations and Settings {#join-limitations-and-settings}
+
+When creating a table, the following settings are applied:
+
+- [persistent](../../../operations/settings/settings.md#persistent)
+
 [Original article](https://clickhouse.tech/docs/en/operations/table_engines/set/) <!--hide-->
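A hedged sketch of the `persistent` setting at creation time for the `Set` engine (the same setting is listed for `Join` above); this assumes the setting can be passed in the `CREATE` query's `SETTINGS` clause, and the table and column names are hypothetical:

```sql
-- In-memory Set table; persistent = 0 (assumed) skips flushing inserted
-- blocks to disk, trading durability across restarts for faster INSERTs.
CREATE TABLE userid_set (UserID UInt64)
ENGINE = Set()
SETTINGS persistent = 0;

-- Typical usage: membership filter in another query, e.g.
-- SELECT count() FROM hits WHERE UserID IN userid_set;
```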
@@ -30,4 +30,4 @@ Instead of inserting data manually, you might consider to use one of [client lib
 - `input_format_import_nested_json` allows to insert nested JSON objects into columns of [Nested](../../sql-reference/data-types/nested-data-structures/nested.md) type.

 !!! note "Note"
-Settings are specified as `GET` parameters for the HTTP interface or as additional command-line arguments prefixed with `--` for the CLI interface.
+Settings are specified as `GET` parameters for the HTTP interface or as additional command-line arguments prefixed with `--` for the `CLI` interface.
@@ -1,5 +1,5 @@
 ---
-toc_priority: 17
+toc_priority: 19
 toc_title: AMPLab Big Data Benchmark
 ---

@@ -1,5 +1,5 @@
 ---
-toc_priority: 19
+toc_priority: 18
 toc_title: Terabyte Click Logs from Criteo
 ---

@@ -1,6 +1,6 @@
 ---
 toc_folder_title: Example Datasets
-toc_priority: 15
+toc_priority: 14
 toc_title: Introduction
 ---

@@ -18,4 +18,4 @@ The list of documented datasets:
 - [New York Taxi Data](../../getting-started/example-datasets/nyc-taxi.md)
 - [OnTime](../../getting-started/example-datasets/ontime.md)

 [Original article](https://clickhouse.tech/docs/en/getting_started/example_datasets) <!--hide-->
@@ -1,5 +1,5 @@
 ---
-toc_priority: 14
+toc_priority: 15
 toc_title: Yandex.Metrica Data
 ---

@@ -1,5 +1,5 @@
 ---
-toc_priority: 16
+toc_priority: 20
 toc_title: New York Taxi Data
 ---

@@ -1,5 +1,5 @@
 ---
-toc_priority: 15
+toc_priority: 21
 toc_title: OnTime
 ---

|
|||||||
Loading data:
|
Loading data:
|
||||||
|
|
||||||
``` bash
|
``` bash
|
||||||
$ for i in *.zip; do echo $i; unzip -cq $i '*.csv' | sed 's/\.00//g' | clickhouse-client --host=example-perftest01j --query="INSERT INTO ontime FORMAT CSVWithNames"; done
|
$ for i in *.zip; do echo $i; unzip -cq $i '*.csv' | sed 's/\.00//g' | clickhouse-client --input_format_with_names_use_header=0 --host=example-perftest01j --query="INSERT INTO ontime FORMAT CSVWithNames"; done
|
||||||
```
|
```
|
||||||
|
|
||||||
## Download of Prepared Partitions {#download-of-prepared-partitions}
|
## Download of Prepared Partitions {#download-of-prepared-partitions}
|
||||||
|
@@ -1,5 +1,5 @@
 ---
-toc_priority: 20
+toc_priority: 16
 toc_title: Star Schema Benchmark
 ---

@@ -1,5 +1,5 @@
 ---
-toc_priority: 18
+toc_priority: 17
 toc_title: WikiStat
 ---

@@ -460,7 +460,7 @@ See also the [JSONEachRow](#jsoneachrow) format.

 ## JSONString {#jsonstring}

-Differs from JSON only in that data fields are output in strings, not in typed json values.
+Differs from JSON only in that data fields are output in strings, not in typed JSON values.

 Example:

@@ -596,7 +596,7 @@ When inserting the data, you should provide a separate JSON value for each row.
 ## JSONEachRowWithProgress {#jsoneachrowwithprogress}
 ## JSONStringEachRowWithProgress {#jsonstringeachrowwithprogress}

-Differs from JSONEachRow/JSONStringEachRow in that ClickHouse will also yield progress information as JSON objects.
+Differs from `JSONEachRow`/`JSONStringEachRow` in that ClickHouse will also yield progress information as JSON values.

 ```json
 {"row":{"'hello'":"hello","multiply(42, number)":"0","range(5)":[0,1,2,3,4]}}
@@ -608,7 +608,7 @@ Differs from JSONEachRow/JSONStringEachRow in that ClickHouse will also yield pr
 ## JSONCompactEachRowWithNamesAndTypes {#jsoncompacteachrowwithnamesandtypes}
 ## JSONCompactStringEachRowWithNamesAndTypes {#jsoncompactstringeachrowwithnamesandtypes}

-Differs from JSONCompactEachRow/JSONCompactStringEachRow in that the column names and types are written as the first two rows.
+Differs from `JSONCompactEachRow`/`JSONCompactStringEachRow` in that the column names and types are written as the first two rows.

 ```json
 ["'hello'", "multiply(42, number)", "range(5)"]
@@ -79,7 +79,7 @@ By default, data is returned in TabSeparated format (for more information, see t

 You use the FORMAT clause of the query to request any other format.

-Also, you can use the ‘default_format’ URL parameter or ‘X-ClickHouse-Format’ header to specify a default format other than TabSeparated.
+Also, you can use the ‘default_format’ URL parameter or the ‘X-ClickHouse-Format’ header to specify a default format other than TabSeparated.

 ``` bash
 $ echo 'SELECT 1 FORMAT Pretty' | curl 'http://localhost:8123/?' --data-binary @-
@@ -170,7 +170,7 @@ $ echo "SELECT 1" | gzip -c | curl -sS --data-binary @- -H 'Content-Encoding: gz
 !!! note "Note"
 Some HTTP clients might decompress data from the server by default (with `gzip` and `deflate`) and you might get decompressed data even if you use the compression settings correctly.

-You can use the ‘database’ URL parameter or ‘X-ClickHouse-Database’ header to specify the default database.
+You can use the ‘database’ URL parameter or the ‘X-ClickHouse-Database’ header to specify the default database.

 ``` bash
 $ echo 'SELECT number FROM numbers LIMIT 10' | curl 'http://localhost:8123/?database=system' --data-binary @-
@@ -6,7 +6,7 @@ toc_title: Client Libraries
 # Client Libraries from Third-party Developers {#client-libraries-from-third-party-developers}

 !!! warning "Disclaimer"
-Yandex does **not** maintain the libraries listed below and haven’t done any extensive testing to ensure their quality.
+Yandex does **not** maintain the libraries listed below and hasn’t done any extensive testing to ensure their quality.

 - Python
 - [infi.clickhouse_orm](https://github.com/Infinidat/infi.clickhouse_orm)
@@ -90,6 +90,7 @@ toc_title: Adopters
 | <a href="https://www.splunk.com/" class="favicon">Splunk</a> | Business Analytics | Main product | — | — | [Slides in English, January 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup12/splunk.pdf) |
 | <a href="https://www.spotify.com" class="favicon">Spotify</a> | Music | Experimentation | — | — | [Slides, July 2018](https://www.slideshare.net/glebus/using-clickhouse-for-experimentation-104247173) |
 | <a href="https://www.staffcop.ru/" class="favicon">Staffcop</a> | Information Security | Main Product | — | — | [Official website, Documentation](https://www.staffcop.ru/sce43) |
+| <a href="https://www.teralytics.net/" class="favicon">Teralytics</a> | Mobility | Analytics | — | — | [Tech blog](https://www.teralytics.net/knowledge-hub/visualizing-mobility-data-the-scalability-challenge) |
 | <a href="https://www.tencent.com" class="favicon">Tencent</a> | Big Data | Data processing | — | — | [Slides in Chinese, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup19/5.%20ClickHouse大数据集群应用_李俊飞腾讯网媒事业部.pdf) |
 | <a href="https://www.tencent.com" class="favicon">Tencent</a> | Messaging | Logging | — | — | [Talk in Chinese, November 2019](https://youtu.be/T-iVQRuw-QY?t=5050) |
 | <a href="https://trafficstars.com/" class="favicon">Traffic Stars</a> | AD network | — | — | — | [Slides in Russian, May 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup15/lightning/ninja.pdf) |
69
docs/en/operations/opentelemetry.md
Normal file
69
docs/en/operations/opentelemetry.md
Normal file
@ -0,0 +1,69 @@
---
toc_priority: 62
toc_title: OpenTelemetry Support
---

# [experimental] OpenTelemetry Support

[OpenTelemetry](https://opentelemetry.io/) is an open standard for collecting traces and metrics from a distributed application. ClickHouse has some support for OpenTelemetry.

!!! warning "Warning"
    This is an experimental feature that will change in backwards-incompatible ways in future releases.

## Supplying Trace Context to ClickHouse

ClickHouse accepts trace context HTTP headers, as described by the [W3C recommendation](https://www.w3.org/TR/trace-context/). It also accepts trace context over the native protocol that is used for communication between ClickHouse servers or between the client and server. For manual testing, trace context headers conforming to the Trace Context recommendation can be supplied to `clickhouse-client` using the `--opentelemetry-traceparent` and `--opentelemetry-tracestate` flags.

If no parent trace context is supplied, ClickHouse can start a new trace, with probability controlled by the `opentelemetry_start_trace_probability` setting.

## Propagating the Trace Context

The trace context is propagated to downstream services in the following cases:

* Queries to remote ClickHouse servers, such as when using the `Distributed` table engine.

* The `URL` table function. Trace context information is sent in HTTP headers.

## Tracing ClickHouse Itself

ClickHouse creates _trace spans_ for each query and for some of the query execution stages, such as query planning or distributed queries.

To be useful, the tracing information has to be exported to a monitoring system that supports OpenTelemetry, such as Jaeger or Prometheus. ClickHouse avoids a dependency on a particular monitoring system, instead only providing the tracing data conforming to the standard. A natural way to do so in an SQL RDBMS is a system table. OpenTelemetry trace span information [required by the standard](https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/overview.md#span) is stored in the system table called `system.opentelemetry_span_log`.

The table must be enabled in the server configuration, see the `opentelemetry_span_log` element in the default config file `config.xml`. It is enabled by default.

The table has the following columns:

- `trace_id`
- `span_id`
- `parent_span_id`
- `operation_name`
- `start_time`
- `finish_time`
- `finish_date`
- `attribute.names`
- `attribute.values`

The tags or attributes are saved as two parallel arrays, containing the keys and values. Use `ARRAY JOIN` to work with them.
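
As an illustration, a query along these lines unrolls the attribute arrays into key-value rows (a sketch using the columns listed above; actual span contents will vary per installation):

``` sql
SELECT
    operation_name,
    start_time,
    attr_name,
    attr_value
FROM system.opentelemetry_span_log
ARRAY JOIN
    attribute.names AS attr_name,
    attribute.values AS attr_value
ORDER BY start_time DESC
LIMIT 10
```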
@ -479,6 +479,26 @@ The maximum number of simultaneously processed requests.
<max_concurrent_queries>100</max_concurrent_queries>
```

## max_concurrent_queries_for_all_users {#max-concurrent-queries-for-all-users}

Throws an exception if the value of this setting is less than or equal to the current number of simultaneously processed queries.

Example: `max_concurrent_queries_for_all_users` can be set to 99 for all users, and the database administrator can set it to 100 for themselves to run queries for investigation even when the server is overloaded.

Modifying the setting for one query or user does not affect other queries.

Default value: `0` (no limit).

**Example**

``` xml
<max_concurrent_queries_for_all_users>99</max_concurrent_queries_for_all_users>
```

**See Also**

- [max_concurrent_queries](#max-concurrent-queries)
## max_connections {#max-connections}

The maximum number of inbound connections.
@ -680,6 +680,21 @@ Example:
log_queries=1
```

## log_queries_min_query_duration_ms {#settings-log-queries-min-query-duration-ms}

Minimal time for the query to run in order to get into the following tables:

- `system.query_log`
- `system.query_thread_log`

Only queries of the following types get into the log:

- `QUERY_FINISH`
- `EXCEPTION_WHILE_PROCESSING`

- Type: milliseconds
- Default value: 0 (any query)
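
For instance, to log only queries that ran for at least one second (the threshold value here is illustrative):

``` sql
SET log_queries_min_query_duration_ms = 1000;
```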
## log_queries_min_type {#settings-log-queries-min-type}

`query_log` minimal type to log.
@ -2148,7 +2163,34 @@ Result:
└───────────────┘
```

## output_format_pretty_row_numbers {#output_format_pretty_row_numbers}

Adds row numbers to output in the [Pretty](../../interfaces/formats.md#pretty) format.

Possible values:

- 0 — Output without row numbers.
- 1 — Output with row numbers.

Default value: `0`.

**Example**

Query:

```sql
SET output_format_pretty_row_numbers = 1;
SELECT TOP 3 name, value FROM system.settings;
```

Result:

```text
   ┌─name────────────────────┬─value───┐
1. │ min_compress_block_size │ 65536   │
2. │ max_compress_block_size │ 1048576 │
3. │ max_block_size          │ 65505   │
   └─────────────────────────┴─────────┘
```

## allow_experimental_bigint_types {#allow_experimental_bigint_types}
@ -2160,3 +2202,18 @@ Possible values:
- 0 — The bigint data type is disabled.

Default value: `0`.

## persistent {#persistent}

Disables persistency for the [Set](../../engines/table-engines/special/set.md#set) and [Join](../../engines/table-engines/special/join.md#join) table engines.

Reduces the I/O overhead. Suitable for scenarios that pursue performance and do not require persistence.

Possible values:

- 1 — Enabled.
- 0 — Disabled.

Default value: `1`.
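
A minimal sketch of how this setting might be used (the table name is illustrative): disable persistency before creating a `Set` table that only needs to live for the current workload.

``` sql
SET persistent = 0;
CREATE TABLE lookup_set (id UInt64) ENGINE = Set;
```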

[Original article](https://clickhouse.tech/docs/en/operations/settings/settings/) <!-- hide -->
@ -1,6 +1,6 @@
|
|||||||
## system.asynchronous_metric_log {#system-tables-async-log}
|
## system.asynchronous_metric_log {#system-tables-async-log}
|
||||||
|
|
||||||
Contains the historical values for `system.asynchronous_metrics`, which are saved once per minute. This feature is enabled by default.
|
Contains the historical values for `system.asynchronous_metrics`, which are saved once per minute. Enabled by default.
|
||||||
|
|
||||||
Columns:
|
Columns:
|
||||||
|
|
||||||
@ -33,7 +33,7 @@ SELECT * FROM system.asynchronous_metric_log LIMIT 10
**See Also**

- [system.asynchronous_metrics](../system-tables/asynchronous_metrics.md) — Contains metrics, calculated periodically in the background.
- [system.metric_log](../system-tables/metric_log.md) — Contains history of metrics values from tables `system.metrics` and `system.events`, periodically flushed to disk.

[Original article](https://clickhouse.tech/docs/en/operations/system_tables/asynchronous_metric_log) <!--hide-->
@ -6,19 +6,21 @@ You can use this table to get information similar to the [DESCRIBE TABLE](../../
The `system.columns` table contains the following columns (the column type is shown in brackets):

- `database` ([String](../../sql-reference/data-types/string.md)) — Database name.
- `table` ([String](../../sql-reference/data-types/string.md)) — Table name.
- `name` ([String](../../sql-reference/data-types/string.md)) — Column name.
- `type` ([String](../../sql-reference/data-types/string.md)) — Column type.
- `position` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Ordinal position of a column in a table starting with 1.
- `default_kind` ([String](../../sql-reference/data-types/string.md)) — Expression type (`DEFAULT`, `MATERIALIZED`, `ALIAS`) for the default value, or an empty string if it is not defined.
- `default_expression` ([String](../../sql-reference/data-types/string.md)) — Expression for the default value, or an empty string if it is not defined.
- `data_compressed_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md)) — The size of compressed data, in bytes.
- `data_uncompressed_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md)) — The size of decompressed data, in bytes.
- `marks_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md)) — The size of marks, in bytes.
- `comment` ([String](../../sql-reference/data-types/string.md)) — Comment on the column, or an empty string if it is not defined.
- `is_in_partition_key` ([UInt8](../../sql-reference/data-types/int-uint.md)) — Flag that indicates whether the column is in the partition expression.
- `is_in_sorting_key` ([UInt8](../../sql-reference/data-types/int-uint.md)) — Flag that indicates whether the column is in the sorting key expression.
- `is_in_primary_key` ([UInt8](../../sql-reference/data-types/int-uint.md)) — Flag that indicates whether the column is in the primary key expression.
- `is_in_sampling_key` ([UInt8](../../sql-reference/data-types/int-uint.md)) — Flag that indicates whether the column is in the sampling key expression.
- `compression_codec` ([String](../../sql-reference/data-types/string.md)) — Compression codec name.
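
For example, a quick way to inspect the layout of an existing table (the table chosen here is illustrative):

``` sql
SELECT name, type, position, compression_codec
FROM system.columns
WHERE database = 'system' AND table = 'query_log'
ORDER BY position
LIMIT 5
```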

[Original article](https://clickhouse.tech/docs/en/operations/system_tables/columns) <!--hide-->
48 docs/en/operations/system-tables/crash-log.md Normal file
@ -0,0 +1,48 @@
# system.crash_log {#system-tables_crash_log}

Contains information about stack traces for fatal errors. The table does not exist in the database by default; it is created only when fatal errors occur.

Columns:

- `event_date` ([Datetime](../../sql-reference/data-types/datetime.md)) — Date of the event.
- `event_time` ([Datetime](../../sql-reference/data-types/datetime.md)) — Time of the event.
- `timestamp_ns` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Timestamp of the event with nanoseconds.
- `signal` ([Int32](../../sql-reference/data-types/int-uint.md)) — Signal number.
- `thread_id` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Thread ID.
- `query_id` ([String](../../sql-reference/data-types/string.md)) — Query ID.
- `trace` ([Array](../../sql-reference/data-types/array.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Stack trace at the moment of crash. Each element is a virtual memory address inside ClickHouse server process.
- `trace_full` ([Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md))) — Stack trace at the moment of crash. Each element contains a called method inside ClickHouse server process.
- `version` ([String](../../sql-reference/data-types/string.md)) — ClickHouse server version.
- `revision` ([UInt32](../../sql-reference/data-types/int-uint.md)) — ClickHouse server revision.
- `build_id` ([String](../../sql-reference/data-types/string.md)) — BuildID that is generated by the compiler.

**Example**

Query:

``` sql
SELECT * FROM system.crash_log ORDER BY event_time DESC LIMIT 1;
```

Result (not full):

``` text
Row 1:
──────
event_date:   2020-10-14
event_time:   2020-10-14 15:47:40
timestamp_ns: 1602679660271312710
signal:       11
thread_id:    23624
query_id:     428aab7c-8f5c-44e9-9607-d16b44467e69
trace:        [188531193,...]
trace_full:   ['3. DB::(anonymous namespace)::FunctionFormatReadableTimeDelta::executeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> >&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long) const @ 0xb3cc1f9 in /home/username/work/ClickHouse/build/programs/clickhouse',...]
version:      ClickHouse 20.11.1.1
revision:     54442
build_id:
```

**See also**

- [trace_log](../../operations/system-tables/trace_log.md) system table

[Original article](https://clickhouse.tech/docs/en/operations/system-tables/crash-log)
23 docs/en/operations/system-tables/errors.md Normal file
@ -0,0 +1,23 @@
# system.errors {#system_tables-errors}

Contains error codes with the number of times they have been triggered.

Columns:

- `name` ([String](../../sql-reference/data-types/string.md)) — name of the error (`errorCodeToName`).
- `code` ([Int32](../../sql-reference/data-types/int-uint.md)) — code number of the error.
- `value` ([UInt64](../../sql-reference/data-types/int-uint.md)) — the number of times this error has happened.

**Example**

``` sql
SELECT *
FROM system.errors
WHERE value > 0
ORDER BY code ASC
LIMIT 1

┌─name─────────────┬─code─┬─value─┐
│ CANNOT_OPEN_FILE │   76 │     1 │
└──────────────────┴──────┴───────┘
```
@ -1,6 +1,7 @@
# system.metric_log {#system_tables-metric_log}

Contains history of metrics values from tables `system.metrics` and `system.events`, periodically flushed to disk.

To turn on metrics history collection on `system.metric_log`, create `/etc/clickhouse-server/config.d/metric_log.xml` with the following content:

``` xml
@ -14,6 +15,11 @@ To turn on metrics history collection on `system.metric_log`, create `/etc/click
</yandex>
```

Columns:

- `event_date` ([Date](../../sql-reference/data-types/date.md)) — Event date.
- `event_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — Event time.
- `event_time_microseconds` ([DateTime64](../../sql-reference/data-types/datetime64.md)) — Event time with microseconds resolution.

**Example**

``` sql
148 docs/en/operations/system-tables/parts_columns.md Normal file
@ -0,0 +1,148 @@
# system.parts_columns {#system_tables-parts_columns}

Contains information about parts and columns of [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) tables.

Each row describes one data part.

Columns:

- `partition` ([String](../../sql-reference/data-types/string.md)) — The partition name. To learn what a partition is, see the description of the [ALTER](../../sql-reference/statements/alter/index.md#query_language_queries_alter) query.

    Formats:

    - `YYYYMM` for automatic partitioning by month.
    - `any_string` when partitioning manually.

- `name` ([String](../../sql-reference/data-types/string.md)) — Name of the data part.

- `part_type` ([String](../../sql-reference/data-types/string.md)) — The data part storing format.

    Possible values:

    - `Wide` — Each column is stored in a separate file in a filesystem.
    - `Compact` — All columns are stored in one file in a filesystem.

    Data storing format is controlled by the `min_bytes_for_wide_part` and `min_rows_for_wide_part` settings of the [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) table.

- `active` ([UInt8](../../sql-reference/data-types/int-uint.md)) — Flag that indicates whether the data part is active. If a data part is active, it’s used in a table. Otherwise, it’s deleted. Inactive data parts remain after merging.

- `marks` ([UInt64](../../sql-reference/data-types/int-uint.md)) — The number of marks. To get the approximate number of rows in a data part, multiply `marks` by the index granularity (usually 8192) (this hint doesn’t work for adaptive granularity).

- `rows` ([UInt64](../../sql-reference/data-types/int-uint.md)) — The number of rows.

- `bytes_on_disk` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Total size of all the data part files in bytes.

- `data_compressed_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Total size of compressed data in the data part. All the auxiliary files (for example, files with marks) are not included.

- `data_uncompressed_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Total size of uncompressed data in the data part. All the auxiliary files (for example, files with marks) are not included.

- `marks_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md)) — The size of the file with marks.

- `modification_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — The time the directory with the data part was modified. This usually corresponds to the time of data part creation.

- `remove_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — The time when the data part became inactive.

- `refcount` ([UInt32](../../sql-reference/data-types/int-uint.md)) — The number of places where the data part is used. A value greater than 2 indicates that the data part is used in queries or merges.

- `min_date` ([Date](../../sql-reference/data-types/date.md)) — The minimum value of the date key in the data part.

- `max_date` ([Date](../../sql-reference/data-types/date.md)) — The maximum value of the date key in the data part.

- `partition_id` ([String](../../sql-reference/data-types/string.md)) — ID of the partition.

- `min_block_number` ([UInt64](../../sql-reference/data-types/int-uint.md)) — The minimum number of data parts that make up the current part after merging.

- `max_block_number` ([UInt64](../../sql-reference/data-types/int-uint.md)) — The maximum number of data parts that make up the current part after merging.

- `level` ([UInt32](../../sql-reference/data-types/int-uint.md)) — Depth of the merge tree. Zero means that the current part was created by insert rather than by merging other parts.

- `data_version` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Number that is used to determine which mutations should be applied to the data part (mutations with a version higher than `data_version`).

- `primary_key_bytes_in_memory` ([UInt64](../../sql-reference/data-types/int-uint.md)) — The amount of memory (in bytes) used by primary key values.

- `primary_key_bytes_in_memory_allocated` ([UInt64](../../sql-reference/data-types/int-uint.md)) — The amount of memory (in bytes) reserved for primary key values.

- `database` ([String](../../sql-reference/data-types/string.md)) — Name of the database.

- `table` ([String](../../sql-reference/data-types/string.md)) — Name of the table.

- `engine` ([String](../../sql-reference/data-types/string.md)) — Name of the table engine without parameters.

- `disk_name` ([String](../../sql-reference/data-types/string.md)) — Name of a disk that stores the data part.

- `path` ([String](../../sql-reference/data-types/string.md)) — Absolute path to the folder with data part files.

- `column` ([String](../../sql-reference/data-types/string.md)) — Name of the column.

- `type` ([String](../../sql-reference/data-types/string.md)) — Column type.

- `column_position` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Ordinal position of a column in a table starting with 1.

- `default_kind` ([String](../../sql-reference/data-types/string.md)) — Expression type (`DEFAULT`, `MATERIALIZED`, `ALIAS`) for the default value, or an empty string if it is not defined.

- `default_expression` ([String](../../sql-reference/data-types/string.md)) — Expression for the default value, or an empty string if it is not defined.

- `column_bytes_on_disk` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Total size of the column in bytes.

- `column_data_compressed_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Total size of compressed data in the column, in bytes.

- `column_data_uncompressed_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Total size of the decompressed data in the column, in bytes.

- `column_marks_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md)) — The size of the column with marks, in bytes.

- `bytes` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Alias for `bytes_on_disk`.

- `marks_size` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Alias for `marks_bytes`.

**Example**

``` sql
SELECT * FROM system.parts_columns LIMIT 1 FORMAT Vertical;
```

``` text
Row 1:
──────
partition:                             tuple()
name:                                  all_1_2_1
part_type:                             Wide
active:                                1
marks:                                 2
rows:                                  2
bytes_on_disk:                         155
data_compressed_bytes:                 56
data_uncompressed_bytes:               4
marks_bytes:                           96
modification_time:                     2020-09-23 10:13:36
remove_time:                           2106-02-07 06:28:15
refcount:                              1
min_date:                              1970-01-01
max_date:                              1970-01-01
partition_id:                          all
min_block_number:                      1
max_block_number:                      2
level:                                 1
data_version:                          1
primary_key_bytes_in_memory:           2
primary_key_bytes_in_memory_allocated: 64
database:                              default
table:                                 53r93yleapyears
engine:                                MergeTree
disk_name:                             default
path:                                  /var/lib/clickhouse/data/default/53r93yleapyears/all_1_2_1/
column:                                id
type:                                  Int8
column_position:                       1
default_kind:
default_expression:
column_bytes_on_disk:                  76
column_data_compressed_bytes:          28
column_data_uncompressed_bytes:        2
column_marks_bytes:                    48
```

**See Also**

- [MergeTree family](../../engines/table-engines/mergetree-family/mergetree.md)

[Original article](https://clickhouse.tech/docs/en/operations/system_tables/parts_columns) <!--hide-->
@ -20,8 +20,8 @@ The `system.query_log` table registers two kinds of queries:
Each query creates one or two rows in the `query_log` table, depending on the status (see the `type` column) of the query:

1. If the query execution was successful, two rows with the `QueryStart` and `QueryFinish` types are created.
2. If an error occurred during query processing, two events with the `QueryStart` and `ExceptionWhileProcessing` types are created.
3. If an error occurred before launching the query, a single event with the `ExceptionBeforeStart` type is created.
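
For instance, a quick look at recent queries that failed before they started (an illustrative query; the available columns are listed below):

``` sql
SELECT event_time, query, exception
FROM system.query_log
WHERE type = 'ExceptionBeforeStart'
ORDER BY event_time DESC
LIMIT 5
```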

Columns:
@ -37,8 +37,8 @@ Columns:
- `query_start_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — Start time of query execution.
- `query_start_time_microseconds` ([DateTime64](../../sql-reference/data-types/datetime64.md)) — Start time of query execution with microsecond precision.
- `query_duration_ms` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Duration of query execution in milliseconds.
- `read_rows` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Total number of rows read from all tables and table functions that participated in the query. It includes usual subqueries, subqueries for `IN` and `JOIN`. For distributed queries `read_rows` includes the total number of rows read at all replicas. Each replica sends its `read_rows` value, and the server-initiator of the query summarizes all received and local values. The cache volumes don’t affect this value.
- `read_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Total number of bytes read from all tables and table functions that participated in the query. It includes usual subqueries, subqueries for `IN` and `JOIN`. For distributed queries `read_bytes` includes the total number of bytes read at all replicas. Each replica sends its `read_bytes` value, and the server-initiator of the query summarizes all received and local values. The cache volumes don’t affect this value.
- `written_rows` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — For `INSERT` queries, the number of written rows. For other queries, the column value is 0.
- `written_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — For `INSERT` queries, the number of written bytes. For other queries, the column value is 0.
- `result_rows` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Number of rows in a result of the `SELECT` query, or a number of rows in the `INSERT` query.
@ -1,6 +1,6 @@
# system.query_thread_log {#system_tables-query_thread_log}

Contains information about threads that execute queries, for example, thread name, thread start time, duration of query processing.

To start logging:
@ -53,9 +53,9 @@ Columns:
- `table` (`String`) - Table name
- `engine` (`String`) - Table engine name
- `is_leader` (`UInt8`) - Whether the replica is the leader.
    Multiple replicas can be leaders at the same time. A replica can be prevented from becoming a leader using the `merge_tree` setting `replicated_can_become_leader`. The leaders are responsible for scheduling background merges.
    Note that writes can be performed to any replica that is available and has a session in ZK, regardless of whether it is a leader.
- `can_become_leader` (`UInt8`) - Whether the replica can be a leader.
- `is_readonly` (`UInt8`) - Whether the replica is in read-only mode.
    This mode is turned on if the config doesn’t have sections with ZooKeeper, if an unknown error occurred when reinitializing sessions in ZooKeeper, and during session reinitialization in ZooKeeper.
- `is_session_expired` (`UInt8`) - Whether the session with ZooKeeper has expired. Basically the same as `is_readonly`.
@ -1,6 +1,6 @@
# system.text_log {#system_tables-text_log}

Contains logging entries. The logging level of entries that go to this table can be limited with the `text_log.level` server setting.

Columns:
@ -18,7 +18,7 @@ Columns:
- `revision` ([UInt32](../../sql-reference/data-types/int-uint.md)) — ClickHouse server build revision.

    When connecting to the server by `clickhouse-client`, you see a string similar to `Connected to ClickHouse server version 19.18.1 revision 54429.`. This field contains the `revision`, but not the `version` of a server.

- `timer_type` ([Enum8](../../sql-reference/data-types/enum.md)) — Timer type:
@ -7,6 +7,9 @@ toc_title: clickhouse-copier
Copies data from the tables in one cluster to tables in another (or the same) cluster.

!!! warning "Warning"
    To get a consistent copy, the data in the source tables and partitions should not change during the entire process.

You can run multiple `clickhouse-copier` instances on different servers to perform the same job. ZooKeeper is used for syncing the processes.

After starting, `clickhouse-copier`:
@ -16,7 +16,7 @@ By default `clickhouse-local` does not have access to data on the same host, but
!!! warning "Warning"
    It is not recommended to load production server configuration into `clickhouse-local` because data can be damaged in case of human error.

For temporary data, a unique temporary data directory is created by default. If you want to override this behavior, the data directory can be explicitly specified with the `-- --path` option.

## Usage {#usage}
@ -53,13 +53,13 @@ Result:
Similar to `quantileExact`, this computes the exact [quantile](https://en.wikipedia.org/wiki/Quantile) of a numeric data sequence.

To get the exact value, all the passed values are combined into an array, which is then fully sorted. The sorting [algorithm's](https://en.cppreference.com/w/cpp/algorithm/sort) complexity is `O(N·log(N))`, where `N = std::distance(first, last)` comparisons.

The return value depends on the quantile level and the number of elements in the selection, i.e. if the level is 0.5, then the function returns the lower median value for an even number of elements and the middle median value for an odd number of elements. Median is calculated similarly to the [median_low](https://docs.python.org/3/library/statistics.html#statistics.median_low) implementation which is used in python.

For all other levels, the element at the index corresponding to the value of `level * size_of_array` is returned. For example:

``` sql
SELECT quantileExactLow(0.1)(number) FROM numbers(10)

┌─quantileExactLow(0.1)(number)─┐
@ -111,9 +111,10 @@ Result:
Similar to `quantileExact`, this computes the exact [quantile](https://en.wikipedia.org/wiki/Quantile) of a numeric data sequence.

To get the exact value, all the passed values are combined into an array, which is then fully sorted. The sorting [algorithm's](https://en.cppreference.com/w/cpp/algorithm/sort) complexity is `O(N·log(N))`, where `N = std::distance(first, last)` comparisons.

The return value depends on the quantile level and the number of elements in the selection, i.e. if the level is 0.5, then the function returns the higher median value for an even number of elements and the middle median value for an odd number of elements. Median is calculated similarly to the [median_high](https://docs.python.org/3/library/statistics.html#statistics.median_high) implementation which is used in python. For all other levels, the element at the index corresponding to the value of `level * size_of_array` is returned.

This implementation behaves exactly like the current `quantileExact` implementation.
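
A quick check of the higher-median behaviour (a sketch: `numbers(10)` yields 0–9, whose higher median is 5):

``` sql
SELECT quantileExactHigh(0.5)(number) FROM numbers(10)

┌─quantileExactHigh(0.5)(number)─┐
│                              5 │
└────────────────────────────────┘
```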
@ -80,4 +80,4 @@ Code: 43. DB::Exception: Received from localhost:9000. DB::Exception: Wrong argu
## See Also {#see-also}

- [INTERVAL](../../../sql-reference/operators/index.md#operator-interval) operator
- [toInterval](../../../sql-reference/functions/type-conversion-functions.md#function-tointerval) type conversion functions
|
- [range_hashed](#range-hashed)
- [complex_key_hashed](#complex-key-hashed)
- [complex_key_cache](#complex-key-cache)
- [ssd_cache](#ssd-cache)
- [ssd_complex_key_cache](#complex-key-ssd-cache)
- [complex_key_direct](#complex-key-direct)
- [ip_trie](#ip-trie)
@ -89,7 +89,7 @@ If the index falls outside of the bounds of an array, it returns some default va
## has(arr, elem) {#hasarr-elem}

Checks whether the ‘arr’ array has the ‘elem’ element.
Returns 0 if the element is not in the array, or 1 if it is.

`NULL` is processed as a value.
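
A quick illustration (`NULL` is matched like a regular value):

``` sql
SELECT has([1, 2, NULL], NULL)

┌─has([1, 2, NULL], NULL)─┐
│                       1 │
└─────────────────────────┘
```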
@ -23,8 +23,6 @@ SELECT
└─────────────────────┴────────────┴────────────┴─────────────────────┘
```

## toTimeZone {#totimezone}

Convert time or date and time to the specified time zone.
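
For example (the time zone here is illustrative; the result is the same moment rendered in that zone):

``` sql
SELECT now() AS t_server, toTimeZone(t_server, 'Asia/Yekaterinburg') AS t_yekat
```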
@ -6,7 +6,7 @@ toc_title: Encoding
# Encoding Functions {#encoding-functions}

## char {#char}

Returns a string with length equal to the number of passed arguments, where each byte has the value of the corresponding argument. Accepts multiple arguments of numeric types. If the value of an argument is out of the range of the UInt8 data type, it is converted to UInt8 with possible rounding and overflow.

**Syntax**
@ -153,15 +153,18 @@ A fast, decent-quality non-cryptographic hash function for a string obtained fro
`URLHash(s, N)` – Calculates a hash from a string up to the N level in the URL hierarchy, without one of the trailing symbols `/`,`?` or `#` at the end, if present.
Levels are the same as in URLHierarchy. This function is specific to Yandex.Metrica.

## farmFingerprint64 {#farmfingerprint64}

## farmHash64 {#farmhash64}

Produces a 64-bit [FarmHash](https://github.com/google/farmhash) or Fingerprint value. Prefer `farmFingerprint64` for a stable and portable value.

``` sql
farmFingerprint64(par1, ...)
farmHash64(par1, ...)
```

These functions use the `Fingerprint64` and `Hash64` methods respectively from all [available methods](https://github.com/google/farmhash/blob/master/src/farmhash.h).

**Parameters**
@ -551,7 +551,7 @@ formatReadableTimeDelta(column[, maximum_unit])
**Parameters**

- `column` — A column with numeric time delta.
- `maximum_unit` — Optional. Maximum unit to show. Acceptable values: seconds, minutes, hours, days, months, years.

Example:
|
```

The result of the function depends on the affected data blocks and the order of data in the block.

!!! warning "Warning"
    It can reach the neighbor rows only inside the currently processed data block.

The order of rows used during the calculation of `neighbor` can differ from the order of rows returned to the user.
To prevent that, you can make a subquery with ORDER BY and call the function from outside the subquery.

**Parameters**
|
Calculates the difference between successive row values in the data block.
Returns 0 for the first row and the difference from the previous row for each subsequent row.

!!! warning "Warning"
    It can reach the previous row only inside the currently processed data block.

The result of the function depends on the affected data blocks and the order of data in the block.

The order of rows used during the calculation of `runningDifference` can differ from the order of rows returned to the user.
To prevent that, you can make a subquery with ORDER BY and call the function from outside the subquery.

Example:
|
**Parameters**

- `d` — value. [Decimal](../../sql-reference/data-types/decimal.md).
- `p` — precision. Optional. If omitted, the initial precision of the first argument is used. Using this parameter can be helpful for data extraction to another DBMS or file. [UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges).

**Returned values**
|
10	10	19	19	39	39
```

## errorCodeToName {#error-code-to-name}

Returns the textual name of an error code.

**Returned value**

- Variable name for the error code.

Type: [LowCardinality(String)](../../sql-reference/data-types/lowcardinality.md).

**Syntax**

``` sql
errorCodeToName(1)
```

Result:

``` text
UNSUPPORTED_METHOD
```

[Original article](https://clickhouse.tech/docs/en/query_language/functions/other_functions/) <!--hide-->
@ -323,6 +323,10 @@ This function accepts a number or date or date with time, and returns a string c
This function accepts a number or date or date with time, and returns a FixedString containing bytes representing the corresponding value in host order (little endian). Null bytes are dropped from the end. For example, a UInt32 type value of 255 is a FixedString that is one byte long.

## reinterpretAsUUID {#reinterpretasuuid}

This function accepts a FixedString and returns a UUID. It takes a 16-byte string. If the string isn't long enough, the function works as if the string were padded with the necessary number of null bytes at the end. If the string is longer than 16 bytes, the extra bytes at the end are ignored.
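
A minimal sketch (the input string is illustrative; the resulting UUID is simply these 16 bytes reinterpreted):

``` sql
SELECT reinterpretAsUUID(toFixedString('0123456789abcdef', 16)) AS uuid
```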

## CAST(x, T) {#type_conversion_function-cast}

Converts ‘x’ to the ‘t’ data type. The syntax CAST(x AS t) is also supported.
|
└──────────────────────────────────┘
```

## formatRowNoNewline {#formatrownonewline}

Converts arbitrary expressions into a string via a given format. The function trims the last `\n` if any.

**Syntax**

``` sql
formatRowNoNewline(format, x, y, ...)
```

**Parameters**

- `format` — Text format. For example, [CSV](../../interfaces/formats.md#csv), [TSV](../../interfaces/formats.md#tabseparated).
- `x`,`y`, ... — Expressions.

**Returned value**

- A formatted string.

**Example**

Query:

``` sql
SELECT formatRowNoNewline('CSV', number, 'good')
FROM numbers(3)
```

Result:

``` text
┌─formatRowNoNewline('CSV', number, 'good')─┐
│ 0,"good"                                  │
│ 1,"good"                                  │
│ 2,"good"                                  │
└───────────────────────────────────────────┘
```

[Original article](https://clickhouse.tech/docs/en/query_language/functions/type_conversion_functions/) <!--hide-->
@ -61,6 +61,54 @@ SELECT toUUID('61f0c404-5cb3-11e7-907b-a6006ad3dba0') AS uuid
└──────────────────────────────────────┘
```

## toUUIDOrNull (x) {#touuidornull-x}

Takes an argument of type String and tries to parse it into a UUID. If parsing fails, returns NULL.

``` sql
toUUIDOrNull(String)
```

**Returned value**

The Nullable(UUID) type value.

**Usage example**

``` sql
SELECT toUUIDOrNull('61f0c404-5cb3-11e7-907b-a6006ad3dba0T') AS uuid
```

``` text
┌─uuid─┐
│ ᴺᵁᴸᴸ │
└──────┘
```

## toUUIDOrZero (x) {#touuidorzero-x}

Takes an argument of type String and tries to parse it into a UUID. If parsing fails, returns the zero UUID.

``` sql
toUUIDOrZero(String)
```

**Returned value**

The UUID type value.

**Usage example**

``` sql
SELECT toUUIDOrZero('61f0c404-5cb3-11e7-907b-a6006ad3dba0T') AS uuid
```

``` text
┌─────────────────────────────────uuid─┐
│ 00000000-0000-0000-0000-000000000000 │
└──────────────────────────────────────┘
```

## UUIDStringToNum {#uuidstringtonum}

Accepts a string containing 36 characters in the format `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`, and returns it as a set of bytes in a [FixedString(16)](../../sql-reference/data-types/fixedstring.md).
|
---
toc_priority: 38
toc_title: Operators
---
|
- `QUARTER`
- `YEAR`

You can also use a string literal when setting the `INTERVAL` value. For example, `INTERVAL 1 HOUR` is identical to `INTERVAL '1 hour'` or `INTERVAL '1' hour`.

!!! warning "Warning"
    Intervals with different types can’t be combined. You can’t use expressions like `INTERVAL 4 DAY 1 HOUR`. Specify intervals in units that are smaller or equal to the smallest unit of the interval, for example, `INTERVAL 25 HOUR`. You can use consecutive operations, like in the example below.

Examples:

``` sql
SELECT now() AS current_date_time, current_date_time + INTERVAL 4 DAY + INTERVAL 3 HOUR;
```

``` text
┌───current_date_time─┬─plus(plus(now(), toIntervalDay(4)), toIntervalHour(3))─┐
│ 2020-11-03 22:09:50 │                                     2020-11-08 01:09:50 │
└─────────────────────┴────────────────────────────────────────────────────────┘
```

``` sql
SELECT now() AS current_date_time, current_date_time + INTERVAL '4 day' + INTERVAL '3 hour';
```

``` text
┌───current_date_time─┬─plus(plus(now(), toIntervalDay(4)), toIntervalHour(3))─┐
│ 2020-11-03 22:12:10 │                                     2020-11-08 01:12:10 │
└─────────────────────┴────────────────────────────────────────────────────────┘
```

``` sql
SELECT now() AS current_date_time, current_date_time + INTERVAL '4' day + INTERVAL '3' hour;
```

``` text
┌───current_date_time─┬─plus(plus(now(), toIntervalDay('4')), toIntervalHour('3'))─┐
│ 2020-11-03 22:33:19 │                                         2020-11-08 01:33:19 │
└─────────────────────┴────────────────────────────────────────────────────────────┘
```

**See Also**

- [Interval](../../sql-reference/data-types/special-data-types/interval.md) data type
- [toInterval](../../sql-reference/functions/type-conversion-functions.md#function-tointerval) type conversion functions

## Logical Negation Operator {#logical-negation-operator}
Some files were not shown because too many files have changed in this diff.