Merge branch 'master' into pytest (commit 7cc0ad2884)

.gitignore (vendored): 4 changed lines
@@ -16,8 +16,8 @@
/docs/publish
/docs/edit
/docs/website
/docs/venv/
/docs/tools/venv/
/docs/venv
/docs/tools/venv
/docs/tools/translate/venv
/docs/tools/translate/output.md
/docs/en/single.md
.gitmodules (vendored): 5 changed lines
@@ -13,7 +13,7 @@
url = https://github.com/edenhill/librdkafka.git
[submodule "contrib/cctz"]
    path = contrib/cctz
    url = https://github.com/google/cctz.git
    url = https://github.com/ClickHouse-Extras/cctz.git
[submodule "contrib/zlib-ng"]
    path = contrib/zlib-ng
    url = https://github.com/ClickHouse-Extras/zlib-ng.git
@@ -151,3 +151,6 @@
[submodule "website/images/feathericons"]
    path = website/images/feathericons
    url = https://github.com/feathericons/feather
[submodule "contrib/msgpack-c"]
    path = contrib/msgpack-c
    url = https://github.com/msgpack/msgpack-c
CHANGELOG.md: 132 changed lines
@@ -1,5 +1,86 @@
---
toc_folder_title: Changelog
toc_priority: 74
toc_title: '2020'
---

## ClickHouse release v20.3

### ClickHouse release v20.3.7.46, 2020-04-17

#### Bug Fix

* Fix `Logical error: CROSS JOIN has expressions` error for queries that mix comma and named joins. [#10311](https://github.com/ClickHouse/ClickHouse/pull/10311) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix queries with `max_bytes_before_external_group_by`. [#10302](https://github.com/ClickHouse/ClickHouse/pull/10302) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix move-to-prewhere optimization in presence of arrayJoin functions (in certain cases). This fixes [#10092](https://github.com/ClickHouse/ClickHouse/issues/10092). [#10195](https://github.com/ClickHouse/ClickHouse/pull/10195) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add the ability to relax the restriction on non-deterministic function usage in mutations with the `allow_nondeterministic_mutations` setting. [#10186](https://github.com/ClickHouse/ClickHouse/pull/10186) ([filimonov](https://github.com/filimonov)).

### ClickHouse release v20.3.6.40, 2020-04-16

#### New Feature

* Added function `isConstant`. This function checks whether its argument is a constant expression and returns 1 or 0. It is intended for development, debugging and demonstration purposes. [#10198](https://github.com/ClickHouse/ClickHouse/pull/10198) ([alexey-milovidov](https://github.com/alexey-milovidov)).

#### Bug Fix

* Fix error `Pipeline stuck` with `max_rows_to_group_by` and `group_by_overflow_mode = 'break'`. [#10279](https://github.com/ClickHouse/ClickHouse/pull/10279) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix rare possible exception `Cannot drain connections: cancel first`. [#10239](https://github.com/ClickHouse/ClickHouse/pull/10239) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed bug where ClickHouse would throw an "Unknown function lambda." error message when the user tries to run ALTER UPDATE/DELETE on tables with ENGINE = Replicated*. The check for nondeterministic functions now handles lambda expressions correctly. [#10237](https://github.com/ClickHouse/ClickHouse/pull/10237) ([Alexander Kazakov](https://github.com/Akazz)).
* Fixed the "generateRandom" function for the Date type. This fixes [#9973](https://github.com/ClickHouse/ClickHouse/issues/9973). Fix an edge case when dates with year 2106 are inserted into MergeTree tables with old-style partitioning but partitions are named with year 1970. [#10218](https://github.com/ClickHouse/ClickHouse/pull/10218) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Convert types if the table definition of a View does not correspond to the SELECT query. This fixes [#10180](https://github.com/ClickHouse/ClickHouse/issues/10180) and [#10022](https://github.com/ClickHouse/ClickHouse/issues/10022). [#10217](https://github.com/ClickHouse/ClickHouse/pull/10217) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix `parseDateTimeBestEffort` for strings in RFC-2822 when the day of week is Tuesday or Thursday. This fixes [#10082](https://github.com/ClickHouse/ClickHouse/issues/10082). [#10214](https://github.com/ClickHouse/ClickHouse/pull/10214) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix column names of constants inside JOIN that may clash with names of constants outside of JOIN. [#10207](https://github.com/ClickHouse/ClickHouse/pull/10207) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix possible infinite query execution when the query actually should stop on LIMIT while reading from an infinite source like `system.numbers` or `system.zeros`. [#10206](https://github.com/ClickHouse/ClickHouse/pull/10206) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix using the current database for access checking when the database isn't specified. [#10192](https://github.com/ClickHouse/ClickHouse/pull/10192) ([Vitaly Baranov](https://github.com/vitlibar)).
* Convert blocks if the structure does not match on INSERT into Distributed(). [#10135](https://github.com/ClickHouse/ClickHouse/pull/10135) ([Azat Khuzhin](https://github.com/azat)).
* Fix possible incorrect result for extremes in processors pipeline. [#10131](https://github.com/ClickHouse/ClickHouse/pull/10131) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix some kinds of alters with compact parts. [#10130](https://github.com/ClickHouse/ClickHouse/pull/10130) ([Anton Popov](https://github.com/CurtizJ)).
* Fix incorrect `index_granularity_bytes` check while creating a new replica. Fixes [#10098](https://github.com/ClickHouse/ClickHouse/issues/10098). [#10121](https://github.com/ClickHouse/ClickHouse/pull/10121) ([alesapin](https://github.com/alesapin)).
* Fix SIGSEGV on INSERT into a Distributed table when its structure differs from the underlying tables. [#10105](https://github.com/ClickHouse/ClickHouse/pull/10105) ([Azat Khuzhin](https://github.com/azat)).
* Fix possible loss of rows for queries with `JOIN` and `UNION ALL`. Fixes [#9826](https://github.com/ClickHouse/ClickHouse/issues/9826), [#10113](https://github.com/ClickHouse/ClickHouse/issues/10113). [#10099](https://github.com/ClickHouse/ClickHouse/pull/10099) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed replicated tables startup when updating from an old ClickHouse version where the `/table/replicas/replica_name/metadata` node doesn't exist. Fixes [#10037](https://github.com/ClickHouse/ClickHouse/issues/10037). [#10095](https://github.com/ClickHouse/ClickHouse/pull/10095) ([alesapin](https://github.com/alesapin)).
* Add some argument checks and support identifier arguments for the MySQL Database Engine. [#10077](https://github.com/ClickHouse/ClickHouse/pull/10077) ([Winter Zhang](https://github.com/zhang2014)).
* Fix a bug in the ClickHouse dictionary source from a localhost ClickHouse server. The bug may lead to memory corruption if types in the dictionary and source are not compatible. [#10071](https://github.com/ClickHouse/ClickHouse/pull/10071) ([alesapin](https://github.com/alesapin)).
* Fix a bug in the `CHECK TABLE` query when the table contains skip indices. [#10068](https://github.com/ClickHouse/ClickHouse/pull/10068) ([alesapin](https://github.com/alesapin)).
* Fix error `Cannot clone block with columns because block has 0 columns ... While executing GroupingAggregatedTransform`. It happened when the setting `distributed_aggregation_memory_efficient` was enabled and the distributed query read aggregated data with different levels from different shards (mixed single- and two-level aggregation). [#10063](https://github.com/ClickHouse/ClickHouse/pull/10063) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix a segmentation fault that could occur in GROUP BY over string keys containing trailing zero bytes ([#8636](https://github.com/ClickHouse/ClickHouse/issues/8636), [#8925](https://github.com/ClickHouse/ClickHouse/issues/8925)). [#10025](https://github.com/ClickHouse/ClickHouse/pull/10025) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Fix parallel distributed INSERT SELECT for remote tables. This PR fixes the solution provided in [#9759](https://github.com/ClickHouse/ClickHouse/pull/9759). [#9999](https://github.com/ClickHouse/ClickHouse/pull/9999) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix the number of threads used for remote query execution (performance regression, since 20.3). This happened when a query from a `Distributed` table was executed simultaneously on local and remote shards. Fixes [#9965](https://github.com/ClickHouse/ClickHouse/issues/9965). [#9971](https://github.com/ClickHouse/ClickHouse/pull/9971) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix a bug in which the necessary tables weren't retrieved at one of the processing stages of queries to some databases. Fixes [#9699](https://github.com/ClickHouse/ClickHouse/issues/9699). [#9949](https://github.com/ClickHouse/ClickHouse/pull/9949) ([achulkov2](https://github.com/achulkov2)).
* Fix 'Not found column in block' error when `JOIN` appears with `TOTALS`. Fixes [#9839](https://github.com/ClickHouse/ClickHouse/issues/9839). [#9939](https://github.com/ClickHouse/ClickHouse/pull/9939) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix a bug with `ON CLUSTER` DDL queries freezing on server startup. [#9927](https://github.com/ClickHouse/ClickHouse/pull/9927) ([Gagan Arneja](https://github.com/garneja)).
* Fix parsing of multiple hosts set in the CREATE USER command, e.g. `CREATE USER user6 HOST NAME REGEXP 'lo.?*host', NAME REGEXP 'lo*host'`. [#9924](https://github.com/ClickHouse/ClickHouse/pull/9924) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix `TRUNCATE` for the Join table engine ([#9917](https://github.com/ClickHouse/ClickHouse/issues/9917)). [#9920](https://github.com/ClickHouse/ClickHouse/pull/9920) ([Amos Bird](https://github.com/amosbird)).
* Fix "scalar doesn't exist" error in ALTERs ([#9878](https://github.com/ClickHouse/ClickHouse/issues/9878)). [#9904](https://github.com/ClickHouse/ClickHouse/pull/9904) ([Amos Bird](https://github.com/amosbird)).
* Fix race condition between drop and optimize in `ReplicatedMergeTree`. [#9901](https://github.com/ClickHouse/ClickHouse/pull/9901) ([alesapin](https://github.com/alesapin)).
* Fix error with qualified names in `distributed_product_mode='local'`. Fixes [#4756](https://github.com/ClickHouse/ClickHouse/issues/4756). [#9891](https://github.com/ClickHouse/ClickHouse/pull/9891) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix calculating grants for introspection functions from the setting 'allow_introspection_functions'. [#9840](https://github.com/ClickHouse/ClickHouse/pull/9840) ([Vitaly Baranov](https://github.com/vitlibar)).

#### Build/Testing/Packaging Improvement

* Fix integration test `test_settings_constraints`. [#9962](https://github.com/ClickHouse/ClickHouse/pull/9962) ([Vitaly Baranov](https://github.com/vitlibar)).
* Removed dependency on `clock_getres`. [#9833](https://github.com/ClickHouse/ClickHouse/pull/9833) ([alexey-milovidov](https://github.com/alexey-milovidov)).


### ClickHouse release v20.3.5.21, 2020-03-27

#### Bug Fix

* Fix 'Different expressions with the same alias' error when the query has PREWHERE and WHERE on a distributed table and `SET distributed_product_mode = 'local'`. [#9871](https://github.com/ClickHouse/ClickHouse/pull/9871) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix excessive memory consumption of mutations for tables with a composite primary key. This fixes [#9850](https://github.com/ClickHouse/ClickHouse/issues/9850). [#9860](https://github.com/ClickHouse/ClickHouse/pull/9860) ([alesapin](https://github.com/alesapin)).
* For INSERT queries, shards now clamp the settings received from the initiator to the shard's constraints instead of throwing an exception. This fix allows sending INSERT queries to a shard with different constraints. This change improves fix [#9447](https://github.com/ClickHouse/ClickHouse/issues/9447). [#9852](https://github.com/ClickHouse/ClickHouse/pull/9852) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix 'COMMA to CROSS JOIN rewriter is not enabled or cannot rewrite query' error in case of subqueries with COMMA JOIN outside of table lists (i.e. in WHERE). Fixes [#9782](https://github.com/ClickHouse/ClickHouse/issues/9782). [#9830](https://github.com/ClickHouse/ClickHouse/pull/9830) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix possible exception `Got 0 in totals chunk, expected 1` on the client. It happened for queries with `JOIN` when the right joined table had zero rows. Example: `select * from system.one t1 join system.one t2 on t1.dummy = t2.dummy limit 0 FORMAT TabSeparated;`. Fixes [#9777](https://github.com/ClickHouse/ClickHouse/issues/9777). [#9823](https://github.com/ClickHouse/ClickHouse/pull/9823) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix SIGSEGV with optimize_skip_unused_shards when the type cannot be converted. [#9804](https://github.com/ClickHouse/ClickHouse/pull/9804) ([Azat Khuzhin](https://github.com/azat)).
* Fix broken `ALTER TABLE DELETE COLUMN` query for compact parts. [#9779](https://github.com/ClickHouse/ClickHouse/pull/9779) ([alesapin](https://github.com/alesapin)).
* Fix max_distributed_connections (with and without Processors). [#9673](https://github.com/ClickHouse/ClickHouse/pull/9673) ([Azat Khuzhin](https://github.com/azat)).
* Fixed a few cases when the timezone of the function argument wasn't used properly. [#9574](https://github.com/ClickHouse/ClickHouse/pull/9574) ([Vasily Nemkov](https://github.com/Enmk)).

#### Improvement

* Remove the ORDER BY stage from mutations because we read from a single ordered part in a single thread. Also add a check that the rows in a mutation are ordered in sorting-key order and that this order is not violated. [#9886](https://github.com/ClickHouse/ClickHouse/pull/9886) ([alesapin](https://github.com/alesapin)).


### ClickHouse release v20.3.4.10, 2020-03-20

#### Bug Fix
@@ -255,6 +336,55 @@

## ClickHouse release v20.1

### ClickHouse release v20.1.10.70, 2020-04-17

#### Bug Fix

* Fix rare possible exception `Cannot drain connections: cancel first`. [#10239](https://github.com/ClickHouse/ClickHouse/pull/10239) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed bug where ClickHouse would throw an `'Unknown function lambda.'` error message when the user tries to run `ALTER UPDATE/DELETE` on tables with `ENGINE = Replicated*`. The check for nondeterministic functions now handles lambda expressions correctly. [#10237](https://github.com/ClickHouse/ClickHouse/pull/10237) ([Alexander Kazakov](https://github.com/Akazz)).
* Fix `parseDateTimeBestEffort` for strings in RFC-2822 when the day of week is Tuesday or Thursday. This fixes [#10082](https://github.com/ClickHouse/ClickHouse/issues/10082). [#10214](https://github.com/ClickHouse/ClickHouse/pull/10214) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix column names of constants inside `JOIN` that may clash with names of constants outside of `JOIN`. [#10207](https://github.com/ClickHouse/ClickHouse/pull/10207) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix possible infinite query execution when the query actually should stop on LIMIT while reading from an infinite source like `system.numbers` or `system.zeros`. [#10206](https://github.com/ClickHouse/ClickHouse/pull/10206) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix move-to-prewhere optimization in presence of `arrayJoin` functions (in certain cases). This fixes [#10092](https://github.com/ClickHouse/ClickHouse/issues/10092). [#10195](https://github.com/ClickHouse/ClickHouse/pull/10195) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add the ability to relax the restriction on non-deterministic function usage in mutations with the `allow_nondeterministic_mutations` setting. [#10186](https://github.com/ClickHouse/ClickHouse/pull/10186) ([filimonov](https://github.com/filimonov)).
* Convert blocks if the structure does not match on `INSERT` into a table with the `Distributed` engine. [#10135](https://github.com/ClickHouse/ClickHouse/pull/10135) ([Azat Khuzhin](https://github.com/azat)).
* Fix `SIGSEGV` on `INSERT` into a `Distributed` table when its structure differs from the underlying tables. [#10105](https://github.com/ClickHouse/ClickHouse/pull/10105) ([Azat Khuzhin](https://github.com/azat)).
* Fix possible loss of rows for queries with `JOIN` and `UNION ALL`. Fixes [#9826](https://github.com/ClickHouse/ClickHouse/issues/9826), [#10113](https://github.com/ClickHouse/ClickHouse/issues/10113). [#10099](https://github.com/ClickHouse/ClickHouse/pull/10099) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Add argument checks and support identifier arguments for the MySQL Database Engine. [#10077](https://github.com/ClickHouse/ClickHouse/pull/10077) ([Winter Zhang](https://github.com/zhang2014)).
* Fix a bug in the ClickHouse dictionary source from a localhost ClickHouse server. The bug may lead to memory corruption if types in the dictionary and source are not compatible. [#10071](https://github.com/ClickHouse/ClickHouse/pull/10071) ([alesapin](https://github.com/alesapin)).
* Fix error `Cannot clone block with columns because block has 0 columns ... While executing GroupingAggregatedTransform`. It happened when the setting `distributed_aggregation_memory_efficient` was enabled and the distributed query read aggregated data with different levels from different shards (mixed single- and two-level aggregation). [#10063](https://github.com/ClickHouse/ClickHouse/pull/10063) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix a segmentation fault that could occur in `GROUP BY` over string keys containing trailing zero bytes ([#8636](https://github.com/ClickHouse/ClickHouse/issues/8636), [#8925](https://github.com/ClickHouse/ClickHouse/issues/8925)). [#10025](https://github.com/ClickHouse/ClickHouse/pull/10025) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Fix a bug in which the necessary tables weren't retrieved at one of the processing stages of queries to some databases. Fixes [#9699](https://github.com/ClickHouse/ClickHouse/issues/9699). [#9949](https://github.com/ClickHouse/ClickHouse/pull/9949) ([achulkov2](https://github.com/achulkov2)).
* Fix `'Not found column in block'` error when `JOIN` appears with `TOTALS`. Fixes [#9839](https://github.com/ClickHouse/ClickHouse/issues/9839). [#9939](https://github.com/ClickHouse/ClickHouse/pull/9939) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix a bug with `ON CLUSTER` DDL queries freezing on server startup. [#9927](https://github.com/ClickHouse/ClickHouse/pull/9927) ([Gagan Arneja](https://github.com/garneja)).
* Fix `TRUNCATE` for the Join table engine ([#9917](https://github.com/ClickHouse/ClickHouse/issues/9917)). [#9920](https://github.com/ClickHouse/ClickHouse/pull/9920) ([Amos Bird](https://github.com/amosbird)).
* Fix `'scalar doesn't exist'` error in ALTER queries ([#9878](https://github.com/ClickHouse/ClickHouse/issues/9878)). [#9904](https://github.com/ClickHouse/ClickHouse/pull/9904) ([Amos Bird](https://github.com/amosbird)).
* Fix race condition between drop and optimize in `ReplicatedMergeTree`. [#9901](https://github.com/ClickHouse/ClickHouse/pull/9901) ([alesapin](https://github.com/alesapin)).
* Fixed `DeleteOnDestroy` logic in `ATTACH PART` which could lead to automatic removal of the attached part, and added a few tests. [#9410](https://github.com/ClickHouse/ClickHouse/pull/9410) ([Vladimir Chebotarev](https://github.com/excitoon)).

#### Build/Testing/Packaging Improvement

* Fix unit test `collapsing_sorted_stream`. [#9367](https://github.com/ClickHouse/ClickHouse/pull/9367) ([Deleted user](https://github.com/ghost)).

### ClickHouse release v20.1.9.54, 2020-03-28

#### Bug Fix

* Fix `'Different expressions with the same alias'` error when the query has `PREWHERE` and `WHERE` on a distributed table and `SET distributed_product_mode = 'local'`. [#9871](https://github.com/ClickHouse/ClickHouse/pull/9871) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix excessive memory consumption of mutations for tables with a composite primary key. This fixes [#9850](https://github.com/ClickHouse/ClickHouse/issues/9850). [#9860](https://github.com/ClickHouse/ClickHouse/pull/9860) ([alesapin](https://github.com/alesapin)).
* For INSERT queries, shards now clamp the settings received from the initiator to the shard's constraints instead of throwing an exception. This fix allows sending `INSERT` queries to a shard with different constraints. This change improves fix [#9447](https://github.com/ClickHouse/ClickHouse/issues/9447). [#9852](https://github.com/ClickHouse/ClickHouse/pull/9852) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix possible exception `Got 0 in totals chunk, expected 1` on the client. It happened for queries with `JOIN` when the right joined table had zero rows. Example: `select * from system.one t1 join system.one t2 on t1.dummy = t2.dummy limit 0 FORMAT TabSeparated;`. Fixes [#9777](https://github.com/ClickHouse/ClickHouse/issues/9777). [#9823](https://github.com/ClickHouse/ClickHouse/pull/9823) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix `SIGSEGV` with `optimize_skip_unused_shards` when the type cannot be converted. [#9804](https://github.com/ClickHouse/ClickHouse/pull/9804) ([Azat Khuzhin](https://github.com/azat)).
* Fixed a few cases when the timezone of the function argument wasn't used properly. [#9574](https://github.com/ClickHouse/ClickHouse/pull/9574) ([Vasily Nemkov](https://github.com/Enmk)).

#### Improvement

* Remove the `ORDER BY` stage from mutations because we read from a single ordered part in a single thread. Also add a check that the rows in a mutation are ordered in sorting-key order and that this order is not violated. [#9886](https://github.com/ClickHouse/ClickHouse/pull/9886) ([alesapin](https://github.com/alesapin)).

#### Build/Testing/Packaging Improvement

* Clean up duplicated linker flags. Make sure the linker won't look up an unexpected symbol. [#9433](https://github.com/ClickHouse/ClickHouse/pull/9433) ([Amos Bird](https://github.com/amosbird)).

### ClickHouse release v20.1.8.41, 2020-03-20

#### Bug Fix

@@ -641,4 +771,4 @@
#### Security Fix

* Fixed the possibility of reading directory structure in tables with the `File` table engine. This fixes [#8536](https://github.com/ClickHouse/ClickHouse/issues/8536). [#8537](https://github.com/ClickHouse/ClickHouse/pull/8537) ([alexey-milovidov](https://github.com/alexey-milovidov))

## [Changelog for 2019](https://github.com/ClickHouse/ClickHouse/blob/master/docs/en/changelog/2019.md)
## [Changelog for 2019](https://github.com/ClickHouse/ClickHouse/blob/master/docs/en/whats_new/changelog/2019.md)
@@ -213,10 +213,14 @@ if (COMPILER_CLANG)
    # TODO investigate that
    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fno-omit-frame-pointer")
    set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fno-omit-frame-pointer")

    if (OS_DARWIN)
        set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -stdlib=libc++ -Wl,-U,_inside_main")
        set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Wl,-U,_inside_main")
    endif()

    # Display absolute paths in error messages. Otherwise KDevelop fails to navigate to correct file and opens a new file instead.
    set(COMPILER_FLAGS "${COMPILER_FLAGS} -fdiagnostics-absolute-paths")
endif ()

option (ENABLE_LIBRARIES "Enable all libraries (Global default switch)" ON)

@@ -343,6 +347,7 @@ include (cmake/find/rapidjson.cmake)
include (cmake/find/fastops.cmake)
include (cmake/find/orc.cmake)
include (cmake/find/avro.cmake)
include (cmake/find/msgpack.cmake)

find_contrib_lib(cityhash)
find_contrib_lib(farmhash)
@@ -11,10 +11,11 @@ ClickHouse is an open-source column-oriented database management system that all
* [Slack](https://join.slack.com/t/clickhousedb/shared_invite/zt-d2zxkf9e-XyxDa_ucfPxzuH4SJIm~Ng) and [Telegram](https://telegram.me/clickhouse_en) allow chatting with ClickHouse users in real time.
* [Blog](https://clickhouse.yandex/blog/en/) contains various ClickHouse-related articles, as well as announcements and reports about events.
* [Contacts](https://clickhouse.tech/#contacts) can help to get your questions answered if there are any.
* You can also [fill this form](https://forms.yandex.com/surveys/meet-yandex-clickhouse-team/) to meet the Yandex ClickHouse team in person.
* You can also [fill this form](https://clickhouse.tech/#meet) to meet the Yandex ClickHouse team in person.

## Upcoming Events

* [ClickHouse in Avito (online in Russian)](https://avitotech.timepad.ru/event/1290051/) on April 9, 2020.
* [ClickHouse Online Meetup West (in English)](https://www.eventbrite.com/e/clickhouse-online-meetup-registration-102886791162) on April 24, 2020.
* [ClickHouse Online Meetup East (in English)](https://www.eventbrite.com/e/clickhouse-online-meetup-east-registration-102989325846) on April 28, 2020.
* [ClickHouse Workshop in Novosibirsk](https://2020.codefest.ru/lecture/1628) on a TBD date.
* [Yandex C++ Open-Source Sprints in Moscow](https://events.yandex.ru/events/otkrytyj-kod-v-yandek-28-03-2020) on a TBD date.
@@ -3,8 +3,10 @@ if (USE_CLANG_TIDY)
endif ()

add_subdirectory (common)
add_subdirectory (loggers)
add_subdirectory (daemon)
add_subdirectory (loggers)
add_subdirectory (pcg-random)
add_subdirectory (widechar_width)

if (USE_MYSQL)
    add_subdirectory (mysqlxx)
@@ -2,16 +2,19 @@

#include <cctz/civil_time.h>
#include <cctz/time_zone.h>
#include <cctz/zone_info_source.h>
#include <common/unaligned.h>
#include <Poco/Exception.h>

#include <dlfcn.h>

#include <algorithm>
#include <cassert>
#include <chrono>
#include <cstring>
#include <iostream>
#include <memory>

#define DATE_LUT_MIN 0


namespace
{

@@ -47,7 +50,7 @@ DateLUTImpl::DateLUTImpl(const std::string & time_zone_)
    assert(inside_main);

    size_t i = 0;
    time_t start_of_day = DATE_LUT_MIN;
    time_t start_of_day = 0;

    cctz::time_zone cctz_time_zone;
    if (!cctz::load_time_zone(time_zone, &cctz_time_zone))

@@ -133,7 +136,10 @@ DateLUTImpl::DateLUTImpl(const std::string & time_zone_)
    }

    /// Fill lookup table for years and months.
    for (size_t day = 0; day < DATE_LUT_SIZE && lut[day].year <= DATE_LUT_MAX_YEAR; ++day)
    size_t year_months_lut_index = 0;
    size_t first_day_of_last_month = 0;

    for (size_t day = 0; day < DATE_LUT_SIZE; ++day)
    {
        const Values & values = lut[day];

@@ -141,7 +147,93 @@ DateLUTImpl::DateLUTImpl(const std::string & time_zone_)
        {
            if (values.month == 1)
                years_lut[values.year - DATE_LUT_MIN_YEAR] = day;
            years_months_lut[(values.year - DATE_LUT_MIN_YEAR) * 12 + values.month - 1] = day;

            year_months_lut_index = (values.year - DATE_LUT_MIN_YEAR) * 12 + values.month - 1;
            years_months_lut[year_months_lut_index] = day;
            first_day_of_last_month = day;
        }
    }

    /// Fill the rest of lookup table with the same last month (2106-02-01).
    for (; year_months_lut_index < DATE_LUT_YEARS * 12; ++year_months_lut_index)
    {
        years_months_lut[year_months_lut_index] = first_day_of_last_month;
    }
}


#if !defined(ARCADIA_BUILD) /// Arcadia's variant of CCTZ already has the same implementation.

/// Prefer to load timezones from blobs linked to the binary.
/// The blobs are provided by "tzdata" library.
/// This allows to avoid dependency on system tzdata.
namespace cctz_extension
{
namespace
{

class Source : public cctz::ZoneInfoSource
{
public:
    Source(const char * data_, size_t size_) : data(data_), size(size_) {}

    size_t Read(void * buf, size_t bytes) override
    {
        if (bytes > size)
            bytes = size;
        memcpy(buf, data, bytes);
        data += bytes;
        size -= bytes;
        return bytes;
    }

    int Skip(size_t offset) override
    {
        if (offset <= size)
        {
            data += offset;
            size -= offset;
            return 0;
        }
        else
        {
            errno = EINVAL;
            return -1;
        }
    }

private:
    const char * data;
    size_t size;
};

std::unique_ptr<cctz::ZoneInfoSource> custom_factory(
    const std::string & name,
    const std::function<std::unique_ptr<cctz::ZoneInfoSource>(const std::string & name)> & fallback)
{
    std::string name_replaced = name;
    std::replace(name_replaced.begin(), name_replaced.end(), '/', '_');
    std::replace(name_replaced.begin(), name_replaced.end(), '-', '_');

    /// These are the names that are generated by "ld -r -b binary"
    std::string symbol_name_data = "_binary_" + name_replaced + "_start";
    std::string symbol_name_size = "_binary_" + name_replaced + "_size";

    const void * sym_data = dlsym(RTLD_DEFAULT, symbol_name_data.c_str());
    const void * sym_size = dlsym(RTLD_DEFAULT, symbol_name_size.c_str());

    if (sym_data && sym_size)
        return std::make_unique<Source>(static_cast<const char *>(sym_data), unalignedLoad<size_t>(&sym_size));

#if defined(NDEBUG)
    return fallback(name);
#else
    /// In debug builds, ensure that only embedded timezones can be loaded - this is intended for tests.
    (void)fallback;
    return nullptr;
#endif
}

}

ZoneInfoSourceFactory zone_info_source_factory = custom_factory;

}

#endif
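The `_binary_*` symbols that `custom_factory` looks up are the ones `ld -r -b binary` synthesizes from each embedded file's name, with `/` and `-` rewritten to `_`. A minimal sketch of consuming such a blob directly, assuming a file `Europe/Moscow` was linked in this way (the zone choice is only for illustration):

```cpp
// Sketch: reading a blob embedded with `ld -r -b binary -o Europe_Moscow.o Europe/Moscow`.
// The linker emits _binary_<mangled name>_start/_end/_size symbols per input file.
#include <cstddef>
#include <string_view>

extern "C" const char _binary_Europe_Moscow_start[]; // first byte of the embedded file
extern "C" const char _binary_Europe_Moscow_size[];  // absolute symbol: its *address* is the size

std::string_view embedded_tzdata()
{
    // The size symbol has no storage behind it; its address encodes the byte count.
    // That is also why the factory above reads it with unalignedLoad<size_t>(&sym_size)
    // instead of dereferencing the dlsym result.
    return {_binary_Europe_Moscow_start,
            reinterpret_cast<size_t>(_binary_Europe_Moscow_size)};
}
```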
@@ -12,7 +12,7 @@
/// Table size is bigger than DATE_LUT_MAX_DAY_NUM to fill all indices within UInt16 range: this allows to remove extra check.
#define DATE_LUT_SIZE 0x10000
#define DATE_LUT_MIN_YEAR 1970
#define DATE_LUT_MAX_YEAR 2105 /// Last supported year
#define DATE_LUT_MAX_YEAR 2106 /// Last supported year (incomplete)
#define DATE_LUT_YEARS (1 + DATE_LUT_MAX_YEAR - DATE_LUT_MIN_YEAR) /// Number of years in lookup table

#if defined(__PPC__)
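A quick sanity check of the constants in this hunk (a standalone sketch mirroring the macros, not the real header):

```cpp
#include <cstddef>

constexpr int DATE_LUT_MIN_YEAR = 1970;
constexpr int DATE_LUT_MAX_YEAR = 2106;  // last supported (incomplete) year after this change
constexpr size_t DATE_LUT_SIZE = 0x10000;
constexpr int DATE_LUT_YEARS = 1 + DATE_LUT_MAX_YEAR - DATE_LUT_MIN_YEAR;

static_assert(DATE_LUT_YEARS == 137);   // 1970..2106 inclusive
static_assert(DATE_LUT_SIZE == 65536);  // covers every value of a UInt16 day index

int main() {}
```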
@@ -99,7 +99,7 @@ private:
        return guess;

    /// Time zones that have offset 0 from UTC do daylight saving time change (if any) towards increasing UTC offset (example: British Standard Time).
    if (offset_at_start_of_epoch >= 0)
        if (t >= lut[DayNum(guess + 1)].date)
            return DayNum(guess + 1);

    return DayNum(guess - 1);

@@ -579,7 +579,7 @@ public:
        return t / 3600;

    /// Assume that if offset was fractional, then the fraction is the same as at the beginning of epoch.
    /// NOTE This assumption is false for "Pacific/Pitcairn" time zone.
    /// NOTE This assumption is false for "Pacific/Pitcairn" and "Pacific/Kiritimati" time zones.
    return (t + 86400 - offset_at_start_of_epoch) / 3600;
}
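The `+ 86400` in the returned expression deserves a note: since 86400 is a multiple of 3600, it shifts the result by exactly 24 hours, so the hour-of-day (`result % 24`) is unchanged while the numerator stays positive and integer division truncates consistently. A small sketch (the offset value is assumed, purely for illustration):

```cpp
#include <cstdint>
#include <cstdio>

int main()
{
    int64_t t = 4521;        // a timestamp shortly after the epoch
    int64_t offset = 12600;  // a fractional +03:30 offset, assumed for the example
    // Without +86400 the numerator would be negative here and truncate toward zero.
    int64_t hours = (t + 86400 - offset) / 3600;
    std::printf("%lld %lld\n", (long long)hours, (long long)(hours % 24)); // prints: 21 21
}
```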
@@ -127,7 +127,7 @@ LineReader::InputStatus LineReader::readOneLine(const String & prompt)
#ifdef OS_LINUX
    if (!readline_ptr)
    {
        for (auto name : {"libreadline.so", "libreadline.so.0", "libeditline.so", "libeditline.so.0"})
        for (const auto * name : {"libreadline.so", "libreadline.so.0", "libeditline.so", "libeditline.so.0"})
        {
            void * dl_handle = dlopen(name, RTLD_LAZY);
            if (dl_handle)
@@ -37,13 +37,13 @@ ReplxxLineReader::ReplxxLineReader(const Suggest & suggest, const String & histo

    /// By default C-p/C-n binded to COMPLETE_NEXT/COMPLETE_PREV,
    /// bind C-p/C-n to history-previous/history-next like readline.
    rx.bind_key(Replxx::KEY::control('N'), std::bind(&Replxx::invoke, &rx, Replxx::ACTION::HISTORY_NEXT, _1));
    rx.bind_key(Replxx::KEY::control('P'), std::bind(&Replxx::invoke, &rx, Replxx::ACTION::HISTORY_PREVIOUS, _1));
    rx.bind_key(Replxx::KEY::control('N'), [this](char32_t code) { return rx.invoke(Replxx::ACTION::HISTORY_NEXT, code); });
    rx.bind_key(Replxx::KEY::control('P'), [this](char32_t code) { return rx.invoke(Replxx::ACTION::HISTORY_PREVIOUS, code); });
    /// By default COMPLETE_NEXT/COMPLETE_PREV was binded to C-p/C-n, re-bind
    /// to M-P/M-N (that was used for HISTORY_COMMON_PREFIX_SEARCH before, but
    /// it also binded to M-p/M-n).
    rx.bind_key(Replxx::KEY::meta('N'), std::bind(&Replxx::invoke, &rx, Replxx::ACTION::COMPLETE_NEXT, _1));
    rx.bind_key(Replxx::KEY::meta('P'), std::bind(&Replxx::invoke, &rx, Replxx::ACTION::COMPLETE_PREVIOUS, _1));
    rx.bind_key(Replxx::KEY::meta('N'), [this](char32_t code) { return rx.invoke(Replxx::ACTION::COMPLETE_NEXT, code); });
    rx.bind_key(Replxx::KEY::meta('P'), [this](char32_t code) { return rx.invoke(Replxx::ACTION::COMPLETE_PREVIOUS, code); });
}

ReplxxLineReader::~ReplxxLineReader()
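The rewrite from `std::bind` to lambdas above is behavior-preserving; a self-contained sketch of the equivalence (a stand-in `Rx` type, not the real Replxx API):

```cpp
#include <cstdio>
#include <functional>

struct Rx { int invoke(int action, char32_t code) { return action + static_cast<int>(code); } };

int main()
{
    Rx rx;
    using namespace std::placeholders;
    auto bound  = std::bind(&Rx::invoke, &rx, 1, _1);                   // old style
    auto lambda = [&rx](char32_t code) { return rx.invoke(1, code); };  // new style
    std::printf("%d\n", bound(U'x') == lambda(U'x'));                   // prints 1
}
```

The lambda form reads better and avoids pulling `std::placeholders` into scope.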
@@ -11,7 +11,7 @@ void argsToConfig(const Poco::Util::Application::ArgVec & argv, Poco::Util::Laye
    /// Test: -- --1=1 --1=2 --3 5 7 8 -9 10 -11=12 14= 15== --16==17 --=18 --19= --20 21 22 --23 --24 25 --26 -27 28 ---29=30 -- ----31 32 --33 3-4
    Poco::AutoPtr<Poco::Util::MapConfiguration> map_config = new Poco::Util::MapConfiguration;
    std::string key;
    for (auto & arg : argv)
    for (const auto & arg : argv)
    {
        auto key_start = arg.find_first_not_of('-');
        auto pos_minus = arg.find('-');
@@ -70,7 +70,7 @@ extern "C"
#endif
int dl_iterate_phdr(int (*callback) (dl_phdr_info * info, size_t size, void * data), void * data)
{
    auto current_phdr_cache = phdr_cache.load();
    auto * current_phdr_cache = phdr_cache.load();
    if (!current_phdr_cache)
    {
        // Cache is not yet populated, pass through to the original function.
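The `auto *` spelling makes the pointer-ness of the cached value explicit. The surrounding pattern is a lock-free read of an atomic pointer with a pass-through while the cache is unpopulated; roughly (a sketch only, with an opaque stand-in type):

```cpp
#include <atomic>

struct PhdrCache;                         // opaque here
std::atomic<PhdrCache *> phdr_cache{nullptr};

const PhdrCache * cached_or_null()
{
    auto * current = phdr_cache.load();   // lock-free on the hot path
    return current;                       // nullptr => caller falls back to the original dl_iterate_phdr
}
```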
@@ -1,15 +1,9 @@
#pragma once

#include <boost/operators.hpp>
#include <type_traits>

/** https://svn.boost.org/trac/boost/ticket/5182
  */

template <class T, class Tag>
struct StrongTypedef
    : boost::totally_ordered1< StrongTypedef<T, Tag>
    , boost::totally_ordered2< StrongTypedef<T, Tag>, T> >
{
private:
    using Self = StrongTypedef;
@@ -11,6 +11,10 @@ using Int16 = int16_t;
using Int32 = int32_t;
using Int64 = int64_t;

#if __cplusplus <= 201703L
using char8_t = unsigned char;
#endif

using UInt8 = char8_t;
using UInt16 = uint16_t;
using UInt32 = uint32_t;
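The guard above exists because C++20 makes `char8_t` a distinct builtin type, while older standards have no such type at all; there, the alias fills the gap so `UInt8` stays a one-byte unsigned type. A compile-time sketch of the distinction:

```cpp
#include <type_traits>

#if __cplusplus > 201703L
// In C++20, char8_t is its own type, not an alias of unsigned char,
// so the pre-C++20 `using char8_t = unsigned char;` would be ill-formed.
static_assert(!std::is_same_v<char8_t, unsigned char>);
#endif

int main() {}
```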
@@ -1,12 +1,47 @@
LIBRARY()

ADDINCL(
    GLOBAL clickhouse/base
    contrib/libs/cctz/include
)

CFLAGS (GLOBAL -DARCADIA_BUILD)

IF (OS_DARWIN)
    CFLAGS (GLOBAL -DOS_DARWIN)
ELSEIF (OS_FREEBSD)
    CFLAGS (GLOBAL -DOS_FREEBSD)
ELSEIF (OS_LINUX)
    CFLAGS (GLOBAL -DOS_LINUX)
ENDIF ()

PEERDIR(
    contrib/libs/cctz/src
    contrib/libs/cxxsupp/libcxx-filesystem
    contrib/libs/poco/Net
    contrib/libs/poco/Util
    contrib/restricted/boost
    contrib/restricted/cityhash-1.0.2
)

SRCS(
    argsToConfig.cpp
    coverage.cpp
    DateLUT.cpp
    DateLUTImpl.cpp
    demangle.cpp
    getFQDNOrHostName.cpp
    getMemoryAmount.cpp
    getThreadId.cpp
    JSON.cpp
    LineReader.cpp
    mremap.cpp
    phdr_cache.cpp
    preciseExp10.c
    setTerminalEcho.cpp
    shift10.cpp
    sleep.cpp
    terminalColors.cpp
)

END()
@@ -50,11 +50,13 @@
#include <Common/getMultipleKeysFromConfig.h>
#include <Common/ClickHouseRevision.h>
#include <Common/Config/ConfigProcessor.h>
#include <Common/config_version.h>

#ifdef __APPLE__
// ucontext is not available without _XOPEN_SOURCE
#define _XOPEN_SOURCE 700
#if !defined(ARCADIA_BUILD)
#   include <Common/config_version.h>
#endif

#if defined(OS_DARWIN)
#   define _XOPEN_SOURCE 700 // ucontext is not available without _XOPEN_SOURCE
#endif
#include <ucontext.h>

@@ -231,7 +233,6 @@ private:
    Logger * log;
    BaseDaemon & daemon;

private:
    void onTerminate(const std::string & message, UInt32 thread_num) const
    {
        LOG_FATAL(log, "(version " << VERSION_STRING << VERSION_OFFICIAL << ") (from thread " << thread_num << ") " << message);

@@ -410,7 +411,7 @@ std::string BaseDaemon::getDefaultCorePath() const

void BaseDaemon::closeFDs()
{
#if defined(__FreeBSD__) || (defined(__APPLE__) && defined(__MACH__))
#if defined(OS_FREEBSD) || defined(OS_DARWIN)
    Poco::File proc_path{"/dev/fd"};
#else
    Poco::File proc_path{"/proc/self/fd"};

@@ -430,7 +431,7 @@ void BaseDaemon::closeFDs()
    else
    {
        int max_fd = -1;
#ifdef _SC_OPEN_MAX
#if defined(_SC_OPEN_MAX)
        max_fd = sysconf(_SC_OPEN_MAX);
        if (max_fd == -1)
#endif
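For context, the fallback path in `closeFDs` that this hunk touches closes descriptors up to an upper bound when the fd directory cannot be listed; a sketch of that shape (values assumed, not the exact ClickHouse code):

```cpp
#include <unistd.h>

void close_fds_fallback()
{
    long max_fd = -1;
#if defined(_SC_OPEN_MAX)
    max_fd = sysconf(_SC_OPEN_MAX);  // may still return -1 if indeterminate
#endif
    if (max_fd == -1)
        max_fd = 256;                // conservative guess, assumed for this sketch
    for (long fd = 3; fd < max_fd; ++fd)  // keep stdin/stdout/stderr open
        close(static_cast<int>(fd));
}
```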
@@ -448,7 +449,7 @@ namespace
/// the maximum is 1000, and chromium uses 300 for its tab processes. Ignore
/// whatever errors that occur, because it's just a debugging aid and we don't
/// care if it breaks.
#if defined(__linux__) && !defined(NDEBUG)
#if defined(OS_LINUX) && !defined(NDEBUG)
void debugIncreaseOOMScore()
{
    const std::string new_score = "555";
base/daemon/ya.make (new file): 14 lines
@@ -0,0 +1,14 @@
LIBRARY()

NO_COMPILER_WARNINGS()

PEERDIR(
    clickhouse/src/Common
)

SRCS(
    BaseDaemon.cpp
    GraphiteWriter.cpp
)

END()
@@ -75,7 +75,11 @@ void OwnPatternFormatter::formatExtended(const DB::ExtendedLogMessage & msg_ext,
    if (color)
        writeCString(resetColor(), wb);
    writeCString("> ", wb);
    if (color)
        writeString(setColor(std::hash<std::string>()(msg.getSource())), wb);
    DB::writeString(msg.getSource(), wb);
    if (color)
        writeCString(resetColor(), wb);
    writeCString(": ", wb);
    DB::writeString(msg.getText(), wb);
}
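The `setColor(std::hash<std::string>()(msg.getSource()))` call gives each logger source a stable color derived from its name hash, so the same source is colored the same way on every line. The idea in isolation (the `set_color` helper below is hypothetical, not the real one):

```cpp
#include <cstdio>
#include <functional>
#include <string>

// Hypothetical helper: map a hash to one of the basic ANSI foreground colors.
static const char * set_color(size_t hash)
{
    static const char * colors[] = {"\x1b[31m", "\x1b[32m", "\x1b[33m",
                                    "\x1b[34m", "\x1b[35m", "\x1b[36m"};
    return colors[hash % 6];
}

int main()
{
    std::string source = "executeQuery";
    // Same source name => same hash => same color on every log line.
    std::printf("%s%s\x1b[0m: message text\n",
                set_color(std::hash<std::string>()(source)), source.c_str());
}
```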
@@ -20,7 +20,7 @@ void OwnSplitChannel::log(const Poco::Message & msg)
    if (channels.empty() && (logs_queue == nullptr || msg.getPriority() > logs_queue->max_priority))
        return;

    if (auto masker = SensitiveDataMasker::getInstance())
    if (auto * masker = SensitiveDataMasker::getInstance())
    {
        auto message_text = msg.getText();
        auto matches = masker->wipeSensitiveData(message_text);
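As in the `phdr_cache` hunk, `auto *` documents that `getInstance()` yields a pointer, and declaring it in the `if` condition keeps the masker's scope to the branch. A minimal illustration with stand-in types:

```cpp
#include <cstdio>

struct Masker
{
    static Masker * getInstance() { static Masker m; return &m; }  // may return nullptr in general
    int wipeSensitiveData() { return 0; }
};

int main()
{
    if (auto * masker = Masker::getInstance())  // pointer visible only inside the branch
        std::printf("%d\n", masker->wipeSensitiveData());
}
```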
base/loggers/ya.make (new file): 15 lines
@@ -0,0 +1,15 @@
LIBRARY()

PEERDIR(
    clickhouse/src/Common
)

SRCS(
    ExtendedLogChannel.cpp
    Loggers.cpp
    OwnFormattingChannel.cpp
    OwnPatternFormatter.cpp
    OwnSplitChannel.cpp
)

END()
base/pcg-random/CMakeLists.txt (new file): 2 lines
@@ -0,0 +1,2 @@
add_library(pcg_random INTERFACE)
target_include_directories(pcg_random INTERFACE .)
@@ -292,7 +292,7 @@ inline itype rotl(itype value, bitcount_t rot)
{
    constexpr bitcount_t bits = sizeof(itype) * 8;
    constexpr bitcount_t mask = bits - 1;
#if PCG_USE_ZEROCHECK_ROTATE_IDIOM
#if defined(PCG_USE_ZEROCHECK_ROTATE_IDIOM)
    return rot ? (value << rot) | (value >> (bits - rot)) : value;
#else
    return (value << rot) | (value >> ((- rot) & mask));

@@ -304,7 +304,7 @@ inline itype rotr(itype value, bitcount_t rot)
{
    constexpr bitcount_t bits = sizeof(itype) * 8;
    constexpr bitcount_t mask = bits - 1;
#if PCG_USE_ZEROCHECK_ROTATE_IDIOM
#if defined(PCG_USE_ZEROCHECK_ROTATE_IDIOM)
    return rot ? (value >> rot) | (value << (bits - rot)) : value;
#else
    return (value >> rot) | (value << ((- rot) & mask));

@@ -318,7 +318,7 @@ inline itype rotr(itype value, bitcount_t rot)
 *
 * These overloads will be preferred over the general template code above.
 */
#if PCG_USE_INLINE_ASM && __GNUC__ && (__x86_64__ || __i386__)
#if defined(PCG_USE_INLINE_ASM) && __GNUC__ && (__x86_64__ || __i386__)

inline uint8_t rotr(uint8_t value, bitcount_t rot)
{

@@ -600,7 +600,7 @@ std::ostream& operator<<(std::ostream& out, printable_typename<T>) {
#ifdef __GNUC__
    int status;
    char* pretty_name =
        abi::__cxa_demangle(implementation_typename, NULL, NULL, &status);
        abi::__cxa_demangle(implementation_typename, nullptr, nullptr, &status);
    if (status == 0)
        out << pretty_name;
    free(static_cast<void*>(pretty_name));
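All three preprocessor changes in this file swap `#if FOO` for `#if defined(FOO)`. The two forms differ: `#if FOO` evaluates the macro's value (treating an undefined macro as 0, and warning under `-Wundef`), while `#if defined(FOO)` only tests presence. A sketch of the difference:

```cpp
#include <cstdio>

#define FEATURE 0  // defined, but with value 0

int main()
{
#if FEATURE
    std::puts("by value: on");
#else
    std::puts("by value: off");          // taken: 0 is false
#endif
#if defined(FEATURE)
    std::puts("by presence: defined");   // taken: the macro exists
#endif
}
```

Note the swap only preserves behavior when the macro is never defined to 0, which is presumably the case for these PCG configuration knobs.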
base/pcg-random/ya.make (new file): 5 lines
@@ -0,0 +1,5 @@
LIBRARY()

ADDINCL (GLOBAL clickhouse/base/pcg-random)

END()
base/widechar_width/ya.make (new file): 9 lines
@@ -0,0 +1,9 @@
LIBRARY()

ADDINCL(GLOBAL clickhouse/base/widechar_width)

SRCS(
    widechar_width.cpp
)

END()
@@ -1,3 +1,7 @@
RECURSE(
    common
    daemon
    loggers
    pcg-random
    widechar_width
)
cmake/find/msgpack.cmake (new file): 17 lines
@@ -0,0 +1,17 @@
option (USE_INTERNAL_MSGPACK_LIBRARY "Set to FALSE to use system msgpack library instead of bundled" ${NOT_UNBUNDLED})

if (USE_INTERNAL_MSGPACK_LIBRARY)
    if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/msgpack-c/include/msgpack.hpp")
        message(WARNING "Submodule contrib/msgpack-c is missing. To fix try run: \n git submodule update --init --recursive")
        set(USE_INTERNAL_MSGPACK_LIBRARY 0)
        set(MISSING_INTERNAL_MSGPACK_LIBRARY 1)
    endif()
endif()

if (USE_INTERNAL_MSGPACK_LIBRARY)
    set(MSGPACK_INCLUDE_DIR ${ClickHouse_SOURCE_DIR}/contrib/msgpack-c/include)
else()
    find_path(MSGPACK_INCLUDE_DIR NAMES msgpack.hpp PATHS ${MSGPACK_INCLUDE_PATHS})
endif()

message(STATUS "Using msgpack: ${MSGPACK_INCLUDE_DIR}")
@@ -4,7 +4,11 @@ if (NOT COMPILER_CLANG)
    message (FATAL_ERROR "FreeBSD build is supported only for Clang")
endif ()

execute_process (COMMAND ${CMAKE_CXX_COMPILER} --print-file-name=libclang_rt.builtins-${CMAKE_SYSTEM_PROCESSOR}.a OUTPUT_VARIABLE BUILTINS_LIBRARY OUTPUT_STRIP_TRAILING_WHITESPACE)
if (${CMAKE_SYSTEM_PROCESSOR} STREQUAL "amd64")
    execute_process (COMMAND ${CMAKE_CXX_COMPILER} --print-file-name=libclang_rt.builtins-x86_64.a OUTPUT_VARIABLE BUILTINS_LIBRARY OUTPUT_STRIP_TRAILING_WHITESPACE)
else ()
    execute_process (COMMAND ${CMAKE_CXX_COMPILER} --print-file-name=libclang_rt.builtins-${CMAKE_SYSTEM_PROCESSOR}.a OUTPUT_VARIABLE BUILTINS_LIBRARY OUTPUT_STRIP_TRAILING_WHITESPACE)
endif ()

set (DEFAULT_LIBS "${DEFAULT_LIBS} ${BUILTINS_LIBRARY} ${COVERAGE_OPTION} -lc -lm -lrt -lpthread")
@@ -2,4 +2,3 @@ set(DIVIDE_INCLUDE_DIR ${ClickHouse_SOURCE_DIR}/contrib/libdivide)
set(DBMS_INCLUDE_DIR ${ClickHouse_SOURCE_DIR}/src ${ClickHouse_BINARY_DIR}/src)
set(DOUBLE_CONVERSION_CONTRIB_INCLUDE_DIR ${ClickHouse_SOURCE_DIR}/contrib/double-conversion)
set(METROHASH_CONTRIB_INCLUDE_DIR ${ClickHouse_SOURCE_DIR}/contrib/libmetrohash/src)
set(PCG_RANDOM_INCLUDE_DIR ${ClickHouse_SOURCE_DIR}/contrib/libpcg-random/include)
@@ -6,18 +6,18 @@ endif ()

if (COMPILER_GCC)
    # Require minimum version of gcc
    set (GCC_MINIMUM_VERSION 8)
    set (GCC_MINIMUM_VERSION 9)
    if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS ${GCC_MINIMUM_VERSION} AND NOT CMAKE_VERSION VERSION_LESS 2.8.9)
        message (FATAL_ERROR "GCC version must be at least ${GCC_MINIMUM_VERSION}. For example, if GCC ${GCC_MINIMUM_VERSION} is available under gcc-${GCC_MINIMUM_VERSION}, g++-${GCC_MINIMUM_VERSION} names, do the following: export CC=gcc-${GCC_MINIMUM_VERSION} CXX=g++-${GCC_MINIMUM_VERSION}; rm -rf CMakeCache.txt CMakeFiles; and re run cmake or ./release.")
    endif ()
elseif (COMPILER_CLANG)
    # Require minimum version of clang
    set (CLANG_MINIMUM_VERSION 7)
    set (CLANG_MINIMUM_VERSION 8)
    if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS ${CLANG_MINIMUM_VERSION})
        message (FATAL_ERROR "Clang version must be at least ${CLANG_MINIMUM_VERSION}.")
    endif ()
else ()
    message (WARNING "You are using an unsupported compiler. Compilation has only been tested with Clang 6+ and GCC 7+.")
    message (WARNING "You are using an unsupported compiler. Compilation has only been tested with Clang and GCC.")
endif ()

STRING(REGEX MATCHALL "[0-9]+" COMPILER_VERSION_LIST ${CMAKE_CXX_COMPILER_VERSION})
contrib/CMakeLists.txt (vendored): 1 changed line
@@ -333,6 +333,5 @@ add_subdirectory(grpc-cmake)

add_subdirectory(replxx-cmake)
add_subdirectory(FastMemcpy)
add_subdirectory(widecharwidth)
add_subdirectory(consistent-hashing)
add_subdirectory(consistent-hashing-sumbur)
contrib/cctz (vendored submodule): 2 changed lines
@@ -1 +1 @@
Subproject commit 4f9776a310f4952454636363def82c2bf6641d5f
Subproject commit 5a3f785329cecdd2b68cd950e0647e9246774ef2
@@ -23,6 +23,600 @@ if (USE_INTERNAL_CCTZ)
    # yes, need linux, because bsd check inside linux in time_zone_libc.cc:24
    target_compile_definitions (cctz PRIVATE __USE_BSD linux _XOPEN_SOURCE=600)
endif ()

# Build a library with embedded tzdata

# We invoke 'ld' and 'objcopy' directly because lld linker has no option to generate object file with binary data.
# Note: we can invoke specific ld from toolchain and relax condition on ARCH_AMD64.

if (OS_LINUX AND ARCH_AMD64)

    set (TIMEZONES
        Africa/Abidjan Africa/Accra Africa/Addis_Ababa Africa/Algiers Africa/Asmara
        Africa/Asmera Africa/Bamako Africa/Bangui Africa/Banjul Africa/Bissau
        Africa/Blantyre Africa/Brazzaville Africa/Bujumbura Africa/Cairo Africa/Casablanca
        Africa/Ceuta Africa/Conakry Africa/Dakar Africa/Dar_es_Salaam Africa/Djibouti
        Africa/Douala Africa/El_Aaiun Africa/Freetown Africa/Gaborone Africa/Harare
        Africa/Johannesburg Africa/Juba Africa/Kampala Africa/Khartoum Africa/Kigali
        Africa/Kinshasa Africa/Lagos Africa/Libreville Africa/Lome Africa/Luanda
        Africa/Lubumbashi Africa/Lusaka Africa/Malabo Africa/Maputo Africa/Maseru
        Africa/Mbabane Africa/Mogadishu Africa/Monrovia Africa/Nairobi Africa/Ndjamena
        Africa/Niamey Africa/Nouakchott Africa/Ouagadougou Africa/Porto-Novo Africa/Sao_Tome
        Africa/Timbuktu Africa/Tripoli Africa/Tunis Africa/Windhoek
        America/Adak America/Anchorage America/Anguilla America/Antigua America/Araguaina
        America/Argentina/Buenos_Aires America/Argentina/Catamarca America/Argentina/ComodRivadavia
        America/Argentina/Cordoba America/Argentina/Jujuy America/Argentina/La_Rioja
        America/Argentina/Mendoza America/Argentina/Rio_Gallegos America/Argentina/Salta
        America/Argentina/San_Juan America/Argentina/San_Luis America/Argentina/Tucuman
        America/Argentina/Ushuaia America/Aruba America/Asuncion America/Atikokan America/Atka
        America/Bahia America/Bahia_Banderas America/Barbados America/Belem America/Belize
        America/Blanc-Sablon America/Boa_Vista America/Bogota America/Boise America/Buenos_Aires
        America/Cambridge_Bay America/Campo_Grande America/Cancun America/Caracas America/Catamarca
        America/Cayenne America/Cayman America/Chicago America/Chihuahua America/Coral_Harbour
        America/Cordoba America/Costa_Rica America/Creston America/Cuiaba America/Curacao
        America/Danmarkshavn America/Dawson America/Dawson_Creek America/Denver America/Detroit
        America/Dominica America/Edmonton America/Eirunepe America/El_Salvador America/Ensenada
        America/Fortaleza America/Fort_Nelson America/Fort_Wayne America/Glace_Bay America/Godthab
        America/Goose_Bay America/Grand_Turk America/Grenada America/Guadeloupe America/Guatemala
        America/Guayaquil America/Guyana America/Halifax America/Havana America/Hermosillo
        America/Indiana/Indianapolis America/Indiana/Knox America/Indiana/Marengo
        America/Indiana/Petersburg America/Indianapolis America/Indiana/Tell_City
        America/Indiana/Vevay America/Indiana/Vincennes America/Indiana/Winamac
        America/Inuvik America/Iqaluit America/Jamaica America/Jujuy America/Juneau
        America/Kentucky/Louisville America/Kentucky/Monticello America/Knox_IN America/Kralendijk
        America/La_Paz America/Lima America/Los_Angeles America/Louisville America/Lower_Princes
        America/Maceio America/Managua America/Manaus America/Marigot America/Martinique
        America/Matamoros America/Mazatlan America/Mendoza America/Menominee America/Merida
        America/Metlakatla America/Mexico_City America/Miquelon America/Moncton America/Monterrey
        America/Montevideo America/Montreal America/Montserrat America/Nassau America/New_York
        America/Nipigon America/Nome America/Noronha America/North_Dakota/Beulah
        America/North_Dakota/Center America/North_Dakota/New_Salem America/Ojinaga America/Panama
        America/Pangnirtung America/Paramaribo America/Phoenix America/Port-au-Prince
        America/Porto_Acre America/Port_of_Spain America/Porto_Velho America/Puerto_Rico
        America/Punta_Arenas America/Rainy_River America/Rankin_Inlet America/Recife
        America/Regina America/Resolute America/Rio_Branco America/Rosario America/Santa_Isabel
        America/Santarem America/Santiago America/Santo_Domingo America/Sao_Paulo
        America/Scoresbysund America/Shiprock America/Sitka America/St_Barthelemy America/St_Johns
        America/St_Kitts America/St_Lucia America/St_Thomas America/St_Vincent
        America/Swift_Current America/Tegucigalpa America/Thule America/Thunder_Bay
        America/Tijuana America/Toronto America/Tortola America/Vancouver America/Virgin
        America/Whitehorse America/Winnipeg America/Yakutat America/Yellowknife
        Antarctica/Casey Antarctica/Davis Antarctica/DumontDUrville Antarctica/Macquarie
        Antarctica/Mawson Antarctica/McMurdo Antarctica/Palmer Antarctica/Rothera
        Antarctica/South_Pole Antarctica/Syowa Antarctica/Troll Antarctica/Vostok
        Arctic/Longyearbyen
        Asia/Aden Asia/Almaty Asia/Amman Asia/Anadyr Asia/Aqtau
        Asia/Aqtobe Asia/Ashgabat Asia/Ashkhabad Asia/Atyrau Asia/Baghdad
        Asia/Bahrain Asia/Baku Asia/Bangkok Asia/Barnaul Asia/Beirut
        Asia/Bishkek Asia/Brunei Asia/Calcutta Asia/Chita Asia/Choibalsan
        Asia/Chongqing Asia/Chungking Asia/Colombo Asia/Dacca Asia/Damascus
        Asia/Dhaka Asia/Dili Asia/Dubai Asia/Dushanbe Asia/Famagusta
        Asia/Gaza Asia/Harbin Asia/Hebron Asia/Ho_Chi_Minh Asia/Hong_Kong
        Asia/Hovd Asia/Irkutsk Asia/Istanbul Asia/Jakarta Asia/Jayapura
        Asia/Jerusalem Asia/Kabul Asia/Kamchatka Asia/Karachi Asia/Kashgar
        Asia/Kathmandu Asia/Katmandu Asia/Khandyga Asia/Kolkata Asia/Krasnoyarsk
        Asia/Kuala_Lumpur Asia/Kuching Asia/Kuwait Asia/Macao Asia/Macau
        Asia/Magadan Asia/Makassar Asia/Manila Asia/Muscat Asia/Nicosia
        Asia/Novokuznetsk Asia/Novosibirsk Asia/Omsk Asia/Oral Asia/Phnom_Penh
        Asia/Pontianak Asia/Pyongyang Asia/Qatar Asia/Qostanay Asia/Qyzylorda
        Asia/Rangoon Asia/Riyadh Asia/Saigon Asia/Sakhalin Asia/Samarkand
        Asia/Seoul Asia/Shanghai Asia/Singapore Asia/Srednekolymsk Asia/Taipei
        Asia/Tashkent Asia/Tbilisi Asia/Tehran Asia/Tel_Aviv Asia/Thimbu
        Asia/Thimphu Asia/Tokyo Asia/Tomsk Asia/Ujung_Pandang Asia/Ulaanbaatar
        Asia/Ulan_Bator Asia/Urumqi Asia/Ust-Nera Asia/Vientiane Asia/Vladivostok
        Asia/Yakutsk Asia/Yangon Asia/Yekaterinburg Asia/Yerevan
        Atlantic/Azores Atlantic/Bermuda Atlantic/Canary Atlantic/Cape_Verde Atlantic/Faeroe
        Atlantic/Faroe Atlantic/Jan_Mayen Atlantic/Madeira Atlantic/Reykjavik
        Atlantic/South_Georgia Atlantic/Stanley Atlantic/St_Helena
        Australia/ACT Australia/Adelaide Australia/Brisbane Australia/Broken_Hill
        Australia/Canberra Australia/Currie Australia/Darwin Australia/Eucla
        Australia/Hobart Australia/LHI Australia/Lindeman Australia/Lord_Howe
        Australia/Melbourne Australia/North Australia/NSW Australia/Perth
        Australia/Queensland Australia/South Australia/Sydney Australia/Tasmania
        Australia/Victoria Australia/West Australia/Yancowinna
        Brazil/Acre Brazil/DeNoronha Brazil/East Brazil/West
        Canada/Atlantic Canada/Central Canada/Eastern Canada/Mountain
        Canada/Newfoundland Canada/Pacific Canada/Saskatchewan Canada/Yukon
        CET Chile/Continental Chile/EasterIsland CST6CDT Cuba
        EET Egypt Eire EST EST5EDT
        Etc/GMT Etc/Greenwich Etc/UCT Etc/Universal Etc/UTC Etc/Zulu
        Europe/Amsterdam Europe/Andorra Europe/Astrakhan Europe/Athens Europe/Belfast
        Europe/Belgrade Europe/Berlin Europe/Bratislava Europe/Brussels Europe/Bucharest
        Europe/Budapest Europe/Busingen Europe/Chisinau Europe/Copenhagen Europe/Dublin
        Europe/Gibraltar Europe/Guernsey Europe/Helsinki Europe/Isle_of_Man Europe/Istanbul
        Europe/Jersey Europe/Kaliningrad Europe/Kiev Europe/Kirov Europe/Lisbon
        Europe/Ljubljana Europe/London Europe/Luxembourg Europe/Madrid Europe/Malta
        Europe/Mariehamn Europe/Minsk Europe/Monaco Europe/Moscow Europe/Nicosia
        Europe/Oslo Europe/Paris Europe/Podgorica Europe/Prague Europe/Riga
        Europe/Rome Europe/Samara Europe/San_Marino Europe/Sarajevo Europe/Saratov
        Europe/Simferopol Europe/Skopje Europe/Sofia Europe/Stockholm Europe/Tallinn
        Europe/Tirane Europe/Tiraspol Europe/Ulyanovsk Europe/Uzhgorod Europe/Vaduz
        Europe/Vatican Europe/Vienna Europe/Vilnius Europe/Volgograd Europe/Warsaw
        Europe/Zagreb Europe/Zaporozhye Europe/Zurich
        Factory GB GB-Eire GMT GMT0
        Greenwich Hongkong HST Iceland
        Indian/Antananarivo Indian/Chagos Indian/Christmas Indian/Cocos Indian/Comoro
        Indian/Kerguelen Indian/Mahe Indian/Maldives Indian/Mauritius Indian/Mayotte
        Indian/Reunion
        Iran Israel Jamaica Japan Kwajalein
        Libya MET Mexico/BajaNorte Mexico/BajaSur Mexico/General
        MST MST7MDT Navajo NZ NZ-CHAT
        Pacific/Apia Pacific/Auckland Pacific/Bougainville Pacific/Chatham Pacific/Chuuk
        Pacific/Easter Pacific/Efate Pacific/Enderbury Pacific/Fakaofo Pacific/Fiji
        Pacific/Funafuti Pacific/Galapagos Pacific/Gambier Pacific/Guadalcanal Pacific/Guam
        Pacific/Honolulu Pacific/Johnston Pacific/Kiritimati Pacific/Kosrae Pacific/Kwajalein
        Pacific/Majuro Pacific/Marquesas Pacific/Midway Pacific/Nauru Pacific/Niue
        Pacific/Norfolk Pacific/Noumea Pacific/Pago_Pago Pacific/Palau Pacific/Pitcairn
        Pacific/Pohnpei Pacific/Ponape Pacific/Port_Moresby Pacific/Rarotonga Pacific/Saipan
        Pacific/Samoa Pacific/Tahiti Pacific/Tarawa Pacific/Tongatapu Pacific/Truk
        Pacific/Wake Pacific/Wallis Pacific/Yap
        Poland Portugal PRC PST8PDT ROC
        ROK Singapore Turkey UCT Universal
        US/Alaska US/Aleutian US/Arizona US/Central US/Eastern
        US/East-Indiana
|
||||
US/Hawaii
|
||||
US/Indiana-Starke
|
||||
US/Michigan
|
||||
US/Mountain
|
||||
US/Pacific
|
||||
US/Samoa
|
||||
UTC
|
||||
WET
|
||||
W-SU
|
||||
Zulu)
|
||||
|
||||
set(TZDIR ${LIBRARY_DIR}/testdata/zoneinfo)
set(TZ_OBJS)

foreach(TIMEZONE ${TIMEZONES})
    string(REPLACE "/" "_" TIMEZONE_ID ${TIMEZONE})
    set(TZ_OBJ ${TIMEZONE_ID}.o)
    set(TZ_OBJS ${TZ_OBJS} ${TZ_OBJ})

    # https://stackoverflow.com/questions/14776463/compile-and-add-an-object-file-from-a-binary-with-cmake
    add_custom_command(OUTPUT ${TZ_OBJ}
        COMMAND cd ${TZDIR} && ld -r -b binary -o ${CMAKE_CURRENT_BINARY_DIR}/${TZ_OBJ} ${TIMEZONE}
        COMMAND objcopy --rename-section .data=.rodata,alloc,load,readonly,data,contents
            ${CMAKE_CURRENT_BINARY_DIR}/${TZ_OBJ} ${CMAKE_CURRENT_BINARY_DIR}/${TZ_OBJ})

    set_source_files_properties(${TZ_OBJ} PROPERTIES EXTERNAL_OBJECT true GENERATED true)
endforeach(TIMEZONE)

add_library(tzdata STATIC ${TZ_OBJS})
set_target_properties(tzdata PROPERTIES LINKER_LANGUAGE C)
target_link_libraries(cctz -Wl,--whole-archive tzdata -Wl,--no-whole-archive) # whole-archive prevents symbols from being discarded
endif ()

else ()
    find_library (LIBRARY_CCTZ cctz)
    find_path (INCLUDE_CCTZ NAMES cctz/civil_time.h)
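Aside: the `ld -r -b binary` trick above turns each raw tzdata file into a linkable object whose bytes are exposed through generated start/end symbols, and the `objcopy --rename-section` step moves them into a read-only section. As a rough illustration of how such an embedded blob can then be read from C++ — the symbol names below follow the usual `ld` naming convention for a file linked as `Europe/Moscow` and are an assumption for illustration, not the actual cctz/ClickHouse loader code:

```cpp
#include <cstddef>
#include <string_view>

// Symbols generated by `ld -r -b binary` for an input file named "Europe/Moscow":
// every character that is not valid in a C identifier is replaced with '_'.
// Exact names are an assumption based on the standard ld convention.
extern "C" const char _binary_Europe_Moscow_start[];
extern "C" const char _binary_Europe_Moscow_end[];

// Returns the embedded timezone database file as a read-only byte range.
std::string_view embedded_moscow_tzdata()
{
    return std::string_view(
        _binary_Europe_Moscow_start,
        static_cast<std::size_t>(_binary_Europe_Moscow_end - _binary_Europe_Moscow_start));
}
```

The `--whole-archive` flag in the CMake above is what keeps these otherwise-unreferenced objects from being discarded by the linker.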
2  contrib/libc-headers  vendored
@@ -1 +1 @@
Subproject commit 9676d2645a713e679dc981ffd84dee99fcd68b8e
Subproject commit 92c74f938cf2c4dd529cae4f3d2923d153b029a7

(File diff suppressed because it is too large.)

2  contrib/libgsasl  vendored
@@ -1 +1 @@
Subproject commit 42ef20687042637252e64df1934b6d47771486d1
Subproject commit 140fb58250588c8323285b75fcf127c4adc33dfa
@@ -1,52 +0,0 @@
# PCG Random Number Generation, C++ Edition

[PCG-Random website]: http://www.pcg-random.org

This code provides an implementation of the PCG family of random number
generators, which are fast, statistically excellent, and offer a number of
useful features.

Full details can be found at the [PCG-Random website]. This version
of the code provides many family members -- if you just want one
simple generator, you may prefer the minimal C version of the library.

There are two kinds of generator, normal generators and extended generators.
Extended generators provide *k* dimensional equidistribution and can perform
party tricks, but generally speaking most people only need the normal
generators.

There are two ways to access the generators, using a convenience typedef
or by using the underlying templates directly (similar to C++11's `std::mt19937` typedef vs its `std::mersenne_twister_engine` template). For most users, the convenience typedef is what you want, and probably you're fine with `pcg32` for 32-bit numbers. If you want 64-bit numbers, either use `pcg64` (or, if you're on a 32-bit system, making 64 bits from two calls to `pcg32_k2` may be faster).

## Documentation and Examples

Visit [PCG-Random website] for information on how to use this library, or look
at the sample code in the `sample` directory -- hopefully it should be fairly
self explanatory.

## Building

The code is written in C++11, as an include-only library (i.e., there is
nothing you need to build). There are some provided demo programs and tests
however. On a Unix-style system (e.g., Linux, Mac OS X) you should be able
to just type

    make

To build the demo programs.

## Testing

Run

    make test

## Directory Structure

The directories are arranged as follows:

* `include` -- contains `pcg_random.hpp` and supporting include files
* `test-high` -- test code for the high-level API where the functions have
  shorter, less scary-looking names.
* `sample` -- sample code, some similar to the code in `test-high` but more
  human readable, some other examples too
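Since the README removed above is the only in-tree description of the `pcg32`/`pcg64` convenience typedefs it mentions, here is a brief usage sketch based on the upstream pcg-cpp API (written as an illustration; it is not code from this repository):

```cpp
#include <cstdint>
#include <iostream>
#include <random>
#include "pcg_random.hpp" // upstream pcg-cpp header

int main()
{
    pcg32 rng(42u, 54u);       // seed value and stream selector
    uint32_t raw = rng();      // raw 32-bit output
    uint32_t die = rng(6) + 1; // bounded output in [0, 6), i.e. a d6 roll

    // The engines also satisfy the <random> engine requirements,
    // so they compose with standard distributions.
    std::uniform_real_distribution<double> uniform(0.0, 1.0);
    double x = uniform(rng);

    std::cout << raw << ' ' << die << ' ' << x << '\n';
}
```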
2  contrib/llvm  vendored
@@ -1 +1 @@
Subproject commit 5dab18f4861677548b8f7f6815f49384480ecead
Subproject commit 3d6c7e916760b395908f28a1c885c8334d4fa98b

1  contrib/msgpack-c  vendored  Submodule
@@ -0,0 +1 @@
Subproject commit 46684265d50b5d1b062d4c5c428ba08462844b1d
@@ -124,11 +124,9 @@ namespace pdqsort_detail {
    inline bool partial_insertion_sort(Iter begin, Iter end, Compare comp) {
        typedef typename std::iterator_traits<Iter>::value_type T;
        if (begin == end) return true;

        int limit = 0;
        for (Iter cur = begin + 1; cur != end; ++cur) {
            if (limit > partial_insertion_sort_limit) return false;

        std::size_t limit = 0;
        for (Iter cur = begin + 1; cur != end; ++cur) {
            Iter sift = cur;
            Iter sift_1 = cur - 1;

@@ -142,6 +140,8 @@ namespace pdqsort_detail {
                *sift = PDQSORT_PREFER_MOVE(tmp);
                limit += cur - sift;
            }

            if (limit > partial_insertion_sort_limit) return false;
        }

        return true;
@@ -232,7 +232,7 @@ namespace pdqsort_detail {
        unsigned char* offsets_r = align_cacheline(offsets_r_storage);
        int num_l, num_r, start_l, start_r;
        num_l = num_r = start_l = start_r = 0;

        while (last - first > 2 * block_size) {
            // Fill up offset blocks with elements that are on the wrong side.
            if (num_l == 0) {
@@ -275,7 +275,7 @@ namespace pdqsort_detail {
        }

        int l_size = 0, r_size = 0;
        int unknown_left = (last - first) - ((num_r || num_l) ? block_size : 0);
        int unknown_left = (int)(last - first) - ((num_r || num_l) ? block_size : 0);
        if (num_r) {
            // Handle leftover block by assigning the unknown elements to the other block.
            l_size = unknown_left;
@@ -311,7 +311,7 @@ namespace pdqsort_detail {
            start_l += num; start_r += num;
            if (num_l == 0) first += l_size;
            if (num_r == 0) last -= r_size;

        // We have now fully identified [first, last)'s proper position. Swap the last elements.
        if (num_l) {
            offsets_l += start_l;
@@ -340,7 +340,7 @@ namespace pdqsort_detail {
    template<class Iter, class Compare>
    inline std::pair<Iter, bool> partition_right(Iter begin, Iter end, Compare comp) {
        typedef typename std::iterator_traits<Iter>::value_type T;

        // Move pivot into local for speed.
        T pivot(PDQSORT_PREFER_MOVE(*begin));

@@ -359,7 +359,7 @@ namespace pdqsort_detail {
        // If the first pair of elements that should be swapped to partition are the same element,
        // the passed in sequence already was correctly partitioned.
        bool already_partitioned = first >= last;

        // Keep swapping pairs of elements that are on the wrong side of the pivot. Previously
        // swapped pairs guard the searches, which is why the first iteration is special-cased
        // above.
@@ -388,7 +388,7 @@ namespace pdqsort_detail {
        T pivot(PDQSORT_PREFER_MOVE(*begin));
        Iter first = begin;
        Iter last = end;

        while (comp(pivot, *--last));

        if (last + 1 == end) while (first < last && !comp(pivot, *++first));
@@ -475,11 +475,11 @@ namespace pdqsort_detail {
                std::iter_swap(pivot_pos - 3, pivot_pos - (l_size / 4 + 2));
            }
        }

        if (r_size >= insertion_sort_threshold) {
            std::iter_swap(pivot_pos + 1, pivot_pos + (1 + r_size / 4));
            std::iter_swap(end - 1, end - r_size / 4);

            if (r_size > ninther_threshold) {
                std::iter_swap(pivot_pos + 2, pivot_pos + (2 + r_size / 4));
                std::iter_swap(pivot_pos + 3, pivot_pos + (3 + r_size / 4));
@@ -493,7 +493,7 @@ namespace pdqsort_detail {
            if (already_partitioned && partial_insertion_sort(begin, pivot_pos, comp)
                && partial_insertion_sort(pivot_pos + 1, end, comp)) return;
        }

        // Sort the left partition first using recursion and do tail recursion elimination for
        // the right-hand partition.
        pdqsort_loop<Iter, Compare, Branchless>(begin, pivot_pos, comp, bad_allowed, leftmost);
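The pdqsort hunks above widen the `limit` move counter to `std::size_t` and move the bail-out check to the end of the loop body. A simplified, self-contained sketch of the underlying idea — an insertion sort that gives up once too many elements have been shifted, so the caller can cheaply detect inputs that are not "almost sorted" — might look as follows (the `limit_max` constant and all names are illustrative, not pdqsort's actual tuning):

```cpp
#include <cstddef>
#include <utility>

// Insertion-sorts [begin, end) but aborts and returns false once more than
// `limit_max` total element moves have been performed. Returning false tells
// the caller the range was too disordered, so it should fall back to a full sort.
template <typename Iter>
bool partial_insertion_sort_sketch(Iter begin, Iter end, std::size_t limit_max = 8)
{
    if (begin == end)
        return true;

    std::size_t moved = 0; // std::size_t, as in the fix above, so the counter cannot overflow
    for (Iter cur = begin + 1; cur != end; ++cur)
    {
        Iter sift = cur;
        auto tmp = std::move(*sift);
        while (sift != begin && tmp < *(sift - 1))
        {
            *sift = std::move(*(sift - 1));
            --sift;
        }
        *sift = std::move(tmp);

        moved += static_cast<std::size_t>(cur - sift);
        if (moved > limit_max)
            return false; // too unsorted: bail out, as in the patched check above
    }
    return true;
}
```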
2  contrib/poco  vendored
@@ -1 +1 @@
Subproject commit ddca76ba4956cb57150082394536cc43ff28f6fa
Subproject commit 7d605a1ae5d878294f91f68feb62ae51e9a04426

2  contrib/rapidjson  vendored
@@ -1 +1 @@
Subproject commit 01950eb7acec78818d68b762efc869bba2420d82
Subproject commit 8f4c021fa2f1e001d2376095928fc0532adf2ae6

2  debian/control  vendored
@@ -28,7 +28,7 @@ Description: Client binary for ClickHouse

Package: clickhouse-common-static
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}, tzdata
Depends: ${shlibs:Depends}, ${misc:Depends}
Suggests: clickhouse-common-static-dbg
Replaces: clickhouse-common, clickhouse-server-base
Provides: clickhouse-common, clickhouse-server-base

@@ -1,5 +1,5 @@
## ClickHouse Dockerfiles

This directory contain Dockerfiles for `clickhouse-client` and `clickhouse-server`. They updated each release.
This directory contain Dockerfiles for `clickhouse-client` and `clickhouse-server`. They are updated in each release.

Also there is bunch of images for testing and CI. They are listed in `images.json` file and updated on each commit to master. If you need to add another image, place information about it into `images.json`.
@@ -1,5 +1,9 @@
FROM ubuntu:19.10

RUN apt-get --allow-unauthenticated update -y && apt-get install --yes wget gnupg
RUN wget -O - https://apt.llvm.org/llvm-snapshot.gpg.key | apt-key add -
RUN echo "deb [trusted=yes] http://apt.llvm.org/eoan/ llvm-toolchain-eoan-10 main" >> /etc/apt/sources.list

RUN apt-get update -y \
    && env DEBIAN_FRONTEND=noninteractive \
        apt-get install --yes --no-install-recommends \
@@ -19,10 +23,10 @@ RUN apt-get update -y \
            python-termcolor \
            sudo \
            tzdata \
            clang \
            clang-tidy \
            lld \
            lldb
            clang-10 \
            clang-tidy-10 \
            lld-10 \
            lldb-10

COPY build.sh /

@@ -16,8 +16,7 @@ RUN apt-get update \
    apt-get install --allow-unauthenticated --yes --no-install-recommends \
        clickhouse-client=$version \
        clickhouse-common-static=$version \
        locales \
        tzdata \
        locales
    && rm -rf /var/lib/apt/lists/* /var/cache/debconf \
    && apt-get clean

@@ -1,10 +1,10 @@
{
    "docker/packager/deb": "yandex/clickhouse-deb-builder",
    "docker/packager/binary": "yandex/clickhouse-binary-builder",
    "docker/test/coverage": "yandex/clickhouse-coverage",
    "docker/test/compatibility/centos": "yandex/clickhouse-test-old-centos",
    "docker/test/compatibility/ubuntu": "yandex/clickhouse-test-old-ubuntu",
    "docker/test/integration": "yandex/clickhouse-integration-test",
    "docker/test/performance": "yandex/clickhouse-performance-test",
    "docker/test/integration/base": "yandex/clickhouse-integration-test",
    "docker/test/performance-comparison": "yandex/clickhouse-performance-comparison",
    "docker/test/pvs": "yandex/clickhouse-pvs-test",
    "docker/test/stateful": "yandex/clickhouse-stateful-test",
@@ -14,5 +14,6 @@
    "docker/test/unit": "yandex/clickhouse-unit-test",
    "docker/test/stress": "yandex/clickhouse-stress-test",
    "docker/test/split_build_smoke_test": "yandex/clickhouse-split-build-smoke-test",
    "tests/integration/image": "yandex/clickhouse-integration-tests-runner"
    "docker/test/codebrowser": "yandex/clickhouse-codebrowser",
    "docker/test/integration/runner": "yandex/clickhouse-integration-tests-runner"
}
@@ -3,10 +3,10 @@ compilers and build settings. Correctly configured Docker daemon is single depen

Usage:

Build deb package with `gcc-8` in `debug` mode:
Build deb package with `gcc-9` in `debug` mode:
```
$ mkdir deb/test_output
$ ./packager --output-dir deb/test_output/ --package-type deb --compiler=gcc-8 --build-type=debug
$ ./packager --output-dir deb/test_output/ --package-type deb --compiler=gcc-9 --build-type=debug
$ ls -l deb/test_output
-rw-r--r-- 1 root root     3730 clickhouse-client_18.14.2+debug_all.deb
-rw-r--r-- 1 root root 84221888 clickhouse-common-static_18.14.2+debug_amd64.deb
@@ -18,11 +18,11 @@ $ ls -l deb/test_output

```

Build ClickHouse binary with `clang-6.0` and `address` sanitizer in `relwithdebuginfo`
Build ClickHouse binary with `clang-9.0` and `address` sanitizer in `relwithdebuginfo`
mode:
```
$ mkdir $HOME/some_clickhouse
$ ./packager --output-dir=$HOME/some_clickhouse --package-type binary --compiler=clang-6.0 --sanitizer=address
$ ./packager --output-dir=$HOME/some_clickhouse --package-type binary --compiler=clang-9.0 --sanitizer=address
$ ls -l $HOME/some_clickhouse
-rwxr-xr-x 1 root root 787061952 clickhouse
lrwxrwxrwx 1 root root        10 clickhouse-benchmark -> clickhouse
@@ -2,6 +2,10 @@
# docker build -t yandex/clickhouse-binary-builder .
FROM ubuntu:19.10

RUN apt-get --allow-unauthenticated update -y && apt-get install --yes wget gnupg
RUN wget -O - https://apt.llvm.org/llvm-snapshot.gpg.key | apt-key add -
RUN echo "deb [trusted=yes] http://apt.llvm.org/eoan/ llvm-toolchain-eoan-10 main" >> /etc/apt/sources.list

RUN apt-get --allow-unauthenticated update -y \
    && env DEBIAN_FRONTEND=noninteractive \
        apt-get --allow-unauthenticated install --yes --no-install-recommends \
@@ -23,6 +27,9 @@ RUN apt-get update -y \
            curl \
            gcc-9 \
            g++-9 \
            clang-10 \
            lld-10 \
            clang-tidy-10 \
            clang-9 \
            lld-9 \
            clang-tidy-9 \
@@ -43,10 +50,10 @@ RUN apt-get update -y \
            build-essential

# This symlink required by gcc to find lld compiler
RUN ln -s /usr/bin/lld-9 /usr/bin/ld.lld
RUN ln -s /usr/bin/lld-10 /usr/bin/ld.lld

ENV CC=clang-9
ENV CXX=clang++-9
ENV CC=clang-10
ENV CXX=clang++-10

# libtapi is required to support .tbh format from recent MacOS SDKs
RUN git clone https://github.com/tpoechtrager/apple-libtapi.git

@@ -1,6 +1,10 @@
# docker build -t yandex/clickhouse-deb-builder .
FROM ubuntu:19.10

RUN apt-get --allow-unauthenticated update -y && apt-get install --yes wget gnupg
RUN wget -O - https://apt.llvm.org/llvm-snapshot.gpg.key | apt-key add -
RUN echo "deb [trusted=yes] http://apt.llvm.org/eoan/ llvm-toolchain-eoan-10 main" >> /etc/apt/sources.list

RUN apt-get --allow-unauthenticated update -y \
    && env DEBIAN_FRONTEND=noninteractive \
        apt-get --allow-unauthenticated install --yes --no-install-recommends \
@@ -19,6 +23,9 @@ RUN apt-get --allow-unauthenticated update -y \
        apt-get --allow-unauthenticated install --yes --no-install-recommends \
            gcc-9 \
            g++-9 \
            clang-10 \
            lld-10 \
            clang-tidy-10 \
            clang-9 \
            lld-9 \
            clang-tidy-9 \
@@ -48,6 +55,7 @@ RUN apt-get --allow-unauthenticated update -y \
            libltdl-dev \
            libre2-dev \
            libjemalloc-dev \
            libmsgpack-dev \
            unixodbc-dev \
            odbcinst \
            tzdata \
@@ -68,7 +76,7 @@ RUN chmod +x dpkg-deb
RUN cp dpkg-deb /usr/bin

# This symlink required by gcc to find lld compiler
RUN ln -s /usr/bin/lld-9 /usr/bin/ld.lld
RUN ln -s /usr/bin/lld-10 /usr/bin/ld.lld

COPY build.sh /
4  docker/packager/freebsd/Vagrantfile  vendored
@@ -1,4 +0,0 @@
Vagrant.configure("2") do |config|
  config.vm.box = "robot-clickhouse/clickhouse-freebsd"
  config.vm.synced_folder ".", "/vagrant", disabled: true
end

@@ -11,48 +11,8 @@ SCRIPT_PATH = os.path.realpath(__file__)
IMAGE_MAP = {
    "deb": "yandex/clickhouse-deb-builder",
    "binary": "yandex/clickhouse-binary-builder",
    "freebsd": os.path.join(os.path.dirname(SCRIPT_PATH), "freebsd"),
}

class Vagrant(object):
    def __init__(self, path_to_vagrant_file):
        self.prefix = "VAGRANT_CWD=" + path_to_vagrant_file

    def __enter__(self):
        subprocess.check_call("{} vagrant up".format(self.prefix), shell=True)
        self.ssh_path = "/tmp/vagrant-ssh"
        subprocess.check_call("{} vagrant ssh-config > {}".format(self.prefix, self.ssh_path), shell=True)
        return self

    def copy_to_image(self, local_path, remote_path):
        cmd = "scp -F {ssh} -r {lpath} default:{rpath}".format(ssh=self.ssh_path, lpath=local_path, rpath=remote_path)
        logging.info("Copying to image %s", cmd)
        subprocess.check_call(
            cmd,
            shell=True
        )

    def copy_from_image(self, remote_path, local_path):
        cmd = "scp -F {ssh} -r default:{rpath} {lpath}".format(ssh=self.ssh_path, rpath=remote_path, lpath=local_path)
        logging.info("Copying from image %s", cmd)
        subprocess.check_call(
            cmd,
            shell=True
        )

    def execute_cmd(self, cmd):
        cmd = '{} vagrant ssh -c "{}"'.format(self.prefix, cmd)
        logging.info("Executin cmd %s", cmd)
        subprocess.check_call(
            cmd,
            shell=True
        )

    def __exit__(self, exc_type, exc_val, exc_tb):
        logging.info("Destroying image")
        subprocess.check_call("{} vagrant destroy --force".format(self.prefix), shell=True)


def check_image_exists_locally(image_name):
    try:
        output = subprocess.check_output("docker images -q {} 2> /dev/null".format(image_name), shell=True)
@@ -94,15 +54,6 @@ def run_docker_image_with_env(image_name, output, env_variables, ch_root, ccache

    subprocess.check_call(cmd, shell=True)

def run_vagrant_box_with_env(image_path, output_dir, ch_root):
    with Vagrant(image_path) as vagrant:
        logging.info("Copying folder to vagrant machine")
        vagrant.copy_to_image(ch_root, "~/ClickHouse")
        logging.info("Running build")
        vagrant.execute_cmd("cd ~/ClickHouse && cmake . && ninja")
        logging.info("Copying binary back")
        vagrant.copy_from_image("~/ClickHouse/programs/clickhouse", output_dir)

def parse_env_variables(build_type, compiler, sanitizer, package_type, image_type, cache, distcc_hosts, unbundled, split_binary, clang_tidy, version, author, official, alien_pkgs, with_coverage):
    CLANG_PREFIX = "clang"
    DARWIN_SUFFIX = "-darwin"
@@ -210,11 +161,11 @@ if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO, format='%(asctime)s %(message)s')
    parser = argparse.ArgumentParser(description="ClickHouse building script using prebuilt Docker image")
    # 'performance' creates a combined .tgz with server and configs to be used for performance test.
    parser.add_argument("--package-type", choices=['deb', 'binary', 'performance', 'freebsd'], required=True)
    parser.add_argument("--package-type", choices=['deb', 'binary', 'performance'], required=True)
    parser.add_argument("--clickhouse-repo-path", default="../../")
    parser.add_argument("--output-dir", required=True)
    parser.add_argument("--build-type", choices=("debug", ""), default="")
    parser.add_argument("--compiler", choices=("clang-8", "clang-8-darwin", "clang-8-aarch64", "gcc-8", "gcc-9", "clang-9"), default="gcc-8")
    parser.add_argument("--compiler", choices=("clang-8", "clang-8-darwin", "clang-9-aarch64", "clang-9-freebsd", "gcc-8", "gcc-9", "clang-9"), default="gcc-8")
    parser.add_argument("--sanitizer", choices=("address", "thread", "memory", "undefined", ""), default="")
    parser.add_argument("--unbundled", action="store_true")
    parser.add_argument("--split-binary", action="store_true")
@@ -252,9 +203,5 @@ if __name__ == "__main__":
        args.build_type, args.compiler, args.sanitizer, args.package_type, image_type,
        args.cache, args.distcc_hosts, args.unbundled, args.split_binary, args.clang_tidy,
        args.version, args.author, args.official, args.alien_pkgs, args.with_coverage)
    if image_type != "freebsd":
        run_docker_image_with_env(image_name, args.output_dir, env_prepared, ch_root, args.ccache_dir)
    else:
        logging.info("Running freebsd build, arguments will be ignored")
        run_vagrant_box_with_env(image_name, args.output_dir, ch_root)
    run_docker_image_with_env(image_name, args.output_dir, env_prepared, ch_root, args.ccache_dir)
    logging.info("Output placed into {}".format(args.output_dir))
@@ -19,7 +19,6 @@ RUN apt-get update \
        clickhouse-client=$version \
        clickhouse-server=$version \
        locales \
        tzdata \
        wget \
    && rm -rf \
        /var/lib/apt/lists/* \

6  docker/test/integration/README.md  Normal file
@@ -0,0 +1,6 @@
## Docker containers for integration tests
- `base` container with required packages
- `runner` container with that runs integration tests in docker
- `compose` contains docker_compose YaML files that are used in tests

How to run integration tests is described in tests/integration/README.md
@@ -41,32 +41,32 @@ ENV DOCKER_CHANNEL stable
ENV DOCKER_VERSION 17.09.1-ce

RUN set -eux; \
    \
    \
    # this "case" statement is generated via "update.sh"
    \
    if ! wget -O docker.tgz "https://download.docker.com/linux/static/${DOCKER_CHANNEL}/x86_64/docker-${DOCKER_VERSION}.tgz"; then \
        echo >&2 "error: failed to download 'docker-${DOCKER_VERSION}' from '${DOCKER_CHANNEL}' for '${x86_64}'"; \
        exit 1; \
    fi; \
    \
    tar --extract \
        --file docker.tgz \
        --strip-components 1 \
        --directory /usr/local/bin/ \
    ; \
    rm docker.tgz; \
    \
    dockerd --version; \
    docker --version
    \
    if ! wget -O docker.tgz "https://download.docker.com/linux/static/${DOCKER_CHANNEL}/x86_64/docker-${DOCKER_VERSION}.tgz"; then \
        echo >&2 "error: failed to download 'docker-${DOCKER_VERSION}' from '${DOCKER_CHANNEL}' for '${x86_64}'"; \
        exit 1; \
    fi; \
    \
    tar --extract \
        --file docker.tgz \
        --strip-components 1 \
        --directory /usr/local/bin/ \
    ; \
    rm docker.tgz; \
    \
    dockerd --version; \
    docker --version

COPY modprobe.sh /usr/local/bin/modprobe
COPY dockerd-entrypoint.sh /usr/local/bin/

RUN set -x \
    && addgroup --system dockremap \
    && addgroup --system dockremap \
    && adduser --system dockremap \
    && adduser dockremap dockremap \
    && echo 'dockremap:165536:65536' >> /etc/subuid \
    && adduser dockremap dockremap \
    && echo 'dockremap:165536:65536' >> /etc/subuid \
    && echo 'dockremap:165536:65536' >> /etc/subgid

VOLUME /var/lib/docker
@@ -9,10 +9,10 @@ set -eu
# Docker often uses "modprobe -va foo bar baz"
# so we ignore modules that start with "-"
for module; do
    if [ "${module#-}" = "$module" ]; then
        ip link show "$module" || true
        lsmod | grep "$module" || true
    fi
    if [ "${module#-}" = "$module" ]; then
        ip link show "$module" || true
        lsmod | grep "$module" || true
    fi
done

# remove /usr/local/... from PATH so we can exec the real modprobe as a last resort
@@ -10,6 +10,7 @@ RUN apt-get update \
        bash \
        curl \
        g++ \
        gdb \
        git \
        libc6-dbg \
        moreutils \
@@ -2,7 +2,7 @@
set -ex
set -o pipefail
trap "exit" INT TERM
trap "kill $(jobs -pr) ||:" EXIT
trap 'kill $(jobs -pr) ||:' EXIT

stage=${stage:-}
script_dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
@@ -14,26 +14,26 @@ function configure
    rm right/config/config.d/text_log.xml ||:
    cp -rv right/config left ||:

    sed -i 's/<tcp_port>9000/<tcp_port>9001/g' left/config/config.xml
    sed -i 's/<tcp_port>9000/<tcp_port>9002/g' right/config/config.xml
    sed -i 's/<tcp_port>900./<tcp_port>9001/g' left/config/config.xml
    sed -i 's/<tcp_port>900./<tcp_port>9002/g' right/config/config.xml

    # Start a temporary server to rename the tables
    while killall clickhouse; do echo . ; sleep 1 ; done
    while killall clickhouse-server; do echo . ; sleep 1 ; done
    echo all killed

    set -m # Spawn temporary in its own process groups
    left/clickhouse server --config-file=left/config/config.xml -- --path db0 &> setup-server-log.log &
    left/clickhouse-server --config-file=left/config/config.xml -- --path db0 &> setup-server-log.log &
    left_pid=$!
    kill -0 $left_pid
    disown $left_pid
    set +m
    while ! left/clickhouse client --port 9001 --query "select 1" ; do kill -0 $left_pid ; echo . ; sleep 1 ; done
    while ! clickhouse-client --port 9001 --query "select 1" ; do kill -0 $left_pid ; echo . ; sleep 1 ; done
    echo server for setup started

    left/clickhouse client --port 9001 --query "create database test" ||:
    left/clickhouse client --port 9001 --query "rename table datasets.hits_v1 to test.hits" ||:
    clickhouse-client --port 9001 --query "create database test" ||:
    clickhouse-client --port 9001 --query "rename table datasets.hits_v1 to test.hits" ||:

    while killall clickhouse; do echo . ; sleep 1 ; done
    while killall clickhouse-server; do echo . ; sleep 1 ; done
    echo all killed

    # Remove logs etc, because they will be updated, and sharing them between
@@ -42,41 +42,50 @@ function configure
    rm db0/metadata/system/* -rf ||:

    # Make copies of the original db for both servers. Use hardlinks instead
    # of copying.
    # of copying. Be careful to remove preprocessed configs and system tables,or
    # it can lead to weird effects.
    rm -r left/db ||:
    rm -r right/db ||:
    rm -r db0/preprocessed_configs ||:
    rm -r db/{data,metadata}/system ||:
    cp -al db0/ left/db/
    cp -al db0/ right/db/
}

function restart
{
    while killall clickhouse; do echo . ; sleep 1 ; done
    while killall clickhouse-server; do echo . ; sleep 1 ; done
    echo all killed

    set -m # Spawn servers in their own process groups

    left/clickhouse server --config-file=left/config/config.xml -- --path left/db &>> left-server-log.log &
    left/clickhouse-server --config-file=left/config/config.xml -- --path left/db &>> left-server-log.log &
    left_pid=$!
    kill -0 $left_pid
    disown $left_pid

    right/clickhouse server --config-file=right/config/config.xml -- --path right/db &>> right-server-log.log &
    right/clickhouse-server --config-file=right/config/config.xml -- --path right/db &>> right-server-log.log &
    right_pid=$!
    kill -0 $right_pid
    disown $right_pid

    set +m

    while ! left/clickhouse client --port 9001 --query "select 1" ; do kill -0 $left_pid ; echo . ; sleep 1 ; done
    while ! clickhouse-client --port 9001 --query "select 1" ; do kill -0 $left_pid ; echo . ; sleep 1 ; done
    echo left ok
    while ! right/clickhouse client --port 9002 --query "select 1" ; do kill -0 $right_pid ; echo . ; sleep 1 ; done
    while ! clickhouse-client --port 9002 --query "select 1" ; do kill -0 $right_pid ; echo . ; sleep 1 ; done
    echo right ok

    left/clickhouse client --port 9001 --query "select * from system.tables where database != 'system'"
    left/clickhouse client --port 9001 --query "select * from system.build_options"
    right/clickhouse client --port 9002 --query "select * from system.tables where database != 'system'"
    right/clickhouse client --port 9002 --query "select * from system.build_options"
    clickhouse-client --port 9001 --query "select * from system.tables where database != 'system'"
    clickhouse-client --port 9001 --query "select * from system.build_options"
    clickhouse-client --port 9002 --query "select * from system.tables where database != 'system'"
    clickhouse-client --port 9002 --query "select * from system.build_options"

    # Check again that both servers we started are running -- this is important
    # for running locally, when there might be some other servers started and we
    # will connect to them instead.
    kill -0 $left_pid
    kill -0 $right_pid
}

function run_tests
@@ -91,7 +100,7 @@ function run_tests
    # changes.
    test_prefix=$([ "$PR_TO_TEST" == "0" ] && echo left || echo right)/performance

    for x in {test-times,skipped-tests}.tsv
    for x in {test-times,skipped-tests,wall-clock-times}.tsv
    do
        rm -v "$x" ||:
        touch "$x"
@@ -127,7 +136,7 @@ function run_tests
    # FIXME remove some broken long tests
    for test_name in {IPv4,IPv6,modulo,parse_engine_file,number_formatting_formats,select_format,arithmetic,cryptographic_hashes,logical_functions_{medium,small}}
    do
        printf "$test_name\tMarked as broken (see compare.sh)\n" >> skipped-tests.tsv
        printf "%s\tMarked as broken (see compare.sh)\n" "$test_name">> skipped-tests.tsv
        rm "$test_prefix/$test_name.xml" ||:
    done
    test_files=$(ls "$test_prefix"/*.xml)
@@ -138,9 +147,9 @@ function run_tests
    for test in $test_files
    do
        # Check that both servers are alive, to fail faster if they die.
        left/clickhouse client --port 9001 --query "select 1 format Null" \
        clickhouse-client --port 9001 --query "select 1 format Null" \
            || { echo $test_name >> left-server-died.log ; restart ; continue ; }
        right/clickhouse client --port 9002 --query "select 1 format Null" \
        clickhouse-client --port 9002 --query "select 1 format Null" \
            || { echo $test_name >> right-server-died.log ; restart ; continue ; }

        test_name=$(basename "$test" ".xml")
@@ -158,7 +167,7 @@ function run_tests
        skipped=$(grep ^skipped "$test_name-raw.tsv" | cut -f2-)
        if [ "$skipped" != "" ]
        then
            printf "$test_name""\t""$skipped""\n" >> skipped-tests.tsv
            printf "%s\t%s\n" "$test_name" "$skipped">> skipped-tests.tsv
        fi
    done

@@ -167,27 +176,49 @@
    wait
}

function get_profiles_watchdog
{
    sleep 3000

    echo "The trace collection did not finish in time." >> profile-errors.log

    for pid in $(pgrep -f clickhouse)
    do
        gdb -p $pid --batch --ex "info proc all" --ex "thread apply all bt" --ex quit &> "$pid.gdb.log" &
    done
    wait

    for i in {1..10}
    do
        if ! pkill -f clickhouse
        then
            break
        fi
        sleep 1
    done
}

function get_profiles
{
    # Collect the profiles
    left/clickhouse client --port 9001 --query "set query_profiler_cpu_time_period_ns = 0"
    left/clickhouse client --port 9001 --query "set query_profiler_real_time_period_ns = 0"
    right/clickhouse client --port 9001 --query "set query_profiler_cpu_time_period_ns = 0"
    right/clickhouse client --port 9001 --query "set query_profiler_real_time_period_ns = 0"
    left/clickhouse client --port 9001 --query "system flush logs"
    right/clickhouse client --port 9002 --query "system flush logs"
    clickhouse-client --port 9001 --query "set query_profiler_cpu_time_period_ns = 0"
    clickhouse-client --port 9001 --query "set query_profiler_real_time_period_ns = 0"
    clickhouse-client --port 9001 --query "set query_profiler_cpu_time_period_ns = 0"
    clickhouse-client --port 9001 --query "set query_profiler_real_time_period_ns = 0"
    clickhouse-client --port 9001 --query "system flush logs"
    clickhouse-client --port 9002 --query "system flush logs"

    left/clickhouse client --port 9001 --query "select * from system.query_log where type = 2 format TSVWithNamesAndTypes" > left-query-log.tsv ||: &
    left/clickhouse client --port 9001 --query "select * from system.query_thread_log format TSVWithNamesAndTypes" > left-query-thread-log.tsv ||: &
    left/clickhouse client --port 9001 --query "select * from system.trace_log format TSVWithNamesAndTypes" > left-trace-log.tsv ||: &
    left/clickhouse client --port 9001 --query "select arrayJoin(trace) addr, concat(splitByChar('/', addressToLine(addr))[-1], '#', demangle(addressToSymbol(addr)) ) name from system.trace_log group by addr format TSVWithNamesAndTypes" > left-addresses.tsv ||: &
    left/clickhouse client --port 9001 --query "select * from system.metric_log format TSVWithNamesAndTypes" > left-metric-log.tsv ||: &
    clickhouse-client --port 9001 --query "select * from system.query_log where type = 2 format TSVWithNamesAndTypes" > left-query-log.tsv ||: &
    clickhouse-client --port 9001 --query "select * from system.query_thread_log format TSVWithNamesAndTypes" > left-query-thread-log.tsv ||: &
    clickhouse-client --port 9001 --query "select * from system.trace_log format TSVWithNamesAndTypes" > left-trace-log.tsv ||: &
    clickhouse-client --port 9001 --query "select arrayJoin(trace) addr, concat(splitByChar('/', addressToLine(addr))[-1], '#', demangle(addressToSymbol(addr)) ) name from system.trace_log group by addr format TSVWithNamesAndTypes" > left-addresses.tsv ||: &
    clickhouse-client --port 9001 --query "select * from system.metric_log format TSVWithNamesAndTypes" > left-metric-log.tsv ||: &

    right/clickhouse client --port 9002 --query "select * from system.query_log where type = 2 format TSVWithNamesAndTypes" > right-query-log.tsv ||: &
    right/clickhouse client --port 9002 --query "select * from system.query_thread_log format TSVWithNamesAndTypes" > right-query-thread-log.tsv ||: &
    right/clickhouse client --port 9002 --query "select * from system.trace_log format TSVWithNamesAndTypes" > right-trace-log.tsv ||: &
    right/clickhouse client --port 9002 --query "select arrayJoin(trace) addr, concat(splitByChar('/', addressToLine(addr))[-1], '#', demangle(addressToSymbol(addr)) ) name from system.trace_log group by addr format TSVWithNamesAndTypes" > right-addresses.tsv ||: &
    right/clickhouse client --port 9002 --query "select * from system.metric_log format TSVWithNamesAndTypes" > right-metric-log.tsv ||: &
    clickhouse-client --port 9002 --query "select * from system.query_log where type = 2 format TSVWithNamesAndTypes" > right-query-log.tsv ||: &
    clickhouse-client --port 9002 --query "select * from system.query_thread_log format TSVWithNamesAndTypes" > right-query-thread-log.tsv ||: &
    clickhouse-client --port 9002 --query "select * from system.trace_log format TSVWithNamesAndTypes" > right-trace-log.tsv ||: &
    clickhouse-client --port 9002 --query "select arrayJoin(trace) addr, concat(splitByChar('/', addressToLine(addr))[-1], '#', demangle(addressToSymbol(addr)) ) name from system.trace_log group by addr format TSVWithNamesAndTypes" > right-addresses.tsv ||: &
    clickhouse-client --port 9002 --query "select * from system.metric_log format TSVWithNamesAndTypes" > right-metric-log.tsv ||: &

    wait
}
@@ -195,9 +226,9 @@ function get_profiles
# Build and analyze randomization distribution for all queries.
function analyze_queries
{
    find . -maxdepth 1 -name "*-queries.tsv" -print | \
        xargs -n1 -I% basename % -queries.tsv | \
        parallel --verbose right/clickhouse local --file "{}-queries.tsv" \
    find . -maxdepth 1 -name "*-queries.tsv" -print0 | \
        xargs -0 -n1 -I% basename % -queries.tsv | \
        parallel --verbose clickhouse-local --file "{}-queries.tsv" \
            --structure "\"query text, run int, version UInt32, time float\"" \
            --query "\"$(cat "$script_dir/eqmed.sql")\"" \
            ">" {}-report.tsv
@@ -219,33 +250,39 @@ done

rm ./*.{rep,svg} test-times.tsv test-dump.tsv unstable.tsv unstable-query-ids.tsv unstable-query-metrics.tsv changed-perf.tsv unstable-tests.tsv unstable-queries.tsv bad-tests.tsv slow-on-client.tsv all-queries.tsv ||:

right/clickhouse local --query "
cat profile-errors.log >> report-errors.rep

clickhouse-local --query "
create table queries engine File(TSVWithNamesAndTypes, 'queries.rep')
    as select
        -- FIXME Comparison mode doesn't make sense for queries that complete
        -- immediately, so for now we pretend they don't exist. We don't want to
        -- remove them altogether because we want to be able to detect regressions,
        -- but the right way to do this is not yet clear.
        left + right < 0.05 as short,
        (left + right) / 2 < 0.02 as short,

        not short and abs(diff) < 0.10 and rd[3] > 0.10 as unstable,

        -- Do not consider changed the queries with 5% RD below 5% -- e.g., we're
        -- likely to observe a difference > 5% in less than 5% cases.
        -- Not sure it is correct, but empirically it filters out a lot of noise.
        not short and abs(diff) > 0.15 and abs(diff) > rd[3] and rd[1] > 0.05 as changed,
        -- Difference > 5% and > rd(99%) -- changed. We can't filter out flaky
        -- queries by rd(5%), because it can be zero when the difference is smaller
        -- than a typical distribution width. The difference is still real though.
        not short and abs(diff) > 0.05 and abs(diff) > rd[4] as changed,

        -- Not changed but rd(99%) > 5% -- unstable.
        not short and not changed and rd[4] > 0.05 as unstable,

        left, right, diff, rd,
        replaceAll(_file, '-report.tsv', '') test,
        query

        -- Truncate long queries.
        if(length(query) < 300, query, substr(query, 1, 298) || '...') query
    from file('*-report.tsv', TSV, 'left float, right float, diff float, rd Array(float), query text');

create table changed_perf_tsv engine File(TSV, 'changed-perf.tsv') as
    select left, right, diff, rd, test, query from queries where changed
    order by rd[3] desc;
    order by abs(diff) desc;

create table unstable_queries_tsv engine File(TSV, 'unstable-queries.tsv') as
    select left, right, diff, rd, test, query from queries where unstable
    order by rd[3] desc;
    order by rd[4] desc;

create table unstable_tests_tsv engine File(TSV, 'bad-tests.tsv') as
    select test, sum(unstable) u, sum(changed) c, u + c s from queries
@@ -291,10 +328,10 @@ create table all_tests_tsv engine File(TSV, 'all-queries.tsv') as

for version in {right,left}
do
    right/clickhouse local --query "
    clickhouse-local --query "
create view queries as
    select * from file('queries.rep', TSVWithNamesAndTypes,
        'short int, unstable int, changed int, left float, right float,
        'short int, changed int, unstable int, left float, right float,
        diff float, rd Array(float), test text, query text');

create view query_log as select *
@@ -359,8 +396,8 @@ create table unstable_run_traces engine File(TSVWithNamesAndTypes,

create table metric_devation engine File(TSVWithNamesAndTypes,
        'metric-deviation.$version.rep') as
    select floor((q[3] - q[1])/q[2], 3) d,
        quantilesExact(0, 0.5, 1)(value) q, metric, query
    select query, floor((q[3] - q[1])/q[2], 3) d,
        quantilesExact(0, 0.5, 1)(value) q, metric
    from (select * from unstable_run_metrics
        union all select * from unstable_run_traces
        union all select * from unstable_run_metrics_2) mm
@@ -394,11 +431,17 @@ do
    for query in $(cut -d' ' -f1 "stacks.$version.rep" | sort | uniq)
    do
        query_file=$(echo "$query" | cut -c-120 | sed 's/[/]/_/g')

        # Build separate .svg flamegraph for each query.
        grep -F "$query " "stacks.$version.rep" \
            | cut -d' ' -f 2- \
            | sed 's/\t/ /g' \
            | tee "$query_file.stacks.$version.rep" \
            | ~/fg/flamegraph.pl > "$query_file.$version.svg" &

        # Copy metric stats into separate files as well.
        grep -F "$query " "metric-deviation.$version.rep" \
            | cut -f2- > "$query_file.$version.metrics.rep" &
    done
done
wait
@@ -406,9 +449,13 @@ unset IFS

# Remember that grep sets error code when nothing is found, hence the bayan
# operator.
grep -H -m2 -i '\(Exception\|Error\):[^:]' ./*-err.log | sed 's/:/\t/' > run-errors.tsv ||:
grep -H -m2 -i '\(Exception\|Error\):[^:]' ./*-err.log | sed 's/:/\t/' >> run-errors.tsv ||:
}

# Check that local and client are in PATH
clickhouse-local --version > /dev/null
clickhouse-client --version > /dev/null

case "$stage" in
"")
    ;&
@@ -423,10 +470,28 @@ case "$stage" in
    time run_tests ||:
    ;&
"get_profiles")
    # Getting profiles inexplicably hangs sometimes, so try to save some logs if
    # this happens again. Give the servers 5 minutes to collect all info, then
    # trace and kill. Start in a subshell, so that both function don't interfere
    # with each other's jobs through `wait`. Also make the subshell have its own
    # process group, so that we can then kill it with all its child processes.
    # Somehow it doesn't kill the children by itself when dying.
    set -m
    ( get_profiles_watchdog ) &
    watchdog_pid=$!
    set +m
    # Check that the watchdog started OK.
    kill -0 $watchdog_pid

    # If the tests fail with OOM or something, still try to restart the servers
    # to collect the logs. Prefer not to restart, because addresses might change
    # and we won't be able to process trace_log data.
    time get_profiles || restart || get_profiles ||:
    # and we won't be able to process trace_log data. Start in a subshell, so that
    # it doesn't interfere with the watchdog through `wait`.
    ( time get_profiles || restart || get_profiles ||: )

    # Kill the whole process group, because somehow when the subshell is killed,
    # the sleep inside remains alive and orphaned.
    while env kill -- -$watchdog_pid ; do sleep 1; done

    # Stop the servers to free memory for the subsequent query analysis.
    while killall clickhouse; do echo . ; sleep 1 ; done
@@ -442,3 +507,7 @@ case "$stage" in
    time "$script_dir/report.py" > report.html
    ;&
esac

# Print some final debug info to help debug Weirdness, of which there is plenty.
jobs
pstree -apgT
@@ -1,4 +1,9 @@
<yandex>
<yandex>
    <http_port remove="remove"/>
    <mysql_port remove="remove"/>
    <interserver_http_port remove="remove"/>
    <listen_host>::</listen_host>

    <logger>
        <console>true</console>
    </logger>
@@ -2,9 +2,11 @@
set -ex
set -o pipefail
trap "exit" INT TERM
trap "kill $(jobs -pr) ||:" EXIT
trap 'kill $(jobs -pr) ||:' EXIT

mkdir db0 ||:
mkdir left ||:
mkdir right ||:

left_pr=$1
left_sha=$2
@@ -22,11 +24,6 @@ dataset_paths["values"]="https://clickhouse-datasets.s3.yandex.net/values_with_e

function download
{
    rm -r left ||:
    mkdir left ||:
    rm -r right ||:
    mkdir right ||:

    # might have the same version on left and right
    if ! [ "$left_sha" = "$right_sha" ]
    then
@@ -39,7 +36,11 @@ function download
    for dataset_name in $datasets
    do
        dataset_path="${dataset_paths[$dataset_name]}"
        [ "$dataset_path" != "" ]
        if [ "$dataset_path" = "" ]
        then
            >&2 echo "Unknown dataset '$dataset_name'"
            exit 1
        fi
        cd db0 && wget -nv -nd -c "$dataset_path" -O- | tar -xv &
    done
@@ -87,9 +87,6 @@ git -C ch diff --name-only "$SHA_TO_TEST" "$(git -C ch merge-base "$SHA_TO_TEST"
# Set python output encoding so that we can print queries with Russian letters.
export PYTHONIOENCODING=utf-8

# Use a default number of runs if not told otherwise
export CHPC_RUNS=${CHPC_RUNS:-7}

# By default, use the main comparison script from the tested package, so that we
# can change it in PRs.
script_path="right/scripts"
@@ -101,14 +98,15 @@ fi
# Even if we have some errors, try our best to save the logs.
set +e

# Older version use 'kill 0', so put the script into a separate process group
# FIXME remove set +m in April 2020
set +m
# Use clickhouse-client and clickhouse-local from the right server.
PATH="$(readlink -f right/)":"$PATH"
export PATH

# Start the main comparison script.
{ \
    time ../download.sh "$REF_PR" "$REF_SHA" "$PR_TO_TEST" "$SHA_TO_TEST" && \
    time stage=configure "$script_path"/compare.sh ; \
} 2>&1 | ts "$(printf '%%Y-%%m-%%d %%H:%%M:%%S\t')" | tee compare.log
set -m

# Stop the servers to free memory. Normally they are restarted before getting
# the profile info, so they shouldn't use much, but if the comparison script
54  docker/test/performance-comparison/manual-run.sh  Executable file
@@ -0,0 +1,54 @@
#!/bin/bash
set -ex
set -o pipefail
trap "exit" INT TERM
trap 'kill $(jobs -pr) ||:' EXIT

stage=${stage:-}
script_dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
repo_dir=${repo_dir:-$(readlink -f "$script_dir/../../..")}


function download
{
    rm -r left right db0 ||:
    mkdir left right db0 ||:

    "$script_dir/download.sh" ||: &
    cp -vP "$repo_dir"/../build-gcc9-rel/programs/clickhouse* right &
    cp -vP "$repo_dir"/../build-clang10-rel/programs/clickhouse* left &
    wait
}

function configure
{
    # Test files
    cp -av "$repo_dir/tests/performance" right
    cp -av "$repo_dir/tests/performance" left

    # Configs
    cp -av "$script_dir/config" right
    cp -av "$script_dir/config" left
    cp -av "$repo_dir"/programs/server/config* right/config
    cp -av "$repo_dir"/programs/server/user* right/config
    cp -av "$repo_dir"/programs/server/config* left/config
    cp -av "$repo_dir"/programs/server/user* left/config

    tree left
}

function run
{
    left/clickhouse-local --query "select * from system.build_options format PrettySpace" | sed 's/ *$//' | fold -w 80 -s > left-commit.txt
    right/clickhouse-local --query "select * from system.build_options format PrettySpace" | sed 's/ *$//' | fold -w 80 -s > right-commit.txt

    PATH=right:"$PATH" stage=configure "$script_dir/compare.sh" &> >(tee compare.log)
}

download
configure
run

rm output.7z
7z a output.7z ./*.{log,tsv,html,txt,rep,svg} {right,left}/{performance,db/preprocessed_configs}
@@ -25,7 +25,7 @@ parser = argparse.ArgumentParser(description='Run performance test.')
parser.add_argument('file', metavar='FILE', type=argparse.FileType('r', encoding='utf-8'), nargs=1, help='test description file')
parser.add_argument('--host', nargs='*', default=['localhost'], help="Server hostname(s). Corresponds to '--port' options.")
parser.add_argument('--port', nargs='*', default=[9000], help="Server port(s). Corresponds to '--host' options.")
parser.add_argument('--runs', type=int, default=int(os.environ.get('CHPC_RUNS', 7)), help='Number of query runs per server. Defaults to CHPC_RUNS environment variable.')
parser.add_argument('--runs', type=int, default=int(os.environ.get('CHPC_RUNS', 13)), help='Number of query runs per server. Defaults to CHPC_RUNS environment variable.')
parser.add_argument('--no-long', type=bool, default=True, help='Skip the tests tagged as long.')
args = parser.parse_args()

@@ -140,9 +140,16 @@ report_stage_end('substitute2')
for q in test_queries:
    # Prewarm: run once on both servers. Helps to bring the data into memory,
    # precompile the queries, etc.
    for conn_index, c in enumerate(connections):
        res = c.execute(q, query_id = 'prewarm {} {}'.format(0, q))
        print('prewarm\t' + tsv_escape(q) + '\t' + str(conn_index) + '\t' + str(c.last_query.elapsed))
    try:
        for conn_index, c in enumerate(connections):
            res = c.execute(q, query_id = 'prewarm {} {}'.format(0, q))
            print('prewarm\t' + tsv_escape(q) + '\t' + str(conn_index) + '\t' + str(c.last_query.elapsed))
    except:
        # If prewarm fails for some query -- skip it, and try to test the others.
        # This might happen if the new test introduces some function that the
        # old server doesn't support. Still, report it as an error.
        print(traceback.format_exc(), file=sys.stderr)
        continue

    # Now, perform measured runs.
    # Track the time spent by the client to process this query, so that we can notice
@@ -169,12 +169,14 @@ if args.report == 'main':

    attrs = ['' for c in columns]
    for row in rows:
        if float(row[2]) < 0.:
            faster_queries += 1
            attrs[2] = 'style="background: #adbdff"'
        else:
            slower_queries += 1
            attrs[2] = 'style="background: #ffb0a0"'
        attrs[2] = ''
        if abs(float(row[2])) > 0.10:
            if float(row[2]) < 0.:
                faster_queries += 1
                attrs[2] = 'style="background: #adbdff"'
            else:
                slower_queries += 1
                attrs[2] = 'style="background: #ffb0a0"'

        print(tableRow(row, attrs))

@@ -256,17 +258,18 @@ if args.report == 'main':

    print(tableStart('Test times'))
    print(tableHeader(columns))

    runs = 13 # FIXME pass this as an argument
    attrs = ['' for c in columns]
    for r in rows:
        if float(r[6]) > 22:
        if float(r[6]) > 3 * runs:
            # FIXME should be 15s max -- investigate parallel_insert
            slow_average_tests += 1
            attrs[6] = 'style="background: #ffb0a0"'
        else:
            attrs[6] = ''

        if float(r[5]) > 30:
        if float(r[5]) > 4 * runs:
            slow_average_tests += 1
            attrs[5] = 'style="background: #ffb0a0"'
        else:
@@ -366,10 +369,20 @@ elif args.report == 'all-queries':

    attrs = ['' for c in columns]
    for r in rows:
        if float(r[2]) > 0.05:
            attrs[3] = 'style="background: #ffb0a0"'
        elif float(r[2]) < -0.05:
            attrs[3] = 'style="background: #adbdff"'
        rd = ast.literal_eval(r[4])
        # Note the zero-based array index, this is rd[4] in SQL.
        threshold = rd[3]
        if threshold > 0.2:
            attrs[4] = 'style="background: #ffb0a0"'
        else:
            attrs[4] = ''

        diff = float(r[2])
        if abs(diff) > threshold and threshold >= 0.05:
            if diff > 0.:
                attrs[3] = 'style="background: #ffb0a0"'
            else:
                attrs[3] = 'style="background: #adbdff"'
        else:
            attrs[3] = ''
@@ -20,9 +20,9 @@ RUN apt-get --allow-unauthenticated update -y \
#    apt-get --allow-unauthenticated install --yes --no-install-recommends \
#        pvs-studio

ENV PKG_VERSION="pvs-studio-7.04.34029.84-amd64.deb"
ENV PKG_VERSION="pvs-studio-7.07.37949.43-amd64.deb"

RUN wget -q http://files.viva64.com/beta/$PKG_VERSION
RUN wget -q http://files.viva64.com/$PKG_VERSION
RUN sudo dpkg -i $PKG_VERSION

CMD cd /repo_folder && pvs-studio-analyzer credentials $LICENCE_NAME $LICENCE_KEY -o ./licence.lic \
@@ -70,6 +70,7 @@ CMD dpkg -i package_folder/clickhouse-common-static_*.deb; \
    ln -s /usr/share/clickhouse-test/config/disks.xml /etc/clickhouse-server/config.d/; \
    ln -s /usr/share/clickhouse-test/config/secure_ports.xml /etc/clickhouse-server/config.d/; \
    ln -s /usr/share/clickhouse-test/config/clusters.xml /etc/clickhouse-server/config.d/; \
    ln -s /usr/share/clickhouse-test/config/graphite.xml /etc/clickhouse-server/config.d/; \
    ln -s /usr/share/clickhouse-test/config/server.key /etc/clickhouse-server/; \
    ln -s /usr/share/clickhouse-test/config/server.crt /etc/clickhouse-server/; \
    ln -s /usr/share/clickhouse-test/config/dhparam.pem /etc/clickhouse-server/; \
@@ -14,6 +14,11 @@ kill_clickhouse () {
            sleep 10
        fi
    done

    echo "Will try to send second kill signal for sure"
    kill `pgrep -u clickhouse` 2>/dev/null
    sleep 5
    echo "clickhouse pids" `ps aux | grep clickhouse` | ts '%Y-%m-%d %H:%M:%S'
}
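The shell function above follows up its initial shutdown attempt with a second kill for any ClickHouse processes that survived. A rough Python analogue of the same terminate-then-force pattern; the process pattern, the grace period, and the use of SIGKILL on the second pass are assumptions, not a translation of the script:

```python
import os
import signal
import subprocess
import time

def kill_stubborn(pattern='clickhouse', grace_seconds=10):
    """Ask matching processes to exit, wait, then force-kill the survivors."""
    def pids():
        out = subprocess.run(['pgrep', '-f', pattern],
                             capture_output=True, text=True).stdout
        return [int(p) for p in out.split()]

    for pid in pids():
        os.kill(pid, signal.SIGTERM)       # polite request first
    time.sleep(grace_seconds)
    for pid in pids():                     # anything still alive gets SIGKILL
        try:
            os.kill(pid, signal.SIGKILL)
        except ProcessLookupError:
            pass                           # exited between pgrep and kill
```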
start_clickhouse () {

@@ -50,6 +55,7 @@ ln -s /usr/share/clickhouse-test/config/zookeeper.xml /etc/clickhouse-server/con
    ln -s /usr/share/clickhouse-test/config/disks.xml /etc/clickhouse-server/config.d/; \
    ln -s /usr/share/clickhouse-test/config/secure_ports.xml /etc/clickhouse-server/config.d/; \
    ln -s /usr/share/clickhouse-test/config/clusters.xml /etc/clickhouse-server/config.d/; \
    ln -s /usr/share/clickhouse-test/config/graphite.xml /etc/clickhouse-server/config.d/; \
    ln -s /usr/share/clickhouse-test/config/server.key /etc/clickhouse-server/; \
    ln -s /usr/share/clickhouse-test/config/server.crt /etc/clickhouse-server/; \
    ln -s /usr/share/clickhouse-test/config/dhparam.pem /etc/clickhouse-server/; \
@@ -36,6 +36,7 @@ CMD dpkg -i package_folder/clickhouse-common-static_*.deb; \
    ln -s /usr/lib/llvm-9/bin/llvm-symbolizer /usr/bin/llvm-symbolizer; \
    echo "TSAN_OPTIONS='halt_on_error=1 history_size=7 ignore_noninstrumented_modules=1 verbosity=1'" >> /etc/environment; \
    echo "UBSAN_OPTIONS='print_stacktrace=1'" >> /etc/environment; \
    echo "ASAN_OPTIONS='malloc_context_size=10 verbosity=1 allocator_release_to_os_interval_ms=10000'" >> /etc/environment; \
    service clickhouse-server start && sleep 5 \
    && /s3downloader --dataset-names $DATASETS \
    && chmod 777 -R /var/lib/clickhouse \
@@ -135,16 +135,13 @@ When adding a new file:
   $ ln -sr en/new/file.md lang/new/file.md
   ```

- Reference the file from `toc_{en,ru,zh,ja,fa}.yaml` files with the pages index.

<a name="adding-a-new-language"/>

### Adding a New Language

1. Create a new docs subfolder named using the [ISO-639-1 language code](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes).
2. Add Markdown files with the translation, mirroring the folder structure of other languages.
3. Commit and open a pull request with the new content.
3. Commit and open a pull-request with the new content.

When everything is ready, we will add the new language to the website.

@@ -206,4 +203,4 @@ Templates:

## How to Build Documentation

You can build your documentation manually by following the instructions in [docs/tools/README.md](docs/tools/README.md). Also, our CI runs the documentation build after the `documentation` label is added to PR. You can see the results of a build in the GitHub interface. If you have no permissions to add labels, a reviewer of your PR will add it.
You can build your documentation manually by following the instructions in [docs/tools/README.md](../docs/tools/README.md). Also, our CI runs the documentation build after the `documentation` label is added to PR. You can see the results of a build in the GitHub interface. If you have no permissions to add labels, a reviewer of your PR will add it.
@@ -1,3 +1,8 @@
---
toc_title: Cloud
toc_priority: 1
---

# ClickHouse Cloud Service Providers {#clickhouse-cloud-service-providers}

!!! info "Info"
21
docs/en/commercial/support.md
Normal file
@@ -0,0 +1,21 @@
---
toc_title: Support
toc_priority: 3
---

# ClickHouse Commercial Support Service Providers {#clickhouse-commercial-support-service-providers}

!!! info "Info"
    If you have launched a ClickHouse commercial support service, feel free to [open a pull-request](https://github.com/ClickHouse/ClickHouse/edit/master/docs/en/commercial/support.md) adding it to the following list.

## Altinity {#altinity}

[Service description](https://www.altinity.com/24x7-support)

## Mafiree {#mafiree}

[Service description](http://mafiree.com/clickhouse-analytics-services.php)

## MinervaDB {#minervadb}

[Service description](https://minervadb.com/index.php/clickhouse-consulting-and-support-by-minervadb/)
@@ -1,11 +1,11 @@
---
toc_priority: 63
toc_title: Browse ClickHouse Source Code
toc_title: Browse Source Code
---

# Browse ClickHouse Source Code {#browse-clickhouse-source-code}

You can use **Woboq** online code browser available [here](https://clickhouse-test-reports.s3.yandex.net/codebrowser/html_report///ClickHouse/src/index.html). It provides code navigation and semantic highlighting, search and indexing. The code snapshot is updated daily.
You can use **Woboq** online code browser available [here](https://clickhouse.tech/codebrowser/html_report/ClickHouse/src/index.html). It provides code navigation and semantic highlighting, search and indexing. The code snapshot is updated daily.

Also, you can browse sources on [GitHub](https://github.com/ClickHouse/ClickHouse) as usual.
@@ -15,7 +15,7 @@ Tests are located in `queries` directory. There are two subdirectories: `statele

Each test can be one of two types: `.sql` and `.sh`. `.sql` test is the simple SQL script that is piped to `clickhouse-client --multiquery --testmode`. `.sh` test is a script that is run by itself.

To run all tests, use `testskhouse-test` tool. Look `--help` for the list of possible options. You can simply run all tests or run subset of tests filtered by substring in test name: `./clickhouse-test substring`.
To run all tests, use `clickhouse-test` tool. Look `--help` for the list of possible options. You can simply run all tests or run subset of tests filtered by substring in test name: `./clickhouse-test substring`.

The most simple way to invoke functional tests is to copy `clickhouse-client` to `/usr/bin/`, run `clickhouse-server` and then run `./clickhouse-test` from its own directory.

@@ -34,13 +34,13 @@ disable these groups of tests using `--no-zookeeper`, `--no-shard` and

## Known Bugs {#known-bugs}

If we know some bugs that can be easily reproduced by functional tests, we place prepared functional tests in `queries/bugs` directory. These tests will be moved to `teststests_stateless` when bugs are fixed.
If we know some bugs that can be easily reproduced by functional tests, we place prepared functional tests in `tests/queries/bugs` directory. These tests will be moved to `tests/queries/0_stateless` when bugs are fixed.

## Integration Tests {#integration-tests}

Integration tests allow to test ClickHouse in clustered configuration and ClickHouse interaction with other servers like MySQL, Postgres, MongoDB. They are useful to emulate network splits, packet drops, etc. These tests are run under Docker and create multiple containers with various software.

See `testsgration/README.md` on how to run these tests.
See `tests/integration/README.md` on how to run these tests.

Note that integration of ClickHouse with third-party drivers is not tested. Also we currently don’t have integration tests with our JDBC and ODBC drivers.

@@ -54,7 +54,7 @@ It’s not necessarily to have unit tests if the code is already covered by func

Performance tests allow to measure and compare performance of some isolated part of ClickHouse on synthetic queries. Tests are located at `tests/performance`. Each test is represented by `.xml` file with description of test case. Tests are run with `clickhouse performance-test` tool (that is embedded in `clickhouse` binary). See `--help` for invocation.

Each test run one or miltiple queries (possibly with combinations of parameters) in a loop with some conditions for stop (like “maximum execution speed is not changing in three seconds”) and measure some metrics about query performance (like “maximum execution speed”). Some tests can contain preconditions on preloaded test dataset.
Each test run one or multiple queries (possibly with combinations of parameters) in a loop with some conditions for stop (like “maximum execution speed is not changing in three seconds”) and measure some metrics about query performance (like “maximum execution speed”). Some tests can contain preconditions on preloaded test dataset.

If you want to improve performance of ClickHouse in some scenario, and if improvements can be observed on simple queries, it is highly recommended to write a performance test. It always makes sense to use `perf top` or other perf tools during your tests.

@@ -64,13 +64,13 @@ Some programs in `tests` directory are not prepared tests, but are test tools. F

You can also place pair of files `.sh` and `.reference` along with the tool to run it on some predefined input - then script result can be compared to `.reference` file. These kind of tests are not automated.

## Miscellanous Tests {#miscellanous-tests}
## Miscellaneous Tests {#miscellaneous-tests}

There are tests for external dictionaries located at `tests/external_dictionaries` and for machine learned models in `tests/external_models`. These tests are not updated and must be transferred to integration tests.

There is separate test for quorum inserts. This test run ClickHouse cluster on separate servers and emulate various failure cases: network split, packet drop (between ClickHouse nodes, between ClickHouse and ZooKeeper, between ClickHouse server and client, etc.), `kill -9`, `kill -STOP` and `kill -CONT` , like [Jepsen](https://aphyr.com/tags/Jepsen). Then the test checks that all acknowledged inserts was written and all rejected inserts was not.

Quorum test was written by separate team before ClickHouse was open-sourced. This team no longer work with ClickHouse. Test was accidentially written in Java. For these reasons, quorum test must be rewritten and moved to integration tests.
Quorum test was written by separate team before ClickHouse was open-sourced. This team no longer work with ClickHouse. Test was accidentally written in Java. For these reasons, quorum test must be rewritten and moved to integration tests.

## Manual Testing {#manual-testing}
@@ -3,7 +3,7 @@ toc_priority: 30
toc_title: MySQL
---

# Mysql {#mysql}
# MySQL {#mysql}

Allows to connect to databases on a remote MySQL server and perform `INSERT` and `SELECT` queries to exchange data between ClickHouse and MySQL.

@@ -19,7 +19,7 @@ You cannot perform the following queries:

``` sql
CREATE DATABASE [IF NOT EXISTS] db_name [ON CLUSTER cluster]
ENGINE = MySQL('host:port', 'database', 'user', 'password')
ENGINE = MySQL('host:port', ['database' | database], 'user', 'password')
```

**Engine Parameters**
@@ -9,7 +9,10 @@ The engine inherits from [MergeTree](mergetree.md#table_engines-mergetree), alte

You can use `AggregatingMergeTree` tables for incremental data aggregation, including for aggregated materialized views.

The engine processes all columns with [AggregateFunction](../../../sql_reference/data_types/aggregatefunction.md) type.
The engine processes all columns with the following types:

- [AggregateFunction](../../../sql_reference/data_types/aggregatefunction.md)
- [SimpleAggregateFunction](../../../sql_reference/data_types/simpleaggregatefunction.md)

It is appropriate to use `AggregatingMergeTree` if it reduces the number of rows by orders.
@@ -505,14 +505,14 @@ Configuration structure:
    <storage_configuration>
        <disks>
            <disk_name_1> <!-- disk name -->
                <path>/mnt/fast_ssd/clickhouse</path>
                <path>/mnt/fast_ssd/clickhouse/</path>
            </disk_name_1>
            <disk_name_2>
                <path>/mnt/hdd1/clickhouse</path>
                <path>/mnt/hdd1/clickhouse/</path>
                <keep_free_space_bytes>10485760</keep_free_space_bytes>
            </disk_name_2>
            <disk_name_3>
                <path>/mnt/hdd2/clickhouse</path>
                <path>/mnt/hdd2/clickhouse/</path>
                <keep_free_space_bytes>10485760</keep_free_space_bytes>
            </disk_name_3>
@@ -38,7 +38,7 @@ sudo apt-get update
sudo apt-get install clickhouse-client clickhouse-server
```

You can also download and install packages manually from here: https://repo.yandex.ru/clickhouse/deb/stable/main/.
You can also download and install packages manually from [here](https://repo.yandex.ru/clickhouse/deb/stable/main/).

#### Packages {#packages}

@@ -67,7 +67,7 @@ Then run these commands to install packages:
sudo yum install clickhouse-server clickhouse-client
```

You can also download and install packages manually from here: https://repo.clickhouse.tech/rpm/stable/x86\_64.
You can also download and install packages manually from [here](https://repo.clickhouse.tech/rpm/stable/x86_64).

### From Tgz Archives {#from-tgz-archives}
@@ -108,7 +108,7 @@ Syntax for creating tables is way more complicated compared to databases (see [r

1. Name of table to create.
2. Table schema, i.e. list of columns and their [data types](../sql_reference/data_types/index.md).
3. [Table engine](../engines/table_engines/index.md) and it’s settings, which determines all the details on how queries to this table will be physically executed.
3. [Table engine](../engines/table_engines/index.md) and its settings, which determines all the details on how queries to this table will be physically executed.

Yandex.Metrica is a web analytics service, and sample dataset doesn’t cover its full functionality, so there are only two tables to create:
@@ -1,5 +1,5 @@
---
toc_priority: 3
toc_priority: 0
toc_title: Overview
---
@@ -78,48 +78,6 @@ See the difference?

For example, the query “count the number of records for each advertising platform” requires reading one “advertising platform ID” column, which takes up 1 byte uncompressed. If most of the traffic was not from advertising platforms, you can expect at least 10-fold compression of this column. When using a quick compression algorithm, data decompression is possible at a speed of at least several gigabytes of uncompressed data per second. In other words, this query can be processed at a speed of approximately several billion rows per second on a single server. This speed is actually achieved in practice.

<details markdown="1">

<summary>Example</summary>

``` bash
$ clickhouse-client
ClickHouse client version 0.0.52053.
Connecting to localhost:9000.
Connected to ClickHouse server version 0.0.52053.
```

``` sql
SELECT CounterID, count() FROM hits GROUP BY CounterID ORDER BY count() DESC LIMIT 20
```

``` text
┌─CounterID─┬──count()─┐
│    114208 │ 56057344 │
│    115080 │ 51619590 │
│      3228 │ 44658301 │
│     38230 │ 42045932 │
│    145263 │ 42042158 │
│     91244 │ 38297270 │
│    154139 │ 26647572 │
│    150748 │ 24112755 │
│    242232 │ 21302571 │
│    338158 │ 13507087 │
│     62180 │ 12229491 │
│     82264 │ 12187441 │
│    232261 │ 12148031 │
│    146272 │ 11438516 │
│    168777 │ 11403636 │
│   4120072 │ 11227824 │
│  10938808 │ 10519739 │
│     74088 │  9047015 │
│    115079 │  8837972 │
│    337234 │  8205961 │
└───────────┴──────────┘
```

</details>

### CPU {#cpu}

Since executing a query requires processing a large number of rows, it helps to dispatch all operations for entire vectors instead of for separate rows, or to implement the query engine so that there is almost no dispatching cost. If you don’t do this, with any half-decent disk subsystem, the query interpreter inevitably stalls the CPU. It makes sense to both store data in columns and process it, when possible, by columns.
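The CPU paragraph above is the core of the column-orientation argument: one dispatch per vector instead of one per row. A toy, self-contained demonstration of the gap, using NumPy as a stand-in for a vectorized engine (the data is synthetic; ClickHouse itself does this in C++):

```python
import time

import numpy as np

platform_ids = np.random.randint(0, 256, size=10_000_000, dtype=np.uint8)

# Row-at-a-time: one interpreter dispatch per value.
started = time.perf_counter()
hits = 0
for v in platform_ids[:1_000_000]:        # a tenth of the data, still slower
    hits += int(v == 42)
row_seconds = time.perf_counter() - started

# Column-at-a-time: one dispatch for the whole vector.
started = time.perf_counter()
vec_hits = int((platform_ids == 42).sum())
col_seconds = time.perf_counter() - started

print(row_seconds, col_seconds)           # the vectorized pass wins by orders of magnitude
```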
@@ -24,7 +24,10 @@ toc_title: Integrations
        - [ClickHouseMigrator](https://github.com/zlzforever/ClickHouseMigrator)
- Message queues
    - [Kafka](https://kafka.apache.org)
        - [clickhouse\_sinker](https://github.com/housepower/clickhouse_sinker) (uses [Go client](https://github.com/kshvakov/clickhouse/))
        - [clickhouse\_sinker](https://github.com/housepower/clickhouse_sinker) (uses [Go client](https://github.com/ClickHouse/clickhouse-go/))
- Stream processing
    - [Flink](https://flink.apache.org)
        - [flink-clickhouse-sink](https://github.com/ivi-ru/flink-clickhouse-sink)
- Object storages
    - [S3](https://en.wikipedia.org/wiki/Amazon_S3)
        - [clickhouse-backup](https://github.com/AlexAkulov/clickhouse-backup)

@@ -72,6 +75,9 @@ toc_title: Integrations
        - [sqlalchemy-clickhouse](https://github.com/cloudflare/sqlalchemy-clickhouse) (uses [infi.clickhouse\_orm](https://github.com/Infinidat/infi.clickhouse_orm))
    - [pandas](https://pandas.pydata.org)
        - [pandahouse](https://github.com/kszucs/pandahouse)
- PHP
    - [Doctrine](https://www.doctrine-project.org/)
        - [dbal-clickhouse](https://packagist.org/packages/friendsofdoctrine/dbal-clickhouse)
- R
    - [dplyr](https://db.rstudio.com/dplyr/)
        - [RClickHouse](https://github.com/IMSMWU/RClickHouse) (uses [clickhouse-cpp](https://github.com/artpaul/clickhouse-cpp))