Merge branch 'master' into database_atomic
commit b29bddac12

.gitignore (vendored): 4 changed lines
@@ -16,8 +16,8 @@
/docs/publish
/docs/edit
/docs/website
/docs/venv/
/docs/tools/venv/
/docs/venv
/docs/tools/venv
/docs/tools/translate/venv
/docs/tools/translate/output.md
/docs/en/single.md

.gitmodules (vendored): 2 changed lines
@@ -13,7 +13,7 @@
url = https://github.com/edenhill/librdkafka.git
[submodule "contrib/cctz"]
path = contrib/cctz
url = https://github.com/google/cctz.git
url = https://github.com/ClickHouse-Extras/cctz.git
[submodule "contrib/zlib-ng"]
path = contrib/zlib-ng
url = https://github.com/ClickHouse-Extras/zlib-ng.git

CHANGELOG.md: 124 changed lines
@@ -1,5 +1,80 @@
## ClickHouse release v20.3

### ClickHouse release v20.3.7.46, 2020-04-17

#### Bug Fix

* Fix `Logical error: CROSS JOIN has expressions` error for queries with a mix of comma and named joins. [#10311](https://github.com/ClickHouse/ClickHouse/pull/10311) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix queries with `max_bytes_before_external_group_by`. [#10302](https://github.com/ClickHouse/ClickHouse/pull/10302) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix move-to-prewhere optimization in presence of `arrayJoin` functions (in certain cases). This fixes [#10092](https://github.com/ClickHouse/ClickHouse/issues/10092). [#10195](https://github.com/ClickHouse/ClickHouse/pull/10195) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add the ability to relax the restriction on non-deterministic functions usage in mutations with the `allow_nondeterministic_mutations` setting (see the sketch below). [#10186](https://github.com/ClickHouse/ClickHouse/pull/10186) ([filimonov](https://github.com/filimonov)).
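
The entry above names the setting without showing it in context, so here is a minimal sketch of how it might be used. The table and column names are hypothetical; only the setting name and the `ALTER TABLE ... UPDATE` mutation syntax come from ClickHouse itself.

```sql
-- Mutations that call non-deterministic functions such as now() are normally rejected.
-- The new setting relaxes that restriction for the current session (assumed usage):
SET allow_nondeterministic_mutations = 1;

ALTER TABLE hypothetical_events
    UPDATE updated_at = now()          -- non-deterministic function inside a mutation
    WHERE updated_at = toDateTime(0);
```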

### ClickHouse release v20.3.6.40, 2020-04-16

#### New Feature

* Added function `isConstant`. This function checks whether its argument is a constant expression and returns 1 or 0. It is intended for development, debugging and demonstration purposes (a short example follows). [#10198](https://github.com/ClickHouse/ClickHouse/pull/10198) ([alexey-milovidov](https://github.com/alexey-milovidov)).
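
An illustrative query for `isConstant`; the results given in the comments are what the description above implies, not output copied from the PR.

```sql
SELECT
    isConstant(1 + 2)  AS const_expr,   -- expected 1: the argument folds to a constant
    isConstant(number) AS column_expr   -- expected 0: the argument depends on column values
FROM system.numbers
LIMIT 1;
```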

#### Bug Fix

* Fix error `Pipeline stuck` with `max_rows_to_group_by` and `group_by_overflow_mode = 'break'`. [#10279](https://github.com/ClickHouse/ClickHouse/pull/10279) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix rare possible exception `Cannot drain connections: cancel first`. [#10239](https://github.com/ClickHouse/ClickHouse/pull/10239) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed bug where ClickHouse would throw "Unknown function lambda." error message when a user tries to run `ALTER UPDATE/DELETE` on tables with `ENGINE = Replicated*`. The check for nondeterministic functions now handles lambda expressions correctly. [#10237](https://github.com/ClickHouse/ClickHouse/pull/10237) ([Alexander Kazakov](https://github.com/Akazz)).
* Fixed `generateRandom` function for Date type. This fixes [#9973](https://github.com/ClickHouse/ClickHouse/issues/9973). Fix an edge case when dates with year 2106 are inserted to MergeTree tables with old-style partitioning but partitions are named with year 1970. [#10218](https://github.com/ClickHouse/ClickHouse/pull/10218) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Convert types if the table definition of a View does not correspond to the SELECT query. This fixes [#10180](https://github.com/ClickHouse/ClickHouse/issues/10180) and [#10022](https://github.com/ClickHouse/ClickHouse/issues/10022). [#10217](https://github.com/ClickHouse/ClickHouse/pull/10217) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix `parseDateTimeBestEffort` for strings in RFC-2822 when day of week is Tuesday or Thursday. This fixes [#10082](https://github.com/ClickHouse/ClickHouse/issues/10082). [#10214](https://github.com/ClickHouse/ClickHouse/pull/10214) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix column names of constants inside JOIN that may clash with names of constants outside of JOIN. [#10207](https://github.com/ClickHouse/ClickHouse/pull/10207) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix possible infinite query execution when the query actually should stop on LIMIT, while reading from an infinite source like `system.numbers` or `system.zeros`. [#10206](https://github.com/ClickHouse/ClickHouse/pull/10206) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix using the current database for access checking when the database isn't specified. [#10192](https://github.com/ClickHouse/ClickHouse/pull/10192) ([Vitaly Baranov](https://github.com/vitlibar)).
* Convert blocks if structure does not match on INSERT into Distributed(). [#10135](https://github.com/ClickHouse/ClickHouse/pull/10135) ([Azat Khuzhin](https://github.com/azat)).
* Fix possible incorrect result for extremes in processors pipeline. [#10131](https://github.com/ClickHouse/ClickHouse/pull/10131) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix some kinds of alters with compact parts. [#10130](https://github.com/ClickHouse/ClickHouse/pull/10130) ([Anton Popov](https://github.com/CurtizJ)).
* Fix incorrect `index_granularity_bytes` check while creating new replica. Fixes [#10098](https://github.com/ClickHouse/ClickHouse/issues/10098). [#10121](https://github.com/ClickHouse/ClickHouse/pull/10121) ([alesapin](https://github.com/alesapin)).
* Fix SIGSEGV on INSERT into Distributed table when its structure differs from the underlying tables. [#10105](https://github.com/ClickHouse/ClickHouse/pull/10105) ([Azat Khuzhin](https://github.com/azat)).
* Fix possible loss of rows for queries with `JOIN` and `UNION ALL`. Fixes [#9826](https://github.com/ClickHouse/ClickHouse/issues/9826), [#10113](https://github.com/ClickHouse/ClickHouse/issues/10113). [#10099](https://github.com/ClickHouse/ClickHouse/pull/10099) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed replicated tables startup when updating from an old ClickHouse version where `/table/replicas/replica_name/metadata` node doesn't exist. Fixes [#10037](https://github.com/ClickHouse/ClickHouse/issues/10037). [#10095](https://github.com/ClickHouse/ClickHouse/pull/10095) ([alesapin](https://github.com/alesapin)).
* Add some argument checks and support identifier arguments for MySQL Database Engine. [#10077](https://github.com/ClickHouse/ClickHouse/pull/10077) ([Winter Zhang](https://github.com/zhang2014)).
* Fix bug in ClickHouse dictionary source from localhost ClickHouse server. The bug may lead to memory corruption if types in dictionary and source are not compatible. [#10071](https://github.com/ClickHouse/ClickHouse/pull/10071) ([alesapin](https://github.com/alesapin)).
* Fix bug in `CHECK TABLE` query when the table contains skip indices. [#10068](https://github.com/ClickHouse/ClickHouse/pull/10068) ([alesapin](https://github.com/alesapin)).
* Fix error `Cannot clone block with columns because block has 0 columns ... While executing GroupingAggregatedTransform`. It happened when the setting `distributed_aggregation_memory_efficient` was enabled, and the distributed query read aggregated data with different levels from different shards (mixed single- and two-level aggregation). [#10063](https://github.com/ClickHouse/ClickHouse/pull/10063) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix a segmentation fault that could occur in GROUP BY over string keys containing trailing zero bytes ([#8636](https://github.com/ClickHouse/ClickHouse/issues/8636), [#8925](https://github.com/ClickHouse/ClickHouse/issues/8925)). [#10025](https://github.com/ClickHouse/ClickHouse/pull/10025) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Fix parallel distributed INSERT SELECT for remote table. This PR fixes the solution provided in [#9759](https://github.com/ClickHouse/ClickHouse/pull/9759). [#9999](https://github.com/ClickHouse/ClickHouse/pull/9999) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix the number of threads used for remote query execution (performance regression, since 20.3). This happened when a query from a `Distributed` table was executed simultaneously on local and remote shards. Fixes [#9965](https://github.com/ClickHouse/ClickHouse/issues/9965). [#9971](https://github.com/ClickHouse/ClickHouse/pull/9971) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix bug in which the necessary tables weren't retrieved at one of the processing stages of queries to some databases. Fixes [#9699](https://github.com/ClickHouse/ClickHouse/issues/9699). [#9949](https://github.com/ClickHouse/ClickHouse/pull/9949) ([achulkov2](https://github.com/achulkov2)).
* Fix 'Not found column in block' error when `JOIN` appears with `TOTALS`. Fixes [#9839](https://github.com/ClickHouse/ClickHouse/issues/9839). [#9939](https://github.com/ClickHouse/ClickHouse/pull/9939) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix a bug with `ON CLUSTER` DDL queries freezing on server startup. [#9927](https://github.com/ClickHouse/ClickHouse/pull/9927) ([Gagan Arneja](https://github.com/garneja)).
* Fix parsing multiple hosts set in the CREATE USER command, e.g. `CREATE USER user6 HOST NAME REGEXP 'lo.?*host', NAME REGEXP 'lo*host'`. [#9924](https://github.com/ClickHouse/ClickHouse/pull/9924) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix `TRUNCATE` for Join table engine ([#9917](https://github.com/ClickHouse/ClickHouse/issues/9917)). [#9920](https://github.com/ClickHouse/ClickHouse/pull/9920) ([Amos Bird](https://github.com/amosbird)).
* Fix "scalar doesn't exist" error in ALTERs ([#9878](https://github.com/ClickHouse/ClickHouse/issues/9878)). [#9904](https://github.com/ClickHouse/ClickHouse/pull/9904) ([Amos Bird](https://github.com/amosbird)).
* Fix race condition between drop and optimize in `ReplicatedMergeTree`. [#9901](https://github.com/ClickHouse/ClickHouse/pull/9901) ([alesapin](https://github.com/alesapin)).
* Fix error with qualified names in `distributed_product_mode='local'`. Fixes [#4756](https://github.com/ClickHouse/ClickHouse/issues/4756). [#9891](https://github.com/ClickHouse/ClickHouse/pull/9891) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix calculating grants for introspection functions from the setting 'allow_introspection_functions'. [#9840](https://github.com/ClickHouse/ClickHouse/pull/9840) ([Vitaly Baranov](https://github.com/vitlibar)).

#### Build/Testing/Packaging Improvement

* Fix integration test `test_settings_constraints`. [#9962](https://github.com/ClickHouse/ClickHouse/pull/9962) ([Vitaly Baranov](https://github.com/vitlibar)).
* Removed dependency on `clock_getres`. [#9833](https://github.com/ClickHouse/ClickHouse/pull/9833) ([alexey-milovidov](https://github.com/alexey-milovidov)).

### ClickHouse release v20.3.5.21, 2020-03-27

#### Bug Fix

* Fix 'Different expressions with the same alias' error when query has PREWHERE and WHERE on distributed table and `SET distributed_product_mode = 'local'`. [#9871](https://github.com/ClickHouse/ClickHouse/pull/9871) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix excessive memory consumption of mutations for tables with a composite primary key. This fixes [#9850](https://github.com/ClickHouse/ClickHouse/issues/9850). [#9860](https://github.com/ClickHouse/ClickHouse/pull/9860) ([alesapin](https://github.com/alesapin)).
* For INSERT queries, the shard now clamps the settings received from the initiator to the shard's constraints instead of throwing an exception. This fix allows sending INSERT queries to a shard with different constraints. This change improves fix [#9447](https://github.com/ClickHouse/ClickHouse/issues/9447). [#9852](https://github.com/ClickHouse/ClickHouse/pull/9852) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix 'COMMA to CROSS JOIN rewriter is not enabled or cannot rewrite query' error in case of subqueries with COMMA JOIN out of tables lists (i.e. in WHERE). Fixes [#9782](https://github.com/ClickHouse/ClickHouse/issues/9782). [#9830](https://github.com/ClickHouse/ClickHouse/pull/9830) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix possible exception `Got 0 in totals chunk, expected 1` on client. It happened for queries with `JOIN` in case the right joined table had zero rows. Example: `select * from system.one t1 join system.one t2 on t1.dummy = t2.dummy limit 0 FORMAT TabSeparated;`. Fixes [#9777](https://github.com/ClickHouse/ClickHouse/issues/9777). [#9823](https://github.com/ClickHouse/ClickHouse/pull/9823) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix SIGSEGV with optimize_skip_unused_shards when type cannot be converted. [#9804](https://github.com/ClickHouse/ClickHouse/pull/9804) ([Azat Khuzhin](https://github.com/azat)).
* Fix broken `ALTER TABLE DELETE COLUMN` query for compact parts. [#9779](https://github.com/ClickHouse/ClickHouse/pull/9779) ([alesapin](https://github.com/alesapin)).
* Fix max_distributed_connections (w/ and w/o Processors). [#9673](https://github.com/ClickHouse/ClickHouse/pull/9673) ([Azat Khuzhin](https://github.com/azat)).
* Fixed a few cases when the timezone of the function argument wasn't used properly. [#9574](https://github.com/ClickHouse/ClickHouse/pull/9574) ([Vasily Nemkov](https://github.com/Enmk)).

#### Improvement

* Remove the ORDER BY stage from mutations because we read from a single ordered part in a single thread. Also add a check that the rows in a mutation are ordered by sorting key and that this order is not violated. [#9886](https://github.com/ClickHouse/ClickHouse/pull/9886) ([alesapin](https://github.com/alesapin)).


### ClickHouse release v20.3.4.10, 2020-03-20

#### Bug Fix

@@ -255,6 +330,55 @@

## ClickHouse release v20.1

### ClickHouse release v20.1.10.70, 2020-04-17

#### Bug Fix

* Fix rare possible exception `Cannot drain connections: cancel first`. [#10239](https://github.com/ClickHouse/ClickHouse/pull/10239) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed bug where ClickHouse would throw `'Unknown function lambda.'` error message when a user tries to run `ALTER UPDATE/DELETE` on tables with `ENGINE = Replicated*`. The check for nondeterministic functions now handles lambda expressions correctly. [#10237](https://github.com/ClickHouse/ClickHouse/pull/10237) ([Alexander Kazakov](https://github.com/Akazz)).
* Fix `parseDateTimeBestEffort` for strings in RFC-2822 when day of week is Tuesday or Thursday. This fixes [#10082](https://github.com/ClickHouse/ClickHouse/issues/10082). [#10214](https://github.com/ClickHouse/ClickHouse/pull/10214) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix column names of constants inside `JOIN` that may clash with names of constants outside of `JOIN`. [#10207](https://github.com/ClickHouse/ClickHouse/pull/10207) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix possible infinite query execution when the query actually should stop on LIMIT, while reading from an infinite source like `system.numbers` or `system.zeros`. [#10206](https://github.com/ClickHouse/ClickHouse/pull/10206) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix move-to-prewhere optimization in presence of `arrayJoin` functions (in certain cases). This fixes [#10092](https://github.com/ClickHouse/ClickHouse/issues/10092). [#10195](https://github.com/ClickHouse/ClickHouse/pull/10195) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add the ability to relax the restriction on non-deterministic functions usage in mutations with the `allow_nondeterministic_mutations` setting. [#10186](https://github.com/ClickHouse/ClickHouse/pull/10186) ([filimonov](https://github.com/filimonov)).
* Convert blocks if structure does not match on `INSERT` into table with `Distributed` engine. [#10135](https://github.com/ClickHouse/ClickHouse/pull/10135) ([Azat Khuzhin](https://github.com/azat)).
* Fix `SIGSEGV` on `INSERT` into `Distributed` table when its structure differs from the underlying tables. [#10105](https://github.com/ClickHouse/ClickHouse/pull/10105) ([Azat Khuzhin](https://github.com/azat)).
* Fix possible loss of rows for queries with `JOIN` and `UNION ALL`. Fixes [#9826](https://github.com/ClickHouse/ClickHouse/issues/9826), [#10113](https://github.com/ClickHouse/ClickHouse/issues/10113). [#10099](https://github.com/ClickHouse/ClickHouse/pull/10099) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Add argument checks and support identifier arguments for MySQL Database Engine. [#10077](https://github.com/ClickHouse/ClickHouse/pull/10077) ([Winter Zhang](https://github.com/zhang2014)).
* Fix bug in ClickHouse dictionary source from localhost ClickHouse server. The bug may lead to memory corruption if types in dictionary and source are not compatible. [#10071](https://github.com/ClickHouse/ClickHouse/pull/10071) ([alesapin](https://github.com/alesapin)).
* Fix error `Cannot clone block with columns because block has 0 columns ... While executing GroupingAggregatedTransform`. It happened when the setting `distributed_aggregation_memory_efficient` was enabled, and the distributed query read aggregated data with different levels from different shards (mixed single- and two-level aggregation). [#10063](https://github.com/ClickHouse/ClickHouse/pull/10063) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix a segmentation fault that could occur in `GROUP BY` over string keys containing trailing zero bytes ([#8636](https://github.com/ClickHouse/ClickHouse/issues/8636), [#8925](https://github.com/ClickHouse/ClickHouse/issues/8925)). [#10025](https://github.com/ClickHouse/ClickHouse/pull/10025) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Fix bug in which the necessary tables weren't retrieved at one of the processing stages of queries to some databases. Fixes [#9699](https://github.com/ClickHouse/ClickHouse/issues/9699). [#9949](https://github.com/ClickHouse/ClickHouse/pull/9949) ([achulkov2](https://github.com/achulkov2)).
* Fix `'Not found column in block'` error when `JOIN` appears with `TOTALS`. Fixes [#9839](https://github.com/ClickHouse/ClickHouse/issues/9839). [#9939](https://github.com/ClickHouse/ClickHouse/pull/9939) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix a bug with `ON CLUSTER` DDL queries freezing on server startup. [#9927](https://github.com/ClickHouse/ClickHouse/pull/9927) ([Gagan Arneja](https://github.com/garneja)).
* Fix `TRUNCATE` for Join table engine ([#9917](https://github.com/ClickHouse/ClickHouse/issues/9917)). [#9920](https://github.com/ClickHouse/ClickHouse/pull/9920) ([Amos Bird](https://github.com/amosbird)).
* Fix `'scalar doesn't exist'` error in ALTER queries ([#9878](https://github.com/ClickHouse/ClickHouse/issues/9878)). [#9904](https://github.com/ClickHouse/ClickHouse/pull/9904) ([Amos Bird](https://github.com/amosbird)).
* Fix race condition between drop and optimize in `ReplicatedMergeTree`. [#9901](https://github.com/ClickHouse/ClickHouse/pull/9901) ([alesapin](https://github.com/alesapin)).
* Fixed `DeleteOnDestroy` logic in `ATTACH PART` which could lead to automatic removal of the attached part, and added a few tests. [#9410](https://github.com/ClickHouse/ClickHouse/pull/9410) ([Vladimir Chebotarev](https://github.com/excitoon)).

#### Build/Testing/Packaging Improvement

* Fix unit test `collapsing_sorted_stream`. [#9367](https://github.com/ClickHouse/ClickHouse/pull/9367) ([Deleted user](https://github.com/ghost)).

### ClickHouse release v20.1.9.54, 2020-03-28

#### Bug Fix

* Fix `'Different expressions with the same alias'` error when query has `PREWHERE` and `WHERE` on distributed table and `SET distributed_product_mode = 'local'`. [#9871](https://github.com/ClickHouse/ClickHouse/pull/9871) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix excessive memory consumption of mutations for tables with a composite primary key. This fixes [#9850](https://github.com/ClickHouse/ClickHouse/issues/9850). [#9860](https://github.com/ClickHouse/ClickHouse/pull/9860) ([alesapin](https://github.com/alesapin)).
* For INSERT queries, the shard now clamps the settings received from the initiator to the shard's constraints instead of throwing an exception. This fix allows sending `INSERT` queries to a shard with different constraints. This change improves fix [#9447](https://github.com/ClickHouse/ClickHouse/issues/9447). [#9852](https://github.com/ClickHouse/ClickHouse/pull/9852) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix possible exception `Got 0 in totals chunk, expected 1` on client. It happened for queries with `JOIN` in case the right joined table had zero rows. Example: `select * from system.one t1 join system.one t2 on t1.dummy = t2.dummy limit 0 FORMAT TabSeparated;`. Fixes [#9777](https://github.com/ClickHouse/ClickHouse/issues/9777). [#9823](https://github.com/ClickHouse/ClickHouse/pull/9823) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix `SIGSEGV` with `optimize_skip_unused_shards` when type cannot be converted. [#9804](https://github.com/ClickHouse/ClickHouse/pull/9804) ([Azat Khuzhin](https://github.com/azat)).
* Fixed a few cases when the timezone of the function argument wasn't used properly. [#9574](https://github.com/ClickHouse/ClickHouse/pull/9574) ([Vasily Nemkov](https://github.com/Enmk)).

#### Improvement

* Remove the `ORDER BY` stage from mutations because we read from a single ordered part in a single thread. Also add a check that the rows in a mutation are ordered by sorting key and that this order is not violated. [#9886](https://github.com/ClickHouse/ClickHouse/pull/9886) ([alesapin](https://github.com/alesapin)).

#### Build/Testing/Packaging Improvement

* Clean up duplicated linker flags. Make sure the linker won't look up an unexpected symbol. [#9433](https://github.com/ClickHouse/ClickHouse/pull/9433) ([Amos Bird](https://github.com/amosbird)).

### ClickHouse release v20.1.8.41, 2020-03-20

#### Bug Fix

@@ -11,10 +11,11 @@ ClickHouse is an open-source column-oriented database management system that all
* [Slack](https://join.slack.com/t/clickhousedb/shared_invite/zt-d2zxkf9e-XyxDa_ucfPxzuH4SJIm~Ng) and [Telegram](https://telegram.me/clickhouse_en) allow chatting with ClickHouse users in real time.
* [Blog](https://clickhouse.yandex/blog/en/) contains various ClickHouse-related articles, as well as announcements and reports about events.
* [Contacts](https://clickhouse.tech/#contacts) can help you get your questions answered if there are any.
* You can also [fill this form](https://forms.yandex.com/surveys/meet-yandex-clickhouse-team/) to meet the Yandex ClickHouse team in person.
* You can also [fill this form](https://clickhouse.tech/#meet) to meet the Yandex ClickHouse team in person.

## Upcoming Events

* [ClickHouse Monitoring Round Table (online in English)](https://www.eventbrite.com/e/clickhouse-april-virtual-meetup-tickets-102272923066) on April 15, 2020.
* [ClickHouse Online Meetup West (in English)](https://www.eventbrite.com/e/clickhouse-online-meetup-registration-102886791162) on April 24, 2020.
* [ClickHouse Online Meetup East (in English)](https://www.eventbrite.com/e/clickhouse-online-meetup-east-registration-102989325846) on April 28, 2020.
* [ClickHouse Workshop in Novosibirsk](https://2020.codefest.ru/lecture/1628) on a TBD date.
* [Yandex C++ Open-Source Sprints in Moscow](https://events.yandex.ru/events/otkrytyj-kod-v-yandek-28-03-2020) on a TBD date.

@@ -3,8 +3,10 @@ if (USE_CLANG_TIDY)
endif ()

add_subdirectory (common)
add_subdirectory (loggers)
add_subdirectory (daemon)
add_subdirectory (loggers)
add_subdirectory (pcg-random)
add_subdirectory (widechar_width)

if (USE_MYSQL)
add_subdirectory (mysqlxx)

@@ -11,6 +11,10 @@ using Int16 = int16_t;
using Int32 = int32_t;
using Int64 = int64_t;

#if __cplusplus <= 201703L
using char8_t = unsigned char;
#endif

using UInt8 = char8_t;
using UInt16 = uint16_t;
using UInt32 = uint32_t;

@@ -1,12 +1,47 @@
LIBRARY()

ADDINCL(
    GLOBAL clickhouse/base
    contrib/libs/cctz/include
)

CFLAGS (GLOBAL -DARCADIA_BUILD)

IF (OS_DARWIN)
    CFLAGS (GLOBAL -DOS_DARWIN)
ELSEIF (OS_FREEBSD)
    CFLAGS (GLOBAL -DOS_FREEBSD)
ELSEIF (OS_LINUX)
    CFLAGS (GLOBAL -DOS_LINUX)
ENDIF ()

PEERDIR(
    contrib/libs/cctz/src
    contrib/libs/cxxsupp/libcxx-filesystem
    contrib/libs/poco/Net
    contrib/libs/poco/Util
    contrib/restricted/boost
    contrib/restricted/cityhash-1.0.2
)

SRCS(
    argsToConfig.cpp
    coverage.cpp
    DateLUT.cpp
    DateLUTImpl.cpp
    demangle.cpp
    getFQDNOrHostName.cpp
    getMemoryAmount.cpp
    getThreadId.cpp
    JSON.cpp
    LineReader.cpp
    mremap.cpp
    phdr_cache.cpp
    preciseExp10.c
    setTerminalEcho.cpp
    shift10.cpp
    sleep.cpp
    terminalColors.cpp
)

END()

@@ -50,11 +50,13 @@
#include <Common/getMultipleKeysFromConfig.h>
#include <Common/ClickHouseRevision.h>
#include <Common/Config/ConfigProcessor.h>
#include <Common/config_version.h>

#ifdef __APPLE__
// ucontext is not available without _XOPEN_SOURCE
#define _XOPEN_SOURCE 700
#if !defined(ARCADIA_BUILD)
#   include <Common/config_version.h>
#endif

#if defined(OS_DARWIN)
#   define _XOPEN_SOURCE 700 // ucontext is not available without _XOPEN_SOURCE
#endif
#include <ucontext.h>

@@ -410,7 +412,7 @@ std::string BaseDaemon::getDefaultCorePath() const

void BaseDaemon::closeFDs()
{
#if defined(__FreeBSD__) || (defined(__APPLE__) && defined(__MACH__))
#if defined(OS_FREEBSD) || defined(OS_DARWIN)
    Poco::File proc_path{"/dev/fd"};
#else
    Poco::File proc_path{"/proc/self/fd"};
@@ -430,7 +432,7 @@ void BaseDaemon::closeFDs()
    else
    {
        int max_fd = -1;
#ifdef _SC_OPEN_MAX
#if defined(_SC_OPEN_MAX)
        max_fd = sysconf(_SC_OPEN_MAX);
        if (max_fd == -1)
#endif
@@ -448,7 +450,7 @@ namespace
/// the maximum is 1000, and chromium uses 300 for its tab processes. Ignore
/// whatever errors that occur, because it's just a debugging aid and we don't
/// care if it breaks.
#if defined(__linux__) && !defined(NDEBUG)
#if defined(OS_LINUX) && !defined(NDEBUG)
void debugIncreaseOOMScore()
{
    const std::string new_score = "555";

base/daemon/ya.make (new file): 14 lines
@@ -0,0 +1,14 @@
LIBRARY()

NO_COMPILER_WARNINGS()

PEERDIR(
    clickhouse/src/Common
)

SRCS(
    BaseDaemon.cpp
    GraphiteWriter.cpp
)

END()

@@ -75,7 +75,11 @@ void OwnPatternFormatter::formatExtended(const DB::ExtendedLogMessage & msg_ext,
    if (color)
        writeCString(resetColor(), wb);
    writeCString("> ", wb);
    if (color)
        writeString(setColor(std::hash<std::string>()(msg.getSource())), wb);
    DB::writeString(msg.getSource(), wb);
    if (color)
        writeCString(resetColor(), wb);
    writeCString(": ", wb);
    DB::writeString(msg.getText(), wb);
}

base/loggers/ya.make (new file): 15 lines
@@ -0,0 +1,15 @@
LIBRARY()

PEERDIR(
    clickhouse/src/Common
)

SRCS(
    ExtendedLogChannel.cpp
    Loggers.cpp
    OwnFormattingChannel.cpp
    OwnPatternFormatter.cpp
    OwnSplitChannel.cpp
)

END()

base/pcg-random/CMakeLists.txt (new file): 2 lines
@@ -0,0 +1,2 @@
add_library(pcg_random INTERFACE)
target_include_directories(pcg_random INTERFACE .)

@@ -292,7 +292,7 @@ inline itype rotl(itype value, bitcount_t rot)
{
    constexpr bitcount_t bits = sizeof(itype) * 8;
    constexpr bitcount_t mask = bits - 1;
#if PCG_USE_ZEROCHECK_ROTATE_IDIOM
#if defined(PCG_USE_ZEROCHECK_ROTATE_IDIOM)
    return rot ? (value << rot) | (value >> (bits - rot)) : value;
#else
    return (value << rot) | (value >> ((- rot) & mask));
@@ -304,7 +304,7 @@ inline itype rotr(itype value, bitcount_t rot)
{
    constexpr bitcount_t bits = sizeof(itype) * 8;
    constexpr bitcount_t mask = bits - 1;
#if PCG_USE_ZEROCHECK_ROTATE_IDIOM
#if defined(PCG_USE_ZEROCHECK_ROTATE_IDIOM)
    return rot ? (value >> rot) | (value << (bits - rot)) : value;
#else
    return (value >> rot) | (value << ((- rot) & mask));
@@ -318,7 +318,7 @@ inline itype rotr(itype value, bitcount_t rot)
 *
 * These overloads will be preferred over the general template code above.
 */
#if PCG_USE_INLINE_ASM && __GNUC__ && (__x86_64__ || __i386__)
#if defined(PCG_USE_INLINE_ASM) && __GNUC__ && (__x86_64__ || __i386__)

inline uint8_t rotr(uint8_t value, bitcount_t rot)
{
@@ -600,7 +600,7 @@ std::ostream& operator<<(std::ostream& out, printable_typename<T>) {
#ifdef __GNUC__
    int status;
    char* pretty_name =
        abi::__cxa_demangle(implementation_typename, NULL, NULL, &status);
        abi::__cxa_demangle(implementation_typename, nullptr, nullptr, &status);
    if (status == 0)
        out << pretty_name;
    free(static_cast<void*>(pretty_name));

base/pcg-random/ya.make (new file): 5 lines
@@ -0,0 +1,5 @@
LIBRARY()

ADDINCL (GLOBAL clickhouse/base/pcg-random)

END()

base/widechar_width/ya.make (new file): 9 lines
@@ -0,0 +1,9 @@
LIBRARY()

ADDINCL(GLOBAL clickhouse/base/widechar_width)

SRCS(
    widechar_width.cpp
)

END()

@@ -1,3 +1,7 @@
RECURSE(
    common
    daemon
    loggers
    pcg-random
    widechar_width
)

@@ -4,7 +4,11 @@ if (NOT COMPILER_CLANG)
message (FATAL_ERROR "FreeBSD build is supported only for Clang")
endif ()

execute_process (COMMAND ${CMAKE_CXX_COMPILER} --print-file-name=libclang_rt.builtins-${CMAKE_SYSTEM_PROCESSOR}.a OUTPUT_VARIABLE BUILTINS_LIBRARY OUTPUT_STRIP_TRAILING_WHITESPACE)
if (${CMAKE_SYSTEM_PROCESSOR} STREQUAL "amd64")
    execute_process (COMMAND ${CMAKE_CXX_COMPILER} --print-file-name=libclang_rt.builtins-x86_64.a OUTPUT_VARIABLE BUILTINS_LIBRARY OUTPUT_STRIP_TRAILING_WHITESPACE)
else ()
    execute_process (COMMAND ${CMAKE_CXX_COMPILER} --print-file-name=libclang_rt.builtins-${CMAKE_SYSTEM_PROCESSOR}.a OUTPUT_VARIABLE BUILTINS_LIBRARY OUTPUT_STRIP_TRAILING_WHITESPACE)
endif ()

set (DEFAULT_LIBS "${DEFAULT_LIBS} ${BUILTINS_LIBRARY} ${COVERAGE_OPTION} -lc -lm -lrt -lpthread")

@@ -2,4 +2,3 @@ set(DIVIDE_INCLUDE_DIR ${ClickHouse_SOURCE_DIR}/contrib/libdivide)
set(DBMS_INCLUDE_DIR ${ClickHouse_SOURCE_DIR}/src ${ClickHouse_BINARY_DIR}/src)
set(DOUBLE_CONVERSION_CONTRIB_INCLUDE_DIR ${ClickHouse_SOURCE_DIR}/contrib/double-conversion)
set(METROHASH_CONTRIB_INCLUDE_DIR ${ClickHouse_SOURCE_DIR}/contrib/libmetrohash/src)
set(PCG_RANDOM_INCLUDE_DIR ${ClickHouse_SOURCE_DIR}/contrib/libpcg-random/include)

@@ -6,18 +6,18 @@ endif ()

if (COMPILER_GCC)
    # Require minimum version of gcc
    set (GCC_MINIMUM_VERSION 8)
    set (GCC_MINIMUM_VERSION 9)
    if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS ${GCC_MINIMUM_VERSION} AND NOT CMAKE_VERSION VERSION_LESS 2.8.9)
        message (FATAL_ERROR "GCC version must be at least ${GCC_MINIMUM_VERSION}. For example, if GCC ${GCC_MINIMUM_VERSION} is available under gcc-${GCC_MINIMUM_VERSION}, g++-${GCC_MINIMUM_VERSION} names, do the following: export CC=gcc-${GCC_MINIMUM_VERSION} CXX=g++-${GCC_MINIMUM_VERSION}; rm -rf CMakeCache.txt CMakeFiles; and re run cmake or ./release.")
    endif ()
elseif (COMPILER_CLANG)
    # Require minimum version of clang
    set (CLANG_MINIMUM_VERSION 7)
    set (CLANG_MINIMUM_VERSION 8)
    if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS ${CLANG_MINIMUM_VERSION})
        message (FATAL_ERROR "Clang version must be at least ${CLANG_MINIMUM_VERSION}.")
    endif ()
else ()
    message (WARNING "You are using an unsupported compiler. Compilation has only been tested with Clang 6+ and GCC 7+.")
    message (WARNING "You are using an unsupported compiler. Compilation has only been tested with Clang and GCC.")
endif ()

STRING(REGEX MATCHALL "[0-9]+" COMPILER_VERSION_LIST ${CMAKE_CXX_COMPILER_VERSION})

contrib/CMakeLists.txt (vendored): 1 changed line
@@ -333,6 +333,5 @@ add_subdirectory(grpc-cmake)

add_subdirectory(replxx-cmake)
add_subdirectory(FastMemcpy)
add_subdirectory(widecharwidth)
add_subdirectory(consistent-hashing)
add_subdirectory(consistent-hashing-sumbur)

contrib/cctz (vendored submodule): 2 changed lines
@@ -1 +1 @@
Subproject commit 4f9776a310f4952454636363def82c2bf6641d5f
Subproject commit 5a3f785329cecdd2b68cd950e0647e9246774ef2

@@ -1,52 +0,0 @@
# PCG Random Number Generation, C++ Edition

[PCG-Random website]: http://www.pcg-random.org

This code provides an implementation of the PCG family of random number
generators, which are fast, statistically excellent, and offer a number of
useful features.

Full details can be found at the [PCG-Random website]. This version
of the code provides many family members -- if you just want one
simple generator, you may prefer the minimal C version of the library.

There are two kinds of generator, normal generators and extended generators.
Extended generators provide *k* dimensional equidistribution and can perform
party tricks, but generally speaking most people only need the normal
generators.

There are two ways to access the generators, using a convenience typedef
or by using the underlying templates directly (similar to C++11's `std::mt19937` typedef vs its `std::mersenne_twister_engine` template). For most users, the convenience typedef is what you want, and probably you're fine with `pcg32` for 32-bit numbers. If you want 64-bit numbers, either use `pcg64` (or, if you're on a 32-bit system, making 64 bits from two calls to `pcg32_k2` may be faster).

## Documentation and Examples

Visit [PCG-Random website] for information on how to use this library, or look
at the sample code in the `sample` directory -- hopefully it should be fairly
self explanatory.

## Building

The code is written in C++11, as an include-only library (i.e., there is
nothing you need to build). There are some provided demo programs and tests
however. On a Unix-style system (e.g., Linux, Mac OS X) you should be able
to just type

    make

To build the demo programs.

## Testing

Run

    make test

## Directory Structure

The directories are arranged as follows:

* `include` -- contains `pcg_random.hpp` and supporting include files
* `test-high` -- test code for the high-level API where the functions have
  shorter, less scary-looking names.
* `sample` -- sample code, some similar to the code in `test-high` but more
  human readable, some other examples too

contrib/poco (vendored submodule): 2 changed lines
@@ -1 +1 @@
Subproject commit ddca76ba4956cb57150082394536cc43ff28f6fa
Subproject commit 7d605a1ae5d878294f91f68feb62ae51e9a04426

@@ -1,10 +1,10 @@
{
    "docker/packager/deb": "yandex/clickhouse-deb-builder",
    "docker/packager/binary": "yandex/clickhouse-binary-builder",
    "docker/test/coverage": "yandex/clickhouse-coverage",
    "docker/test/compatibility/centos": "yandex/clickhouse-test-old-centos",
    "docker/test/compatibility/ubuntu": "yandex/clickhouse-test-old-ubuntu",
    "docker/test/integration": "yandex/clickhouse-integration-test",
    "docker/test/performance": "yandex/clickhouse-performance-test",
    "docker/test/performance-comparison": "yandex/clickhouse-performance-comparison",
    "docker/test/pvs": "yandex/clickhouse-pvs-test",
    "docker/test/stateful": "yandex/clickhouse-stateful-test",
@@ -14,5 +14,6 @@
    "docker/test/unit": "yandex/clickhouse-unit-test",
    "docker/test/stress": "yandex/clickhouse-stress-test",
    "docker/test/split_build_smoke_test": "yandex/clickhouse-split-build-smoke-test",
    "docker/test/codebrowser": "yandex/clickhouse-codebrowser",
    "tests/integration/image": "yandex/clickhouse-integration-tests-runner"
}

@@ -2,7 +2,7 @@
set -ex
set -o pipefail
trap "exit" INT TERM
trap "kill $(jobs -pr) ||:" EXIT
trap 'kill $(jobs -pr) ||:' EXIT

stage=${stage:-}
script_dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
|
||||
sed -i 's/<tcp_port>9000/<tcp_port>9002/g' right/config/config.xml
|
||||
|
||||
# Start a temporary server to rename the tables
|
||||
while killall clickhouse; do echo . ; sleep 1 ; done
|
||||
while killall clickhouse-server; do echo . ; sleep 1 ; done
|
||||
echo all killed
|
||||
|
||||
set -m # Spawn temporary in its own process groups
|
||||
left/clickhouse server --config-file=left/config/config.xml -- --path db0 &> setup-server-log.log &
|
||||
left/clickhouse-server --config-file=left/config/config.xml -- --path db0 &> setup-server-log.log &
|
||||
left_pid=$!
|
||||
kill -0 $left_pid
|
||||
disown $left_pid
|
||||
set +m
|
||||
while ! left/clickhouse client --port 9001 --query "select 1" ; do kill -0 $left_pid ; echo . ; sleep 1 ; done
|
||||
while ! clickhouse-client --port 9001 --query "select 1" ; do kill -0 $left_pid ; echo . ; sleep 1 ; done
|
||||
echo server for setup started
|
||||
|
||||
left/clickhouse client --port 9001 --query "create database test" ||:
|
||||
left/clickhouse client --port 9001 --query "rename table datasets.hits_v1 to test.hits" ||:
|
||||
clickhouse-client --port 9001 --query "create database test" ||:
|
||||
clickhouse-client --port 9001 --query "rename table datasets.hits_v1 to test.hits" ||:
|
||||
|
||||
while killall clickhouse; do echo . ; sleep 1 ; done
|
||||
while killall clickhouse-server; do echo . ; sleep 1 ; done
|
||||
echo all killed
|
||||
|
||||
# Remove logs etc, because they will be updated, and sharing them between
|
||||
@ -42,41 +42,50 @@ function configure
|
||||
rm db0/metadata/system/* -rf ||:
|
||||
|
||||
# Make copies of the original db for both servers. Use hardlinks instead
|
||||
# of copying.
|
||||
# of copying. Be careful to remove preprocessed configs and system tables,or
|
||||
# it can lead to weird effects.
|
||||
rm -r left/db ||:
|
||||
rm -r right/db ||:
|
||||
rm -r db0/preprocessed_configs ||:
|
||||
rm -r db/{data,metadata}/system ||:
|
||||
cp -al db0/ left/db/
|
||||
cp -al db0/ right/db/
|
||||
}
|
||||
|
||||
function restart
|
||||
{
|
||||
while killall clickhouse; do echo . ; sleep 1 ; done
|
||||
while killall clickhouse-server; do echo . ; sleep 1 ; done
|
||||
echo all killed
|
||||
|
||||
set -m # Spawn servers in their own process groups
|
||||
|
||||
left/clickhouse server --config-file=left/config/config.xml -- --path left/db &>> left-server-log.log &
|
||||
left/clickhouse-server --config-file=left/config/config.xml -- --path left/db &>> left-server-log.log &
|
||||
left_pid=$!
|
||||
kill -0 $left_pid
|
||||
disown $left_pid
|
||||
|
||||
right/clickhouse server --config-file=right/config/config.xml -- --path right/db &>> right-server-log.log &
|
||||
right/clickhouse-server --config-file=right/config/config.xml -- --path right/db &>> right-server-log.log &
|
||||
right_pid=$!
|
||||
kill -0 $right_pid
|
||||
disown $right_pid
|
||||
|
||||
set +m
|
||||
|
||||
while ! left/clickhouse client --port 9001 --query "select 1" ; do kill -0 $left_pid ; echo . ; sleep 1 ; done
|
||||
while ! clickhouse-client --port 9001 --query "select 1" ; do kill -0 $left_pid ; echo . ; sleep 1 ; done
|
||||
echo left ok
|
||||
while ! right/clickhouse client --port 9002 --query "select 1" ; do kill -0 $right_pid ; echo . ; sleep 1 ; done
|
||||
while ! clickhouse-client --port 9002 --query "select 1" ; do kill -0 $right_pid ; echo . ; sleep 1 ; done
|
||||
echo right ok
|
||||
|
||||
left/clickhouse client --port 9001 --query "select * from system.tables where database != 'system'"
|
||||
left/clickhouse client --port 9001 --query "select * from system.build_options"
|
||||
right/clickhouse client --port 9002 --query "select * from system.tables where database != 'system'"
|
||||
right/clickhouse client --port 9002 --query "select * from system.build_options"
|
||||
clickhouse-client --port 9001 --query "select * from system.tables where database != 'system'"
|
||||
clickhouse-client --port 9001 --query "select * from system.build_options"
|
||||
clickhouse-client --port 9002 --query "select * from system.tables where database != 'system'"
|
||||
clickhouse-client --port 9002 --query "select * from system.build_options"
|
||||
|
||||
# Check again that both servers we started are running -- this is important
|
||||
# for running locally, when there might be some other servers started and we
|
||||
# will connect to them instead.
|
||||
kill -0 $left_pid
|
||||
kill -0 $right_pid
|
||||
}
|
||||
|
||||
function run_tests
|
||||
@ -127,7 +136,7 @@ function run_tests
|
||||
# FIXME remove some broken long tests
|
||||
for test_name in {IPv4,IPv6,modulo,parse_engine_file,number_formatting_formats,select_format,arithmetic,cryptographic_hashes,logical_functions_{medium,small}}
|
||||
do
|
||||
printf "$test_name\tMarked as broken (see compare.sh)\n" >> skipped-tests.tsv
|
||||
printf "%s\tMarked as broken (see compare.sh)\n" "$test_name">> skipped-tests.tsv
|
||||
rm "$test_prefix/$test_name.xml" ||:
|
||||
done
|
||||
test_files=$(ls "$test_prefix"/*.xml)
|
||||
@ -138,9 +147,9 @@ function run_tests
|
||||
for test in $test_files
|
||||
do
|
||||
# Check that both servers are alive, to fail faster if they die.
|
||||
left/clickhouse client --port 9001 --query "select 1 format Null" \
|
||||
clickhouse-client --port 9001 --query "select 1 format Null" \
|
||||
|| { echo $test_name >> left-server-died.log ; restart ; continue ; }
|
||||
right/clickhouse client --port 9002 --query "select 1 format Null" \
|
||||
clickhouse-client --port 9002 --query "select 1 format Null" \
|
||||
|| { echo $test_name >> right-server-died.log ; restart ; continue ; }
|
||||
|
||||
test_name=$(basename "$test" ".xml")
|
||||
@ -158,7 +167,7 @@ function run_tests
|
||||
skipped=$(grep ^skipped "$test_name-raw.tsv" | cut -f2-)
|
||||
if [ "$skipped" != "" ]
|
||||
then
|
||||
printf "$test_name""\t""$skipped""\n" >> skipped-tests.tsv
|
||||
printf "%s\t%s\n" "$test_name" "$skipped">> skipped-tests.tsv
|
||||
fi
|
||||
done
|
||||
|
||||
@ -170,24 +179,24 @@ function run_tests
|
||||
function get_profiles
|
||||
{
|
||||
# Collect the profiles
|
||||
left/clickhouse client --port 9001 --query "set query_profiler_cpu_time_period_ns = 0"
|
||||
left/clickhouse client --port 9001 --query "set query_profiler_real_time_period_ns = 0"
|
||||
right/clickhouse client --port 9001 --query "set query_profiler_cpu_time_period_ns = 0"
|
||||
right/clickhouse client --port 9001 --query "set query_profiler_real_time_period_ns = 0"
|
||||
left/clickhouse client --port 9001 --query "system flush logs"
|
||||
right/clickhouse client --port 9002 --query "system flush logs"
|
||||
clickhouse-client --port 9001 --query "set query_profiler_cpu_time_period_ns = 0"
|
||||
clickhouse-client --port 9001 --query "set query_profiler_real_time_period_ns = 0"
|
||||
clickhouse-client --port 9001 --query "set query_profiler_cpu_time_period_ns = 0"
|
||||
clickhouse-client --port 9001 --query "set query_profiler_real_time_period_ns = 0"
|
||||
clickhouse-client --port 9001 --query "system flush logs"
|
||||
clickhouse-client --port 9002 --query "system flush logs"
|
||||
|
||||
left/clickhouse client --port 9001 --query "select * from system.query_log where type = 2 format TSVWithNamesAndTypes" > left-query-log.tsv ||: &
|
||||
left/clickhouse client --port 9001 --query "select * from system.query_thread_log format TSVWithNamesAndTypes" > left-query-thread-log.tsv ||: &
|
||||
left/clickhouse client --port 9001 --query "select * from system.trace_log format TSVWithNamesAndTypes" > left-trace-log.tsv ||: &
|
||||
left/clickhouse client --port 9001 --query "select arrayJoin(trace) addr, concat(splitByChar('/', addressToLine(addr))[-1], '#', demangle(addressToSymbol(addr)) ) name from system.trace_log group by addr format TSVWithNamesAndTypes" > left-addresses.tsv ||: &
|
||||
left/clickhouse client --port 9001 --query "select * from system.metric_log format TSVWithNamesAndTypes" > left-metric-log.tsv ||: &
|
||||
clickhouse-client --port 9001 --query "select * from system.query_log where type = 2 format TSVWithNamesAndTypes" > left-query-log.tsv ||: &
|
||||
clickhouse-client --port 9001 --query "select * from system.query_thread_log format TSVWithNamesAndTypes" > left-query-thread-log.tsv ||: &
|
||||
clickhouse-client --port 9001 --query "select * from system.trace_log format TSVWithNamesAndTypes" > left-trace-log.tsv ||: &
|
||||
clickhouse-client --port 9001 --query "select arrayJoin(trace) addr, concat(splitByChar('/', addressToLine(addr))[-1], '#', demangle(addressToSymbol(addr)) ) name from system.trace_log group by addr format TSVWithNamesAndTypes" > left-addresses.tsv ||: &
|
||||
clickhouse-client --port 9001 --query "select * from system.metric_log format TSVWithNamesAndTypes" > left-metric-log.tsv ||: &
|
||||
|
||||
right/clickhouse client --port 9002 --query "select * from system.query_log where type = 2 format TSVWithNamesAndTypes" > right-query-log.tsv ||: &
|
||||
right/clickhouse client --port 9002 --query "select * from system.query_thread_log format TSVWithNamesAndTypes" > right-query-thread-log.tsv ||: &
|
||||
right/clickhouse client --port 9002 --query "select * from system.trace_log format TSVWithNamesAndTypes" > right-trace-log.tsv ||: &
|
||||
right/clickhouse client --port 9002 --query "select arrayJoin(trace) addr, concat(splitByChar('/', addressToLine(addr))[-1], '#', demangle(addressToSymbol(addr)) ) name from system.trace_log group by addr format TSVWithNamesAndTypes" > right-addresses.tsv ||: &
|
||||
right/clickhouse client --port 9002 --query "select * from system.metric_log format TSVWithNamesAndTypes" > right-metric-log.tsv ||: &
|
||||
clickhouse-client --port 9002 --query "select * from system.query_log where type = 2 format TSVWithNamesAndTypes" > right-query-log.tsv ||: &
|
||||
clickhouse-client --port 9002 --query "select * from system.query_thread_log format TSVWithNamesAndTypes" > right-query-thread-log.tsv ||: &
|
||||
clickhouse-client --port 9002 --query "select * from system.trace_log format TSVWithNamesAndTypes" > right-trace-log.tsv ||: &
|
||||
clickhouse-client --port 9002 --query "select arrayJoin(trace) addr, concat(splitByChar('/', addressToLine(addr))[-1], '#', demangle(addressToSymbol(addr)) ) name from system.trace_log group by addr format TSVWithNamesAndTypes" > right-addresses.tsv ||: &
|
||||
clickhouse-client --port 9002 --query "select * from system.metric_log format TSVWithNamesAndTypes" > right-metric-log.tsv ||: &
|
||||
|
||||
wait
|
||||
}
|
||||
@ -195,9 +204,9 @@ function get_profiles
|
||||
# Build and analyze randomization distribution for all queries.
|
||||
function analyze_queries
|
||||
{
|
||||
find . -maxdepth 1 -name "*-queries.tsv" -print | \
|
||||
xargs -n1 -I% basename % -queries.tsv | \
|
||||
parallel --verbose right/clickhouse local --file "{}-queries.tsv" \
|
||||
find . -maxdepth 1 -name "*-queries.tsv" -print0 | \
|
||||
xargs -0 -n1 -I% basename % -queries.tsv | \
|
||||
parallel --verbose clickhouse-local --file "{}-queries.tsv" \
|
||||
--structure "\"query text, run int, version UInt32, time float\"" \
|
||||
--query "\"$(cat "$script_dir/eqmed.sql")\"" \
|
||||
">" {}-report.tsv
|
||||
@ -219,7 +228,7 @@ done
|
||||
|
||||
rm ./*.{rep,svg} test-times.tsv test-dump.tsv unstable.tsv unstable-query-ids.tsv unstable-query-metrics.tsv changed-perf.tsv unstable-tests.tsv unstable-queries.tsv bad-tests.tsv slow-on-client.tsv all-queries.tsv ||:
|
||||
|
||||
right/clickhouse local --query "
|
||||
clickhouse-local --query "
|
||||
create table queries engine File(TSVWithNamesAndTypes, 'queries.rep')
|
||||
as select
|
||||
-- FIXME Comparison mode doesn't make sense for queries that complete
|
||||
@ -228,12 +237,14 @@ create table queries engine File(TSVWithNamesAndTypes, 'queries.rep')
|
||||
-- but the right way to do this is not yet clear.
|
||||
left + right < 0.05 as short,
|
||||
|
||||
not short and abs(diff) < 0.10 and rd[3] > 0.10 as unstable,
|
||||
|
||||
-- Do not consider changed the queries with 5% RD below 5% -- e.g., we're
|
||||
-- likely to observe a difference > 5% in less than 5% cases.
|
||||
-- Not sure it is correct, but empirically it filters out a lot of noise.
|
||||
not short and abs(diff) > 0.15 and abs(diff) > rd[3] and rd[1] > 0.05 as changed,
|
||||
-- Difference > 15% and > rd(99%) -- changed. We can't filter out flaky
|
||||
-- queries by rd(5%), because it can be zero when the difference is smaller
|
||||
-- than a typical distribution width. The difference is still real though.
|
||||
not short and abs(diff) > 0.15 and abs(diff) > rd[4] as changed,
|
||||
|
||||
-- Not changed but rd(99%) > 10% -- unstable.
|
||||
not short and not changed and rd[4] > 0.10 as unstable,
|
||||
|
||||
left, right, diff, rd,
|
||||
replaceAll(_file, '-report.tsv', '') test,
|
||||
query
|
||||
@ -291,7 +302,7 @@ create table all_tests_tsv engine File(TSV, 'all-queries.tsv') as
|
||||
|
||||
for version in {right,left}
|
||||
do
|
||||
right/clickhouse local --query "
|
||||
clickhouse-local --query "
|
||||
create view queries as
|
||||
select * from file('queries.rep', TSVWithNamesAndTypes,
|
||||
'short int, unstable int, changed int, left float, right float,
|
||||
@ -409,6 +420,10 @@ unset IFS
|
||||
grep -H -m2 -i '\(Exception\|Error\):[^:]' ./*-err.log | sed 's/:/\t/' > run-errors.tsv ||:
|
||||
}
|
||||
|
||||
# Check that local and client are in PATH
|
||||
clickhouse-local --version > /dev/null
|
||||
clickhouse-client --version > /dev/null
|
||||
|
||||
case "$stage" in
|
||||
"")
|
||||
;&
|
||||
|

@@ -1,4 +1,9 @@
<yandex>
<yandex>
    <http_port remove="remove"/>
    <mysql_port remove="remove"/>
    <interserver_http_port remove="remove"/>
    <listen_host>::</listen_host>

    <logger>
        <console>true</console>
    </logger>

@@ -2,7 +2,7 @@
set -ex
set -o pipefail
trap "exit" INT TERM
trap "kill $(jobs -pr) ||:" EXIT
trap 'kill $(jobs -pr) ||:' EXIT

mkdir db0 ||:

@@ -87,9 +87,6 @@ git -C ch diff --name-only "$SHA_TO_TEST" "$(git -C ch merge-base "$SHA_TO_TEST"
# Set python output encoding so that we can print queries with Russian letters.
export PYTHONIOENCODING=utf-8

# Use a default number of runs if not told otherwise
export CHPC_RUNS=${CHPC_RUNS:-7}

# By default, use the main comparison script from the tested package, so that we
# can change it in PRs.
script_path="right/scripts"
@@ -101,14 +98,15 @@ fi
# Even if we have some errors, try our best to save the logs.
set +e

# Older version use 'kill 0', so put the script into a separate process group
# FIXME remove set +m in April 2020
set +m
# Use clickhouse-client and clickhouse-local from the right server.
PATH="$(readlink -f right/)":"$PATH"
export PATH

# Start the main comparison script.
{ \
    time ../download.sh "$REF_PR" "$REF_SHA" "$PR_TO_TEST" "$SHA_TO_TEST" && \
    time stage=configure "$script_path"/compare.sh ; \
} 2>&1 | ts "$(printf '%%Y-%%m-%%d %%H:%%M:%%S\t')" | tee compare.log
set -m

# Stop the servers to free memory. Normally they are restarted before getting
# the profile info, so they shouldn't use much, but if the comparison script

@@ -25,7 +25,7 @@ parser = argparse.ArgumentParser(description='Run performance test.')
parser.add_argument('file', metavar='FILE', type=argparse.FileType('r', encoding='utf-8'), nargs=1, help='test description file')
parser.add_argument('--host', nargs='*', default=['localhost'], help="Server hostname(s). Corresponds to '--port' options.")
parser.add_argument('--port', nargs='*', default=[9000], help="Server port(s). Corresponds to '--host' options.")
parser.add_argument('--runs', type=int, default=int(os.environ.get('CHPC_RUNS', 7)), help='Number of query runs per server. Defaults to CHPC_RUNS environment variable.')
parser.add_argument('--runs', type=int, default=int(os.environ.get('CHPC_RUNS', 11)), help='Number of query runs per server. Defaults to CHPC_RUNS environment variable.')
parser.add_argument('--no-long', type=bool, default=True, help='Skip the tests tagged as long.')
args = parser.parse_args()

@@ -140,9 +140,16 @@ report_stage_end('substitute2')
for q in test_queries:
    # Prewarm: run once on both servers. Helps to bring the data into memory,
    # precompile the queries, etc.
    for conn_index, c in enumerate(connections):
        res = c.execute(q, query_id = 'prewarm {} {}'.format(0, q))
        print('prewarm\t' + tsv_escape(q) + '\t' + str(conn_index) + '\t' + str(c.last_query.elapsed))
    try:
        for conn_index, c in enumerate(connections):
            res = c.execute(q, query_id = 'prewarm {} {}'.format(0, q))
            print('prewarm\t' + tsv_escape(q) + '\t' + str(conn_index) + '\t' + str(c.last_query.elapsed))
    except:
        # If prewarm fails for some query -- skip it, and try to test the others.
        # This might happen if the new test introduces some function that the
        # old server doesn't support. Still, report it as an error.
        print(traceback.format_exc(), file=sys.stderr)
        continue

    # Now, perform measured runs.
    # Track the time spent by the client to process this query, so that we can notice

@@ -256,17 +256,18 @@ if args.report == 'main':

    print(tableStart('Test times'))
    print(tableHeader(columns))

    runs = 11 # FIXME pass this as an argument
    attrs = ['' for c in columns]
    for r in rows:
        if float(r[6]) > 22:
        if float(r[6]) > 3 * runs:
            # FIXME should be 15s max -- investigate parallel_insert
            slow_average_tests += 1
            attrs[6] = 'style="background: #ffb0a0"'
        else:
            attrs[6] = ''

        if float(r[5]) > 30:
        if float(r[5]) > 4 * runs:
            slow_average_tests += 1
            attrs[5] = 'style="background: #ffb0a0"'
        else:
@ -70,6 +70,7 @@ CMD dpkg -i package_folder/clickhouse-common-static_*.deb; \
|
||||
ln -s /usr/share/clickhouse-test/config/disks.xml /etc/clickhouse-server/config.d/; \
|
||||
ln -s /usr/share/clickhouse-test/config/secure_ports.xml /etc/clickhouse-server/config.d/; \
|
||||
ln -s /usr/share/clickhouse-test/config/clusters.xml /etc/clickhouse-server/config.d/; \
|
||||
ln -s /usr/share/clickhouse-test/config/graphite.xml /etc/clickhouse-server/config.d/; \
|
||||
ln -s /usr/share/clickhouse-test/config/server.key /etc/clickhouse-server/; \
|
||||
ln -s /usr/share/clickhouse-test/config/server.crt /etc/clickhouse-server/; \
|
||||
ln -s /usr/share/clickhouse-test/config/dhparam.pem /etc/clickhouse-server/; \
|
||||
|
@ -14,6 +14,11 @@ kill_clickhouse () {
|
||||
sleep 10
|
||||
fi
|
||||
done
|
||||
|
||||
echo "Will try to send second kill signal for sure"
|
||||
kill `pgrep -u clickhouse` 2>/dev/null
|
||||
sleep 5
|
||||
echo "clickhouse pids" `ps aux | grep clickhouse` | ts '%Y-%m-%d %H:%M:%S'
|
||||
}
|
||||
|
||||
start_clickhouse () {
|
||||
@ -50,6 +55,7 @@ ln -s /usr/share/clickhouse-test/config/zookeeper.xml /etc/clickhouse-server/con
|
||||
ln -s /usr/share/clickhouse-test/config/disks.xml /etc/clickhouse-server/config.d/; \
|
||||
ln -s /usr/share/clickhouse-test/config/secure_ports.xml /etc/clickhouse-server/config.d/; \
|
||||
ln -s /usr/share/clickhouse-test/config/clusters.xml /etc/clickhouse-server/config.d/; \
|
||||
ln -s /usr/share/clickhouse-test/config/graphite.xml /etc/clickhouse-server/config.d/; \
|
||||
ln -s /usr/share/clickhouse-test/config/server.key /etc/clickhouse-server/; \
|
||||
ln -s /usr/share/clickhouse-test/config/server.crt /etc/clickhouse-server/; \
|
||||
ln -s /usr/share/clickhouse-test/config/dhparam.pem /etc/clickhouse-server/; \
|
||||
|
@ -1,3 +1,8 @@
|
||||
---
|
||||
toc_title: Cloud
|
||||
toc_priority: 1
|
||||
---
|
||||
|
||||
# ClickHouse Cloud Service Providers {#clickhouse-cloud-service-providers}
|
||||
|
||||
!!! info "Info"
|
||||
|
21
docs/en/commercial/support.md
Normal file
21
docs/en/commercial/support.md
Normal file
@ -0,0 +1,21 @@
|
||||
---
|
||||
toc_title: Support
|
||||
toc_priority: 3
|
||||
---
|
||||
|
||||
# ClickHouse Commercial Support Service Providers {#clickhouse-commercial-support-service-providers}
|
||||
|
||||
!!! info "Info"
|
||||
If you have launched a ClickHouse commercial support service, feel free to [open a pull-request](https://github.com/ClickHouse/ClickHouse/edit/master/docs/en/commercial/support.md) adding it to the following list.
|
||||
|
||||
## Altinity {#altinity}
|
||||
|
||||
[Service description](https://www.altinity.com/24x7-support)
|
||||
|
||||
## Mafiree {#mafiree}
|
||||
|
||||
[Service description](http://mafiree.com/clickhouse-analytics-services.php)
|
||||
|
||||
## MinervaDB {#minervadb}
|
||||
|
||||
[Service description](https://minervadb.com/index.php/clickhouse-consulting-and-support-by-minervadb/)
|
@ -1,11 +1,11 @@
|
||||
---
|
||||
toc_priority: 63
|
||||
toc_title: Browse ClickHouse Source Code
|
||||
toc_title: Browse Source Code
|
||||
---
|
||||
|
||||
# Browse ClickHouse Source Code {#browse-clickhouse-source-code}
|
||||
|
||||
You can use **Woboq** online code browser available [here](https://clickhouse.tech/codebrowser/html_report///ClickHouse/src/index.html). It provides code navigation and semantic highlighting, search and indexing. The code snapshot is updated daily.
|
||||
You can use **Woboq** online code browser available [here](https://clickhouse.tech/codebrowser/html_report/ClickHouse/src/index.html). It provides code navigation and semantic highlighting, search and indexing. The code snapshot is updated daily.
|
||||
|
||||
Also, you can browse sources on [GitHub](https://github.com/ClickHouse/ClickHouse) as usual.
|
||||
|
||||
|
@ -9,7 +9,10 @@ The engine inherits from [MergeTree](mergetree.md#table_engines-mergetree), alte
|
||||
|
||||
You can use `AggregatingMergeTree` tables for incremental data aggregation, including for aggregated materialized views.
|
||||
|
||||
The engine processes all columns with [AggregateFunction](../../../sql_reference/data_types/aggregatefunction.md) type.
|
||||
The engine processes all columns with the following types:
|
||||
|
||||
- [AggregateFunction](../../../sql_reference/data_types/aggregatefunction.md)
|
||||
- [SimpleAggregateFunction](../../../sql_reference/data_types/simpleaggregatefunction.md)
|
||||
|
||||
It is appropriate to use `AggregatingMergeTree` if it reduces the number of rows by orders of magnitude.
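For illustration, here is a minimal sketch of the typical pattern (the table and column names are made up for this example): raw rows are rolled up into an `AggregatingMergeTree` table through a materialized view, and the `-Merge` combinator finalizes the states on read.

``` sql
CREATE TABLE events
(
    day Date,
    user_id UInt64
) ENGINE = MergeTree() ORDER BY day;

-- Partial aggregation states are kept per day.
CREATE TABLE daily_uniques
(
    day Date,
    uniques AggregateFunction(uniq, UInt64)
) ENGINE = AggregatingMergeTree() ORDER BY day;

-- The view converts raw rows into -State values as data is inserted into `events`.
CREATE MATERIALIZED VIEW daily_uniques_mv TO daily_uniques AS
SELECT day, uniqState(user_id) AS uniques
FROM events
GROUP BY day;

-- Reading requires the -Merge combinator to finalize the partial states.
SELECT day, uniqMerge(uniques) AS uniques
FROM daily_uniques
GROUP BY day;
```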
|
||||
|
||||
|
@ -38,7 +38,7 @@ sudo apt-get update
|
||||
sudo apt-get install clickhouse-client clickhouse-server
|
||||
```
|
||||
|
||||
You can also download and install packages manually from here: https://repo.yandex.ru/clickhouse/deb/stable/main/.
|
||||
You can also download and install packages manually from [here](https://repo.yandex.ru/clickhouse/deb/stable/main/).
|
||||
|
||||
#### Packages {#packages}
|
||||
|
||||
@ -67,7 +67,7 @@ Then run these commands to install packages:
|
||||
sudo yum install clickhouse-server clickhouse-client
|
||||
```
|
||||
|
||||
You can also download and install packages manually from here: https://repo.clickhouse.tech/rpm/stable/x86\_64.
|
||||
You can also download and install packages manually from [here](https://repo.clickhouse.tech/rpm/stable/x86_64).
|
||||
|
||||
### From Tgz Archives {#from-tgz-archives}
|
||||
|
||||
|
@ -78,48 +78,6 @@ See the difference?
|
||||
|
||||
For example, the query “count the number of records for each advertising platform” requires reading one “advertising platform ID” column, which takes up 1 byte uncompressed. If most of the traffic was not from advertising platforms, you can expect at least 10-fold compression of this column. When using a quick compression algorithm, data decompression is possible at a speed of at least several gigabytes of uncompressed data per second. In other words, this query can be processed at a speed of approximately several billion rows per second on a single server. This speed is actually achieved in practice.
|
||||
|
||||
<details markdown="1">
|
||||
|
||||
<summary>Example</summary>
|
||||
|
||||
``` bash
|
||||
$ clickhouse-client
|
||||
ClickHouse client version 0.0.52053.
|
||||
Connecting to localhost:9000.
|
||||
Connected to ClickHouse server version 0.0.52053.
|
||||
```
|
||||
|
||||
``` sql
|
||||
SELECT CounterID, count() FROM hits GROUP BY CounterID ORDER BY count() DESC LIMIT 20
|
||||
```
|
||||
|
||||
``` text
|
||||
┌─CounterID─┬──count()─┐
|
||||
│ 114208 │ 56057344 │
|
||||
│ 115080 │ 51619590 │
|
||||
│ 3228 │ 44658301 │
|
||||
│ 38230 │ 42045932 │
|
||||
│ 145263 │ 42042158 │
|
||||
│ 91244 │ 38297270 │
|
||||
│ 154139 │ 26647572 │
|
||||
│ 150748 │ 24112755 │
|
||||
│ 242232 │ 21302571 │
|
||||
│ 338158 │ 13507087 │
|
||||
│ 62180 │ 12229491 │
|
||||
│ 82264 │ 12187441 │
|
||||
│ 232261 │ 12148031 │
|
||||
│ 146272 │ 11438516 │
|
||||
│ 168777 │ 11403636 │
|
||||
│ 4120072 │ 11227824 │
|
||||
│ 10938808 │ 10519739 │
|
||||
│ 74088 │ 9047015 │
|
||||
│ 115079 │ 8837972 │
|
||||
│ 337234 │ 8205961 │
|
||||
└───────────┴──────────┘
|
||||
```
|
||||
|
||||
</details>
|
||||
|
||||
### CPU {#cpu}
|
||||
|
||||
Since executing a query requires processing a large number of rows, it helps to dispatch all operations for entire vectors instead of for separate rows, or to implement the query engine so that there is almost no dispatching cost. If you don’t do this, with any half-decent disk subsystem, the query interpreter inevitably stalls the CPU. It makes sense to both store data in columns and process it, when possible, by columns.
|
||||
|
@ -24,7 +24,10 @@ toc_title: Integrations
|
||||
- [ClickHouseMigrator](https://github.com/zlzforever/ClickHouseMigrator)
|
||||
- Message queues
|
||||
- [Kafka](https://kafka.apache.org)
|
||||
- [clickhouse\_sinker](https://github.com/housepower/clickhouse_sinker) (uses [Go client](https://github.com/kshvakov/clickhouse/))
|
||||
- [clickhouse\_sinker](https://github.com/housepower/clickhouse_sinker) (uses [Go client](https://github.com/ClickHouse/clickhouse-go/))
|
||||
- Stream processing
|
||||
- [Flink](https://flink.apache.org)
|
||||
- [flink-clickhouse-sink](https://github.com/ivi-ru/flink-clickhouse-sink)
|
||||
- Object storages
|
||||
- [S3](https://en.wikipedia.org/wiki/Amazon_S3)
|
||||
- [clickhouse-backup](https://github.com/AlexAkulov/clickhouse-backup)
|
||||
@ -72,6 +75,9 @@ toc_title: Integrations
|
||||
- [sqlalchemy-clickhouse](https://github.com/cloudflare/sqlalchemy-clickhouse) (uses [infi.clickhouse\_orm](https://github.com/Infinidat/infi.clickhouse_orm))
|
||||
- [pandas](https://pandas.pydata.org)
|
||||
- [pandahouse](https://github.com/kszucs/pandahouse)
|
||||
- PHP
|
||||
- [Doctrine](https://www.doctrine-project.org/)
|
||||
- [dbal-clickhouse](https://packagist.org/packages/friendsofdoctrine/dbal-clickhouse)
|
||||
- R
|
||||
- [dplyr](https://db.rstudio.com/dplyr/)
|
||||
- [RClickHouse](https://github.com/IMSMWU/RClickHouse) (uses [clickhouse-cpp](https://github.com/artpaul/clickhouse-cpp))
|
||||
|
@ -77,5 +77,9 @@ toc_title: Adopters
|
||||
| [МКБ](https://mkb.ru/) | Bank | Web-system monitoring | — | — | [Slides in Russian, September 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup28/mkb.pdf) |
|
||||
| [金数据](https://jinshuju.net) | BI Analytics | Main product | — | — | [Slides in Chinese, October 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup24/3.%20金数据数据架构调整方案Public.pdf) |
|
||||
| [Instana](https://www.instana.com) | APM Platform | Main product | — | — | [Twitter post](https://twitter.com/mieldonkers/status/1248884119158882304) |
|
||||
| [Wargaming](https://wargaming.com/en/) | Games | | — | — | [Interview](https://habr.com/en/post/496954/) |
|
||||
| [Crazypanda](https://crazypanda.ru/en/) | Games | | — | — | Live session on ClickHouse meetup |
|
||||
| [FunCorp](https://fun.co/rp) | Games | | — | — | [Article](https://www.altinity.com/blog/migrating-from-redshift-to-clickhouse) |
|
||||
|
||||
|
||||
[Original article](https://clickhouse.tech/docs/en/introduction/adopters/) <!--hide-->
|
||||
|
@ -220,21 +220,28 @@ Ok.
|
||||
|
||||
## input\_format\_values\_deduce\_templates\_of\_expressions {#settings-input_format_values_deduce_templates_of_expressions}
|
||||
|
||||
Enables or disables template deduction for an SQL expressions in [Values](../../interfaces/formats.md#data-format-values) format. It allows to parse and interpret expressions in `Values` much faster if expressions in consecutive rows have the same structure. ClickHouse will try to deduce template of an expression, parse the following rows using this template and evaluate the expression on a batch of successfully parsed rows. For the following query:
|
||||
Enables or disables template deduction for SQL expressions in [Values](../../interfaces/formats.md#data-format-values) format. It allows parsing and interpreting expressions in `Values` much faster if expressions in consecutive rows have the same structure. ClickHouse tries to deduce template of an expression, parse the following rows using this template and evaluate the expression on a batch of successfully parsed rows.
|
||||
|
||||
Possible values:
|
||||
|
||||
- 0 — Disabled.
|
||||
- 1 — Enabled.
|
||||
|
||||
Default value: 1.
|
||||
|
||||
For the following query:
|
||||
|
||||
``` sql
|
||||
INSERT INTO test VALUES (lower('Hello')), (lower('world')), (lower('INSERT')), (upper('Values')), ...
|
||||
```
|
||||
|
||||
- if `input_format_values_interpret_expressions=1` and `format_values_deduce_templates_of_expressions=0` expressions will be interpreted separately for each row (this is very slow for large number of rows)
|
||||
- if `input_format_values_interpret_expressions=0` and `format_values_deduce_templates_of_expressions=1` expressions in the first, second and third rows will be parsed using template `lower(String)` and interpreted together, expression is the forth row will be parsed with another template (`upper(String)`)
|
||||
- if `input_format_values_interpret_expressions=1` and `format_values_deduce_templates_of_expressions=1` - the same as in previous case, but also allows fallback to interpreting expressions separately if it’s not possible to deduce template.
|
||||
|
||||
Enabled by default.
|
||||
- If `input_format_values_interpret_expressions=1` and `format_values_deduce_templates_of_expressions=0`, expressions are interpreted separately for each row (this is very slow for large number of rows).
|
||||
- If `input_format_values_interpret_expressions=0` and `format_values_deduce_templates_of_expressions=1`, expressions in the first, second and third rows are parsed using template `lower(String)` and interpreted together, while the expression in the fourth row is parsed with another template (`upper(String)`).
|
||||
- If `input_format_values_interpret_expressions=1` and `format_values_deduce_templates_of_expressions=1`, the same as in previous case, but also allows fallback to interpreting expressions separately if it’s not possible to deduce template.
|
||||
|
||||
## input\_format\_values\_accurate\_types\_of\_literals {#settings-input-format-values-accurate-types-of-literals}
|
||||
|
||||
This setting is used only when `input_format_values_deduce_templates_of_expressions = 1`. It can happen, that expressions for some column have the same structure, but contain numeric literals of different types, e.g
|
||||
This setting is used only when `input_format_values_deduce_templates_of_expressions = 1`. It can happen, that expressions for some column have the same structure, but contain numeric literals of different types, e.g.
|
||||
|
||||
``` sql
|
||||
(..., abs(0), ...), -- UInt64 literal
|
||||
@ -242,9 +249,17 @@ This setting is used only when `input_format_values_deduce_templates_of_expressi
|
||||
(..., abs(-1), ...), -- Int64 literal
|
||||
```
|
||||
|
||||
When this setting is enabled, ClickHouse will check the actual type of literal and will use an expression template of the corresponding type. In some cases, it may significantly slow down expression evaluation in `Values`.
|
||||
When disabled, ClickHouse may use more general type for some literals (e.g. `Float64` or `Int64` instead of `UInt64` for `42`), but it may cause overflow and precision issues.
|
||||
Enabled by default.
|
||||
Possible values:
|
||||
|
||||
- 0 — Disabled.
|
||||
|
||||
In this case, ClickHouse may use a more general type for some literals (e.g., `Float64` or `Int64` instead of `UInt64` for `42`), but it may cause overflow and precision issues.
|
||||
|
||||
- 1 — Enabled.
|
||||
|
||||
In this case, ClickHouse checks the actual type of literal and uses an expression template of the corresponding type. In some cases, it may significantly slow down expression evaluation in `Values`.
|
||||
|
||||
Default value: 1.
|
||||
|
||||
## input\_format\_defaults\_for\_omitted\_fields {#session_settings-input_format_defaults_for_omitted_fields}
|
||||
|
||||
|
@ -709,7 +709,7 @@ When the table is deleted manually, it will be automatically created on the fly.
|
||||
|
||||
You can specify an arbitrary partitioning key for the `system.query_thread_log` table in the [query\_thread\_log](server_configuration_parameters/settings.md#server_configuration_parameters-query-thread-log) server setting (see the `partition_by` parameter).
|
||||
|
||||
## system.trace\_log {#system_tables-trace_log}
|
||||
## system.trace_log {#system_tables-trace_log}
|
||||
|
||||
Contains stack traces collected by the sampling query profiler.
|
||||
|
||||
@ -719,24 +719,26 @@ To analyze logs, use the `addressToLine`, `addressToSymbol` and `demangle` intro
|
||||
|
||||
Columns:
|
||||
|
||||
- `event_date`([Date](../sql_reference/data_types/date.md)) — Date of sampling moment.
|
||||
- `event_date` ([Date](../sql_reference/data_types/date.md)) — Date of sampling moment.
|
||||
|
||||
- `event_time`([DateTime](../sql_reference/data_types/datetime.md)) — Timestamp of sampling moment.
|
||||
- `event_time` ([DateTime](../sql_reference/data_types/datetime.md)) — Timestamp of the sampling moment.
|
||||
|
||||
- `revision`([UInt32](../sql_reference/data_types/int_uint.md)) — ClickHouse server build revision.
|
||||
- `timestamp_ns` ([UInt64](../sql_reference/data_types/int_uint.md)) — Timestamp of the sampling moment in nanoseconds.
|
||||
|
||||
- `revision` ([UInt32](../sql_reference/data_types/int_uint.md)) — ClickHouse server build revision.
|
||||
|
||||
When connecting to server by `clickhouse-client`, you see the string similar to `Connected to ClickHouse server version 19.18.1 revision 54429.`. This field contains the `revision`, but not the `version` of a server.
|
||||
|
||||
- `timer_type`([Enum8](../sql_reference/data_types/enum.md)) — Timer type:
|
||||
- `timer_type` ([Enum8](../sql_reference/data_types/enum.md)) — Timer type:
|
||||
|
||||
- `Real` represents wall-clock time.
|
||||
- `CPU` represents CPU time.
|
||||
|
||||
- `thread_number`([UInt32](../sql_reference/data_types/int_uint.md)) — Thread identifier.
|
||||
- `thread_number` ([UInt32](../sql_reference/data_types/int_uint.md)) — Thread identifier.
|
||||
|
||||
- `query_id`([String](../sql_reference/data_types/string.md)) — Query identifier that can be used to get details about a query that was running from the [query\_log](#system_tables-query_log) system table.
|
||||
- `query_id` ([String](../sql_reference/data_types/string.md)) — Query identifier that can be used to get details about a query that was running from the [query\_log](#system_tables-query_log) system table.
|
||||
|
||||
- `trace`([Array(UInt64)](../sql_reference/data_types/array.md)) — Stack trace at the moment of sampling. Each element is a virtual memory address inside ClickHouse server process.
|
||||
- `trace` ([Array(UInt64)](../sql_reference/data_types/array.md)) — Stack trace at the moment of sampling. Each element is a virtual memory address inside ClickHouse server process.
|
||||
|
||||
**Example**
|
||||
|
||||
|
34
docs/en/sql_reference/data_types/simpleaggregatefunction.md
Normal file
34
docs/en/sql_reference/data_types/simpleaggregatefunction.md
Normal file
@ -0,0 +1,34 @@
|
||||
# SimpleAggregateFunction(name, types\_of\_arguments…) {#data-type-simpleaggregatefunction}
|
||||
|
||||
`SimpleAggregateFunction` data type stores current value of the aggregate function, and does not store its full state as [`AggregateFunction`](aggregatefunction.md) does. This optimization can be applied to functions for which the following property holds: the result of applying a function `f` to a row set `S1 UNION ALL S2` can be obtained by applying `f` to parts of the row set separately, and then again applying `f` to the results: `f(S1 UNION ALL S2) = f(f(S1) UNION ALL f(S2))`. This property guarantees that partial aggregation results are enough to compute the combined one, so we don't have to store and process any extra data.
|
||||
|
||||
The following aggregate functions are supported:
|
||||
|
||||
- [`any`](../../sql_reference/aggregate_functions/reference.md#agg_function-any)
|
||||
- [`anyLast`](../../sql_reference/aggregate_functions/reference.md#anylastx)
|
||||
- [`min`](../../sql_reference/aggregate_functions/reference.md#agg_function-min)
|
||||
- [`max`](../../sql_reference/aggregate_functions/reference.md#agg_function-max)
|
||||
- [`sum`](../../sql_reference/aggregate_functions/reference.md#agg_function-sum)
|
||||
- [`groupBitAnd`](../../sql_reference/aggregate_functions/reference.md#groupbitand)
|
||||
- [`groupBitOr`](../../sql_reference/aggregate_functions/reference.md#groupbitor)
|
||||
- [`groupBitXor`](../../sql_reference/aggregate_functions/reference.md#groupbitxor)
|
||||
|
||||
|
||||
Values of `SimpleAggregateFunction(func, Type)` are displayed and stored the same way as `Type`, so you do not need to apply functions with `-Merge`/`-State` suffixes. `SimpleAggregateFunction` has better performance than `AggregateFunction` with the same aggregate function.
|
||||
|
||||
**Parameters**
|
||||
|
||||
- Name of the aggregate function.
|
||||
- Types of the aggregate function arguments.
|
||||
|
||||
**Example**
|
||||
|
||||
``` sql
|
||||
CREATE TABLE t
|
||||
(
|
||||
column1 SimpleAggregateFunction(sum, UInt64),
|
||||
column2 SimpleAggregateFunction(any, String)
|
||||
) ENGINE = ...
|
||||
```
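Since the engine in the example above is elided, here is a hedged usage sketch assuming `AggregatingMergeTree`; plain aggregate functions (without combinators) are enough both for inserting and reading:

``` sql
CREATE TABLE simple_agg
(
    id UInt64,
    column1 SimpleAggregateFunction(sum, UInt64),
    column2 SimpleAggregateFunction(any, String)
) ENGINE = AggregatingMergeTree() ORDER BY id;

-- Plain values are inserted directly; no -State combinator is needed.
INSERT INTO simple_agg VALUES (1, 10, 'a'), (1, 5, 'b'), (2, 7, 'c');

-- Plain functions are enough on read as well; no -Merge combinator.
SELECT id, sum(column1), any(column2)
FROM simple_agg
GROUP BY id;
```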
|
||||
|
||||
[Original article](https://clickhouse.tech/docs/en/data_types/simpleaggregatefunction/) <!--hide-->
|
@ -5,6 +5,6 @@ toc_title: Set
|
||||
|
||||
# Set {#set}
|
||||
|
||||
Used for the right half of an [IN](../../../sql_reference/statements/select.md#select-in-operators) expression.
|
||||
Used for the right half of an [IN](../../statements/select.md#select-in-operators) expression.
|
||||
|
||||
[Original article](https://clickhouse.tech/docs/en/data_types/special_data_types/set/) <!--hide-->
|
||||
|
@ -989,7 +989,7 @@ Result:
|
||||
|
||||
## arrayZip {#arrayzip}
|
||||
|
||||
Combine multiple Array type columns into one Array\[Tuple(…)\] column
|
||||
Combines multiple arrays into a single array. The resulting array contains the corresponding elements of the source arrays grouped into tuples in the listed order of arguments.
|
||||
|
||||
**Syntax**
|
||||
|
||||
@ -999,28 +999,33 @@ arrayZip(arr1, arr2, ..., arrN)
|
||||
|
||||
**Parameters**
|
||||
|
||||
`arr` — Any number of [array](../../sql_reference/data_types/array.md) type columns to combine.
|
||||
- `arrN` — [Array](../data_types/array.md).
|
||||
|
||||
The function can take any number of arrays of different types. All the input arrays must be of equal size.
|
||||
|
||||
**Returned value**
|
||||
|
||||
The result of Array\[Tuple(…)\] type after the combination of these arrays
|
||||
- Array with elements from the source arrays grouped into [tuples](../data_types/tuple.md). Data types in the tuple are the same as types of the input arrays and in the same order as arrays are passed.
|
||||
|
||||
Type: [Array](../data_types/array.md).
|
||||
|
||||
**Example**
|
||||
|
||||
Query:
|
||||
|
||||
``` sql
|
||||
SELECT arrayZip(['a', 'b', 'c'], ['d', 'e', 'f']);
|
||||
SELECT arrayZip(['a', 'b', 'c'], [5, 2, 1])
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
``` text
|
||||
┌─arrayZip(['a', 'b', 'c'], ['d', 'e', 'f'])─┐
|
||||
│ [('a','d'),('b','e'),('c','f')] │
|
||||
└────────────────────────────────────────────┘
|
||||
┌─arrayZip(['a', 'b', 'c'], [5, 2, 1])─┐
|
||||
│ [('a',5),('b',2),('c',1)] │
|
||||
└──────────────────────────────────────┘
|
||||
```
|
||||
|
||||
|
||||
## arrayAUC {#arrayauc}
|
||||
Calculates AUC (Area Under the Curve, which is a concept in machine learning, see more details: https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve).
|
||||
|
||||
|
@ -204,7 +204,6 @@ The following operations with [partitions](../../engines/table_engines/mergetree
|
||||
- [DETACH PARTITION](#alter_detach-partition) – Moves a partition to the `detached` directory and forgets it.
|
||||
- [DROP PARTITION](#alter_drop-partition) – Deletes a partition.
|
||||
- [ATTACH PART\|PARTITION](#alter_attach-partition) – Adds a part or partition from the `detached` directory to the table.
|
||||
- [REPLACE PARTITION](#alter_replace-partition) - Copies the data partition from one table to another.
|
||||
- [ATTACH PARTITION FROM](#alter_attach-partition-from) – Copies the data partition from one table to another and adds.
|
||||
- [REPLACE PARTITION](#alter_replace-partition) - Copies the data partition from one table to another and replaces.
|
||||
- [MOVE PARTITION TO TABLE](#alter_move_to_table-partition) - Moves the data partition from one table to another.
|
||||
|
@ -179,7 +179,7 @@ CREATE TABLE codec_example
|
||||
ENGINE = MergeTree()
|
||||
```
|
||||
|
||||
#### Common Purpose Codecs {#create-query-common-purpose-codecs}
|
||||
#### General Purpose Codecs {#create-query-general-purpose-codecs}
|
||||
|
||||
Codecs:
|
||||
|
||||
|
@ -248,7 +248,7 @@ Here, a sample of 10% is taken from the second half of the data.
|
||||
|
||||
### ARRAY JOIN Clause {#select-array-join-clause}
|
||||
|
||||
Allows executing `JOIN` with an array or nested data structure. The intent is similar to the [arrayJoin](../../sql_reference/functions/array_join.md#functions_arrayjoin) function, but its functionality is broader.
|
||||
Allows executing `JOIN` with an array or nested data structure. The intent is similar to the [arrayJoin](../functions/array_join.md#functions_arrayjoin) function, but its functionality is broader.
|
||||
|
||||
``` sql
|
||||
SELECT <expr_list>
|
||||
@ -602,7 +602,777 @@ USING (equi_column1, ... equi_columnN, asof_column)
|
||||
|
||||
For example, consider the following tables:
|
||||
|
||||
``` text
     table_1                           table_2

  event   | ev_time | user_id       event   | ev_time | user_id
----------|---------|----------   ----------|---------|----------
              ...                               ...
event_1_1 |  12:00  |  42         event_2_1 |  11:59  |   42
              ...                 event_2_2 |  12:30  |   42
event_1_2 |  13:00  |  42         event_2_3 |  13:00  |   42
              ...                               ...
```
|
||||
|
||||
`ASOF JOIN` can take the timestamp of a user event from `table_1` and find an event in `table_2` where the timestamp is closest to the timestamp of the event from `table_1` corresponding to the closest match condition. Equal timestamp values are the closest if available. Here, the `user_id` column can be used for joining on equality and the `ev_time` column can be used for joining on the closest match. In our example, `event_1_1` can be joined with `event_2_1` and `event_1_2` can be joined with `event_2_3`, but `event_2_2` can’t be joined.
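A hedged sketch of such a query over the illustrative `table_1`/`table_2` above (these tables exist only in the example):

``` sql
-- For every row of table_1, pick the table_2 row with the same user_id and the
-- closest ev_time that is not greater than the ev_time of the table_1 row.
SELECT t1.event, t1.ev_time, t2.event AS matched_event, t2.ev_time AS matched_time
FROM table_1 AS t1
ASOF JOIN table_2 AS t2
ON t1.user_id = t2.user_id AND t1.ev_time >= t2.ev_time
```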
|
||||
|
||||
!!! note "Note"
|
||||
`ASOF` join is **not** supported in the [Join](../../engines/table_engines/special/join.md) table engine.
|
||||
|
||||
To set the default strictness value, use the session configuration parameter [join\_default\_strictness](../../operations/settings/settings.md#settings-join_default_strictness).
|
||||
|
||||
#### GLOBAL JOIN {#global-join}
|
||||
|
||||
When using a normal `JOIN`, the query is sent to remote servers. Subqueries are run on each of them in order to make the right table, and the join is performed with this table. In other words, the right table is formed on each server separately.
|
||||
|
||||
When using `GLOBAL ... JOIN`, first the requestor server runs a subquery to calculate the right table. This temporary table is passed to each remote server, and queries are run on them using the temporary data that was transmitted.
|
||||
|
||||
Be careful when using `GLOBAL`. For more information, see the section [Distributed subqueries](#select-distributed-subqueries).
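A hedged sketch of the difference (the `Distributed` tables here are placeholders, not real datasets):

``` sql
-- Ordinary JOIN: the right-hand subquery is executed on every remote server independently.
SELECT count()
FROM distributed_hits AS h
ALL INNER JOIN
(
    SELECT UserID
    FROM distributed_users
    WHERE Age > 30
) AS u USING (UserID);

-- GLOBAL JOIN: the subquery runs once on the requestor server and its result
-- is shipped to every remote server as a temporary table.
SELECT count()
FROM distributed_hits AS h
GLOBAL ALL INNER JOIN
(
    SELECT UserID
    FROM distributed_users
    WHERE Age > 30
) AS u USING (UserID);
```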
|
||||
|
||||
#### Usage Recommendations {#usage-recommendations}
|
||||
|
||||
When running a `JOIN`, there is no optimization of the order of execution in relation to other stages of the query. The join (a search in the right table) is run before filtering in `WHERE` and before aggregation. In order to explicitly set the processing order, we recommend running a `JOIN` subquery with a subquery.
|
||||
|
||||
Example:
|
||||
|
||||
``` sql
|
||||
SELECT
|
||||
CounterID,
|
||||
hits,
|
||||
visits
|
||||
FROM
|
||||
(
|
||||
SELECT
|
||||
CounterID,
|
||||
count() AS hits
|
||||
FROM test.hits
|
||||
GROUP BY CounterID
|
||||
) ANY LEFT JOIN
|
||||
(
|
||||
SELECT
|
||||
CounterID,
|
||||
sum(Sign) AS visits
|
||||
FROM test.visits
|
||||
GROUP BY CounterID
|
||||
) USING CounterID
|
||||
ORDER BY hits DESC
|
||||
LIMIT 10
|
||||
```
|
||||
|
||||
``` text
|
||||
┌─CounterID─┬───hits─┬─visits─┐
|
||||
│ 1143050 │ 523264 │ 13665 │
|
||||
│ 731962 │ 475698 │ 102716 │
|
||||
│ 722545 │ 337212 │ 108187 │
|
||||
│ 722889 │ 252197 │ 10547 │
|
||||
│ 2237260 │ 196036 │ 9522 │
|
||||
│ 23057320 │ 147211 │ 7689 │
|
||||
│ 722818 │ 90109 │ 17847 │
|
||||
│ 48221 │ 85379 │ 4652 │
|
||||
│ 19762435 │ 77807 │ 7026 │
|
||||
│ 722884 │ 77492 │ 11056 │
|
||||
└───────────┴────────┴────────┘
|
||||
```
|
||||
|
||||
Subqueries don’t allow you to set names or use them for referencing a column from a specific subquery.
|
||||
The columns specified in `USING` must have the same names in both subqueries, and the other columns must be named differently. You can use aliases to change the names of columns in subqueries (the example uses the aliases `hits` and `visits`).
|
||||
|
||||
The `USING` clause specifies one or more columns to join, which establishes the equality of these columns. The list of columns is set without brackets. More complex join conditions are not supported.
|
||||
|
||||
The right table (the subquery result) resides in RAM. If there isn’t enough memory, you can’t run a `JOIN`.
|
||||
|
||||
Each time a query is run with the same `JOIN`, the subquery is run again because the result is not cached. To avoid this, use the special [Join](../../engines/table_engines/special/join.md) table engine, which is a prepared array for joining that is always in RAM.
|
||||
|
||||
In some cases, it is more efficient to use `IN` instead of `JOIN`.
|
||||
Among the various types of `JOIN`, the most efficient is `ANY LEFT JOIN`, then `ANY INNER JOIN`. The least efficient are `ALL LEFT JOIN` and `ALL INNER JOIN`.
|
||||
|
||||
If you need a `JOIN` for joining with dimension tables (these are relatively small tables that contain dimension properties, such as names for advertising campaigns), a `JOIN` might not be very convenient due to the fact that the right table is re-accessed for every query. For such cases, there is an “external dictionaries” feature that you should use instead of `JOIN`. For more information, see the section [External dictionaries](../dictionaries/external_dictionaries/external_dicts.md).
|
||||
|
||||
**Memory Limitations**
|
||||
|
||||
ClickHouse uses the [hash join](https://en.wikipedia.org/wiki/Hash_join) algorithm. ClickHouse takes the `<right_subquery>` and creates a hash table for it in RAM. If you need to restrict join operation memory consumption use the following settings:
|
||||
|
||||
- [max\_rows\_in\_join](../../operations/settings/query_complexity.md#settings-max_rows_in_join) — Limits number of rows in the hash table.
|
||||
- [max\_bytes\_in\_join](../../operations/settings/query_complexity.md#settings-max_bytes_in_join) — Limits size of the hash table.
|
||||
|
||||
When any of these limits is reached, ClickHouse acts as the [join\_overflow\_mode](../../operations/settings/query_complexity.md#settings-join_overflow_mode) setting instructs.
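For instance, the limits can be set per session; the numbers below are arbitrary illustrations:

``` sql
SET max_rows_in_join = 10000000;     -- cap the number of rows in the right-table hash table
SET max_bytes_in_join = 1000000000;  -- cap its size in bytes
SET join_overflow_mode = 'break';    -- stop filling the hash table instead of throwing an exception
```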
|
||||
|
||||
#### Processing of Empty or NULL Cells {#processing-of-empty-or-null-cells}
|
||||
|
||||
When joining tables, empty cells may appear. The [join\_use\_nulls](../../operations/settings/settings.md#join_use_nulls) setting defines how ClickHouse fills these cells.
|
||||
|
||||
If the `JOIN` keys are [Nullable](../data_types/nullable.md) fields, the rows where at least one of the keys has the value [NULL](../syntax.md#null-literal) are not joined.
|
||||
|
||||
#### Syntax Limitations {#syntax-limitations}
|
||||
|
||||
For multiple `JOIN` clauses in a single `SELECT` query:
|
||||
|
||||
- Taking all the columns via `*` is available only if tables are joined, not subqueries.
|
||||
- The `PREWHERE` clause is not available.
|
||||
|
||||
For `ON`, `WHERE`, and `GROUP BY` clauses:
|
||||
|
||||
- Arbitrary expressions cannot be used in `ON`, `WHERE`, and `GROUP BY` clauses, but you can define an expression in a `SELECT` clause and then use it in these clauses via an alias.
|
||||
|
||||
### WHERE Clause {#select-where}
|
||||
|
||||
If there is a WHERE clause, it must contain an expression with the UInt8 type. This is usually an expression with comparison and logical operators.
|
||||
This expression will be used for filtering data before all other transformations.
|
||||
|
||||
If indexes are supported by the database table engine, the expression is evaluated on the ability to use indexes.
|
||||
|
||||
### PREWHERE Clause {#prewhere-clause}
|
||||
|
||||
This clause has the same meaning as the WHERE clause. The difference is in which data is read from the table.
|
||||
When using PREWHERE, first only the columns necessary for executing PREWHERE are read. Then the other columns are read that are needed for running the query, but only those blocks where the PREWHERE expression is true.
|
||||
|
||||
It makes sense to use PREWHERE if there are filtration conditions that are used by a minority of the columns in the query, but that provide strong data filtration. This reduces the volume of data to read.
|
||||
|
||||
For example, it is useful to write PREWHERE for queries that extract a large number of columns, but that only have filtration for a few columns.
|
||||
|
||||
PREWHERE is only supported by tables from the `*MergeTree` family.
|
||||
|
||||
A query may simultaneously specify PREWHERE and WHERE. In this case, PREWHERE precedes WHERE.
|
||||
|
||||
If the ‘optimize\_move\_to\_prewhere’ setting is set to 1 and PREWHERE is omitted, the system uses heuristics to automatically move parts of expressions from WHERE to PREWHERE.
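A minimal sketch, assuming a wide `hits` table with the columns shown:

``` sql
-- Only the URL column is read at first to evaluate PREWHERE; the remaining
-- columns are read only for the blocks where the condition was true.
SELECT UserID, Title, Referer, URL
FROM hits
PREWHERE URL LIKE '%metrika%'
WHERE EventDate >= '2020-01-01'
```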
|
||||
|
||||
### GROUP BY Clause {#select-group-by-clause}
|
||||
|
||||
This is one of the most important parts of a column-oriented DBMS.
|
||||
|
||||
If there is a GROUP BY clause, it must contain a list of expressions. Each expression will be referred to here as a “key”.
|
||||
All the expressions in the SELECT, HAVING, and ORDER BY clauses must be calculated from keys or from aggregate functions. In other words, each column selected from the table must be used either in keys or inside aggregate functions.
|
||||
|
||||
If a query contains only table columns inside aggregate functions, the GROUP BY clause can be omitted, and aggregation by an empty set of keys is assumed.
|
||||
|
||||
Example:
|
||||
|
||||
``` sql
|
||||
SELECT
|
||||
count(),
|
||||
median(FetchTiming > 60 ? 60 : FetchTiming),
|
||||
count() - sum(Refresh)
|
||||
FROM hits
|
||||
```
|
||||
|
||||
However, in contrast to standard SQL, if the table doesn’t have any rows (either there aren’t any at all, or there aren’t any after using WHERE to filter), an empty result is returned, and not the result from one of the rows containing the initial values of aggregate functions.
|
||||
|
||||
As opposed to MySQL (and conforming to standard SQL), you can’t get some value of some column that is not in a key or aggregate function (except constant expressions). To work around this, you can use the ‘any’ aggregate function (get the first encountered value) or ‘min/max’.
|
||||
|
||||
Example:
|
||||
|
||||
``` sql
|
||||
SELECT
|
||||
domainWithoutWWW(URL) AS domain,
|
||||
count(),
|
||||
any(Title) AS title -- getting the first occurred page header for each domain.
|
||||
FROM hits
|
||||
GROUP BY domain
|
||||
```
|
||||
|
||||
For every different key value encountered, GROUP BY calculates a set of aggregate function values.
|
||||
|
||||
GROUP BY is not supported for array columns.
|
||||
|
||||
Constants can’t be specified as arguments for aggregate functions. Example: `sum(1)`. Instead, you can get rid of the constant. Example: `count()`.
|
||||
|
||||
#### NULL processing {#null-processing}
|
||||
|
||||
For grouping, ClickHouse interprets [NULL](../syntax.md) as a value, and `NULL=NULL`.
|
||||
|
||||
Here’s an example to show what this means.
|
||||
|
||||
Assume you have this table:
|
||||
|
||||
``` text
|
||||
┌─x─┬────y─┐
|
||||
│ 1 │ 2 │
|
||||
│ 2 │ ᴺᵁᴸᴸ │
|
||||
│ 3 │ 2 │
|
||||
│ 3 │ 3 │
|
||||
│ 3 │ ᴺᵁᴸᴸ │
|
||||
└───┴──────┘
|
||||
```
|
||||
|
||||
The query `SELECT sum(x), y FROM t_null_big GROUP BY y` results in:
|
||||
|
||||
``` text
|
||||
┌─sum(x)─┬────y─┐
|
||||
│ 4 │ 2 │
|
||||
│ 3 │ 3 │
|
||||
│ 5 │ ᴺᵁᴸᴸ │
|
||||
└────────┴──────┘
|
||||
```
|
||||
|
||||
You can see that `GROUP BY` for `y = NULL` summed up `x`, as if `NULL` is this value.
|
||||
|
||||
If you pass several keys to `GROUP BY`, the result will give you all the combinations of the selection, as if `NULL` were a specific value.
|
||||
|
||||
#### WITH TOTALS Modifier {#with-totals-modifier}
|
||||
|
||||
If the WITH TOTALS modifier is specified, another row will be calculated. This row will have key columns containing default values (zeros or empty strings), and columns of aggregate functions with the values calculated across all the rows (the “total” values).
|
||||
|
||||
This extra row is output in JSON\*, TabSeparated\*, and Pretty\* formats, separately from the other rows. In the other formats, this row is not output.
|
||||
|
||||
In JSON\* formats, this row is output as a separate ‘totals’ field. In TabSeparated\* formats, the row comes after the main result, preceded by an empty row (after the other data). In Pretty\* formats, the row is output as a separate table after the main result.
|
||||
|
||||
`WITH TOTALS` can be run in different ways when HAVING is present. The behavior depends on the ‘totals\_mode’ setting.
|
||||
By default, `totals_mode = 'before_having'`. In this case, ‘totals’ is calculated across all rows, including the ones that don’t pass through HAVING and ‘max\_rows\_to\_group\_by’.
|
||||
|
||||
The other alternatives include only the rows that pass through HAVING in ‘totals’, and behave differently with the setting `max_rows_to_group_by` and `group_by_overflow_mode = 'any'`.
|
||||
|
||||
`after_having_exclusive` – Don’t include rows that didn’t pass through `max_rows_to_group_by`. In other words, ‘totals’ will have less than or the same number of rows as it would if `max_rows_to_group_by` were omitted.
|
||||
|
||||
`after_having_inclusive` – Include all the rows that didn’t pass through ‘max\_rows\_to\_group\_by’ in ‘totals’. In other words, ‘totals’ will have more than or the same number of rows as it would if `max_rows_to_group_by` were omitted.
|
||||
|
||||
`after_having_auto` – Count the number of rows that passed through HAVING. If it is more than a certain amount (by default, 50%), include all the rows that didn’t pass through ‘max\_rows\_to\_group\_by’ in ‘totals’. Otherwise, do not include them.
|
||||
|
||||
`totals_auto_threshold` – By default, 0.5. The coefficient for `after_having_auto`.
|
||||
|
||||
If `max_rows_to_group_by` and `group_by_overflow_mode = 'any'` are not used, all variations of `after_having` are the same, and you can use any of them (for example, `after_having_auto`).
|
||||
|
||||
You can use WITH TOTALS in subqueries, including subqueries in the JOIN clause (in this case, the respective total values are combined).
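A hedged example of the syntax (the `hits` table and columns are assumptions; the extra ‘totals’ row is appended to the result):

``` sql
SET totals_mode = 'after_having_auto';

SELECT domainWithoutWWW(URL) AS domain, count() AS hits
FROM hits
GROUP BY domain WITH TOTALS
HAVING hits > 100
ORDER BY hits DESC
LIMIT 10
```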
|
||||
|
||||
#### GROUP BY in External Memory {#select-group-by-in-external-memory}
|
||||
|
||||
You can enable dumping temporary data to the disk to restrict memory usage during `GROUP BY`.
|
||||
The [max\_bytes\_before\_external\_group\_by](../../operations/settings/settings.md#settings-max_bytes_before_external_group_by) setting determines the threshold RAM consumption for dumping `GROUP BY` temporary data to the file system. If set to 0 (the default), it is disabled.
|
||||
|
||||
When using `max_bytes_before_external_group_by`, we recommend that you set `max_memory_usage` about twice as high. This is necessary because there are two stages to aggregation: reading the data and forming intermediate data (1) and merging the intermediate data (2). Dumping data to the file system can only occur during stage 1. If the temporary data wasn’t dumped, then stage 2 might require up to the same amount of memory as in stage 1.
|
||||
|
||||
For example, if [max\_memory\_usage](../../operations/settings/settings.md#settings_max_memory_usage) was set to 10000000000 and you want to use external aggregation, it makes sense to set `max_bytes_before_external_group_by` to 10000000000, and max\_memory\_usage to 20000000000. When external aggregation is triggered (if there was at least one dump of temporary data), maximum consumption of RAM is only slightly more than `max_bytes_before_external_group_by`.
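Written as session settings, the example from this paragraph would look like this:

``` sql
SET max_bytes_before_external_group_by = 10000000000; -- ~10 GB: spill GROUP BY temporary data to disk above this
SET max_memory_usage = 20000000000;                   -- ~20 GB: overall per-query memory limit, about twice as high
```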
|
||||
|
||||
With distributed query processing, external aggregation is performed on remote servers. In order for the requester server to use only a small amount of RAM, set `distributed_aggregation_memory_efficient` to 1.
|
||||
|
||||
Merging data flushed to the disk, as well as merging results from remote servers when the `distributed_aggregation_memory_efficient` setting is enabled, consumes up to `1/256 * the_number_of_threads` of the total amount of RAM.
|
||||
|
||||
When external aggregation is enabled, if there was less than `max_bytes_before_external_group_by` of data (i.e. data was not flushed), the query runs just as fast as without external aggregation. If any temporary data was flushed, the run time will be several times longer (approximately three times).
|
||||
|
||||
If you have an `ORDER BY` with a `LIMIT` after `GROUP BY`, then the amount of used RAM depends on the amount of data in `LIMIT`, not in the whole table. But if the `ORDER BY` doesn’t have `LIMIT`, don’t forget to enable external sorting (`max_bytes_before_external_sort`).
|
||||
|
||||
### LIMIT BY Clause {#limit-by-clause}
|
||||
|
||||
A query with the `LIMIT n BY expressions` clause selects the first `n` rows for each distinct value of `expressions`. The key for `LIMIT BY` can contain any number of [expressions](../syntax.md#syntax-expressions).
|
||||
|
||||
ClickHouse supports the following syntax:
|
||||
|
||||
- `LIMIT [offset_value, ]n BY expressions`
|
||||
- `LIMIT n OFFSET offset_value BY expressions`
|
||||
|
||||
During query processing, ClickHouse selects data ordered by sorting key. The sorting key is set explicitly using an [ORDER BY](#select-order-by) clause or implicitly as a property of the table engine. Then ClickHouse applies `LIMIT n BY expressions` and returns the first `n` rows for each distinct combination of `expressions`. If `OFFSET` is specified, then for each data block that belongs to a distinct combination of `expressions`, ClickHouse skips `offset_value` number of rows from the beginning of the block and returns a maximum of `n` rows as a result. If `offset_value` is bigger than the number of rows in the data block, ClickHouse returns zero rows from the block.
|
||||
|
||||
`LIMIT BY` is not related to `LIMIT`. They can both be used in the same query.
|
||||
|
||||
**Examples**
|
||||
|
||||
Sample table:
|
||||
|
||||
``` sql
|
||||
CREATE TABLE limit_by(id Int, val Int) ENGINE = Memory;
|
||||
INSERT INTO limit_by values(1, 10), (1, 11), (1, 12), (2, 20), (2, 21);
|
||||
```
|
||||
|
||||
Queries:
|
||||
|
||||
``` sql
|
||||
SELECT * FROM limit_by ORDER BY id, val LIMIT 2 BY id
|
||||
```
|
||||
|
||||
``` text
|
||||
┌─id─┬─val─┐
|
||||
│ 1 │ 10 │
|
||||
│ 1 │ 11 │
|
||||
│ 2 │ 20 │
|
||||
│ 2 │ 21 │
|
||||
└────┴─────┘
|
||||
```
|
||||
|
||||
``` sql
|
||||
SELECT * FROM limit_by ORDER BY id, val LIMIT 1, 2 BY id
|
||||
```
|
||||
|
||||
``` text
|
||||
┌─id─┬─val─┐
|
||||
│ 1 │ 11 │
|
||||
│ 1 │ 12 │
|
||||
│ 2 │ 21 │
|
||||
└────┴─────┘
|
||||
```
|
||||
|
||||
The `SELECT * FROM limit_by ORDER BY id, val LIMIT 2 OFFSET 1 BY id` query returns the same result.
|
||||
|
||||
The following query returns the top 5 referrers for each `domain, device_type` pair with a maximum of 100 rows in total (`LIMIT n BY + LIMIT`).
|
||||
|
||||
``` sql
|
||||
SELECT
|
||||
domainWithoutWWW(URL) AS domain,
|
||||
domainWithoutWWW(REFERRER_URL) AS referrer,
|
||||
device_type,
|
||||
count() cnt
|
||||
FROM hits
|
||||
GROUP BY domain, referrer, device_type
|
||||
ORDER BY cnt DESC
|
||||
LIMIT 5 BY domain, device_type
|
||||
LIMIT 100
|
||||
```
|
||||
|
||||
### HAVING Clause {#having-clause}
|
||||
|
||||
Allows filtering the result received after GROUP BY, similar to the WHERE clause.
|
||||
WHERE and HAVING differ in that WHERE is performed before aggregation (GROUP BY), while HAVING is performed after it.
|
||||
If aggregation is not performed, HAVING can’t be used.
|
||||
|
||||
### ORDER BY Clause {#select-order-by}
|
||||
|
||||
The ORDER BY clause contains a list of expressions, which can each be assigned DESC or ASC (the sorting direction). If the direction is not specified, ASC is assumed. ASC is sorted in ascending order, and DESC in descending order. The sorting direction applies to a single expression, not to the entire list. Example: `ORDER BY Visits DESC, SearchPhrase`
|
||||
|
||||
For sorting by String values, you can specify collation (comparison). Example: `ORDER BY SearchPhrase COLLATE 'tr'` - for sorting by keyword in ascending order, using the Turkish alphabet, case insensitive, assuming that strings are UTF-8 encoded. COLLATE can be specified or not for each expression in ORDER BY independently. If ASC or DESC is specified, COLLATE is specified after it. When using COLLATE, sorting is always case-insensitive.
|
||||
|
||||
We only recommend using COLLATE for final sorting of a small number of rows, since sorting with COLLATE is less efficient than normal sorting by bytes.
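A hedged example (table and columns assumed), with COLLATE placed after the sorting direction as described above:

``` sql
SELECT SearchPhrase, EventDate
FROM hits
ORDER BY SearchPhrase ASC COLLATE 'tr', EventDate DESC
LIMIT 20
```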
|
||||
|
||||
Rows that have identical values for the list of sorting expressions are output in an arbitrary order, which can also be nondeterministic (different each time).
|
||||
If the ORDER BY clause is omitted, the order of the rows is also undefined, and may be nondeterministic as well.
|
||||
|
||||
`NaN` and `NULL` sorting order:
|
||||
|
||||
- With the modifier `NULLS FIRST` — First `NULL`, then `NaN`, then other values.
|
||||
- With the modifier `NULLS LAST` — First the values, then `NaN`, then `NULL`.
|
||||
- Default — The same as with the `NULLS LAST` modifier.
|
||||
|
||||
Example:
|
||||
|
||||
For the table
|
||||
|
||||
``` text
|
||||
┌─x─┬────y─┐
|
||||
│ 1 │ ᴺᵁᴸᴸ │
|
||||
│ 2 │ 2 │
|
||||
│ 1 │ nan │
|
||||
│ 2 │ 2 │
|
||||
│ 3 │ 4 │
|
||||
│ 5 │ 6 │
|
||||
│ 6 │ nan │
|
||||
│ 7 │ ᴺᵁᴸᴸ │
|
||||
│ 6 │ 7 │
|
||||
│ 8 │ 9 │
|
||||
└───┴──────┘
|
||||
```
|
||||
|
||||
Run the query `SELECT * FROM t_null_nan ORDER BY y NULLS FIRST` to get:
|
||||
|
||||
``` text
|
||||
┌─x─┬────y─┐
|
||||
│ 1 │ ᴺᵁᴸᴸ │
|
||||
│ 7 │ ᴺᵁᴸᴸ │
|
||||
│ 1 │ nan │
|
||||
│ 6 │ nan │
|
||||
│ 2 │ 2 │
|
||||
│ 2 │ 2 │
|
||||
│ 3 │ 4 │
|
||||
│ 5 │ 6 │
|
||||
│ 6 │ 7 │
|
||||
│ 8 │ 9 │
|
||||
└───┴──────┘
|
||||
```
|
||||
|
||||
When floating point numbers are sorted, NaNs are separate from the other values. Regardless of the sorting order, NaNs come at the end. In other words, for ascending sorting they are placed as if they are larger than all the other numbers, while for descending sorting they are placed as if they are smaller than the rest.
|
||||
|
||||
Less RAM is used if a small enough LIMIT is specified in addition to ORDER BY. Otherwise, the amount of memory spent is proportional to the volume of data for sorting. For distributed query processing, if GROUP BY is omitted, sorting is partially done on remote servers, and the results are merged on the requestor server. This means that for distributed sorting, the volume of data to sort can be greater than the amount of memory on a single server.
|
||||
|
||||
If there is not enough RAM, it is possible to perform sorting in external memory (creating temporary files on a disk). Use the setting `max_bytes_before_external_sort` for this purpose. If it is set to 0 (the default), external sorting is disabled. If it is enabled, when the volume of data to sort reaches the specified number of bytes, the collected data is sorted and dumped into a temporary file. After all data is read, all the sorted files are merged and the results are output. Files are written to the /var/lib/clickhouse/tmp/ directory by default; you can use the ‘tmp\_path’ server parameter to change this.
|
||||
|
||||
Running a query may use more memory than ‘max\_bytes\_before\_external\_sort’. For this reason, this setting must have a value significantly smaller than ‘max\_memory\_usage’. As an example, if your server has 128 GB of RAM and you need to run a single query, set ‘max\_memory\_usage’ to 100 GB, and ‘max\_bytes\_before\_external\_sort’ to 80 GB.
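The 128 GB example above, written as session settings:

``` sql
SET max_memory_usage = 100000000000;               -- ~100 GB
SET max_bytes_before_external_sort = 80000000000;  -- ~80 GB: start spilling sort data to temporary files above this
```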
|
||||
|
||||
External sorting works much less effectively than sorting in RAM.
|
||||
|
||||
### SELECT Clause {#select-select}
|
||||
|
||||
[Expressions](../syntax.md#syntax-expressions) specified in the `SELECT` clause are calculated after all the operations in the clauses described above are finished. These expressions work as if they apply to separate rows in the result. If expressions in the `SELECT` clause contain aggregate functions, then ClickHouse processes aggregate functions and expressions used as their arguments during the [GROUP BY](#select-group-by-clause) aggregation.
|
||||
|
||||
If you want to include all columns in the result, use the asterisk (`*`) symbol. For example, `SELECT * FROM ...`.
|
||||
|
||||
To match some columns in the result with a [re2](https://en.wikipedia.org/wiki/RE2_(software)) regular expression, you can use the `COLUMNS` expression.
|
||||
|
||||
``` sql
|
||||
COLUMNS('regexp')
|
||||
```
|
||||
|
||||
For example, consider the table:
|
||||
|
||||
``` sql
|
||||
CREATE TABLE default.col_names (aa Int8, ab Int8, bc Int8) ENGINE = TinyLog
|
||||
```
|
||||
|
||||
The following query selects data from all the columns containing the `a` symbol in their name.
|
||||
|
||||
``` sql
|
||||
SELECT COLUMNS('a') FROM col_names
|
||||
```
|
||||
|
||||
``` text
|
||||
┌─aa─┬─ab─┐
|
||||
│ 1 │ 1 │
|
||||
└────┴────┘
|
||||
```
|
||||
|
||||
The selected columns are not returned in alphabetical order.
|
||||
|
||||
You can use multiple `COLUMNS` expressions in a query and apply functions to them.
|
||||
|
||||
For example:
|
||||
|
||||
``` sql
|
||||
SELECT COLUMNS('a'), COLUMNS('c'), toTypeName(COLUMNS('c')) FROM col_names
|
||||
```
|
||||
|
||||
``` text
|
||||
┌─aa─┬─ab─┬─bc─┬─toTypeName(bc)─┐
|
||||
│ 1 │ 1 │ 1 │ Int8 │
|
||||
└────┴────┴────┴────────────────┘
|
||||
```
|
||||
|
||||
Each column returned by the `COLUMNS` expression is passed to the function as a separate argument. Also you can pass other arguments to the function if it supports them. Be careful when using functions. If a function doesn’t support the number of arguments you have passed to it, ClickHouse throws an exception.
|
||||
|
||||
For example:
|
||||
|
||||
``` sql
|
||||
SELECT COLUMNS('a') + COLUMNS('c') FROM col_names
|
||||
```
|
||||
|
||||
``` text
|
||||
Received exception from server (version 19.14.1):
|
||||
Code: 42. DB::Exception: Received from localhost:9000. DB::Exception: Number of arguments for function plus doesn't match: passed 3, should be 2.
|
||||
```
|
||||
|
||||
In this example, `COLUMNS('a')` returns two columns: `aa` and `ab`. `COLUMNS('c')` returns the `bc` column. The `+` operator can’t apply to 3 arguments, so ClickHouse throws an exception with the relevant message.
|
||||
|
||||
Columns that matched the `COLUMNS` expression can have different data types. If `COLUMNS` doesn’t match any columns and is the only expression in `SELECT`, ClickHouse throws an exception.
|
||||
|
||||
### DISTINCT Clause {#select-distinct}
|
||||
|
||||
If DISTINCT is specified, only a single row will remain out of all the sets of fully matching rows in the result.
|
||||
The result will be the same as if GROUP BY were specified across all the fields specified in SELECT without aggregate functions. But there are several differences from GROUP BY:
|
||||
|
||||
- DISTINCT can be applied together with GROUP BY.
|
||||
- When ORDER BY is omitted and LIMIT is defined, the query stops running immediately after the required number of different rows has been read.
|
||||
- Data blocks are output as they are processed, without waiting for the entire query to finish running.
|
||||
|
||||
DISTINCT is not supported if SELECT has at least one array column.
|
||||
|
||||
`DISTINCT` works with [NULL](../syntax.md) as if `NULL` were a specific value, and `NULL=NULL`. In other words, in the `DISTINCT` results, different combinations with `NULL` only occur once.
|
||||
|
||||
ClickHouse supports using the `DISTINCT` and `ORDER BY` clauses for different columns in one query. The `DISTINCT` clause is executed before the `ORDER BY` clause.
|
||||
|
||||
Example table:
|
||||
|
||||
``` text
|
||||
┌─a─┬─b─┐
|
||||
│ 2 │ 1 │
|
||||
│ 1 │ 2 │
|
||||
│ 3 │ 3 │
|
||||
│ 2 │ 4 │
|
||||
└───┴───┘
|
||||
```
|
||||
|
||||
When selecting data with the `SELECT DISTINCT a FROM t1 ORDER BY b ASC` query, we get the following result:
|
||||
|
||||
``` text
|
||||
┌─a─┐
|
||||
│ 2 │
|
||||
│ 1 │
|
||||
│ 3 │
|
||||
└───┘
|
||||
```
|
||||
|
||||
If we change the sorting direction `SELECT DISTINCT a FROM t1 ORDER BY b DESC`, we get the following result:
|
||||
|
||||
``` text
|
||||
┌─a─┐
|
||||
│ 3 │
|
||||
│ 1 │
|
||||
│ 2 │
|
||||
└───┘
|
||||
```
|
||||
|
||||
Row `2, 4` was cut before sorting.
|
||||
|
||||
Take this implementation specificity into account when programming queries.
|
||||
|
||||
### LIMIT Clause {#limit-clause}
|
||||
|
||||
`LIMIT m` allows you to select the first `m` rows from the result.
|
||||
|
||||
`LIMIT n, m` allows you to select the first `m` rows from the result after skipping the first `n` rows. The `LIMIT m OFFSET n` syntax is also supported.
|
||||
|
||||
`n` and `m` must be non-negative integers.
|
||||
|
||||
If there isn’t an `ORDER BY` clause that explicitly sorts results, the result may be arbitrary and nondeterministic.
|
||||
|
||||
### UNION ALL Clause {#union-all-clause}
|
||||
|
||||
You can use UNION ALL to combine any number of queries. Example:
|
||||
|
||||
``` sql
|
||||
SELECT CounterID, 1 AS table, toInt64(count()) AS c
|
||||
FROM test.hits
|
||||
GROUP BY CounterID
|
||||
|
||||
UNION ALL
|
||||
|
||||
SELECT CounterID, 2 AS table, sum(Sign) AS c
|
||||
FROM test.visits
|
||||
GROUP BY CounterID
|
||||
HAVING c > 0
|
||||
```
|
||||
|
||||
Only UNION ALL is supported. The regular UNION (UNION DISTINCT) is not supported. If you need UNION DISTINCT, you can write SELECT DISTINCT from a subquery containing UNION ALL.
|
||||
|
||||
Queries that are parts of UNION ALL can be run simultaneously, and their results can be mixed together.
|
||||
|
||||
The structure of results (the number and type of columns) must match for the queries. But the column names can differ. In this case, the column names for the final result will be taken from the first query. Type casting is performed for unions. For example, if two queries being combined have the same field with non-`Nullable` and `Nullable` types from a compatible type, the resulting `UNION ALL` has a `Nullable` type field.
|
||||
|
||||
Queries that are parts of UNION ALL can’t be enclosed in brackets. ORDER BY and LIMIT are applied to separate queries, not to the final result. If you need to apply a conversion to the final result, you can put all the queries with UNION ALL in a subquery in the FROM clause.
|
||||
|
||||
### INTO OUTFILE Clause {#into-outfile-clause}
|
||||
|
||||
Add the `INTO OUTFILE filename` clause (where filename is a string literal) to redirect query output to the specified file.
|
||||
In contrast to MySQL, the file is created on the client side. The query will fail if a file with the same filename already exists.
|
||||
This functionality is available in the command-line client and clickhouse-local (a query sent via HTTP interface will fail).
|
||||
|
||||
The default output format is TabSeparated (the same as in the command-line client batch mode).
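A hedged sketch (the output filename is arbitrary); the file is written on the client side, for example by clickhouse-client:

``` sql
SELECT CounterID, count() AS c
FROM test.hits
GROUP BY CounterID
ORDER BY c DESC
LIMIT 100
INTO OUTFILE 'top_counters.tsv'
FORMAT TabSeparated
```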
|
||||
|
||||
### FORMAT Clause {#format-clause}
|
||||
|
||||
Specify ‘FORMAT format’ to get data in any specified format.
|
||||
You can use this for convenience, or for creating dumps.
|
||||
For more information, see the section “Formats”.
|
||||
If the FORMAT clause is omitted, the default format is used, which depends on both the settings and the interface used for accessing the DB. For the HTTP interface and the command-line client in batch mode, the default format is TabSeparated. For the command-line client in interactive mode, the default format is PrettyCompact (it has attractive and compact tables).
|
||||
|
||||
When using the command-line client, data is passed to the client in an internal efficient format. The client independently interprets the FORMAT clause of the query and formats the data itself (thus relieving the network and the server from the load).
|
||||
|
||||
### IN Operators {#select-in-operators}

The `IN`, `NOT IN`, `GLOBAL IN`, and `GLOBAL NOT IN` operators are covered separately, since their functionality is quite rich.

The left side of the operator is either a single column or a tuple.

Examples:

``` sql
SELECT UserID IN (123, 456) FROM ...
SELECT (CounterID, UserID) IN ((34, 123), (101500, 456)) FROM ...
```

If the left side is a single column that is in the index, and the right side is a set of constants, the system uses the index for processing the query.

Don’t list too many values explicitly (e.g. millions). If a data set is large, put it in a temporary table (for example, see the section “External data for query processing”), then use a subquery.

The right side of the operator can be a set of constant expressions, a set of tuples with constant expressions (shown in the examples above), or the name of a database table or SELECT subquery in brackets.

If the right side of the operator is the name of a table (for example, `UserID IN users`), this is equivalent to the subquery `UserID IN (SELECT * FROM users)`. Use this when working with external data that is sent along with the query. For example, the query can be sent together with a set of user IDs loaded to the ‘users’ temporary table, which should be filtered.

If the right side of the operator is a table name that has the Set engine (a prepared data set that is always in RAM), the data set will not be created over again for each query.

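
As a sketch of the Set-engine case (the `users` table here is hypothetical): the set lives in RAM and is reused by every query that references it on the right side of `IN`:

``` sql
CREATE TABLE users (UserID UInt64) ENGINE = Set;
INSERT INTO users VALUES (123), (456);

SELECT count() FROM test.hits WHERE UserID IN users;
```
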
The subquery may specify more than one column for filtering tuples.
Example:

``` sql
SELECT (CounterID, UserID) IN (SELECT CounterID, UserID FROM ...) FROM ...
```

The columns to the left and right of the IN operator should have the same type.

The IN operator and subquery may occur in any part of the query, including in aggregate functions and lambda functions.
Example:

``` sql
SELECT
    EventDate,
    avg(UserID IN
    (
        SELECT UserID
        FROM test.hits
        WHERE EventDate = toDate('2014-03-17')
    )) AS ratio
FROM test.hits
GROUP BY EventDate
ORDER BY EventDate ASC
```

``` text
┌──EventDate─┬────ratio─┐
│ 2014-03-17 │        1 │
│ 2014-03-18 │ 0.807696 │
│ 2014-03-19 │ 0.755406 │
│ 2014-03-20 │ 0.723218 │
│ 2014-03-21 │ 0.697021 │
│ 2014-03-22 │ 0.647851 │
│ 2014-03-23 │ 0.648416 │
└────────────┴──────────┘
```

For each day after March 17th, the query counts the percentage of pageviews made by users who visited the site on March 17th.
A subquery in the IN clause is always run just one time on a single server. There are no dependent subqueries.

#### NULL processing {#null-processing-1}

During request processing, the IN operator assumes that the result of an operation with [NULL](../syntax.md) is always equal to `0`, regardless of whether `NULL` is on the right or left side of the operator. `NULL` values are not included in any dataset, do not correspond to each other and cannot be compared.

Here is an example with the `t_null` table:

``` text
┌─x─┬────y─┐
│ 1 │ ᴺᵁᴸᴸ │
│ 2 │    3 │
└───┴──────┘
```

Running the query `SELECT x FROM t_null WHERE y IN (NULL,3)` gives you the following result:

``` text
┌─x─┐
│ 2 │
└───┘
```

You can see that the row in which `y = NULL` is thrown out of the query results. This is because ClickHouse can’t decide whether `NULL` is included in the `(NULL,3)` set, returns `0` as the result of the operation, and `SELECT` excludes this row from the final output.

``` sql
SELECT y IN (NULL, 3)
FROM t_null
```

``` text
┌─in(y, tuple(NULL, 3))─┐
│                     0 │
│                     1 │
└───────────────────────┘
```

#### Distributed Subqueries {#select-distributed-subqueries}

There are two options for IN-s with subqueries (similar to JOINs): normal `IN` / `JOIN` and `GLOBAL IN` / `GLOBAL JOIN`. They differ in how they are run for distributed query processing.

!!! attention "Attention"
    Remember that the algorithms described below may work differently depending on the [distributed_product_mode](../../operations/settings/settings.md) setting.

When using the regular IN, the query is sent to remote servers, and each of them runs the subqueries in the `IN` or `JOIN` clause.

When using `GLOBAL IN` / `GLOBAL JOIN`, all the subqueries are run first, and the results are collected in temporary tables. Then the temporary tables are sent to each remote server, where the queries are run using this temporary data.

For a non-distributed query, use the regular `IN` / `JOIN`.

Be careful when using subqueries in the `IN` / `JOIN` clauses for distributed query processing.

Let’s look at some examples. Assume that each server in the cluster has a normal **local\_table**. Each server also has a **distributed\_table** table with the **Distributed** type, which looks at all the servers in the cluster.

For a query to the **distributed\_table**, the query will be sent to all the remote servers and run on them using the **local\_table**.

For example, the query

``` sql
SELECT uniq(UserID) FROM distributed_table
```

will be sent to all remote servers as

``` sql
SELECT uniq(UserID) FROM local_table
```

and run on each of them in parallel, until it reaches the stage where intermediate results can be combined. Then the intermediate results will be returned to the requestor server and merged on it, and the final result will be sent to the client.

Now let’s examine a query with IN:

``` sql
SELECT uniq(UserID) FROM distributed_table WHERE CounterID = 101500 AND UserID IN (SELECT UserID FROM local_table WHERE CounterID = 34)
```

- Calculation of the intersection of the audiences of two sites.

This query will be sent to all remote servers as

``` sql
SELECT uniq(UserID) FROM local_table WHERE CounterID = 101500 AND UserID IN (SELECT UserID FROM local_table WHERE CounterID = 34)
```

In other words, the data set in the IN clause will be collected on each server independently, only across the data that is stored locally on each of the servers.

This will work correctly and optimally if you are prepared for this case and have spread data across the cluster servers such that the data for a single UserID resides entirely on a single server. In this case, all the necessary data will be available locally on each server. Otherwise, the result will be inaccurate. We refer to this variation of the query as “local IN”.

To correct how the query works when data is spread randomly across the cluster servers, you could specify **distributed\_table** inside a subquery. The query would look like this:

``` sql
SELECT uniq(UserID) FROM distributed_table WHERE CounterID = 101500 AND UserID IN (SELECT UserID FROM distributed_table WHERE CounterID = 34)
```

This query will be sent to all remote servers as

``` sql
SELECT uniq(UserID) FROM local_table WHERE CounterID = 101500 AND UserID IN (SELECT UserID FROM distributed_table WHERE CounterID = 34)
```

The subquery will begin running on each remote server. Since the subquery uses a distributed table, the subquery that is on each remote server will be resent to every remote server as

``` sql
SELECT UserID FROM local_table WHERE CounterID = 34
```

For example, if you have a cluster of 100 servers, executing the entire query will require 10,000 elementary requests, which is generally considered unacceptable.

In such cases, you should always use GLOBAL IN instead of IN. Let’s look at how it works for the query

``` sql
SELECT uniq(UserID) FROM distributed_table WHERE CounterID = 101500 AND UserID GLOBAL IN (SELECT UserID FROM distributed_table WHERE CounterID = 34)
```

The requestor server will run the subquery

``` sql
SELECT UserID FROM distributed_table WHERE CounterID = 34
```

and the result will be put in a temporary table in RAM. Then the request will be sent to each remote server as

``` sql
SELECT uniq(UserID) FROM local_table WHERE CounterID = 101500 AND UserID GLOBAL IN _data1
```

and the temporary table `_data1` will be sent to every remote server with the query (the name of the temporary table is implementation-defined).

This is more optimal than using the normal IN. However, keep the following points in mind:

1. When creating a temporary table, data is not made unique. To reduce the volume of data transmitted over the network, specify DISTINCT in the subquery (see the sketch after this list). (You don’t need to do this for a normal IN.)
2. The temporary table will be sent to all the remote servers. Transmission does not account for network topology. For example, if 10 remote servers reside in a datacenter that is very remote in relation to the requestor server, the data will be sent 10 times over the channel to the remote datacenter. Try to avoid large data sets when using GLOBAL IN.
3. When transmitting data to remote servers, restrictions on network bandwidth are not configurable. You might overload the network.
4. Try to distribute data across servers so that you don’t need to use GLOBAL IN on a regular basis.
5. If you need to use GLOBAL IN often, plan the location of the ClickHouse cluster so that a single group of replicas resides in no more than one data center with a fast network between them, so that a query can be processed entirely within a single data center.

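
A sketch for point 1 above: adding `DISTINCT` to the `GLOBAL IN` subquery, so the temporary table holds each `UserID` only once before being shipped to the remote servers:

``` sql
SELECT uniq(UserID) FROM distributed_table WHERE CounterID = 101500 AND UserID GLOBAL IN (SELECT DISTINCT UserID FROM distributed_table WHERE CounterID = 34)
```
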
It also makes sense to specify a local table in the `GLOBAL IN` clause, in case this local table is only available on the requestor server and you want to use data from it on remote servers.

### Extreme Values {#extreme-values}

In addition to results, you can also get minimum and maximum values for the results columns. To do this, set the **extremes** setting to 1. Minimums and maximums are calculated for numeric types, dates, and dates with times. For other columns, the default values are output.

An extra two rows are calculated – the minimums and maximums, respectively. These extra two rows are output in `JSON*`, `TabSeparated*`, and `Pretty*` [formats](../../interfaces/formats.md), separate from the other rows. They are not output for other formats.

In `JSON*` formats, the extreme values are output in a separate ‘extremes’ field. In `TabSeparated*` formats, the row comes after the main result, and after ‘totals’ if present. It is preceded by an empty row (after the other data). In `Pretty*` formats, the row is output as a separate table after the main result, and after `totals` if present.

Extreme values are calculated for rows before `LIMIT`, but after `LIMIT BY`. However, when using `LIMIT offset, size`, the rows before `offset` are included in `extremes`. In stream requests, the result may also include a small number of rows that passed through `LIMIT`.

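
A minimal sketch of enabling extremes for a single query (reusing `test.hits` from the examples above); in the formats listed above the two extra rows appear separately from the main result:

``` sql
SELECT EventDate, count() AS c
FROM test.hits
GROUP BY EventDate
SETTINGS extremes = 1
```
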
### Notes {#notes}

The `GROUP BY` and `ORDER BY` clauses do not support positional arguments. This contradicts MySQL, but conforms to standard SQL.
For example, `GROUP BY 1, 2` will be interpreted as grouping by constants (i.e. aggregation of all rows into one).

You can use synonyms (`AS` aliases) in any part of a query.

You can put an asterisk in any part of a query instead of an expression. When the query is analyzed, the asterisk is expanded to a list of all table columns (excluding the `MATERIALIZED` and `ALIAS` columns). There are only a few cases when using an asterisk is justified:

- When creating a table dump.
- For tables containing just a few columns, such as system tables.
- For getting information about what columns are in a table. In this case, set `LIMIT 1`. But it is better to use the `DESC TABLE` query.
- When there is strong filtration on a small number of columns using `PREWHERE`.
- In subqueries (since columns that aren’t needed for the external query are excluded from subqueries).

In all other cases, we don’t recommend using the asterisk, since it only gives you the drawbacks of a columnar DBMS instead of its advantages.

[Original article](https://clickhouse.tech/docs/en/query_language/select/) <!--hide-->

@ -80,48 +80,6 @@ Ver la diferencia?
|
||||
|
||||
Por ejemplo, la consulta “count the number of records for each advertising platform” requiere leer uno “advertising platform ID” columna, que ocupa 1 byte sin comprimir. Si la mayor parte del tráfico no proviene de plataformas publicitarias, puede esperar al menos una compresión de 10 veces de esta columna. Cuando se utiliza un algoritmo de compresión rápida, la descompresión de datos es posible a una velocidad de al menos varios gigabytes de datos sin comprimir por segundo. En otras palabras, esta consulta se puede procesar a una velocidad de aproximadamente varios miles de millones de filas por segundo en un único servidor. Esta velocidad se logra realmente en la práctica.
|
||||
|
||||
<details markdown="1">
|
||||
|
||||
<summary>Ejemplo</summary>
|
||||
|
||||
``` bash
|
||||
$ clickhouse-client
|
||||
ClickHouse client version 0.0.52053.
|
||||
Connecting to localhost:9000.
|
||||
Connected to ClickHouse server version 0.0.52053.
|
||||
```
|
||||
|
||||
``` sql
|
||||
SELECT CounterID, count() FROM hits GROUP BY CounterID ORDER BY count() DESC LIMIT 20
|
||||
```
|
||||
|
||||
``` text
|
||||
┌─CounterID─┬──count()─┐
|
||||
│ 114208 │ 56057344 │
|
||||
│ 115080 │ 51619590 │
|
||||
│ 3228 │ 44658301 │
|
||||
│ 38230 │ 42045932 │
|
||||
│ 145263 │ 42042158 │
|
||||
│ 91244 │ 38297270 │
|
||||
│ 154139 │ 26647572 │
|
||||
│ 150748 │ 24112755 │
|
||||
│ 242232 │ 21302571 │
|
||||
│ 338158 │ 13507087 │
|
||||
│ 62180 │ 12229491 │
|
||||
│ 82264 │ 12187441 │
|
||||
│ 232261 │ 12148031 │
|
||||
│ 146272 │ 11438516 │
|
||||
│ 168777 │ 11403636 │
|
||||
│ 4120072 │ 11227824 │
|
||||
│ 10938808 │ 10519739 │
|
||||
│ 74088 │ 9047015 │
|
||||
│ 115079 │ 8837972 │
|
||||
│ 337234 │ 8205961 │
|
||||
└───────────┴──────────┘
|
||||
```
|
||||
|
||||
</details>
|
||||
|
||||
### CPU {#cpu}
|
||||
|
||||
Dado que la ejecución de una consulta requiere procesar un gran número de filas, ayuda enviar todas las operaciones para vectores completos en lugar de para filas separadas, o implementar el motor de consultas para que casi no haya costo de envío. Si no hace esto, con cualquier subsistema de disco medio decente, el intérprete de consultas inevitablemente detiene la CPU. Tiene sentido almacenar datos en columnas y procesarlos, cuando sea posible, por columnas.
|
||||
|
@ -26,7 +26,10 @@ toc_title: "Integraci\xF3n"
|
||||
- [Método de codificación de datos:](https://github.com/zlzforever/ClickHouseMigrator)
|
||||
- Colas de mensajes
|
||||
- [Kafka](https://kafka.apache.org)
|
||||
- [Método de codificación de datos:](https://github.com/housepower/clickhouse_sinker) (utilizar [Ir cliente](https://github.com/kshvakov/clickhouse/))
|
||||
- [clickhouse\_sinker](https://github.com/housepower/clickhouse_sinker) (usos [Go client](https://github.com/ClickHouse/clickhouse-go/))
|
||||
- Procesamiento de flujo
|
||||
- [Flink](https://flink.apache.org)
|
||||
- [flink-clickhouse-sink](https://github.com/ivi-ru/flink-clickhouse-sink)
|
||||
- Almacenamiento de objetos
|
||||
- [S3](https://en.wikipedia.org/wiki/Amazon_S3)
|
||||
- [Haga clic en el botón de copia de seguridad](https://github.com/AlexAkulov/clickhouse-backup)
|
||||
@ -74,6 +77,9 @@ toc_title: "Integraci\xF3n"
|
||||
- [sqlalchemy-clickhouse](https://github.com/cloudflare/sqlalchemy-clickhouse) (utilizar [InformaciónSistema abierto.](https://github.com/Infinidat/infi.clickhouse_orm))
|
||||
- [pandas](https://pandas.pydata.org)
|
||||
- [Pandahouse](https://github.com/kszucs/pandahouse)
|
||||
- PHP
|
||||
- [Doctrine](https://www.doctrine-project.org/)
|
||||
- [dbal-clickhouse](https://packagist.org/packages/friendsofdoctrine/dbal-clickhouse)
|
||||
- R
|
||||
- [Dplyr](https://db.rstudio.com/dplyr/)
|
||||
- [Bienvenidos al Portal de Licitación Electrónica de Licitación Electrónica](https://github.com/IMSMWU/RClickhouse) (utilizar [Bienvenidos](https://github.com/artpaul/clickhouse-cpp))
|
||||
|
@ -79,54 +79,6 @@ ClickHouse یک مدیریت دیتابیس (DBMS) ستون گرا برای پر
|
||||
|
||||
برای مثال، query «تعداد رکوردها به ازای هر بستر نیازمندی» نیازمند خواندن ستون «آیدی بستر آگهی»، که 1 بایت بدون فشرده طول می کشد، خواهد بود. اگر بیشتر ترافیک مربوط به بستر های نیازمندی نبود، شما می توانید انتظار حداقل 10 برابر فشرده سازی این ستون را داشته باشید. زمانی که از الگوریتم فشرده سازی quick استفاده می کنید، عملیات decompression داده ها با سرعت حداقل چندین گیگابایت در ثانیه انجام می شود. به عبارت دیگر، این query توانایی پردازش تقریبا چندین میلیارد رکورد در ثانیه به ازای یک سرور را دارد. این سرعت در عمل واقعی و دست یافتنی است.
|
||||
|
||||
<details markdown="1">
|
||||
|
||||
<summary>مثال</summary>
|
||||
|
||||
$ clickhouse-client
|
||||
ClickHouse client version 0.0.52053.
|
||||
Connecting to localhost:9000.
|
||||
Connected to ClickHouse server version 0.0.52053.
|
||||
|
||||
:) SELECT CounterID, count() FROM hits GROUP BY CounterID ORDER BY count() DESC LIMIT 20
|
||||
|
||||
SELECT
|
||||
CounterID,
|
||||
count()
|
||||
FROM hits
|
||||
GROUP BY CounterID
|
||||
ORDER BY count() DESC
|
||||
LIMIT 20
|
||||
|
||||
┌─CounterID─┬──count()─┐
|
||||
│ 114208 │ 56057344 │
|
||||
│ 115080 │ 51619590 │
|
||||
│ 3228 │ 44658301 │
|
||||
│ 38230 │ 42045932 │
|
||||
│ 145263 │ 42042158 │
|
||||
│ 91244 │ 38297270 │
|
||||
│ 154139 │ 26647572 │
|
||||
│ 150748 │ 24112755 │
|
||||
│ 242232 │ 21302571 │
|
||||
│ 338158 │ 13507087 │
|
||||
│ 62180 │ 12229491 │
|
||||
│ 82264 │ 12187441 │
|
||||
│ 232261 │ 12148031 │
|
||||
│ 146272 │ 11438516 │
|
||||
│ 168777 │ 11403636 │
|
||||
│ 4120072 │ 11227824 │
|
||||
│ 10938808 │ 10519739 │
|
||||
│ 74088 │ 9047015 │
|
||||
│ 115079 │ 8837972 │
|
||||
│ 337234 │ 8205961 │
|
||||
└───────────┴──────────┘
|
||||
|
||||
20 rows in set. Elapsed: 0.153 sec. Processed 1.00 billion rows, 4.00 GB (6.53 billion rows/s., 26.10 GB/s.)
|
||||
|
||||
:)
|
||||
|
||||
</details>
|
||||
|
||||
### CPU {#cpu}
|
||||
|
||||
از آنجایی که اجرای یک query نیازمند پردازش تعداد زیادی سطر می باشد، این کمک می کند تا تمام عملیات ها به جای ارسال به سطرهای جداگانه، برای کل بردار ارسال شود، یا برای ترکیب query engine به طوری که هیچ هزینه ی ارسالی وجود ندارد. اگر این کار رو نکنید، با هر half-decent disk subsystem، تفسیرگر query ناگزیر است که CPU را متوقف کند. این منطقی است که که در صورت امکان هر دو کار ذخیره سازی داده در ستون ها و پردازش ستون ها با هم انجام شود.
|
||||
|
@ -74,6 +74,9 @@ toc_title: "\u06CC\u06A9\u067E\u0627\u0631\u0686\u06AF\u06CC"
|
||||
- [sqlalchemy-clickhouse](https://github.com/cloudflare/sqlalchemy-clickhouse) (استفاده [اطالعات.کلیک \_شورم](https://github.com/Infinidat/infi.clickhouse_orm))
|
||||
- [پانداها](https://pandas.pydata.org)
|
||||
- [پانداهاوس](https://github.com/kszucs/pandahouse)
|
||||
- PHP
|
||||
- [Doctrine](https://www.doctrine-project.org/)
|
||||
- [dbal-clickhouse](https://packagist.org/packages/friendsofdoctrine/dbal-clickhouse)
|
||||
- R
|
||||
- [هواپیمای دوباله](https://db.rstudio.com/dplyr/)
|
||||
- [خانه روستایی](https://github.com/IMSMWU/RClickhouse) (استفاده [صفحه اصلی](https://github.com/artpaul/clickhouse-cpp))
|
||||
|
@ -80,48 +80,6 @@ Vous voyez la différence?
|
||||
|
||||
Par exemple, la requête “count the number of records for each advertising platform” nécessite la lecture d'un “advertising platform ID” colonne, qui prend 1 octet non compressé. Si la majeure partie du trafic ne provenait pas de plates-formes publicitaires, vous pouvez vous attendre à une compression d'au moins 10 fois de cette colonne. Lors de l'utilisation d'un algorithme de compression rapide, la décompression des données est possible à une vitesse d'au moins plusieurs gigaoctets de données non compressées par seconde. En d'autres termes, cette requête ne peut être traitée qu'à une vitesse d'environ plusieurs milliards de lignes par seconde sur un seul serveur. Cette vitesse est effectivement atteinte dans la pratique.
|
||||
|
||||
<details markdown="1">
|
||||
|
||||
<summary>Exemple</summary>
|
||||
|
||||
``` bash
|
||||
$ clickhouse-client
|
||||
ClickHouse client version 0.0.52053.
|
||||
Connecting to localhost:9000.
|
||||
Connected to ClickHouse server version 0.0.52053.
|
||||
```
|
||||
|
||||
``` sql
|
||||
SELECT CounterID, count() FROM hits GROUP BY CounterID ORDER BY count() DESC LIMIT 20
|
||||
```
|
||||
|
||||
``` text
|
||||
┌─CounterID─┬──count()─┐
|
||||
│ 114208 │ 56057344 │
|
||||
│ 115080 │ 51619590 │
|
||||
│ 3228 │ 44658301 │
|
||||
│ 38230 │ 42045932 │
|
||||
│ 145263 │ 42042158 │
|
||||
│ 91244 │ 38297270 │
|
||||
│ 154139 │ 26647572 │
|
||||
│ 150748 │ 24112755 │
|
||||
│ 242232 │ 21302571 │
|
||||
│ 338158 │ 13507087 │
|
||||
│ 62180 │ 12229491 │
|
||||
│ 82264 │ 12187441 │
|
||||
│ 232261 │ 12148031 │
|
||||
│ 146272 │ 11438516 │
|
||||
│ 168777 │ 11403636 │
|
||||
│ 4120072 │ 11227824 │
|
||||
│ 10938808 │ 10519739 │
|
||||
│ 74088 │ 9047015 │
|
||||
│ 115079 │ 8837972 │
|
||||
│ 337234 │ 8205961 │
|
||||
└───────────┴──────────┘
|
||||
```
|
||||
|
||||
</details>
|
||||
|
||||
### CPU {#cpu}
|
||||
|
||||
Étant donné que l'exécution d'une requête nécessite le traitement d'un grand nombre de lignes, il est utile de répartir toutes les opérations pour des vecteurs entiers au lieu de lignes séparées, ou d'implémenter le moteur de requête de sorte qu'il n'y ait presque aucun coût d'expédition. Si vous ne le faites pas, avec un sous-système de disque à moitié décent, l'interpréteur de requête bloque inévitablement le processeur. Il est logique de stocker des données dans des colonnes et de les traiter, si possible, par des colonnes.
|
||||
|
@ -74,6 +74,9 @@ toc_title: "Int\xE9gration"
|
||||
- [sqlalchemy-clickhouse](https://github.com/cloudflare/sqlalchemy-clickhouse) (utiliser [infi.clickhouse\_orm](https://github.com/Infinidat/infi.clickhouse_orm))
|
||||
- [Panda](https://pandas.pydata.org)
|
||||
- [pandahouse](https://github.com/kszucs/pandahouse)
|
||||
- PHP
|
||||
- [Doctrine](https://www.doctrine-project.org/)
|
||||
- [dbal-clickhouse](https://packagist.org/packages/friendsofdoctrine/dbal-clickhouse)
|
||||
- R
|
||||
- [dplyr](https://db.rstudio.com/dplyr/)
|
||||
- [RClickhouse](https://github.com/IMSMWU/RClickhouse) (utiliser [clickhouse-cpp](https://github.com/artpaul/clickhouse-cpp))
|
||||
|
@ -82,48 +82,6 @@ OLAPシナリオは、他の一般的なシナリオ(OLTPやKey-Valueアクセ
|
||||
|
||||
たとえば、「各広告プラットフォームのレコード数をカウントする」クエリでは、1つの「広告プラットフォームID」列を読み取る必要がありますが、これは非圧縮では1バイトの領域を要します。トラフィックのほとんどが広告プラットフォームからのものではない場合、この列は少なくとも10倍の圧縮が期待できます。高速な圧縮アルゴリズムを使用すれば、1秒あたり少なくとも非圧縮データに換算して数ギガバイトの速度でデータを展開できます。つまり、このクエリは、単一のサーバーで1秒あたり約数十億行の速度で処理できます。この速度はまさに実際に達成されます。
|
||||
|
||||
<details markdown="1">
|
||||
|
||||
<summary>Example</summary>
|
||||
|
||||
``` bash
|
||||
$ clickhouse-client
|
||||
ClickHouse client version 0.0.52053.
|
||||
Connecting to localhost:9000.
|
||||
Connected to ClickHouse server version 0.0.52053.
|
||||
```
|
||||
|
||||
``` sql
|
||||
SELECT CounterID, count() FROM hits GROUP BY CounterID ORDER BY count() DESC LIMIT 20
|
||||
```
|
||||
|
||||
``` text
|
||||
┌─CounterID─┬──count()─┐
|
||||
│ 114208 │ 56057344 │
|
||||
│ 115080 │ 51619590 │
|
||||
│ 3228 │ 44658301 │
|
||||
│ 38230 │ 42045932 │
|
||||
│ 145263 │ 42042158 │
|
||||
│ 91244 │ 38297270 │
|
||||
│ 154139 │ 26647572 │
|
||||
│ 150748 │ 24112755 │
|
||||
│ 242232 │ 21302571 │
|
||||
│ 338158 │ 13507087 │
|
||||
│ 62180 │ 12229491 │
|
||||
│ 82264 │ 12187441 │
|
||||
│ 232261 │ 12148031 │
|
||||
│ 146272 │ 11438516 │
|
||||
│ 168777 │ 11403636 │
|
||||
│ 4120072 │ 11227824 │
|
||||
│ 10938808 │ 10519739 │
|
||||
│ 74088 │ 9047015 │
|
||||
│ 115079 │ 8837972 │
|
||||
│ 337234 │ 8205961 │
|
||||
└───────────┴──────────┘
|
||||
```
|
||||
|
||||
</details>
|
||||
|
||||
### CPU {#cpu}
|
||||
|
||||
クエリを実行するには大量の行を処理する必要があるため、個別の行ではなくベクター全体のすべての操作をディスパッチするか、ディスパッチコストがほとんどないようにクエリエンジンを実装すると効率的です。 適切なディスクサブシステムでこれを行わないと、クエリインタープリターが必然的にCPUを失速させます。
|
||||
|
@ -74,6 +74,9 @@ toc_title: "\u7D71\u5408"
|
||||
- [sqlalchemy-clickhouse](https://github.com/cloudflare/sqlalchemy-clickhouse) (用途 [infi.clickhouse\_orm](https://github.com/Infinidat/infi.clickhouse_orm))
|
||||
- [パンダ](https://pandas.pydata.org)
|
||||
- [パンダハウス](https://github.com/kszucs/pandahouse)
|
||||
- PHP
|
||||
- [Doctrine](https://www.doctrine-project.org/)
|
||||
- [dbal-clickhouse](https://packagist.org/packages/friendsofdoctrine/dbal-clickhouse)
|
||||
- R
|
||||
- [dplyr](https://db.rstudio.com/dplyr/)
|
||||
- [Rクリックハウス](https://github.com/IMSMWU/RClickhouse) (用途 [クリックハウス-cpp](https://github.com/artpaul/clickhouse-cpp))
|
||||
|
@ -43,6 +43,7 @@ dicts/external_dicts_dict_sources.md query_language/dicts/external_dicts_dict_so
|
||||
dicts/external_dicts_dict_structure.md query_language/dicts/external_dicts_dict_structure.md
|
||||
dicts/index.md query_language/dicts/index.md
|
||||
dicts/internal_dicts.md query_language/dicts/internal_dicts.md
|
||||
extended_roadmap.md whats_new/extended_roadmap.md
|
||||
formats.md interfaces/formats.md
|
||||
formats/capnproto.md interfaces/formats.md
|
||||
formats/csv.md interfaces/formats.md
|
||||
|
@ -1,7 +1,7 @@
|
||||
---
|
||||
toc_folder_title: Commercial
|
||||
toc_folder_title: Коммерческие услуги
|
||||
toc_priority: 70
|
||||
toc_title: Commercial
|
||||
toc_title: Коммерческие услуги
|
||||
---
|
||||
|
||||
|
||||
|
@ -1,5 +1,5 @@
|
||||
---
|
||||
toc_folder_title: Engines
|
||||
toc_folder_title: Движки
|
||||
toc_priority: 25
|
||||
---
|
||||
|
||||
|
@ -1,3 +1,8 @@
|
||||
---
|
||||
toc_folder_title: Тестовые массивы данных
|
||||
toc_priority: 12
|
||||
toc_title: Обзор
|
||||
---
|
||||
# Тестовые массивы данных
|
||||
|
||||
Этот раздел описывает как получить тестовые массивы данных и загрузить их в ClickHouse.
|
||||
|
@ -1,3 +1,10 @@
|
||||
---
|
||||
toc_folder_title: Начало работы
|
||||
toc_hidden: true
|
||||
toc_priority: 8
|
||||
toc_title: hidden
|
||||
---
|
||||
|
||||
# Начало работы {#nachalo-raboty}
|
||||
|
||||
Если вы новичок в ClickHouse и хотите вживую оценить его производительность, прежде всего нужно пройти через [процесс установки](install.md).
|
||||
|
@ -1,3 +1,9 @@
|
||||
---
|
||||
toc_folder_title: Руководства
|
||||
toc_priority: 38
|
||||
toc_title: Обзор
|
||||
---
|
||||
|
||||
# Руководства {#rukovodstva}
|
||||
|
||||
Подробные пошаговые инструкции, которые помогут вам решать различные задачи с помощью ClickHouse.
|
||||
|
@ -1,3 +1,8 @@
|
||||
---
|
||||
toc_priority: 0
|
||||
toc_title: Обзор
|
||||
---
|
||||
|
||||
# Что такое ClickHouse {#chto-takoe-clickhouse}
|
||||
|
||||
ClickHouse - столбцовая система управления базами данных (СУБД) для онлайн обработки аналитических запросов (OLAP).
|
||||
@ -77,48 +82,6 @@ ClickHouse - столбцовая система управления базам
|
||||
|
||||
Например, для запроса «посчитать количество записей для каждой рекламной системы», требуется прочитать один столбец «идентификатор рекламной системы», который занимает 1 байт в несжатом виде. Если большинство переходов было не с рекламных систем, то можно рассчитывать хотя бы на десятикратное сжатие этого столбца. При использовании быстрого алгоритма сжатия, возможно разжатие данных со скоростью более нескольких гигабайт несжатых данных в секунду. То есть, такой запрос может выполняться со скоростью около нескольких миллиардов строк в секунду на одном сервере. На практике, такая скорость действительно достигается.
|
||||
|
||||
<details markdown="1">
|
||||
|
||||
<summary>Пример</summary>
|
||||
|
||||
``` bash
|
||||
$ clickhouse-client
|
||||
ClickHouse client version 0.0.52053.
|
||||
Connecting to localhost:9000.
|
||||
Connected to ClickHouse server version 0.0.52053.
|
||||
```
|
||||
|
||||
``` sql
|
||||
SELECT CounterID, count() FROM hits GROUP BY CounterID ORDER BY count() DESC LIMIT 20
|
||||
```
|
||||
|
||||
``` text
|
||||
┌─CounterID─┬──count()─┐
|
||||
│ 114208 │ 56057344 │
|
||||
│ 115080 │ 51619590 │
|
||||
│ 3228 │ 44658301 │
|
||||
│ 38230 │ 42045932 │
|
||||
│ 145263 │ 42042158 │
|
||||
│ 91244 │ 38297270 │
|
||||
│ 154139 │ 26647572 │
|
||||
│ 150748 │ 24112755 │
|
||||
│ 242232 │ 21302571 │
|
||||
│ 338158 │ 13507087 │
|
||||
│ 62180 │ 12229491 │
|
||||
│ 82264 │ 12187441 │
|
||||
│ 232261 │ 12148031 │
|
||||
│ 146272 │ 11438516 │
|
||||
│ 168777 │ 11403636 │
|
||||
│ 4120072 │ 11227824 │
|
||||
│ 10938808 │ 10519739 │
|
||||
│ 74088 │ 9047015 │
|
||||
│ 115079 │ 8837972 │
|
||||
│ 337234 │ 8205961 │
|
||||
└───────────┴──────────┘
|
||||
```
|
||||
|
||||
</details>
|
||||
|
||||
### По вычислениям {#po-vychisleniiam}
|
||||
|
||||
Так как для выполнения запроса надо обработать достаточно большое количество строк, становится актуальным диспетчеризовывать все операции не для отдельных строк, а для целых векторов, или реализовать движок выполнения запроса так, чтобы издержки на диспетчеризацию были примерно нулевыми. Если этого не делать, то при любой не слишком плохой дисковой подсистеме, интерпретатор запроса неизбежно упрётся в CPU.
|
||||
|
@ -1,3 +1,9 @@
|
||||
---
|
||||
toc_folder_title: Интерфейсы
|
||||
toc_priority: 14
|
||||
toc_title: Введение
|
||||
---
|
||||
|
||||
# Интерфейсы {#interfaces}
|
||||
|
||||
ClickHouse предоставляет два сетевых интерфейса (оба могут быть дополнительно обернуты в TLS для дополнительной безопасности):
|
||||
|
2
docs/ru/interfaces/third-party/index.md
vendored
2
docs/ru/interfaces/third-party/index.md
vendored
@ -1,5 +1,5 @@
|
||||
---
|
||||
toc_folder_title: Third-Party
|
||||
toc_folder_title: От сторонних разработчиков
|
||||
toc_priority: 24
|
||||
---
|
||||
|
||||
|
70
docs/ru/interfaces/third-party/integrations.md
vendored
70
docs/ru/interfaces/third-party/integrations.md
vendored
@ -7,66 +7,72 @@
|
||||
|
||||
- Реляционные системы управления базами данных
|
||||
- [MySQL](https://www.mysql.com)
|
||||
- [ProxySQL](https://github.com/sysown/proxysql/wiki/ClickHouse-Support)
|
||||
- [clickhouse-mysql-data-reader](https://github.com/Altinity/clickhouse-mysql-data-reader)
|
||||
- [horgh-replicator](https://github.com/larsnovikov/horgh-replicator)
|
||||
- [ProxySQL](https://github.com/sysown/proxysql/wiki/ClickHouse-Support)
|
||||
- [clickhouse-mysql-data-reader](https://github.com/Altinity/clickhouse-mysql-data-reader)
|
||||
- [horgh-replicator](https://github.com/larsnovikov/horgh-replicator)
|
||||
- [PostgreSQL](https://www.postgresql.org)
|
||||
- [clickhousedb\_fdw](https://github.com/Percona-Lab/clickhousedb_fdw)
|
||||
- [infi.clickhouse\_fdw](https://github.com/Infinidat/infi.clickhouse_fdw) (использует [infi.clickhouse\_orm](https://github.com/Infinidat/infi.clickhouse_orm))
|
||||
- [pg2ch](https://github.com/mkabilov/pg2ch)
|
||||
- [clickhouse\_fdw](https://github.com/adjust/clickhouse_fdw)
|
||||
- [clickhousedb\_fdw](https://github.com/Percona-Lab/clickhousedb_fdw)
|
||||
- [infi.clickhouse\_fdw](https://github.com/Infinidat/infi.clickhouse_fdw) (использует [infi.clickhouse\_orm](https://github.com/Infinidat/infi.clickhouse_orm))
|
||||
- [pg2ch](https://github.com/mkabilov/pg2ch)
|
||||
- [clickhouse\_fdw](https://github.com/adjust/clickhouse_fdw)
|
||||
- [MSSQL](https://en.wikipedia.org/wiki/Microsoft_SQL_Server)
|
||||
- [ClickHouseMightrator](https://github.com/zlzforever/ClickHouseMigrator)
|
||||
- [ClickHouseMightrator](https://github.com/zlzforever/ClickHouseMigrator)
|
||||
- Очереди сообщений
|
||||
- [Kafka](https://kafka.apache.org)
|
||||
- [clickhouse\_sinker](https://github.com/housepower/clickhouse_sinker) (использует [Go client](https://github.com/kshvakov/clickhouse/))
|
||||
- [clickhouse\_sinker](https://github.com/housepower/clickhouse_sinker) (использует [Go client](https://github.com/ClickHouse/clickhouse-go/))
|
||||
- Потоковая обработка
|
||||
- [Flink](https://flink.apache.org)
|
||||
- [flink-clickhouse-sink](https://github.com/ivi-ru/flink-clickhouse-sink)
|
||||
- Хранилища объектов
|
||||
- [S3](https://en.wikipedia.org/wiki/Amazon_S3)
|
||||
- [clickhouse-backup](https://github.com/AlexAkulov/clickhouse-backup)
|
||||
- [clickhouse-backup](https://github.com/AlexAkulov/clickhouse-backup)
|
||||
- Оркестрация контейнеров
|
||||
- [Kubernetes](https://kubernetes.io)
|
||||
- [clickhouse-operator](https://github.com/Altinity/clickhouse-operator)
|
||||
- [clickhouse-operator](https://github.com/Altinity/clickhouse-operator)
|
||||
- Системы управления конфигурацией
|
||||
- [puppet](https://puppet.com)
|
||||
- [innogames/clickhouse](https://forge.puppet.com/innogames/clickhouse)
|
||||
- [mfedotov/clickhouse](https://forge.puppet.com/mfedotov/clickhouse)
|
||||
- [innogames/clickhouse](https://forge.puppet.com/innogames/clickhouse)
|
||||
- [mfedotov/clickhouse](https://forge.puppet.com/mfedotov/clickhouse)
|
||||
- Мониторинг
|
||||
- [Graphite](https://graphiteapp.org)
|
||||
- [graphouse](https://github.com/yandex/graphouse)
|
||||
- [carbon-clickhouse](https://github.com/lomik/carbon-clickhouse) +
|
||||
- [graphite-clickhouse](https://github.com/lomik/graphite-clickhouse)
|
||||
- [graphite-ch-optimizer](https://github.com/innogames/graphite-ch-optimizer) - оптимизирует партиции таблиц [\*GraphiteMergeTree](../../engines/table_engines/mergetree_family/graphitemergetree.md#graphitemergetree) согласно правилам в [конфигурации rollup](../../engines/table_engines/mergetree_family/graphitemergetree.md#rollup-configuration)
|
||||
- [graphouse](https://github.com/yandex/graphouse)
|
||||
- [carbon-clickhouse](https://github.com/lomik/carbon-clickhouse) +
|
||||
- [graphite-clickhouse](https://github.com/lomik/graphite-clickhouse)
|
||||
- [graphite-ch-optimizer](https://github.com/innogames/graphite-ch-optimizer) - оптимизирует партиции таблиц [\*GraphiteMergeTree](../../engines/table_engines/mergetree_family/graphitemergetree.md#graphitemergetree) согласно правилам в [конфигурации rollup](../../engines/table_engines/mergetree_family/graphitemergetree.md#rollup-configuration)
|
||||
- [Grafana](https://grafana.com/)
|
||||
- [clickhouse-grafana](https://github.com/Vertamedia/clickhouse-grafana)
|
||||
- [clickhouse-grafana](https://github.com/Vertamedia/clickhouse-grafana)
|
||||
- [Prometheus](https://prometheus.io/)
|
||||
- [clickhouse\_exporter](https://github.com/f1yegor/clickhouse_exporter)
|
||||
- [PromHouse](https://github.com/Percona-Lab/PromHouse)
|
||||
- [clickhouse\_exporter](https://github.com/hot-wifi/clickhouse_exporter) (использует [Go client](https://github.com/kshvakov/clickhouse/))
|
||||
- [clickhouse\_exporter](https://github.com/f1yegor/clickhouse_exporter)
|
||||
- [PromHouse](https://github.com/Percona-Lab/PromHouse)
|
||||
- [clickhouse\_exporter](https://github.com/hot-wifi/clickhouse_exporter) (использует [Go client](https://github.com/kshvakov/clickhouse/))
|
||||
- [Nagios](https://www.nagios.org/)
|
||||
- [check\_clickhouse](https://github.com/exogroup/check_clickhouse/)
|
||||
- [check\_clickhouse.py](https://github.com/innogames/igmonplugins/blob/master/src/check_clickhouse.py)
|
||||
- [check\_clickhouse](https://github.com/exogroup/check_clickhouse/)
|
||||
- [check\_clickhouse.py](https://github.com/innogames/igmonplugins/blob/master/src/check_clickhouse.py)
|
||||
- [Zabbix](https://www.zabbix.com)
|
||||
- [clickhouse-zabbix-template](https://github.com/Altinity/clickhouse-zabbix-template)
|
||||
- [clickhouse-zabbix-template](https://github.com/Altinity/clickhouse-zabbix-template)
|
||||
- [Sematext](https://sematext.com/)
|
||||
- [clickhouse интеграция](https://github.com/sematext/sematext-agent-integrations/tree/master/clickhouse)
|
||||
- [clickhouse интеграция](https://github.com/sematext/sematext-agent-integrations/tree/master/clickhouse)
|
||||
- Логирование
|
||||
- [rsyslog](https://www.rsyslog.com/)
|
||||
- [omclickhouse](https://www.rsyslog.com/doc/master/configuration/modules/omclickhouse.html)
|
||||
- [omclickhouse](https://www.rsyslog.com/doc/master/configuration/modules/omclickhouse.html)
|
||||
- [fluentd](https://www.fluentd.org)
|
||||
- [loghouse](https://github.com/flant/loghouse) (для [Kubernetes](https://kubernetes.io))
|
||||
- [logagent](https://www.sematext.com/logagent)
|
||||
- [logagent output-plugin-clickhouse](https://sematext.com/docs/logagent/output-plugin-clickhouse/)
|
||||
- [loghouse](https://github.com/flant/loghouse) (для [Kubernetes](https://kubernetes.io))
|
||||
- [Sematext](https://www.sematext.com/logagent)
|
||||
- [logagent output-plugin-clickhouse](https://sematext.com/docs/logagent/output-plugin-clickhouse/)
|
||||
- Гео
|
||||
- [MaxMind](https://dev.maxmind.com/geoip/)
|
||||
- [clickhouse-maxmind-geoip](https://github.com/AlexeyKupershtokh/clickhouse-maxmind-geoip)
|
||||
- [clickhouse-maxmind-geoip](https://github.com/AlexeyKupershtokh/clickhouse-maxmind-geoip)
|
||||
|
||||
## Экосистемы вокруг языков программирования {#ekosistemy-vokrug-iazykov-programmirovaniia}
|
||||
|
||||
- Python
|
||||
- [SQLAlchemy](https://www.sqlalchemy.org)
|
||||
- [sqlalchemy-clickhouse](https://github.com/cloudflare/sqlalchemy-clickhouse) (использует [infi.clickhouse\_orm](https://github.com/Infinidat/infi.clickhouse_orm))
|
||||
- [sqlalchemy-clickhouse](https://github.com/cloudflare/sqlalchemy-clickhouse) (использует [infi.clickhouse\_orm](https://github.com/Infinidat/infi.clickhouse_orm))
|
||||
- [pandas](https://pandas.pydata.org)
|
||||
- [pandahouse](https://github.com/kszucs/pandahouse)
|
||||
- [pandahouse](https://github.com/kszucs/pandahouse)
|
||||
- PHP
|
||||
- [Doctrine](https://www.doctrine-project.org/)
|
||||
- [dbal-clickhouse](https://packagist.org/packages/friendsofdoctrine/dbal-clickhouse)
|
||||
- R
|
||||
- [dplyr](https://db.rstudio.com/dplyr/)
|
||||
- [RClickhouse](https://github.com/IMSMWU/RClickhouse) (использует [clickhouse-cpp](https://github.com/artpaul/clickhouse-cpp))
|
||||
|
@ -1,5 +1,5 @@
|
||||
---
|
||||
toc_folder_title: Introduction
|
||||
toc_folder_title: Введение
|
||||
toc_priority: 1
|
||||
---
|
||||
|
||||
|
@ -1,3 +1,9 @@
|
||||
---
|
||||
toc_folder_title: Эксплуатация
|
||||
toc_priority: 41
|
||||
toc_title: Введение
|
||||
---
|
||||
|
||||
# Эксплуатация {#ekspluatatsiia}
|
||||
|
||||
Руководство по эксплуатации ClickHouse состоит из следующих основных разделов:
|
||||
|
96
docs/ru/operations/settings/merge_tree_settings.md
Normal file
96
docs/ru/operations/settings/merge_tree_settings.md
Normal file
@ -0,0 +1,96 @@
|
||||
# Настройки MergeTree таблиц {#merge-tree-settings}
|
||||
|
||||
Значения настроек merge-tree (для всех MergeTree таблиц) можно посмотреть в таблице `system.merge_tree_settings`, их можно переопределить в `config.xml` в секции `merge_tree`, или задать в секции `SETTINGS` у каждой таблицы.
|
||||
|
||||
Пример переопределения в `config.xml`:
|
||||
```text
|
||||
<merge_tree>
|
||||
<max_suspicious_broken_parts>5</max_suspicious_broken_parts>
|
||||
</merge_tree>
|
||||
```
|
||||
|
||||
Пример для определения в `SETTINGS` у конкретной таблицы:
|
||||
```sql
|
||||
CREATE TABLE foo
|
||||
(
|
||||
`A` Int64
|
||||
)
|
||||
ENGINE = MergeTree
|
||||
ORDER BY tuple()
|
||||
SETTINGS max_suspicious_broken_parts = 500;
|
||||
```
|
||||
|
||||
Пример изменения настроек у конкретной таблицы командой `ALTER TABLE ... MODIFY SETTING`:
|
||||
```sql
|
||||
ALTER TABLE foo
|
||||
MODIFY SETTING max_suspicious_broken_parts = 100;
|
||||
```
|
||||
|
||||
|
||||
## parts_to_throw_insert {#parts-to-throw-insert}
|
||||
|
||||
Eсли число кусков в партиции превышает значение `parts_to_throw_insert`, INSERT прерывается с исключением `Too many parts (N). Merges are processing significantly slower than inserts`.
|
||||
|
||||
Возможные значения:
|
||||
|
||||
- Положительное целое число.
|
||||
|
||||
Значение по умолчанию: 300.
|
||||
|
||||
Для достижения максимальной производительности запросов `SELECT` необходимо минимизировать количество обрабатываемых кусков, см. [Дизайн MergeTree](../../development/architecture.md#merge-tree).
|
||||
|
||||
Можно установить большее значение 600 (1200), это уменьшит вероятность возникновения ошибки `Too many parts`, но в тоже время вы позже обнаружите возможную проблему со слияниями (например, из-за недостатка места на диске) и деградацию производительности `SELECT`.
|
||||
|
||||
|
||||
## parts_to_delay_insert {#parts-to-delay-insert}
|
||||
|
||||
Eсли число кусков в партиции превышает значение `parts_to_delay_insert`, `INSERT` искусственно замедляется.
|
||||
|
||||
Возможные значения:
|
||||
|
||||
- Положительное целое число.
|
||||
|
||||
Значение по умолчанию: 150.
|
||||
|
||||
ClickHouse искусственно выполняет `INSERT` дольше (добавляет 'sleep'), чтобы фоновый механизм слияния успевал слиять куски быстрее, чем они добавляются.
|
||||
|
||||
|
||||
## max_delay_to_insert {#max-delay-to-insert}
|
||||
|
||||
Величина в секундах, которая используется для расчета задержки `INSERT`, если число кусков в партиции превышает значение [parts_to_delay_insert](#parts-to-delay-insert).
|
||||
|
||||
Возможные значения:
|
||||
|
||||
- Положительное целое число.
|
||||
|
||||
Значение по умолчанию: 1.
|
||||
|
||||
Величина задержи (в миллисекундах) для `INSERT` вычисляется по формуле:
|
||||
|
||||
```code
|
||||
max_k = parts_to_throw_insert - parts_to_delay_insert
|
||||
k = 1 + parts_count_in_partition - parts_to_delay_insert
|
||||
delay_milliseconds = pow(max_delay_to_insert * 1000, k / max_k)
|
||||
```
|
||||
|
||||
Т.е. если в партиции уже 299 кусков и parts_to_throw_insert = 300, parts_to_delay_insert = 150, max_delay_to_insert = 1, `INSERT` замедлится на `pow( 1 * 1000, (1 + 299 - 150) / (300 - 150) ) = 1000` миллисекунд.
|
||||
|
||||
## old_parts_lifetime {#old-parts-lifetime}
|
||||
|
||||
Время (в секундах) хранения неактивных кусков, для защиты от потери данных при спонтанной перезагрузке сервера или О.С.
|
||||
|
||||
Возможные значения:
|
||||
|
||||
- Положительное целое число.
|
||||
|
||||
Значение по умолчанию: 480.
|
||||
|
||||
После слияния нескольких кусков в новый кусок, ClickHouse помечает исходные куски как неактивные и удаляет их после `old_parts_lifetime` секунд.
|
||||
Неактивные куски удаляются, если они не используются в текущих запросах, т.е. если счетчик ссылок куска -- `refcount` равен нулю.
|
||||
|
||||
Неактивные куски удаляются не сразу, потому что при записи нового куска не вызывается `fsync`, т.е. некоторое время новый кусок находится только в оперативной памяти сервера (кеше О.С.). Т.о. при спонтанной перезагрузке сервера новый (смерженный) кусок может быть потерян или испорчен. В этом случае ClickHouse в процессе старта при проверке целостности кусков обнаружит проблему, вернет неактивные куски в список активных и позже заново их смержит. Сломанный кусок в этом случае переименовывается (добавляется префикс broken_) и перемещается в папку detached. Если проверка целостности не обнаруживает проблем в смерженном куске, то исходные неактивные куски переименовываются (добавляется префикс ignored_) и перемещаются в папку detached.
|
||||
|
||||
Стандартное значение Linux dirty_expire_centisecs - 30 секунд (максимальное время, которое записанные данные хранятся только в оперативной памяти), но при больших нагрузках на дисковую систему, данные могут быть записаны намного позже. Экспериментально было найдено время - 480 секунд, за которое гарантированно новый кусок будет записан на диск.
|
||||
|
||||
|
||||
[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/settings/merge_tree_settings/) <!--hide-->
|
@ -213,6 +213,49 @@ INSERT INTO datetime_t SELECT now()
|
||||
Ok.
|
||||
```
|
||||
|
||||
## input\_format\_values\_deduce\_templates\_of\_expressions {#settings-input_format_values_deduce_templates_of_expressions}
|
||||
|
||||
Включает или отключает попытку вычисления шаблона для выражений SQL в формате [Values](../../interfaces/formats.md#data-format-values). Это позволяет гораздо быстрее парсить и интерпретировать выражения в `Values`, если выражения в последовательных строках имеют одинаковую структуру. ClickHouse пытается вычислить шаблон выражения, распарсить следующие строки с помощью этого шаблона и вычислить выражение в пачке успешно проанализированных строк.
|
||||
|
||||
Возможные значения:
|
||||
|
||||
- 0 — Выключена.
|
||||
- 1 — Включена.
|
||||
|
||||
Значение по умолчанию: 1.
|
||||
|
||||
Для следующего запроса:
|
||||
|
||||
``` sql
|
||||
INSERT INTO test VALUES (lower('Hello')), (lower('world')), (lower('INSERT')), (upper('Values')), ...
|
||||
```
|
||||
|
||||
- Если `input_format_values_interpret_expressions=1` и `format_values_deduce_templates_of_expressions=0`, выражения интерпретируются отдельно для каждой строки (это очень медленно для большого количества строк).
|
||||
- Если `input_format_values_interpret_expressions=0` и `format_values_deduce_templates_of_expressions=1`, выражения в первой, второй и третьей строках парсятся с помощью шаблона `lower(String)` и интерпретируется вместе, выражение в четвертой строке парсится с другим шаблоном (`upper(String)`).
|
||||
- Если `input_format_values_interpret_expressions=1` и `format_values_deduce_templates_of_expressions=1`, то же самое, что и в предыдущем случае, но также позволяет выполнять резервную интерпретацию выражений отдельно, если невозможно вычислить шаблон.
|
||||
|
||||
## input\_format\_values\_accurate\_types\_of\_literals {#settings-input-format-values-accurate-types-of-literals}
|
||||
|
||||
Эта настройка используется, только когда `input_format_values_deduce_templates_of_expressions = 1`. Выражения для некоторых столбцов могут иметь одинаковую структуру, но содержат числовые литералы разных типов, например:
|
||||
|
||||
``` sql
|
||||
(..., abs(0), ...), -- UInt64 literal
|
||||
(..., abs(3.141592654), ...), -- Float64 literal
|
||||
(..., abs(-1), ...), -- Int64 literal
|
||||
```
|
||||
|
||||
Возможные значения:
|
||||
|
||||
- 0 — Выключена.
|
||||
|
||||
В этом случае, ClickHouse может использовать более общий тип для некоторых литералов (например, `Float64` или `Int64` вместо `UInt64` для `42`), но это может привести к переполнению и проблемам с точностью.
|
||||
|
||||
- 1 — Включена.
|
||||
|
||||
В этом случае, ClickHouse проверяет фактический тип литерала и использует шаблон выражения соответствующего типа. В некоторых случаях это может значительно замедлить оценку выажения в `Values`.
|
||||
|
||||
Значение по умолчанию: 1.
|
||||
|
||||
## input\_format\_defaults\_for\_omitted\_fields {#session_settings-input_format_defaults_for_omitted_fields}
|
||||
|
||||
При вставке данных запросом `INSERT`, заменяет пропущенные поля значениям по умолчанию для типа данных столбца.
|
||||
|
@ -1,3 +1,9 @@
|
||||
---
|
||||
toc_folder_title: Агрегатные функции
|
||||
toc_priority: 33
|
||||
toc_title: Введение
|
||||
---
|
||||
|
||||
# Агрегатные функции {#aggregate-functions}
|
||||
|
||||
Агрегатные функции работают в [привычном](http://www.sql-tutorial.com/sql-aggregate-functions-sql-tutorial) для специалистов по базам данных смысле.
|
||||
|
@ -723,10 +723,13 @@ uniqExact(x[, ...])
|
||||
|
||||
В некоторых случаях, вы всё же можете рассчитывать на порядок выполнения запроса. Это — случаи, когда `SELECT` идёт из подзапроса, в котором используется `ORDER BY`.
|
||||
|
||||
## groupArrayInsertAt(x) {#grouparrayinsertatx}
|
||||
## groupArrayInsertAt(value, position) {#grouparrayinsertatvalue-position}
|
||||
|
||||
Вставляет в массив значение в заданную позицию.
|
||||
|
||||
!!! note "Примечание"
|
||||
Эта функция использует нумерацию массивов с нуля, в отличие от принятой в SQL нумерации с единицы.
|
||||
|
||||
Принимает на вход значение и позицию. Если на одну и ту же позицию вставляется несколько значений, в результирующем массиве может оказаться любое (первое в случае однопоточного выполнения). Если в позицию не вставляется ни одного значения, то позиции присваивается значение по умолчанию.
|
||||
|
||||
Опциональные параметры:
|
||||
|
@ -1,3 +1,9 @@
|
||||
---
|
||||
toc_folder_title: Типы данных
|
||||
toc_priority: 37
|
||||
toc_title: Введение
|
||||
---
|
||||
|
||||
# Типы данных {#data_types}
|
||||
|
||||
ClickHouse может сохранять в ячейках таблиц данные различных типов.
|
||||
|
@ -931,4 +931,42 @@ SELECT arrayCompact([1, 1, nan, nan, 2, 3, 3, 3])
|
||||
└────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
## arrayZip {#arrayzip}
|
||||
|
||||
Объединяет несколько массивов в один. Результирующий массив содержит соответственные элементы исходных массивов, сгруппированные в кортежи в указанном порядке аргументов.
|
||||
|
||||
**Синтаксис**
|
||||
|
||||
``` sql
|
||||
arrayZip(arr1, arr2, ..., arrN)
|
||||
```
|
||||
|
||||
**Параметры**
|
||||
|
||||
- `arrN` — [Массив](../data_types/array.md).
|
||||
|
||||
Функция принимает любое количество массивов, которые могут быть различных типов. Все массивы должны иметь одинаковую длину.
|
||||
|
||||
**Возвращаемое значение**
|
||||
|
||||
- Массив с элементами исходных массивов, сгруппированными в [кортежи](../data_types/tuple.md). Типы данных в кортежах соответствуют типам данных входных массивов и следуют в том же порядке, в котором переданы массивы.
|
||||
|
||||
Тип: [Массив](../data_types/array.md).
|
||||
|
||||
**Пример**
|
||||
|
||||
Запрос:
|
||||
|
||||
``` sql
|
||||
SELECT arrayZip(['a', 'b', 'c'], [5, 2, 1])
|
||||
```
|
||||
|
||||
Ответ:
|
||||
|
||||
``` text
|
||||
┌─arrayZip(['a', 'b', 'c'], [5, 2, 1])─┐
|
||||
│ [('a',5),('b',2),('c',1)] │
|
||||
└──────────────────────────────────────┘
|
||||
```
|
||||
|
||||
[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/functions/array_functions/) <!--hide-->
|
||||
|
@ -1,3 +1,10 @@
|
||||
---
|
||||
toc_folder_title: Справка по SQL
|
||||
toc_hidden: true
|
||||
toc_priority: 28
|
||||
toc_title: hidden
|
||||
---
|
||||
|
||||
# Справка по SQL {#spravka-po-sql}
|
||||
|
||||
- [SELECT](statements/select.md)
|
||||
|
@ -1,3 +1,8 @@
|
||||
---
|
||||
toc_priority: 36
|
||||
toc_title: ALTER
|
||||
---
|
||||
|
||||
## ALTER {#query_language_queries_alter}
|
||||
|
||||
Запрос `ALTER` поддерживается только для таблиц типа `*MergeTree`, а также `Merge` и `Distributed`. Запрос имеет несколько вариантов.
|
||||
|
@ -1,3 +1,8 @@
|
||||
---
|
||||
toc_priority: 35
|
||||
toc_title: CREATE
|
||||
---
|
||||
|
||||
## CREATE DATABASE {#query-language-create-database}
|
||||
|
||||
Создает базу данных.
|
||||
|
@ -1,5 +1,5 @@
|
||||
---
|
||||
toc_folder_title: Statements
|
||||
toc_folder_title: Выражения
|
||||
toc_priority: 31
|
||||
---
|
||||
|
||||
|
@ -1,3 +1,8 @@
|
||||
---
|
||||
toc_priority: 34
|
||||
toc_title: INSERT INTO
|
||||
---
|
||||
|
||||
## INSERT {#insert}
|
||||
|
||||
Добавление данных.
|
||||
|
@ -1,3 +1,8 @@
|
||||
---
|
||||
toc_priority: 33
|
||||
toc_title: SELECT
|
||||
---
|
||||
|
||||
# Синтаксис запросов SELECT {#sintaksis-zaprosov-select}
|
||||
|
||||
`SELECT` осуществляет выборку данных.
|
||||
|
@ -1,3 +1,9 @@
|
||||
---
|
||||
toc_folder_title: Табличные функции
|
||||
toc_priority: 34
|
||||
toc_title: Введение
|
||||
---
|
||||
|
||||
# Табличные функции {#tablichnye-funktsii}
|
||||
|
||||
Табличные функции — это метод создания таблиц.
|
||||
|
@ -15,9 +15,7 @@
|
||||
Задача «normalized z-Order curve» в перспективе может быть полезна для БК и Метрики, так как позволяет смешивать OrderID и PageID и избежать дублирования данных.
|
||||
В задаче также вводится способ индексации путём обращения функции нескольких аргументов на интервале, что имеет смысл для дальнейшего развития.
|
||||
|
||||
Изначально делал [Андрей Чулков](https://github.com/achulkov2), ВШЭ, теперь (не) доделывает [Ольга Хвостикова](https://github.com/stavrolia), но сроки немного сдвинуты из-за задачи 25.9. Будем надеятся на лучшее.
|
||||
|
||||
Upd. Доделывать будет другой человек. Приоритет не высокий.
|
||||
[Андрей Чулков](https://github.com/achulkov2), ВШЭ.
|
||||
|
||||
### 1.2. Wait-free каталог баз данных. {#wait-free-katalog-baz-dannykh}
|
||||
|
||||
@ -39,18 +37,20 @@ Upd. Большая часть задачи реализована и добав
|
||||
|
||||
Требует 1.3. Будет делать [Александр Сапин](https://github.com/alesapin). Ура, сделано.
|
||||
|
||||
### 1.5. ALTER RENAME COLUMN. {#alter-rename-column}
|
||||
### 1.5. + ALTER RENAME COLUMN. {#alter-rename-column}
|
||||
|
||||
[\#6861](https://github.com/ClickHouse/ClickHouse/issues/6861)
|
||||
|
||||
Требует 1.3. Будет делать [Александр Сапин](https://github.com/alesapin).
|
||||
|
||||
### 1.6. Полиморфные куски данных. {#polimorfnye-kuski-dannykh}
|
||||
### 1.6. + Полиморфные куски данных. {#polimorfnye-kuski-dannykh}
|
||||
|
||||
Компактные куски - Q1, куски в оперативке Q1/Q2.
|
||||
Компактные куски - Q1, куски в оперативке Q1/Q2 - пункт 1.7.
|
||||
|
||||
Компактные куски реализованы, ещё не включены по-умолчанию. Первым шагом включаем по-умолчанию для системных таблиц.
|
||||
|
||||
Upd. Включено для системных таблиц.
|
||||
|
||||
Делает [Антон Попов](https://github.com/CurtizJ), первый рабочий вариант в декабре. Пререквизит чтобы снизить сложность мелких INSERT, что в свою очередь нужно для 1.12, иначе задача 1.12 не сможет нормально работать. Особенно нужно для Яндекс.Облака.
|
||||
|
||||
Данные в таблицах типа MergeTree в ClickHouse хранятся в виде набора независимых «кусков». Внутри куска, каждый столбец, а также индекс, хранится в отдельных файлах. Это сделано для возможности быстрых манипуляций со столбцами (пример - запрос ALTER DROP COLUMN). При вставке данных (INSERT), создаётся новый кусок. Для таблиц с большим количеством столбцов, запросы INSERT с маленьким количеством строк являются неэффективными, так как требуют создания большого количества файлов в файловой системе. Это является врождённой особенностью ClickHouse - одной из первой проблем, с которыми сталкиваются пользователи. Пользователям приходится буферизовывать данные и собирать их в более крупные пачки перед вставкой в ClickHouse.
|
||||
@ -61,7 +61,7 @@ Upd. Большая часть задачи реализована и добав
|
||||
|
||||
### 1.7. Буферизация и WAL в MergeTree. {#buferizatsiia-i-wal-v-mergetree}
|
||||
|
||||
Требует 1.6.
|
||||
Требует 1.6. Антон Попов. Задача взята в работу. Q2.
|
||||
|
||||
### 1.8. + Перенос между разделами по TTL. {#perenos-mezhdu-razdelami-po-ttl}
|
||||
|
||||
@ -74,7 +74,7 @@ Q1. Закоммичено, но есть технический долг, ко
|
||||
|
||||
Будет делать Сорокин Николай, ВШЭ и Яндекс.
|
||||
|
||||
Сейчас пользователь может задать в таблице выражение, которое определяет, сколько времени хранятся данные. Обычно это выражение задаётся относительно значения столбца с датой - например: удалять данные через три месяца. https://clickhouse.tech/docs/ru/operations/table\_engines/mergetree/\#table\_engine-mergetree-ttl
|
||||
Сейчас пользователь может задать в таблице выражение, которое определяет, сколько времени хранятся данные. Обычно это выражение задаётся относительно значения столбца с датой - например: удалять данные через три месяца. https://clickhouse.tech/docs/ru/operations/table_engines/mergetree/\#table_engine-mergetree-ttl
|
||||
|
||||
Это может быть задано для всей таблицы (тогда строки целиком удаляются после указанного времени) или для отдельных столбцов (тогда данные столбца физически удаляются с диска, а строки в таблице остаются; при чтении значений столбца, они читаются как значения по-умолчанию).
|
||||
|
||||
@ -88,7 +88,7 @@ Q1. Закоммичено, но есть технический долг, ко

Point 2, however, still needs to be thought through. It is not even obvious which syntax is best for this when creating the table. But we will come up with something - several options are already apparent.

A special case of this task already exists in https://clickhouse.tech/docs/ru/operations/table\_engines/graphitemergetree/ But it was done for one specific use case and needs to be generalized.
A special case of this task already exists in https://clickhouse.tech/docs/ru/operations/table_engines/graphitemergetree/ But it was done for one specific use case and needs to be generalized.

### 1.10. Recompression of old data in the background. {#perezhatie-starykh-dannykh-v-fone}

@ -100,17 +100,15 @@ Q1. Закоммичено, но есть технический долг, ко

It is proposed to add settings for data recompression to ClickHouse, plus background threads that perform this work.

### 1.11. Virtual file system. {#virtualnaia-failovaia-sistema}
### 1.11. + Virtual file system. {#virtualnaia-failovaia-sistema}

In progress: Log, TinyLog and StripeLog have been ported to the VFS; MergeTree is being prepared.
Log, TinyLog, StripeLog and also MergeTree have been ported to the VFS, which proves the viability of the implementation.

Q2.

Needed for Yandex.Cloud. Done by Александр, Yandex.Cloud, and also Олег Ершов, HSE and Yandex.
Needed for Yandex.Cloud. Done by Александр, Yandex.Cloud.

ClickHouse uses the local file system for data storage. There is a usage scenario where it would be beneficial to place old (archive) data on a remote file system. If the file system is POSIX compatible, this is not a problem: ClickHouse works fine with Ceph, GlusterFS and MooseFS. Using S3 (because of its availability in the cloud) or HDFS (for integration with Hadoop) is also in demand, but these file systems are not POSIX compatible. FUSE drivers exist for them, but performance suffers badly and the support is incomplete.

ClickHouse uses only a small subset of file system functionality, but it also relies on some rather specific parts: symlinks and hardlinks, O\_DIRECT. It is proposed to extract all interaction with the file system into a separate interface.
ClickHouse uses only a small subset of file system functionality, but it also relies on some rather specific parts: symlinks and hardlinks, O_DIRECT. It is proposed to extract all interaction with the file system into a separate interface.
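To make the "separate interface" idea concrete, here is a small Python sketch of what such an abstraction could look like: an abstract disk interface with a local implementation and a stubbed remote one. The names (`Disk`, `write_file`, `S3Disk`) are invented for the example and do not correspond to the actual C++ interface in the ClickHouse code base.

```python
import os
from abc import ABC, abstractmethod
from pathlib import Path


class Disk(ABC):
    """Minimal storage interface: everything the engine needs from a 'file system'."""

    @abstractmethod
    def write_file(self, path: str, data: bytes) -> None: ...

    @abstractmethod
    def read_file(self, path: str) -> bytes: ...

    @abstractmethod
    def hard_link(self, src: str, dst: str) -> None: ...


class LocalDisk(Disk):
    """POSIX implementation backed by a local directory."""

    def __init__(self, root: str):
        self.root = Path(root)

    def write_file(self, path: str, data: bytes) -> None:
        target = self.root / path
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_bytes(data)

    def read_file(self, path: str) -> bytes:
        return (self.root / path).read_bytes()

    def hard_link(self, src: str, dst: str) -> None:
        os.link(self.root / src, self.root / dst)


class S3Disk(Disk):
    """Remote implementation: hard links have to be emulated, e.g. via a metadata layer."""

    def write_file(self, path: str, data: bytes) -> None:
        raise NotImplementedError('PUT object to the bucket')

    def read_file(self, path: str) -> bytes:
        raise NotImplementedError('GET object from the bucket')

    def hard_link(self, src: str, dst: str) -> None:
        raise NotImplementedError('store a reference in local metadata instead of a real link')
```

Code that works against `Disk` does not care whether parts live on a local drive or in an object store, which is exactly what items 1.11 and 1.12 need.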
### 1.12. Experimental implementation of VFS on top of S3 and HDFS. {#eksperimentalnaia-realizatsiia-vfs-poverkh-s3-i-hdfs}

@ -121,13 +119,15 @@ Q2.

Upd. Олег will only do the HDFS part.

Upd. The implementation on top of S3 works at the proof-of-concept level.

### 1.13. Speeding up queries with FINAL. {#uskorenie-zaprosov-s-final}

Requires 2.1. Done by [Николай Кочетов](https://github.com/KochetovNicolai). Needed for Yandex.Metrica.
Requires 2.1. Done by [Николай Кочетов](https://github.com/KochetovNicolai). Needed for Yandex.Metrica. Q2.

### 1.14. Do not write columns that consist entirely of zeros. {#ne-pisat-stolbtsy-polnostiu-sostoiashchie-iz-nulei}

Антон Попов. Q1/Q2.
Антон Попов. Q2.
Queued. A simple task; it is a small prerequisite for potential support of semi-structured data.
### 1.15. Ability to have different primary keys in different parts. {#vozmozhnost-imet-raznyi-pervichnyi-kliuch-v-raznykh-kuskakh}

@ -146,6 +146,7 @@ Upd. Олег будет делать только часть про HDFS.

Requires 1.3 and 1.6. A complete replacement of hard links with sym links, which will be better for 1.12.

## 2. Major refactorings. {#krupnye-refaktoringi}

For the motivation, see the links in the descriptions of other tasks.

@ -161,6 +162,8 @@ Upd. Включили по-умолчанию. Удаление старого

Upd. The first release with this enabled by default is already out.

Upd. We are still waiting for the removal of the old code, which should happen after the 20.4 release.

### 2.2. Infrastructure for events/metrics/limits/quotas/tracing. {#infrastruktura-sobytiimetrikogranicheniikvottrassirovki}

Queued. https://gist.github.com/alexey-milovidov/d62d73222d83b9319dc519cbb13aeff6

@ -193,6 +196,8 @@ Upd. Каталог БД вынесен из Context.

Medium priority. Needed for YQL.

Upd. Queued. Иван Лежанкин.

### 2.9. Logging in format style. {#loggirovnie-v-format-stile}

Done by [Иван Лежанкин](https://github.com/abyss7). Low priority.

@ -212,10 +217,14 @@ Upd. Каталог БД вынесен из Context.

The task is being done by Алексей Миловидов. Progress is 50% and development is temporarily paused.

Upd. Development is still paused.

### 2.13. Each function in a separate file. {#kazhdaia-funktsiia-v-otdelnom-faile}

The task is being done by Алексей Миловидов. Progress is 80%. Help from other developers will be needed.

Upd. Some movement is being observed.

### 2.14. Rework all stateful functions onto FunctionBuilder. {#vse-funktsii-s-sostoianiem-peredelat-na-functionbuilder}

A debt of [Николай Кочетов](https://github.com/KochetovNicolai). The code is currently in a transitional state, which is unacceptable.
@ -224,13 +233,14 @@ Upd. Каталог БД вынесен из Context.
|
||||
|
||||
Для нормализации работы materialized views поверх Merge, Distributed, Kafka.
|
||||
|
||||
|
||||
## 3. Документация. {#dokumentatsiia}
|
||||
|
||||
Здесь задачи только по инфраструктуре документации.
|
||||
|
||||
### 3.1. Перенос документации по функциям в код. {#perenos-dokumentatsii-po-funktsiiam-v-kod}
|
||||
|
||||
Требует 2.12 и 2.13. Хотим в Q1/Q2, средний приоритет.
|
||||
Требует 2.12 и 2.13. Хотим в Q2, средний приоритет.
|
||||
|
||||
### 3.2. Перенос однородных частей документации в код. {#perenos-odnorodnykh-chastei-dokumentatsii-v-kod}
|
||||
|
||||
@ -246,11 +256,12 @@ Upd. Иван Блинков сделал эту задачу путём зам
|
||||
|
||||
Эту задачу сделает [Иван Блинков](https://github.com/blinkov/), до конца декабря 2019. Сделано.
|
||||
|
||||
|
||||
## 4. Сетевое взаимодействие. {#setevoe-vzaimodeistvie}
|
||||
|
||||
### 4.1. Уменьшение числа потоков при распределённых запросах. {#umenshenie-chisla-potokov-pri-raspredelionnykh-zaprosakh}
|
||||
|
||||
[Никита Лапков](https://github.com/laplab), весна 2020. Upd. Есть прототип. Upd. Он не работает.
|
||||
Весна 2020. Upd. Есть прототип. Upd. Он не работает. Upd. Человек отказался от задачи, теперь сроки не определены.
|
||||
|
||||
### 4.2. Спекулятивное выполнение запросов на нескольких репликах. {#spekuliativnoe-vypolnenie-zaprosov-na-neskolkikh-replikakh}
|
||||
|
||||
@ -262,6 +273,8 @@ Upd. Иван Блинков сделал эту задачу путём зам

Currently distributed queries use one thread per connection. This parallelizes processing of the received data and saturates the network well, but becomes grossly excessive for large clusters: creating 1000 threads to read data from 1000 servers of a cluster only wastes resources and increases query latency. Instead, the number of threads should be no larger than the number of CPU cores, and communication with the servers should be multiplexed within a single thread. The implementation is non-trivial, because every stage of network communication has to be multiplexed, including connection establishment and the handshake exchange.

Upd. It is now being discussed whether to do a different task instead of this one.
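A rough illustration of the multiplexing idea, using Python's `asyncio`: a small, fixed amount of concurrency talks to an arbitrary number of servers instead of one thread per connection. The host list, port and line-oriented "protocol" are invented for the sketch; the real ClickHouse protocol would also need the handshake and compression stages multiplexed, as the paragraph above notes.

```python
import asyncio

HOSTS = [f'replica-{i:03d}.example.net' for i in range(1000)]  # hypothetical cluster
PORT = 9000
CONCURRENCY = 16  # roughly the number of CPU cores, not the number of servers


async def fetch_one(host: str) -> bytes:
    """Open a connection, send a fixed request, read one reply line."""
    reader, writer = await asyncio.open_connection(host, PORT)
    try:
        writer.write(b'PING\n')          # stand-in for the real handshake + query
        await writer.drain()
        return await reader.readline()   # stand-in for reading result blocks
    finally:
        writer.close()
        await writer.wait_closed()


async def fetch_all(hosts):
    semaphore = asyncio.Semaphore(CONCURRENCY)

    async def limited(host):
        async with semaphore:
            return await fetch_one(host)

    # All connections are multiplexed onto one thread by the event loop;
    # the semaphore merely caps how many are in flight at a time.
    return await asyncio.gather(*(limited(h) for h in hosts), return_exceptions=True)


if __name__ == '__main__':
    results = asyncio.run(fetch_all(HOSTS))
```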
### 4.3. Ограничение числа одновременных скачиваний с реплик. {#ogranichenie-chisla-odnovremennykh-skachivanii-s-replik}
|
||||
|
||||
Дмитрий Григорьев, ВШЭ.
|
||||
@ -284,14 +297,16 @@ Upd. Иван Блинков сделал эту задачу путём зам
|
||||
Дмитрий Григорьев, ВШЭ.
|
||||
В очереди. Исправить проблему, что восстанавливающаяся реплика перестаёт мержить. Частично компенсируется 4.3.
|
||||
|
||||
|
||||
## 5. Операции. {#operatsii}
|
||||
|
||||
### 5.1. Разделение задач на более мелкие куски в clickhouse-copier. {#razdelenie-zadach-na-bolee-melkie-kuski-v-clickhouse-copier}
|
||||
### 5.1. + Разделение задач на более мелкие куски в clickhouse-copier. {#razdelenie-zadach-na-bolee-melkie-kuski-v-clickhouse-copier}
|
||||
|
||||
[\#9075](https://github.com/ClickHouse/ClickHouse/pull/9075)
|
||||
Q1. Нужно для Метрики, в очереди. Никита Михайлов.
|
||||
|
||||
Upd. Задача на финальной стадии разработки.
|
||||
Upd. Сделано. Эффективность работы под вопросом. Есть варианты, как сделать лучше.
|
||||
|
||||
### 5.2. Автонастройка лимита на оперативку и размера кэшей. {#avtonastroika-limita-na-operativku-i-razmera-keshei}
|
||||
|
||||
@ -305,6 +320,8 @@ Upd. Задача на финальной стадии разработки.
|
||||
|
||||
Требует 7.5. Задачу хочет Метрика, Облако, БК, Маркет и Altinity. Первой LTS версией уже стала версия 19.14.
|
||||
Метрика, БК, Маркет, Altinity уже используют более свежие версии чем LTS.
|
||||
Upd. Появилась вторая версия LTS - 20.3.
|
||||
|
||||
|
||||
## 6. Инструментирование. {#instrumentirovanie}
|
||||
|
||||
@ -321,7 +338,7 @@ Upd. Задача на финальной стадии разработки.
|
||||
|
||||
### 6.3. Учёт оперативки total расширить не только на запросы. {#uchiot-operativki-total-rasshirit-ne-tolko-na-zaprosy}
|
||||
|
||||
Исправление долгоживущей проблемы с дрифтом учёта оперативки. Нужна для Метрики и БК. Иван Лежанкин. Q1.
|
||||
Исправление долгоживущей проблемы с дрифтом учёта оперативки. Нужна для Метрики и БК. Иван Лежанкин. Q1. Странно, как будто не сделано.
|
||||
|
||||
### 6.4. Поддержка perf events как метрик запроса. {#podderzhka-perf-events-kak-metrik-zaprosa}
|
||||
|
||||
@ -339,7 +356,7 @@ Upd. Задача на финальной стадии разработки.
|
||||
|
||||
Сейчас есть стек трейс для почти всех, но не всех исключений. Требует 7.4.
|
||||
|
||||
### 6.7. + Таблица system.stack\_trace. {#tablitsa-system-stack-trace}
|
||||
### 6.7. + Таблица system.stack_trace. {#tablitsa-system-stack-trace}
|
||||
|
||||
Сравнительно простая задача, но только для опытных разработчиков.
|
||||
|
||||
@ -351,6 +368,7 @@ Upd. Задача на финальной стадии разработки.
|
||||
|
||||
### 6.10. Сбор общих системных метрик. {#sbor-obshchikh-sistemnykh-metrik}
|
||||
|
||||
|
||||
## 7. Сопровождение разработки. {#soprovozhdenie-razrabotki}
|
||||
|
||||
### 7.1. + ICU в submodules. {#icu-v-submodules}
|
||||
@ -361,7 +379,7 @@ Upd. Задача на финальной стадии разработки.
|
||||
|
||||
Сделал Алексей Миловидов.
|
||||
|
||||
### 7.3. Обновление Poco. {#obnovlenie-poco}
|
||||
### 7.3. + Обновление Poco. {#obnovlenie-poco}
|
||||
|
||||
Александр Кузьменков.
|
||||
|
||||
@ -383,13 +401,18 @@ Upd. Задача на финальной стадии разработки.
|
||||
Уже есть ASan, TSan, UBSan. Не хватает тестов под MSan. Они уже добавлены в CI, но не проходят.
|
||||
[Александр Кузьменков](https://github.com/akuzm) и [Александр Токмаков](https://github.com/tavplubix).
|
||||
|
||||
### 7.8. Добавить clang-tidy. {#dobavit-clang-tidy}
|
||||
Upd. Задача всё ещё медленно тащится.
|
||||
|
||||
### 7.8. + Добавить clang-tidy. {#dobavit-clang-tidy}
|
||||
|
||||
Уже есть PVS-Studio. Мы очень довольны, но этого недостаточно.
|
||||
|
||||
Upd. Алексей Миловидов. Добавлено некоторое множество проверок, но нужно рассмотреть все проверки подряд и добавить всё, что можно.
|
||||
Upd. Рассмотрели все проверки подряд.
|
||||
|
||||
### 7.9. Проверки на стиль имён с помощью clang-tidy. {#proverki-na-stil-imion-s-pomoshchiu-clang-tidy}
|
||||
### 7.9. + Проверки на стиль имён с помощью clang-tidy. {#proverki-na-stil-imion-s-pomoshchiu-clang-tidy}
|
||||
|
||||
Сделано. Только в .cpp файлах и только для имён локальных переменных. Остальное слишком сложно.
|
||||
|
||||
### 7.10. Включение UBSan и MSan в интеграционных тестах. {#vkliuchenie-ubsan-i-msan-v-integratsionnykh-testakh}
|
||||
|
||||
@ -399,6 +422,8 @@ UBSan включен в функциональных тестах, но не в
|
||||
|
||||
У нас мало unit тестов по сравнению с функциональными тестами и их использование не обязательно. Но они всё-равно важны и нет причин не запускать их под всеми видами sanitizers.
|
||||
|
||||
Илья Яцишин.
|
||||
|
||||
### 7.12. Показывать тестовое покрытие нового кода в PR. {#pokazyvat-testovoe-pokrytie-novogo-koda-v-pr}
|
||||
|
||||
Пока есть просто показ тестового покрытия всего кода.
|
||||
@ -413,6 +438,8 @@ UBSan включен в функциональных тестах, но не в
|
||||
|
||||
Подключение replxx вместо readline сделал Иван Лежанкин.
|
||||
|
||||
Есть технический долг с лицензиями файлов консорциума Unicode.
|
||||
|
||||
### 7.14.1. Улучшение возможностей интерактивного режима clickhouse-client. {#uluchshenie-vozmozhnostei-interaktivnogo-rezhima-clickhouse-client}
|
||||
|
||||
Тагир Кускаров, ВШЭ.
|
||||
@ -476,7 +503,7 @@ https://github.com/ClickHouse/ClickHouse/issues/8027\#issuecomment-566670282
|
||||
Проверили на настоящем сервере Huawei, а также в специальном Docker контейнере, который содержит внутри qemu-user-static.
|
||||
Также можно проверить на Cavium, на Raspberry Pi а также на твоём Android телефоне.
|
||||
|
||||
### 7.20. Автосборка для FreeBSD x86\_64. {#avtosborka-dlia-freebsd-x86-64}
|
||||
### 7.20. Автосборка для FreeBSD x86_64. {#avtosborka-dlia-freebsd-x86-64}
|
||||
|
||||
[Иван Лежанкин](https://github.com/abyss7).
|
||||
|
||||
@ -535,6 +562,8 @@ Fuzzing тестирование - это тестирование случай
|
||||
It is also possible to provide functions with a deterministic random number generator (a seed passed as an argument) so that test cases are reproducible.

Upd. Сергей Штыков implemented the `randomPrintableASCII` function.
Upd. Илья Яцишин implemented the `generateRandom` table function.
Upd. Эльдар Заитов is adding OSS Fuzz.
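A tiny illustration of seed-driven reproducibility: the same seed always yields the same "random" test rows. The row shape and helper name are made up for the example; ClickHouse's own `generateRandom` table function plays a similar role on the server side.

```python
import random
import string


def random_rows(seed: int, count: int = 5):
    """Deterministic pseudo-random test data: same seed -> same rows."""
    rng = random.Random(seed)
    rows = []
    for _ in range(count):
        key = rng.randint(0, 2**32 - 1)
        text = ''.join(rng.choices(string.printable[:94], k=10))  # printable ASCII only
        rows.append((key, text))
    return rows


assert random_rows(42) == random_rows(42)  # reproducible test case
assert random_rows(42) != random_rows(43)  # different seed, different data (with overwhelming probability)
```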
|
||||
|
||||
### 7.24. Fuzzing лексера и парсера запросов; кодеков и форматов. {#fuzzing-leksera-i-parsera-zaprosov-kodekov-i-formatov}
|
||||
|
||||
@ -557,10 +586,12 @@ Upd. Сергей Штыков сделал функцию `randomPrintableASCII
|
||||
|
||||
Нужно для CHYT и YQL.
|
||||
|
||||
UPD: Все патчи Максима отправлены в master. Задача взята в работу.
|
||||
Upd: Все патчи Максима отправлены в master. Задача взята в работу.
|
||||
|
||||
Upd: Задача в процессе реализации. Синхронизироваться будет master. Делает [Иван Лежанкин](https://github.com/abyss7)
|
||||
|
||||
Upd: Есть собирающийся прототип, но сборка как будто ещё не в trunk Аркадии.
|
||||
|
||||
### 7.26. Побайтовая идентичность репозитория с Аркадией. {#pobaitovaia-identichnost-repozitoriia-s-arkadiei}
|
||||
|
||||
Команда DevTools. Прогресс по задаче под вопросом.
|
||||
@ -617,6 +648,7 @@ Upd: Задача в процессе реализации. Синхронизи
|
||||
Upd. Иван Блинков настроил CDN repo.clickhouse.tech, что решает проблему с доступностью зарубежом.
|
||||
Вопрос с operations, visibility пока актуален.
|
||||
|
||||
|
||||
## 8. Интеграция с внешними системами. {#integratsiia-s-vneshnimi-sistemami}
|
||||
|
||||
### 8.1. Поддержка ALTER MODIFY SETTING для Kafka. {#podderzhka-alter-modify-setting-dlia-kafka}
|
||||
@ -629,11 +661,11 @@ Altinity. Никто не делает эту задачу.
|
||||
|
||||
[Александр Кузьменков](https://github.com/akuzm).
|
||||
|
||||
### 8.3. Доработки globs (правильная поддержка диапазонов, уменьшение числа одновременных stream-ов). {#dorabotki-globs-pravilnaia-podderzhka-diapazonov-umenshenie-chisla-odnovremennykh-stream-ov}
|
||||
### 8.3. + Доработки globs (правильная поддержка диапазонов, уменьшение числа одновременных stream-ов). {#dorabotki-globs-pravilnaia-podderzhka-diapazonov-umenshenie-chisla-odnovremennykh-stream-ov}
|
||||
|
||||
[Ольга Хвостикова](https://github.com/stavrolia).
|
||||
|
||||
Уменьшение числа stream-ов сделано, а вот правильная поддержка диапазонов - нет. Будем надеяться на Q1/Q2.
|
||||
Уменьшение числа stream-ов сделано, а вот правильная поддержка диапазонов - нет. Будем надеяться на Q1/Q2. Сделано.
|
||||
|
||||
### 8.4. Унификация File, HDFS, S3 под URL. {#unifikatsiia-file-hdfs-s3-pod-url}
|
||||
|
||||
@ -690,19 +722,21 @@ Andrew Onyshchuk. Есть pull request. Q1. Сделано.
|
||||
|
||||
Павел Круглов, ВШЭ и Яндекс. Есть pull request.
|
||||
|
||||
### 8.16.2. Поддержка формата Thrift. {#podderzhka-formata-thrift}
|
||||
### 8.16.2. - Поддержка формата Thrift. {#podderzhka-formata-thrift}
|
||||
|
||||
Павел Круглов, ВШЭ и Яндекс.
|
||||
Павел Круглов, ВШЭ и Яндекс. Задача отменена.
|
||||
|
||||
### 8.16.3. Поддержка формата MsgPack. {#podderzhka-formata-msgpack}
|
||||
|
||||
Павел Круглов, ВШЭ и Яндекс.
|
||||
Задача взята в работу.
|
||||
|
||||
### 8.16.4. Формат Regexp. {#format-regexp}
|
||||
Upd. Почти готово - есть лишь небольшой технический долг.
|
||||
|
||||
### 8.16.4. + Формат Regexp. {#format-regexp}
|
||||
|
||||
Павел Круглов, ВШЭ и Яндекс.
|
||||
Есть pull request.
|
||||
Есть pull request. Готово.
|
||||
|
||||
### 8.17. ClickHouse как MySQL реплика. {#clickhouse-kak-mysql-replika}
|
||||
|
||||
@ -735,6 +769,7 @@ Maxim Fedotov, Wargaming + Yuri Baranov, Яндекс.
|
||||
Нужно для БК. Декабрь 2019.
|
||||
В декабре для БК сделан минимальный вариант этой задачи.
|
||||
Максимальный вариант, вроде, никому не нужен.
|
||||
Upd. Всё ещё кажется, что задача не нужна.
|
||||
|
||||
### 8.22. Поддержка синтаксиса для переменных в стиле MySQL. {#podderzhka-sintaksisa-dlia-peremennykh-v-stile-mysql}
|
||||
|
||||
@ -746,6 +781,7 @@ Upd. Юрий Баранов работает в Google, там запрещен
|
||||
|
||||
Желательно 2.15.
|
||||
|
||||
|
||||
## 9. Безопасность. {#bezopasnost}
|
||||
|
||||
### 9.1. + Ограничение на хосты в запросах ко внешним системам. {#ogranichenie-na-khosty-v-zaprosakh-ko-vneshnim-sistemam}
|
||||
@ -761,7 +797,12 @@ ClickHouse предоставляет возможность обратитьс
|
||||
Вместо этого предлагается описывать необходимые данные в конфигурационном файле сервера или в отдельном сервисе и ссылаться на них по именам.
|
||||
|
||||
### 9.3. Поддержка TLS для ZooKeeper. {#podderzhka-tls-dlia-zookeeper}
|
||||
|
||||
[#10174](https://github.com/ClickHouse/ClickHouse/issues/10174)
|
||||
|
||||
Есть pull request.
|
||||
|
||||
|
||||
## 10. Внешние словари. {#vneshnie-slovari}
|
||||
|
||||
### 10.1. + Исправление зависания в библиотеке доступа к YT. {#ispravlenie-zavisaniia-v-biblioteke-dostupa-k-yt}
|
||||
@ -777,6 +818,7 @@ ClickHouse предоставляет возможность обратитьс
|
||||
Нужно для БК и Метрики. Поиск причин - [Александр Сапин](https://github.com/alesapin). Дальшейшее исправление возможно на стороне YT.
|
||||
|
||||
Upd. Одну причину устранили, но ещё что-то неизвестное осталось.
|
||||
Upd. Нас заставляют переписать эту библиотеку с одного API на другое, так как старое внезапно устарело. Кажется, что переписывание случайно исправит все проблемы.
|
||||
|
||||
### 10.3. Возможность чтения данных из статических таблиц в YT словарях. {#vozmozhnost-chteniia-dannykh-iz-staticheskikh-tablits-v-yt-slovariakh}
|
||||
|
||||
@ -802,7 +844,7 @@ Upd. Одну причину устранили, но ещё что-то неи
|
||||
|
||||
Артём Стрельцов, Николай Дегтеринский, Наталия Михненко, ВШЭ.
|
||||
|
||||
### 10.9. Уменьшение блокировок для cache словарей за счёт одновременных запросов одного и того же. {#umenshenie-blokirovok-dlia-cache-slovarei-za-schiot-odnovremennykh-zaprosov-odnogo-i-togo-zhe}
|
||||
### 10.9. - Уменьшение блокировок для cache словарей за счёт одновременных запросов одного и того же. {#umenshenie-blokirovok-dlia-cache-slovarei-za-schiot-odnovremennykh-zaprosov-odnogo-i-togo-zhe}
|
||||
|
||||
Заменено в пользу 10.10, 10.11.
|
||||
|
||||
@ -825,8 +867,6 @@ Upd. Одну причину устранили, но ещё что-то неи
|
||||
|
||||
### 10.14. Поддержка всех типов в функции transform. {#podderzhka-vsekh-tipov-v-funktsii-transform}
|
||||
|
||||
Задачу взяла Ольга Хвостикова.
|
||||
|
||||
### 10.15. Использование словарей как специализированного layout для Join. {#ispolzovanie-slovarei-kak-spetsializirovannogo-layout-dlia-join}
|
||||
|
||||
### 10.16. Словари на локальном SSD. {#slovari-na-lokalnom-ssd}
|
||||
@ -843,6 +883,7 @@ Upd. Одну причину устранили, но ещё что-то неи
|
||||
|
||||
### 10.19. Возможность зарегистрировать некоторые функции, использующие словари, под пользовательскими именами. {#vozmozhnost-zaregistrirovat-nekotorye-funktsii-ispolzuiushchie-slovari-pod-polzovatelskimi-imenami}
|
||||
|
||||
|
||||
## 11. Интерфейсы. {#interfeisy}
|
||||
|
||||
### 11.1. Вставка состояний агрегатных функций в виде кортежа аргументов или массива кортежей аргументов. {#vstavka-sostoianii-agregatnykh-funktsii-v-vide-kortezha-argumentov-ili-massiva-kortezhei-argumentov}
|
||||
@ -851,6 +892,8 @@ Upd. Одну причину устранили, но ещё что-то неи
|
||||
|
||||
Нужно разобраться, как упаковывать Java в статический бинарник, возможно AppImage. Или предоставить максимально простую инструкцию по установке jdbc-bridge. Может быть будет заинтересован Александр Крашенинников, Badoo, так как он разработал jdbc-bridge.
|
||||
|
||||
Upd. Александр Крашенинников перешёл в другую компанию и больше не занимается этим.
|
||||
|
||||
### 11.3. + Интеграционные тесты ODBC драйвера путём подключения ClickHouse к самому себе через ODBC. {#integratsionnye-testy-odbc-draivera-putiom-podkliucheniia-clickhouse-k-samomu-sebe-cherez-odbc}
|
||||
|
||||
Михаил Филимонов, Altinity. Готово.
|
||||
@ -881,12 +924,13 @@ zhang2014, есть pull request.

The ability to describe, in the server configuration file, a handler (a URL path) for HTTP requests to the server that corresponds to some parameterized query. The user can then call this handler without having to send an SQL query.
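To show the idea of mapping a URL path to a fixed parameterized query, here is a toy HTTP endpoint in Python that forwards to ClickHouse. The path `/metrics_by_day`, the query text and the parameter name are hypothetical; the feature described above lives in the server configuration itself, not in an external proxy like this one.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

import requests  # assumption: ClickHouse HTTP interface on localhost:8123

# URL path -> parameterized query; the caller never writes SQL.
PREDEFINED = {
    '/metrics_by_day': "SELECT toDate(ts) AS day, count() FROM hits "
                       "WHERE user_id = {user_id:UInt64} GROUP BY day ORDER BY day",
}


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        parsed = urlparse(self.path)
        query = PREDEFINED.get(parsed.path)
        if query is None:
            self.send_error(404)
            return
        args = {k: v[0] for k, v in parse_qs(parsed.query).items()}
        response = requests.post(
            'http://localhost:8123/',
            params={'query': query, 'param_user_id': args.get('user_id', '0')},
            timeout=10,
        )
        self.send_response(response.status_code)
        self.end_headers()
        self.wfile.write(response.content)


if __name__ == '__main__':
    HTTPServer(('127.0.0.1', 8080), Handler).serve_forever()
```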
## 12. Управление пользователями и доступом. {#upravlenie-polzovateliami-i-dostupom}
|
||||
|
||||
### 12.1. Role Based Access Control. {#role-based-access-control}
|
||||
### 12.1. + Role Based Access Control. {#role-based-access-control}
|
||||
|
||||
[Виталий Баранов](https://github.com/vitlibar). Финальная стадия разработки, рабочая версия в начале февраля 2019.
|
||||
Q1. Сейчас сделаны все интерфейсы в коде и запросы, но не сделаны варианты хранения прав кроме прототипа.
|
||||
[Виталий Баранов](https://github.com/vitlibar). Финальная стадия разработки, рабочая версия в начале апреля 2019.
|
||||
Q2. Сейчас сделаны все интерфейсы в коде и запросы, но не сделаны варианты хранения прав кроме прототипа.
|
||||
Upd. Сделано хранение прав. До готового к использованию состояния осталось несколько доработок.
|
||||
|
||||
### 12.2. + Управление пользователями и правами доступа с помощью SQL запросов. {#upravlenie-polzovateliami-i-pravami-dostupa-s-pomoshchiu-sql-zaprosov}
|
||||
@ -897,7 +941,7 @@ Q1. Сделано управление правами полностью, но
|
||||
### 12.3. Подключение справочника пользователей и прав доступа из LDAP. {#podkliuchenie-spravochnika-polzovatelei-i-prav-dostupa-iz-ldap}
|
||||
|
||||
[Виталий Баранов](https://github.com/vitlibar). Требует 12.1.
|
||||
Q1/Q2.
|
||||
Q2.
|
||||
|
||||
### 12.4. Подключение IDM системы Яндекса как справочника пользователей и прав доступа. {#podkliuchenie-idm-sistemy-iandeksa-kak-spravochnika-polzovatelei-i-prav-dostupa}
|
||||
|
||||
@ -911,6 +955,7 @@ Q1/Q2.
|
||||
|
||||
[Виталий Баранов](https://github.com/vitlibar). Требует 12.1.
|
||||
|
||||
|
||||
## 13. Разделение ресурсов, multi-tenancy. {#razdelenie-resursov-multi-tenancy}
|
||||
|
||||
### 13.1. Overcommit запросов по памяти и вытеснение. {#overcommit-zaprosov-po-pamiati-i-vytesnenie}
|
||||
@ -926,6 +971,8 @@ Q1/Q2.
|
||||
Требует 13.2 или сможем сделать более неудобную реализацию раньше.
|
||||
Обсуждается вариант неудобной реализации. Пока средний приоритет, целимся на Q1/Q2.
|
||||
Вариант реализации выбрал Александр Казаков.
|
||||
Upd. Не уследили, и задачу стали обсуждать менеджеры.
|
||||
|
||||
|
||||
## 14. Диалект SQL. {#dialekt-sql}
|
||||
|
||||
@ -936,8 +983,6 @@ Q1/Q2.
|
||||
|
||||
### 14.2. Поддержка WITH для подзапросов. {#podderzhka-with-dlia-podzaprosov}
|
||||
|
||||
Михаил Коротов.
|
||||
|
||||
### 14.3. Поддержка подстановок для множеств в правой части IN. {#podderzhka-podstanovok-dlia-mnozhestv-v-pravoi-chasti-in}
|
||||
|
||||
### 14.4. Поддержка подстановок для идентификаторов (имён) в SQL запросе. {#podderzhka-podstanovok-dlia-identifikatorov-imion-v-sql-zaprose}
|
||||
@ -993,7 +1038,7 @@ zhang2014
|
||||
|
||||
### 14.16. Синонимы для функций из MySQL. {#sinonimy-dlia-funktsii-iz-mysql}
|
||||
|
||||
### 14.17. Ввести понятие stateful функций. {#vvesti-poniatie-stateful-funktsii}
|
||||
### 14.17. + Ввести понятие stateful функций. {#vvesti-poniatie-stateful-funktsii}
|
||||
|
||||
zhang2014.
|
||||
Для runningDifference, neighbour - их учёт в оптимизаторе запросов.
|
||||
@ -1018,13 +1063,15 @@ zhang2014.
|
||||
|
||||
Павел Потёмкин, ВШЭ.
|
||||
|
||||
|
||||
## 15. Улучшение поддержки JOIN. {#uluchshenie-podderzhki-join}
|
||||
|
||||
### 15.1. Доведение merge JOIN до продакшена. {#dovedenie-merge-join-do-prodakshena}
|
||||
### 15.1. + Доведение merge JOIN до продакшена. {#dovedenie-merge-join-do-prodakshena}
|
||||
|
||||
Артём Зуйков. Сейчас merge JOIN включается вручную опцией и всегда замедляет запросы. Хотим, чтобы он замедлял запросы только когда это неизбежно.
|
||||
Кстати, смысл merge JOIN появляется только совместно с 15.2 и 15.3.
|
||||
Q1. Сделали адаптивный вариант, но вроде он что-то всё-ещё замедляет.
|
||||
Задача сделана, но всё работает слишком медленно.
|
||||
|
||||
### 15.1.1. Алгоритм two-level merge JOIN. {#algoritm-two-level-merge-join}
|
||||
|
||||
@ -1052,6 +1099,7 @@ Q1. Сделали адаптивный вариант, но вроде он ч
|
||||
|
||||
Артём Зуйков.
|
||||
|
||||
|
||||
## 16. Типы данных и функции. {#tipy-dannykh-i-funktsii}
|
||||
|
||||
### 16.1. + DateTime64. {#datetime64}
|
||||
@ -1073,6 +1121,7 @@ Upd. Секретного изменения в работе не будет, з
|
||||
|
||||
### 16.6. Функции нормализации и хэширования SQL запросов. {#funktsii-normalizatsii-i-kheshirovaniia-sql-zaprosov}
|
||||
|
||||
|
||||
## 17. Работа с географическими данными. {#rabota-s-geograficheskimi-dannymi}
|
||||
|
||||
### 17.1. Гео-словари для определения региона по координатам. {#geo-slovari-dlia-opredeleniia-regiona-po-koordinatam}
|
||||
@ -1105,6 +1154,7 @@ Upd. Андрей сделал прототип более оптимально
|
||||
|
||||
Сейчас функция тихо не работает в случае полигонов с самопересечениями, надо кидать исключение.
|
||||
|
||||
|
||||
## 18. Машинное обучение и статистика. {#mashinnoe-obuchenie-i-statistika}
|
||||
|
||||
### 18.1. Инкрементальная кластеризация данных. {#inkrementalnaia-klasterizatsiia-dannykh}
|
||||
@ -1123,6 +1173,7 @@ Upd. Андрей сделал прототип более оптимально
|
||||
|
||||
В очереди. Возможно, Александр Кожихов. У него сначала идёт задача 24.26.
|
||||
|
||||
|
||||
## 19. Улучшение работы кластера. {#uluchshenie-raboty-klastera}
|
||||
|
||||
### 19.1. Параллельные кворумные вставки без линеаризуемости. {#parallelnye-kvorumnye-vstavki-bez-linearizuemosti}
|
||||
@ -1153,7 +1204,7 @@ Upd. Алексей сделал какой-то вариант, но борет
|
||||
|
||||
Hold. Полезно для заказчиков внутри Яндекса, но есть риски. Эту задачу никто не будет делать.
|
||||
|
||||
### 19.4. internal\_replication = ‘auto’. {#internal-replication-auto}
|
||||
### 19.4. internal_replication = ‘auto’. {#internal-replication-auto}
|
||||
|
||||
### 19.5. Реплицируемые базы данных. {#replitsiruemye-bazy-dannykh}
|
||||
|
||||
@ -1177,18 +1228,20 @@ Hold. Полезно для заказчиков внутри Яндекса, н
|
||||
|
||||
Требует 1.6, 19.1, 19.6, 19.7, 19.8, 19.9.
|
||||
|
||||
|
||||
## 20. Мутации данных. {#mutatsii-dannykh}
|
||||
|
||||
Пока все задачи по точечным UPDATE/DELETE имеют низкий приоритет, но ожидаем взять в работу в середине 2020.
|
||||
|
||||
### 20.1. Поддержка DELETE путём запоминания множества затронутых кусков и ключей. {#podderzhka-delete-putiom-zapominaniia-mnozhestva-zatronutykh-kuskov-i-kliuchei}
|
||||
|
||||
### 20.2. Поддержка DELETE путём преобразования множества ключей в множество row\_numbers на реплике, столбца флагов и индекса по диапазонам. {#podderzhka-delete-putiom-preobrazovaniia-mnozhestva-kliuchei-v-mnozhestvo-row-numbers-na-replike-stolbtsa-flagov-i-indeksa-po-diapazonam}
|
||||
### 20.2. Поддержка DELETE путём преобразования множества ключей в множество row_numbers на реплике, столбца флагов и индекса по диапазонам. {#podderzhka-delete-putiom-preobrazovaniia-mnozhestva-kliuchei-v-mnozhestvo-row-numbers-na-replike-stolbtsa-flagov-i-indeksa-po-diapazonam}
|
||||
|
||||
### 20.3. Поддержка ленивых DELETE путём запоминания выражений и преобразования к множеству ключей в фоне. {#podderzhka-lenivykh-delete-putiom-zapominaniia-vyrazhenii-i-preobrazovaniia-k-mnozhestvu-kliuchei-v-fone}
|
||||
|
||||
### 20.4. Поддержка UPDATE с помощью преобразования в DELETE и вставок. {#podderzhka-update-s-pomoshchiu-preobrazovaniia-v-delete-i-vstavok}
|
||||
|
||||
|
||||
## 21. Оптимизации производительности. {#optimizatsii-proizvoditelnosti}
|
||||
|
||||
### 21.1. + Параллельный парсинг форматов. {#parallelnyi-parsing-formatov}
|
||||
@ -1201,7 +1254,7 @@ Hold. Полезно для заказчиков внутри Яндекса, н
|
||||
|
||||
После 21.1, предположительно Никита Михайлов. Задача сильно проще чем 21.1.
|
||||
|
||||
### 21.3. Исправление низкой производительности анализа индекса в случае большого множества в секции IN. {#ispravlenie-nizkoi-proizvoditelnosti-analiza-indeksa-v-sluchae-bolshogo-mnozhestva-v-sektsii-in}
|
||||
### 21.3. + Исправление низкой производительности анализа индекса в случае большого множества в секции IN. {#ispravlenie-nizkoi-proizvoditelnosti-analiza-indeksa-v-sluchae-bolshogo-mnozhestva-v-sektsii-in}
|
||||
|
||||
Нужно всем (Zen, БК, DataLens, TestEnv…). Антон Попов, Q1/Q2.
|
||||
|
||||
@ -1309,23 +1362,23 @@ Constraints позволяют задать выражение, истиннос

ClickHouse uses a suboptimal variant of top sort. Its idea is that the top N records are taken from each block, and then all blocks are merged. But extracting the top N records from every subsequent block is pointless if we already know that fewer of them can make it into the global top N. What is needed is a variation on a priority queue (heap) that can quickly skip whole blocks when none of their rows can enter the accumulated top (see the sketch after this list).

1. Recursive variant of sorting by tuples.
2. Recursive variant of sorting by tuples.

For sorting by tuples, an ordinary comparison sort is used with a comparator that, in a loop over the tuple elements, makes virtual calls to `IColumn::compareAt`. This is suboptimal - both because of the short loop over a number of elements unknown at compile time and because of the virtual calls. To avoid virtual calls there is the `IColumn::getPermutation` method; it is used when sorting by a single column. Something similar could be applied to sorting by a tuple... for example, an `updatePermutation` method taking offset and limit arguments that refines the permutation within the ranges where the previous column had equal values.

1. RadixSort for sorting.
3. RadixSort for sorting.

An acquaintance of ours started a task on using RadixSort for sorting columns. An indirect-sort variant (for `getPermutation`) was made, but it is not fully optimized: there are redundant element moves. Optimizing it requires a bit of template magic (on the last pass, write the indices straight to their final places instead of moving them again). The same person also added an MSD Radix Sort method to implement radix partial sort, but did not even measure its performance.

The most interesting part of the task may be applying Radix Sort to sorting tuples laid out in memory as a Structure Of Arrays of a size unknown at compile time. This may work worse than what is described in point 2... but it is worth trying.

1. Three-way comparison sort.
4. Three-way comparison sort.

The virtual method `compareAt` returns -1, 0, 1, but comparison-based sorting algorithms are usually written against `operator<` and cannot benefit from a three-way comparison. Can they be written so that there is a benefit?

1. pdq partial sort
5. pdq partial sort

The good comparison sort `pdqsort` has no partial sort variant. Note that in practice almost all sorts in ClickHouse queries are partial\_sort, because `ORDER BY` almost always comes with `LIMIT`. By the way, Данила Кутенин has already tried this and showed that there is no advantage in the trivial case. But it is not obvious that nothing better can be done.
The good comparison sort `pdqsort` has no partial sort variant. Note that in practice almost all sorts in ClickHouse queries are partial_sort, because `ORDER BY` almost always comes with `LIMIT`. By the way, Данила Кутенин has already tried this and showed that there is no advantage in the trivial case. But it is not obvious that nothing better can be done.
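A minimal sketch of the block-skipping top-N idea from the introductory paragraph of this item, in Python. A bounded max-heap of size N (built with negated keys on top of `heapq`) keeps the current candidates; a whole block is skipped when its minimum cannot beat the current worst candidate. The function name and the per-block minimum shortcut are assumptions of the sketch, not the actual ClickHouse implementation.

```python
import heapq


def top_n_smallest(blocks, n):
    """Return the n smallest values across blocks, skipping hopeless blocks early.

    `blocks` is an iterable of lists; each list stands for one data block.
    """
    heap = []  # the current n smallest values, stored negated to emulate a max-heap
    for block in blocks:
        if not block:
            continue
        # If even the smallest value of the block cannot improve the answer,
        # the whole block can be skipped without looking at its rows.
        if len(heap) == n and min(block) >= -heap[0]:
            continue
        for value in block:
            if len(heap) < n:
                heapq.heappush(heap, -value)
            elif value < -heap[0]:
                heapq.heapreplace(heap, -value)
    return sorted(-v for v in heap)


# Example: 3 smallest values out of three blocks; the last block is skipped entirely.
print(top_n_smallest([[5, 7, 9], [1, 2, 8], [10, 11, 12]], 3))  # [1, 2, 5]
```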
### 21.20. Использование материализованных представлений для оптимизации запросов. {#ispolzovanie-materializovannykh-predstavlenii-dlia-optimizatsii-zaprosov}
|
||||
|
||||
@ -1344,6 +1397,7 @@ Constraints позволяют задать выражение, истиннос
|
||||
zhang2014.
|
||||
Есть pull request.
|
||||
|
||||
|
||||
## 22. Долги и недоделанные возможности. {#dolgi-i-nedodelannye-vozmozhnosti}
|
||||
|
||||
### 22.1. + Исправление неработающих таймаутов, если используется TLS. {#ispravlenie-nerabotaiushchikh-taimautov-esli-ispolzuetsia-tls}
|
||||
@ -1356,12 +1410,11 @@ N.Vartolomei.
|
||||
|
||||
### 22.3. Защита от абсурдно заданных пользователем кодеков. {#zashchita-ot-absurdno-zadannykh-polzovatelem-kodekov}
|
||||
|
||||
В очереди, скорее всего [Ольга Хвостикова](https://github.com/stavrolia).
|
||||
|
||||
### 22.4. Исправление оставшихся deadlocks в табличных RWLock-ах. {#ispravlenie-ostavshikhsia-deadlocks-v-tablichnykh-rwlock-akh}
|
||||
|
||||
Александр Казаков. Нужно для Яндекс.Метрики и Datalens. Задача постепенно тащится и исправлениями в соседних местах стала менее актуальна.
|
||||
В Q1 будет сделана или отменена с учётом 1.2. и 1.3.
|
||||
Upd. Добавили таймауты.
|
||||
|
||||
### 22.5. + Исправление редких срабатываний TSan в stress тестах в CI. {#ispravlenie-redkikh-srabatyvanii-tsan-v-stress-testakh-v-ci}
|
||||
|
||||
@ -1470,18 +1523,19 @@ Altinity.
|
||||
|
||||
[Александр Сапин](https://github.com/alesapin)
|
||||
|
||||
|
||||
## 23. Default Festival. {#default-festival}
|
||||
|
||||
### 23.1. + Включение minimalistic\_part\_header в ZooKeeper. {#vkliuchenie-minimalistic-part-header-v-zookeeper}
|
||||
### 23.1. + Включение minimalistic_part_header в ZooKeeper. {#vkliuchenie-minimalistic-part-header-v-zookeeper}
|
||||
|
||||
Сильно уменьшает объём данных в ZooKeeper. Уже год в продакшене в Яндекс.Метрике.
|
||||
Алексей Миловидов, ноябрь 2019.
|
||||
|
||||
### 23.2. Включение distributed\_aggregation\_memory\_efficient. {#vkliuchenie-distributed-aggregation-memory-efficient}
|
||||
### 23.2. Включение distributed_aggregation_memory_efficient. {#vkliuchenie-distributed-aggregation-memory-efficient}
|
||||
|
||||
Есть риски меньшей производительности лёгких запросов, хотя производительность тяжёлых запросов всегда увеличивается.
|
||||
|
||||
### 23.3. Включение min\_bytes\_to\_external\_sort и min\_bytes\_to\_external\_group\_by. {#vkliuchenie-min-bytes-to-external-sort-i-min-bytes-to-external-group-by}
|
||||
### 23.3. Включение min_bytes_to_external_sort и min_bytes_to_external_group_by. {#vkliuchenie-min-bytes-to-external-sort-i-min-bytes-to-external-group-by}
|
||||
|
||||
Желательно 5.2. и 13.1.
|
||||
|
||||
@ -1489,7 +1543,7 @@ Altinity.
|
||||
|
||||
Есть гипотеза, что плохо работает на очень больших кластерах.
|
||||
|
||||
### 23.5. Включение compile\_expressions. {#vkliuchenie-compile-expressions}
|
||||
### 23.5. Включение compile_expressions. {#vkliuchenie-compile-expressions}
|
||||
|
||||
Требует 7.2. Задачу изначально на 99% сделал Денис Скоробогатов, ВШЭ и Яндекс. Остальной процент доделывал Алексей Миловидов, а затем [Александр Сапин](https://github.com/alesapin).
|
||||
|
||||
@ -1514,6 +1568,7 @@ Q1. [Николай Кочетов](https://github.com/KochetovNicolai).
|
||||
Возможность mlock бинарника сделал Олег Алексеенков [\#3553](https://github.com/ClickHouse/ClickHouse/pull/3553)
|
||||
. Поможет, когда на серверах кроме ClickHouse работает много посторонних программ (мы иногда называем их в шутку «треш-программами»).
|
||||
|
||||
|
||||
## 24. Экспериментальные задачи. {#eksperimentalnye-zadachi}
|
||||
|
||||
### 24.1. Веб-интерфейс для просмотра состояния кластера и профилирования запросов. {#veb-interfeis-dlia-prosmotra-sostoianiia-klastera-i-profilirovaniia-zaprosov}
|
||||
@ -1553,7 +1608,7 @@ ClickHouse поддерживает LZ4 и ZSTD для сжатия данных
|
||||
|
||||
Смотрите также 24.5.
|
||||
|
||||
1. Encryption of individual values.
2. Encryption of individual values.

For this, encryption and decryption functions accessible from SQL need to be implemented. The encryption function must be able to add the required number of random bits so that identical input values do not produce identical ciphertexts. This enables "forgetting" data without deleting table rows: data of different clients can be encrypted with different keys, and to forget one client's data it is enough to delete that key.
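The effect described here - same plaintext, different ciphertexts, and "forgetting" by key deletion - can be illustrated with AES-GCM and a random nonce. The sketch uses the Python `cryptography` package; the per-client key store is a plain dict invented for the example.

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Hypothetical per-client keys; deleting an entry "forgets" that client's data.
client_keys = {'client_a': AESGCM.generate_key(bit_length=256)}


def encrypt_value(client: str, plaintext: bytes) -> bytes:
    key = client_keys[client]
    nonce = os.urandom(12)  # random bits: identical plaintexts give different ciphertexts
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)


def decrypt_value(client: str, blob: bytes) -> bytes:
    key = client_keys[client]  # raises KeyError once the key is deleted ("forgotten")
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)


a = encrypt_value('client_a', b'secret')
b = encrypt_value('client_a', b'secret')
assert a != b                                    # same value, different ciphertext
assert decrypt_value('client_a', a) == b'secret'
del client_keys['client_a']                      # the data is now unrecoverable
```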
|
||||
|
||||
### 24.6. Userspace RAID. {#userspace-raid}
|
||||
@ -1586,7 +1641,7 @@ RAID позволяет одновременно увеличить надёжн
|
||||
|
||||
Дмитрий Ковальков, ВШЭ и Яндекс.
|
||||
|
||||
Подавляющее большинство кода ClickHouse написана для x86\_64 с набором инструкций до SSE 4.2 включительно. Лишь отдельные редкие функции поддерживают AVX/AVX2/AVX512 с динамической диспетчеризацией.
|
||||
Подавляющее большинство кода ClickHouse написана для x86_64 с набором инструкций до SSE 4.2 включительно. Лишь отдельные редкие функции поддерживают AVX/AVX2/AVX512 с динамической диспетчеризацией.
|
||||
|
||||
В первой части задачи, следует добавить в ClickHouse реализации некоторых примитивов, оптимизированные под более новый набор инструкций. Например, AVX2 реализацию генератора случайных чисел pcg: https://github.com/lemire/simdpcg
|
||||
|
||||
@ -1598,6 +1653,8 @@ RAID позволяет одновременно увеличить надёжн
|
||||
|
||||
Продолжение 24.8.
|
||||
|
||||
Upd. Есть pull request.
|
||||
|
||||
### 24.10. Поддержка типов half/bfloat16/unum. {#podderzhka-tipov-halfbfloat16unum}
|
||||
|
||||
[\#7657](https://github.com/ClickHouse/ClickHouse/issues/7657)
|
||||
@ -1633,6 +1690,7 @@ ClickHouse предоставляет достаточно богатый наб
|
||||
nVidia made a prototype that offloads the computation of GROUP BY with some of the aggregate functions in ClickHouse and promises to publish the sources for further development. It is proposed to study this prototype and extend its applicability to a wider range of use cases. As an alternative, it is proposed to study the source code of the `OmniSci` or `Alenka` systems, or the `CUB` library https://nvlabs.github.io/cub/ and apply some of their algorithms in ClickHouse.
|
||||
|
||||
Upd. В компании nVidia выложили прототип, теперь нужна интеграция в систему сборки.
|
||||
Upd. Интеграция в систему сборки - Иван Лежанкин.
|
||||
|
||||
### 24.13. Stream запросы. {#stream-zaprosy}
|
||||
|
||||
@ -1791,7 +1849,7 @@ Amos Bird, но его решение слишком громоздкое и п
|
||||
|
||||
### 25.10. Митапы в России и Беларуси: Москва x2 + митап для разработчиков или хакатон, Санкт-Петербург, Минск, Нижний Новгород, Екатеринбург, Новосибирск и/или Академгородок, Иннополис или Казань. {#mitapy-v-rossii-i-belarusi-moskva-x2-mitap-dlia-razrabotchikov-ili-khakaton-sankt-peterburg-minsk-nizhnii-novgorod-ekaterinburg-novosibirsk-iili-akademgorodok-innopolis-ili-kazan}
|
||||
|
||||
Екатерина - организация
|
||||
Екатерина - организация. Upd. Проведено два онлайн митапа на русском.
|
||||
|
||||
### 25.11. Митапы зарубежные: восток США (Нью Йорк, возможно Raleigh), возможно северо-запад (Сиэтл), Китай (Пекин снова, возможно митап для разработчиков или хакатон), Лондон. {#mitapy-zarubezhnye-vostok-ssha-niu-iork-vozmozhno-raleigh-vozmozhno-severo-zapad-sietl-kitai-pekin-snova-vozmozhno-mitap-dlia-razrabotchikov-ili-khakaton-london}
|
||||
|
||||
@ -1807,7 +1865,8 @@ Amos Bird, но его решение слишком громоздкое и п
|
||||
|
||||
### 25.14. Конференции в России: все HighLoad, возможно CodeFest, DUMP или UWDC, возможно C++ Russia. {#konferentsii-v-rossii-vse-highload-vozmozhno-codefest-dump-ili-uwdc-vozmozhno-c-russia}
|
||||
|
||||
Алексей Миловидов и все подготовленные докладчики
|
||||
Алексей Миловидов и все подготовленные докладчики.
|
||||
Upd. Есть Saint HighLoad online.
|
||||
|
||||
### 25.15. Конференции зарубежные: Percona, DataOps, попытка попасть на более крупные. {#konferentsii-zarubezhnye-percona-dataops-popytka-popast-na-bolee-krupnye}
|
||||
|
||||
@ -1848,7 +1907,7 @@ Amos Bird, но его решение слишком громоздкое и п
|
||||
|
||||
### 25.22. On-site помощь с ClickHouse компаниям в дни рядом с мероприятиями. {#on-site-pomoshch-s-clickhouse-kompaniiam-v-dni-riadom-s-meropriiatiiami}
|
||||
|
||||
[Иван Блинков](https://github.com/blinkov/) - организация
|
||||
[Иван Блинков](https://github.com/blinkov/) - организация. Проверил мероприятие для турецкой компании.
|
||||
|
||||
### 25.23. Новый мерч для ClickHouse. {#novyi-merch-dlia-clickhouse}
|
||||
|
@ -1,5 +1,5 @@
|
||||
---
|
||||
toc_folder_title: What's New
|
||||
toc_folder_title: Что нового?
|
||||
toc_priority: 72
|
||||
---
|
||||
|
||||
|
@ -38,5 +38,5 @@ The easiest way to see the result is to use `--livereload=8888` argument of buil
|
||||
|
||||
At the moment there’s no easy way to do just that, but you can consider:
|
||||
|
||||
- To hit the “Watch” button on top of GitHub web interface to know as early as possible, even during pull request. Alternative to this is `#github-activity` channel of [public ClickHouse Slack](https://join.slack.com/t/clickhousedb/shared_invite/zt-d2zxkf9e-XyxDa_ucfPxzuH4SJIm~Ng).
|
||||
- To hit the “Watch” button on top of GitHub web interface to know as early as possible, even during pull request. Alternative to this is `#github-activity` channel of [public ClickHouse Slack](https://join.slack.com/t/clickhousedb/shared_invite/enQtOTUzMjM4ODQwNTc5LWJmMjE3Yjc2YmI1ZDBlZmI4ZTc3OWY3ZTIwYTljYzY4MzBlODM3YzBjZTc1YmYyODRlZTJkYTgzYzBiNTA2Yjk).
|
||||
- Some search engines allow to subscribe on specific website changes via email and you can opt-in for that for https://clickhouse.tech.
|
||||
|
@ -82,6 +82,7 @@ def build_for_lang(lang, args):
|
||||
'fr': 'Français',
|
||||
'ru': 'Русский',
|
||||
'ja': '日本語',
|
||||
'tr': 'Türkçe',
|
||||
'fa': 'فارسی'
|
||||
}
|
||||
|
||||
@ -92,6 +93,7 @@ def build_for_lang(lang, args):
|
||||
'fr': 'Documentation ClickHouse %s',
|
||||
'ru': 'Документация ClickHouse %s',
|
||||
'ja': 'ClickHouseドキュメント %s',
|
||||
'tr': 'ClickHouse Belgeleri %s',
|
||||
'fa': 'مستندات %sClickHouse'
|
||||
}
|
||||
|
||||
@ -109,6 +111,7 @@ def build_for_lang(lang, args):
|
||||
'codehilite',
|
||||
'nl2br',
|
||||
'sane_lists',
|
||||
'pymdownx.details',
|
||||
'pymdownx.magiclink',
|
||||
'pymdownx.superfences',
|
||||
'extra',
|
||||
@ -375,13 +378,14 @@ if __name__ == '__main__':
|
||||
os.chdir(os.path.join(os.path.dirname(__file__), '..'))
|
||||
website_dir = os.path.join('..', 'website')
|
||||
arg_parser = argparse.ArgumentParser()
|
||||
arg_parser.add_argument('--lang', default='en,es,fr,ru,zh,ja,fa')
|
||||
arg_parser.add_argument('--lang', default='en,es,fr,ru,zh,ja,tr,fa')
|
||||
arg_parser.add_argument('--docs-dir', default='.')
|
||||
arg_parser.add_argument('--theme-dir', default=website_dir)
|
||||
arg_parser.add_argument('--website-dir', default=website_dir)
|
||||
arg_parser.add_argument('--output-dir', default='build')
|
||||
arg_parser.add_argument('--enable-stable-releases', action='store_true')
|
||||
arg_parser.add_argument('--stable-releases-limit', type=int, default='10')
|
||||
arg_parser.add_argument('--stable-releases-limit', type=int, default='4')
|
||||
arg_parser.add_argument('--lts-releases-limit', type=int, default='2')
|
||||
arg_parser.add_argument('--version-prefix', type=str, default='')
|
||||
arg_parser.add_argument('--is-stable-release', action='store_true')
|
||||
arg_parser.add_argument('--skip-single-page', action='store_true')
|
||||
|
@ -11,38 +11,58 @@ import requests
|
||||
import util
|
||||
|
||||
|
||||
def yield_candidates():
|
||||
for page in range(1, 100):
|
||||
url = 'https://api.github.com/repos/ClickHouse/ClickHouse/tags?per_page=100&page=%d' % page
|
||||
for candidate in requests.get(url).json():
|
||||
yield candidate
|
||||
|
||||
|
||||
def choose_latest_releases(args):
|
||||
logging.info('Collecting release candidates')
|
||||
seen = collections.OrderedDict()
|
||||
candidates = []
|
||||
for page in range(1, args.stable_releases_limit):
|
||||
url = 'https://api.github.com/repos/ClickHouse/ClickHouse/tags?per_page=100&page=%d' % page
|
||||
candidates += requests.get(url).json()
|
||||
logging.info('Collected all release candidates')
|
||||
stable_count = 0
|
||||
lts_count = 0
|
||||
|
||||
for tag in candidates:
|
||||
for tag in yield_candidates():
|
||||
if isinstance(tag, dict):
|
||||
name = tag.get('name', '')
|
||||
is_unstable = ('stable' not in name) and ('lts' not in name)
|
||||
is_stable = 'stable' in name
|
||||
is_lts = 'lts' in name
|
||||
is_unstable = not (is_stable or is_lts)
|
||||
is_in_blacklist = ('v18' in name) or ('prestable' in name) or ('v1.1' in name)
|
||||
if is_unstable or is_in_blacklist:
|
||||
continue
|
||||
major_version = '.'.join((name.split('.', 2))[:2])
|
||||
if major_version not in seen:
|
||||
seen[major_version] = (name, tag.get('tarball_url'),)
|
||||
if len(seen) > args.stable_releases_limit:
|
||||
if (stable_count >= args.stable_releases_limit) and (lts_count >= args.lts_releases_limit):
|
||||
break
|
||||
|
||||
payload = (name, tag.get('tarball_url'), is_lts,)
|
||||
if is_lts:
|
||||
if lts_count < args.lts_releases_limit:
|
||||
seen[major_version] = payload
|
||||
lts_count += 1
|
||||
else:
|
||||
if stable_count < args.stable_releases_limit:
|
||||
seen[major_version] = payload
|
||||
stable_count += 1
|
||||
|
||||
logging.debug(
|
||||
f'Stables: {stable_count}/{args.stable_releases_limit} LTS: {lts_count}/{args.lts_releases_limit}'
|
||||
)
|
||||
else:
|
||||
logging.fatal('Unexpected GitHub response: %s', str(candidates))
|
||||
sys.exit(1)
|
||||
|
||||
logging.info('Found stable releases: %s', str(seen.keys()))
|
||||
logging.info('Found stable releases: %s', ', '.join(seen.keys()))
|
||||
return seen.items()
|
||||
|
||||
|
||||
def process_release(args, callback, release):
|
||||
name, (full_name, tarball_url,) = release
|
||||
logging.info('Building docs for %s', full_name)
|
||||
name, (full_name, tarball_url, is_lts,) = release
|
||||
logging.info(f'Building docs for {full_name}')
|
||||
buf = io.BytesIO(requests.get(tarball_url).content)
|
||||
tar = tarfile.open(mode='r:gz', fileobj=buf)
|
||||
with util.temp_dir() as base_dir:
|
||||
@ -79,3 +99,15 @@ def get_events(args):
|
||||
'event_date': tail[1].replace('on ', '').replace('.', '')
|
||||
})
|
||||
return events
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
class DummyArgs(object):
|
||||
lts_releases_limit = 1
|
||||
stable_releases_limit = 3
|
||||
logging.basicConfig(
|
||||
level=logging.DEBUG,
|
||||
stream=sys.stderr
|
||||
)
|
||||
for item in choose_latest_releases(DummyArgs()):
|
||||
print(item)
|
||||
|
@ -6,7 +6,7 @@
|
||||
|
||||
function do_make_links()
|
||||
{
|
||||
langs=(en es zh fr ru ja fa)
|
||||
langs=(en es zh fr ru ja tr fa)
|
||||
src_file="$1"
|
||||
for lang in "${langs[@]}"
|
||||
do
|
||||
|
@ -44,7 +44,7 @@ then
|
||||
if [[ ! -z "${CLOUDFLARE_TOKEN}" ]]
|
||||
then
|
||||
sleep 1m
|
||||
git diff --stat="9999,9999" --diff-filter=M HEAD~1 | grep '|' | awk '$1 ~ /\.html$/ { if ($3>4) { url="https://'${BASE_DOMAIN}'/"$1; sub(/\/index.html/, "/", url); print "\""url"\""; }}' | split -l 25 /dev/stdin PURGE
|
||||
git diff --stat="9999,9999" --diff-filter=M HEAD~1 | grep '|' | awk '$1 ~ /\.html$/ { if ($3>4) { url="https://clickhouse.tech/"$1; sub(/\/index.html/, "/", url); print "\""url"\""; }}' | split -l 25 /dev/stdin PURGE
|
||||
for FILENAME in $(ls PURGE*)
|
||||
do
|
||||
POST_DATA=$(cat "${FILENAME}" | sed -n -e 'H;${x;s/\n/,/g;s/^,//;p;}' | awk '{print "{\"files\":["$0"]}";}')
|
||||
|
@ -10,11 +10,10 @@ cssmin==0.2.0
|
||||
future==0.18.2
|
||||
htmlmin==0.1.12
|
||||
idna==2.9
|
||||
Jinja2==2.11.1
|
||||
Jinja2==2.11.2
|
||||
jinja2-highlight==0.6.1
|
||||
jsmin==2.2.2
|
||||
livereload==2.6.1
|
||||
lunr==0.5.6
|
||||
Markdown==3.2.1
|
||||
MarkupSafe==1.1.1
|
||||
mkdocs==1.1
|
||||
@ -23,9 +22,9 @@ mkdocs-macros-plugin==0.4.6
|
||||
nltk==3.5
|
||||
nose==1.3.7
|
||||
protobuf==3.11.3
|
||||
numpy==1.18.2
|
||||
numpy==1.18.3
|
||||
Pygments==2.5.2
|
||||
pymdown-extensions==7.0
|
||||
pymdown-extensions==7.1
|
||||
python-slugify==1.2.6
|
||||
PyYAML==5.3.1
|
||||
repackage==0.7.3
|
||||
@ -36,4 +35,4 @@ soupsieve==2.0
|
||||
termcolor==1.1.0
|
||||
tornado==5.1.1
|
||||
Unidecode==1.1.1
|
||||
urllib3==1.25.8
|
||||
urllib3==1.25.9
|
||||
|
@ -3,10 +3,10 @@ certifi==2020.4.5.1
|
||||
chardet==3.0.4
|
||||
googletrans==2.4.0
|
||||
idna==2.9
|
||||
Jinja2==2.11.1
|
||||
Jinja2==2.11.2
|
||||
pandocfilters==1.4.2
|
||||
python-slugify==4.0.0
|
||||
PyYAML==5.3.1
|
||||
requests==2.23.0
|
||||
text-unidecode==1.3
|
||||
urllib3==1.25.8
|
||||
urllib3==1.25.9
|
||||
|
@ -63,8 +63,8 @@ def translate_toc(root, lang):
|
||||
|
||||
def translate_po():
|
||||
import babel.messages.pofile
|
||||
base_dir = os.path.join(os.path.dirname(__file__), '..', '..', 'website', 'locale')
|
||||
for lang in ['en', 'zh', 'es', 'fr', 'ru', 'ja', 'fa']:
|
||||
base_dir = os.path.join(os.path.dirname(__file__), '..', '..', '..', 'website', 'locale')
|
||||
for lang in ['en', 'zh', 'es', 'fr', 'ru', 'ja', 'tr', 'fa']:
|
||||
po_path = os.path.join(base_dir, lang, 'LC_MESSAGES', 'messages.po')
|
||||
with open(po_path, 'r') as f:
|
||||
po_file = babel.messages.pofile.read_po(f, locale=lang, domain='messages')
|
||||
|